Published on April 10, 2026
Optimizing Ollama Storage: Running LLMs from an External USB-C Drive
Tags: AI, Ollama, Hardware, Storage, Optimization, M1, LLM, Local-AI, Apple-Silicon
A guide on moving local LLM models to external storage to reclaim internal SSD space on a Mac while maintaining inference performance.
Published on March 30, 2026
Local AI Development Strategies: The First Step to Choosing the Right LLM for Your Workflow
Tags: AI, LLM, Local-AI, Ollama, Apple-Silicon, Mistral, Qwen, DeepSeek
Some thoughts on local LLM development and choosing the right model for your workflow.