<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Apple Silicon on Bruce on AI Engineering</title><link>http://www.heyuan110.com/tags/apple-silicon/</link><description>Recent content in Apple Silicon on Bruce on AI Engineering</description><generator>Hugo</generator><language>en</language><lastBuildDate>Tue, 14 Apr 2026 10:00:00 +0800</lastBuildDate><atom:link href="http://www.heyuan110.com/tags/apple-silicon/index.xml" rel="self" type="application/rss+xml"/><item><title>Apple Silicon AI Workstation 2026: M4 Pro vs M3 Max for Local LLM &amp; Image Gen</title><link>http://www.heyuan110.com/posts/ai/2026-04-14-mac-apple-silicon-ai-workstation/</link><pubDate>Tue, 14 Apr 2026 10:00:00 +0800</pubDate><guid>http://www.heyuan110.com/posts/ai/2026-04-14-mac-apple-silicon-ai-workstation/</guid><description>&lt;p&gt;&lt;img src="http://www.heyuan110.com/posts/ai/2026-04-14-mac-apple-silicon-ai-workstation/cover.webp"
 alt="Apple Silicon AI workstation comparison: M4 Pro vs M3 Max running Ollama, MLX, ComfyUI, and Draw Things locally"
 loading="lazy"
 decoding="async"
 fetchpriority="low"
 width="1200"
 height="630"
/&gt;
&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve spent the last six months running &lt;strong&gt;Apple Silicon as my primary local AI workstation&lt;/strong&gt; — Ollama, MLX, ComfyUI, Draw Things, llama.cpp, all day every day, across an M4 Pro Mac mini (48GB), an M3 Max MacBook Pro (64GB), and a friend&amp;rsquo;s M3 Max Mac Studio (128GB). The conclusion is not what the Apple keynotes would suggest.&lt;/p&gt;</description></item></channel></rss>