[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-qwen36-35b-a3b-open-source-coding-model-zh":3,"tags-qwen36-35b-a3b-open-source-coding-model-zh":32,"related-lang-qwen36-35b-a3b-open-source-coding-model-zh":33,"related-posts-qwen36-35b-a3b-open-source-coding-model-zh":37,"series-model-release-1c99e395-4b38-4793-9604-1de54b9f2897":74},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":21,"translated_content":10,"views":22,"is_premium":23,"created_at":24,"updated_at":24,"cover_image":11,"published_at":25,"rewrite_status":26,"rewrite_error":10,"rewritten_from_id":27,"slug":28,"category":29,"related_article_id":30,"status":31,"google_indexed_at":10,"x_posted_at":10},"1c99e395-4b38-4793-9604-1de54b9f2897","Qwen3.6-35B-A3B 打開開源寫碼新路線","\u003Cp>說真的，這次 \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3.6-35B-A3B\" target=\"_blank\" rel=\"noopener\">Qwen3.6-35B-A3B\u003C\u002Fa> 很有看頭。它有 350 億總參數，推論時只啟用 30 億。這種 MoE 設計，講白了就是想把成本壓下來，還保住寫碼能力。\u003C\u002Fp>\u003Cp>更猛的是，它直接對準 agentic coding。官方還放出 \u003Ca href=\"https:\u002F\u002Fchat.qwen.ai\u002F\" target=\"_blank\" rel=\"noopener\">Qwen Studio\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fmodelscope.cn\u002Fmodels\u002FQwen\u002FQwen3.6-35B-A3B\" target=\"_blank\" rel=\"noopener\">ModelScope\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3.6-35B-A3B\" target=\"_blank\" rel=\"noopener\">Hugging Face\u003C\u002Fa> 三條路。對開發者來說，這種部署彈性比口號實在多了。\u003C\u002Fp>\u003Ch2>這次釋出為什麼重要\u003C\u002Fh2>\u003Cp>先講白話。Qwen3.6-35B-A3B 是稀疏 MoE 模型。它保留 35B 的總容量，但每次只喚醒一小部分參數。這代表它不是靠蠻力硬推，而是靠架構設計省算力。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776643431808-tti7.png\" alt=\"Qwen3.6-35B-A3B 打開開源寫碼新路線\" class=\"rounded-xl w-full\" loading=\"lazy\" 
\u002F>\u003C\u002Ffigure>\n\u003Cp>對寫程式的人來說，這件事很實際。你要的是回應快、成本低、上下文穩。不是每次都把整台伺服器燒得像在跑渲染。\u003C\u002Fp>\u003Cp>Alibaba 的說法也很直白。它想把這個模型放進 terminal-based coding assistant。也就是說，目標不是聊天而已，是直接幫你改 repo、看錯誤、接工具。\u003C\u002Fp>\u003Cul>\u003Cli>35B 總參數\u003C\u002Fli>\u003Cli>3B 啟用參數\u003C\u002Fli>\u003Cli>開權重，可下載可自架\u003C\u002Fli>\u003Cli>可透過 \u003Ca href=\"https:\u002F\u002Fwww.alibabacloud.com\u002Fproduct\u002Fmodel-studio\" target=\"_blank\" rel=\"noopener\">Alibaba Cloud Model Studio\u003C\u002Fa> 走 API\u003C\u002Fli>\u003C\u002Ful>\u003Cp>這四點合起來，就很像在打實戰。不是只拼榜單，而是拼你能不能真的拿去用。這點我覺得比單純刷分有意思多了。\u003C\u002Fp>\u003Cp>而且它不是孤島。你可以直接在官方平台玩，也能拉進自己的工作流。這對團隊導入很重要，因為遷移成本通常死在細節，不死在模型名字。\u003C\u002Fp>\u003Ch2>多模態與推理模式，才是它的底氣\u003C\u002Fh2>\u003Cp>Qwen3.6-35B-A3B 支援 thinking 和 non-thinking 兩種模式。這代表它可以在不同任務下切換策略。簡單問答不用太多推理，複雜除錯再拉高思考深度。\u003C\u002Fp>\u003Cp>它也支援多模態輸入。這點很適合現在的開發場景。你在 IDE 裡看錯誤訊息，瀏覽器裡看 UI 截圖，還有設計稿、流程圖、log 圖。模型能看圖，幫助就不只停在文字層。\u003C\u002Fp>\u003Cp>官方還提到它在視覺語言基準上表現不差，某些項目甚至貼近 \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fnews\u002Fclaude-sonnet-4-5\" target=\"_blank\" rel=\"noopener\">Claude Sonnet 4.5\u003C\u002Fa>。像 RefCOCO 92.0、ODInW13 50.8 這種數字，至少說明它在定位與辨識任務上有料。\u003C\u002Fp>\u003Cblockquote>“We are committed to making AI accessible and useful for everyone.” — Sam Altman, OpenAI\u003C\u002Fblockquote>\u003Cp>這句話不是 Alibaba 講的，但很貼切。現在模型競爭早就不是只看參數。你能不能讓人真的用，才是重點。\u003C\u002Fp>\u003Cp>對工程師來說，最有感的地方是跨工具協作。模型如果能看圖、讀錯誤、接 API，再把結果回寫到程式碼，很多來回溝通就少一半。這才像工具，不像玩具。\u003C\u002Fp>\u003Ch2>工具相容性，才是這顆模型的主菜\u003C\u002Fh2>\u003Cp>我覺得這次最聰明的設計，是 API 相容性。Alibaba 說 Qwen API 支援 Anthropic API 格式。這代表原本為 \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fclaude-code\" target=\"_blank\" rel=\"noopener\">Claude Code\u003C\u002Fa> 做的工具，有機會直接接到 Qwen 後端。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776643434539-u7da.png\" alt=\"Qwen3.6-35B-A3B 打開開源寫碼新路線\" 
class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>這件事很關鍵，因為多數團隊不是缺模型，而是缺整合時間。你要改 SDK、改環境變數、改認證、改提示詞，最後還要測 agent 行為。每一步都會吃掉工時。\u003C\u002Fp>\u003Cp>它也能接到 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FQwenLM\u002Fqwen-code\" target=\"_blank\" rel=\"noopener\">Qwen Code\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fopencrawl\u002Fopencrawl\" target=\"_blank\" rel=\"noopener\">OpenClaw\u003C\u002Fa> 這類工具。換句話說，它不是只在簡報上好看，而是真的能塞進現有流程。\u003C\u002Fp>\u003Cul>\u003Cli>\u003Ca href=\"https:\u002F\u002Fchat.qwen.ai\u002F\" target=\"_blank\" rel=\"noopener\">Qwen Studio\u003C\u002Fa>：直接對話與測試\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fwww.alibabacloud.com\u002Fproduct\u002Fmodel-studio\" target=\"_blank\" rel=\"noopener\">Alibaba Cloud Model Studio\u003C\u002Fa>：API 入口\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FQwenLM\u002Fqwen-code\" target=\"_blank\" rel=\"noopener\">Qwen Code\u003C\u002Fa>：終端機工作流\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fclaude-code\" target=\"_blank\" rel=\"noopener\">Claude Code\u003C\u002Fa>：Anthropic 風格 API 相容\u003C\u002Fli>\u003C\u002Ful>\u003Cp>還有一個細節很實用，叫 preserve_thinking。它會保留前一輪推理脈絡。對 agent 來說，這比多 1 分 benchmark 更重要。因為 agent 最常死在「忘了自己剛剛在幹嘛」。\u003C\u002Fp>\u003Cp>所以這顆模型的定位很清楚。它不是只給你聊天框。它是要進 IDE、進 shell、進自動化流程。這種定位，才會讓開源模型真的進到日常開發。\u003C\u002Fp>\u003Ch2>跟其他模型比，差在哪裡\u003C\u002Fh2>\u003Cp>先看最重要的數字。Qwen3.6-35B-A3B 總參數 35B，但每次只啟用約 3B。這讓它在吞吐、延遲、成本上，都有機會比同級 dense model 更好看。\u003C\u002Fp>\u003Cp>官方也拿它去對比 \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3.5-35B-A3B\" target=\"_blank\" rel=\"noopener\">Qwen3.5-35B-A3B\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3.5-27B\" target=\"_blank\" rel=\"noopener\">Qwen3.5-27B\u003C\u002Fa>，還有 \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fgemma-3-27b-it\" target=\"_blank\" rel=\"noopener\">Gemma 3 
27B\u003C\u002Fa>。重點不是誰名字比較大，而是誰在 agentic coding 裡更省錢。\u003C\u002Fp>\u003Cp>如果你是自己架推論服務，這差異會很有感。因為實際成本看的是活躍參數，不是標題上的總參數。這也是 MoE 會讓人關注的原因。\u003C\u002Fp>\u003Cul>\u003Cli>Qwen3.6-35B-A3B：35B 總參數，3B 啟用\u003C\u002Fli>\u003Cli>Qwen3.5-35B-A3B：前代版本\u003C\u002Fli>\u003Cli>Qwen3.5-27B：較小的 dense 模型\u003C\u002Fli>\u003Cli>Gemma 3 27B：同級開源參考\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fnews\u002Fclaude-sonnet-4-5\" target=\"_blank\" rel=\"noopener\">Claude Sonnet 4.5\u003C\u002Fa>：閉源強力對照組\u003C\u002Fli>\u003C\u002Ful>\u003Cp>我會這樣看。若 benchmark 差距不大，但成本低很多，那開發團隊通常會選前者。因為產品不是在跑分，是在燒預算。\u003C\u002Fp>\u003Cp>而且開權重還有一個優勢。你能自己看模型、自己調整部署、自己做內部評估。這對企業或新創都很重要，尤其是要把模型接進內部工具時。\u003C\u002Fp>\u003Ch2>開源寫碼模型的背景，其實正在變\u003C\u002Fh2>\u003Cp>這波不是單一模型的故事。它反映的是整個開源 LLM 走向實用化。以前大家比誰參數大，現在大家比誰能接工具、能跑 agent、能處理圖文混合任務。\u003C\u002Fp>\u003Cp>另一個變化是，開發者開始在意「相容性」勝過「品牌」。你如果已經有 \u003Ca href=\"\u002Fnews\u002Fclaude-design-codebase-aware-system-zh\">Claude Code\u003C\u002Fa> 的流程，現在只要換後端就能試 Qwen，這種切換成本低很多。對團隊來說，這比重新發明一套介面更有吸引力。\u003C\u002Fp>\u003Cp>再來是成本壓力。模型不是只有訓練成本。推論成本、維運成本、快取策略、上下文長度，都會直接影響產品毛利。這也是為什麼 3B active 這種數字會讓人眼睛一亮。\u003C\u002Fp>\u003Cp>如果你回頭看過去兩年，很多開源模型都在補這幾個洞：工具調用、長上下文、多模態、API 相容。Qwen3.6-35B-A3B 只是把這些需求一次打包，然後丟到開發者面前。\u003C\u002Fp>\u003Ch2>我會怎麼看這顆模型\u003C\u002Fh2>\u003Cp>我覺得它最可能的落點，不是取代所有閉源模型。它更像是給團隊一個可控、可改、可自架的 coding backend。這對想做內部 agent、程式碼審查、repo 操作自動化的人，很有吸引力。\u003C\u002Fp>\u003Cp>接下來最值得觀察的，不是官方宣傳，而是第三方實測。尤其是多步驟 repo 編輯、視覺除錯、長任務記憶，這三種情境最能看出它是不是只會答題。\u003C\u002Fp>\u003Cp>如果你問我會不會試，我會。至少先拿它跟現有的 \u003Ca href=\"\u002Fnews\u002Fclaude-design-features-guide-zh\">Claude Code\u003C\u002Fa> 流程對接，看看切換成本有多低。若真的能少改很多程式，這顆模型就不只是新聞，而是可以進產品線的選項。\u003C\u002Fp>\u003Cp>我的預測很直接。接下來 6 到 12 個月，開源 coding model 競爭會更像\u003Ca href=\"\u002Fnews\u002Fclaude-design-vs-figma-canva-zh\">工具戰\u003C\u002Fa>，不像榜單戰。誰能讓開發者少改設定、少換介面、少燒算力，誰就更容易被採用。你如果在做 AI coding 工具，現在就該開始測它了。\u003C\u002Fp>","Qwen3.6-35B-A3B 以 35B 總參數、3B 啟用參數和 Anthropic API 相容性，直接瞄準 Claude Code 工作流。這款開源 MoE 
模型想把效能、成本和工具整合一次做到位。","zhuanlan.zhihu.com","https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F2028415749698385113",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776643431808-tti7.png",[13,14,15,16,17,18,19,20],"Qwen3.6-35B-A3B","開源模型","MoE","agentic coding","Claude Code","Anthropic API","多模態 AI","LLM","zh",0,false,"2026-04-20T00:03:37.398827+00:00","2026-04-20T00:03:37.374+00:00","done","599fd29d-1933-4102-a71c-7203ad0d1440","qwen36-35b-a3b-open-source-coding-model-zh","model-release","26e34d8d-efd4-4253-a791-cca1b1803567","published",[],{"id":30,"slug":34,"title":35,"language":36},"qwen36-35b-a3b-open-source-coding-model-en","Qwen3.6-35B-A3B opens a new open-source coding lane","en",[38,44,50,56,62,68],{"id":39,"slug":40,"title":41,"cover_image":42,"image_url":42,"created_at":43,"category":29},"7859c2b7-e2b2-4a74-84b6-7d4d3b59ae23","claude-design-anthropic-launch-zh","Claude Design 上線：Anthropic 推 AI 設計工具挑戰 Figma","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776601823491-fri0.png","2026-04-19T12:27:01.135769+00:00",{"id":45,"slug":46,"title":47,"cover_image":48,"image_url":48,"created_at":49,"category":29},"0e194b22-718e-45f9-9ad6-2649b49cd21e","gemini-app-release-notes-latest-updates-zh","Gemini最新更新總整理","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776514185258-7g5s.png","2026-04-18T12:09:31.74909+00:00",{"id":51,"slug":52,"title":53,"cover_image":54,"image_url":54,"created_at":55,"category":29},"a3c7f0ae-6f45-4912-8617-66382a90a390","linux-7-0-rust-ai-bug-finding-zh","Linux 7.0 上線：Rust 與 AI 
找蟲","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776427615139-lfqs.png","2026-04-17T12:06:36.315425+00:00",{"id":57,"slug":58,"title":59,"cover_image":60,"image_url":60,"created_at":61,"category":29},"b875d3ed-f892-43a8-a51e-920729e85b1e","gpt-5-4-benchmarks-2026-scores-rankings-zh","GPT-5.4 知識測驗拿 97.6 分","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776082194973-cyii.png","2026-04-13T12:09:40.301446+00:00",{"id":63,"slug":64,"title":65,"cover_image":66,"image_url":66,"created_at":67,"category":29},"bf25b91d-035f-418f-bedd-4b066c1e1259","openai-revenue-valuation-funding-2026-zh","OpenAI 2026 營收、估值與募資解析","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775995438254-rxea.png","2026-04-12T12:03:39.268699+00:00",{"id":69,"slug":70,"title":71,"cover_image":72,"image_url":72,"created_at":73,"category":29},"d4969444-3b01-40cb-8411-c422b535cdf1","kimi-k25-moonshot-open-model-elite-zh","Kimi K2.5 上線：開源模型打進第一梯隊","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775272373330-xkt6.png","2026-04-04T03:12:36.705829+00:00",[75,80,85,90,95,100,105,110,115,120],{"id":76,"slug":77,"title":78,"created_at":79},"58b64033-7eb6-49b9-9aab-01cf8ae1b2f2","nvidia-rubin-six-chips-one-ai-supercomputer-zh","NVIDIA Rubin 把六顆晶片塞進 AI 機櫃","2026-03-26T07:18:45.861277+00:00",{"id":81,"slug":82,"title":83,"created_at":84},"0dcc2c61-c2a6-480d-adb8-dd225fc68914","march-2026-ai-model-news-what-mattered-zh","2026 年 3 月 AI 模型新聞重點","2026-03-26T07:32:08.386348+00:00",{"id":86,"slug":87,"title":88,"created_at":89},"214ab08b-5ce5-4b5c-8b72-47619d8675dd","why-small-models-are-winning-on-device-ai-zh","小模型為何吃下裝置端 
AI","2026-03-26T07:36:30.488966+00:00",{"id":91,"slug":92,"title":93,"created_at":94},"785624b2-0355-4b82-adc3-de5e45eecd88","midjourney-v8-faster-images-higher-costs-zh","Midjourney V8 變快了，也變貴了","2026-03-26T07:52:03.562971+00:00",{"id":96,"slug":97,"title":98,"created_at":99},"cda76b92-d209-4134-86c1-a60f5bc7b128","xiaomi-mimo-trio-agents-robots-voice-zh","小米 MiMo 三模型瞄準代理、機器人與語音","2026-03-28T03:05:08.779489+00:00",{"id":101,"slug":102,"title":103,"created_at":104},"9e1044b4-946d-47fe-9e2a-c2ee032e1164","xiaomi-mimo-v2-pro-1t-moe-agents-zh","小米 MiMo-V2-Pro 登場：1T MoE 模型","2026-03-28T03:06:19.002353+00:00",{"id":106,"slug":107,"title":108,"created_at":109},"d68e59a2-55eb-4a8f-95d6-edc8fcbff581","cursor-composer-2-started-from-kimi-zh","Cursor Composer 2 其實從 Kimi 起步","2026-03-28T03:11:58.893796+00:00",{"id":111,"slug":112,"title":113,"created_at":114},"c4b6186f-bd84-4598-997e-c6e31d543c0d","cursor-composer-2-agentic-coding-model-zh","Cursor Composer 2 走向代理式寫碼","2026-03-28T03:13:06.422716+00:00",{"id":116,"slug":117,"title":118,"created_at":119},"45812c46-99fc-4b1f-aae1-56f64f5c9024","openai-shuts-down-sora-video-app-api-zh","OpenAI 關閉 Sora App 與 API","2026-03-29T04:47:48.974108+00:00",{"id":121,"slug":122,"title":123,"created_at":124},"e112e76f-ec3b-408f-810e-e93ae21a888a","apple-siri-gemini-distilled-models-zh","Apple Siri 牽手 Gemini 的真相","2026-03-29T04:52:57.886544+00:00"]