[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-qwen36-35b-a3b-open-source-coding-model-en":3,"tags-qwen36-35b-a3b-open-source-coding-model-en":29,"related-lang-qwen36-35b-a3b-open-source-coding-model-en":30,"related-posts-qwen36-35b-a3b-open-source-coding-model-en":34,"series-model-release-26e34d8d-efd4-4253-a791-cca1b1803567":71},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":10,"x_posted_at":10},"26e34d8d-efd4-4253-a791-cca1b1803567","Qwen3.6-35B-A3B opens a new open-source coding lane","\u003Cp>\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3.6-35B-A3B\" target=\"_blank\" rel=\"noopener\">Qwen3.6-35B-A3B\u003C\u002Fa> is the kind of model release that makes infrastructure people perk up. It has 35 billion total parameters, only 3 billion active at inference time, and Alibaba says it can hold its own in \u003Ca href=\"\u002Fnews\u002Fqwen36-35b-a3b-open-source-agentic-coding-en\">agentic coding\u003C\u002Fa> against much larger dense models.\u003C\u002Fp>\u003Cp>The headline is simple: this is an open-weight MoE model that tries to hit a sweet spot between capability and cost. It is already available in \u003Ca href=\"https:\u002F\u002Fchat.qwen.ai\u002F\" target=\"_blank\" rel=\"noopener\">Qwen Studio\u003C\u002Fa>, downloadable from \u003Ca href=\"https:\u002F\u002Fmodelscope.cn\u002Fmodels\u002FQwen\u002FQwen3.6-35B-A3B\" target=\"_blank\" rel=\"noopener\">ModelScope\u003C\u002Fa>, and published on \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3.6-35B-A3B\" target=\"_blank\" rel=\"noopener\">Hugging Face\u003C\u002Fa>.\u003C\u002Fp>\u003Ch2>Why this release matters\u003C\u002Fh2>\u003Cp>Qwen3.6-35B-A3B is a sparse mixture-of-experts model, which means it keeps the full parameter count high while activating only a small slice for each token. That matters because the model can stay relatively efficient while still packing enough capacity for coding, reasoning, and multimodal work.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776643430778-2if7.png\" alt=\"Qwen3.6-35B-A3B opens a new open-source coding lane\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>Alibaba’s pitch is that this model is stronger than \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3.5-35B-A3B\" target=\"_blank\" rel=\"noopener\">Qwen3.5-35B-A3B\u003C\u002Fa> in agentic coding, and competitive with denser models such as \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3.5-27B\" target=\"_blank\" rel=\"noopener\">Qwen3.5-27B\u003C\u002Fa> and \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fgemma-3-27b-it\" target=\"_blank\" rel=\"noopener\">Gemma 3 27B\u003C\u002Fa>. The interesting part is not just raw size. It is the fact that the model is positioned as a practical tool for terminal-based coding assistants.\u003C\u002Fp>\u003Cul>\u003Cli>35B total parameters\u003C\u002Fli>\u003Cli>3B active parameters per token\u003C\u002Fli>\u003Cli>Open weights on Hugging Face and ModelScope\u003C\u002Fli>\u003Cli>API access planned through \u003Ca href=\"https:\u002F\u002Fwww.alibabacloud.com\u002Fproduct\u002Fmodel-studio\" target=\"_blank\" rel=\"noopener\">Alibaba Cloud Model Studio\u003C\u002Fa> under the qwen3.6-flash name\u003C\u002Fli>\u003C\u002Ful>\u003Cp>That combination puts it in a very specific lane. It is not trying to be the biggest model on the market. It is trying to be the one you can actually run, wire into tools, and afford to use often.\u003C\u002Fp>\u003Ch2>Multimodal support is part of the pitch\u003C\u002Fh2>\u003Cp>Qwen3.6-35B-A3B is built with both thinking and non-thinking modes, and it supports multimodal input out of the box. That matters because a lot of coding work now includes screenshots, diagrams, UI references, and visual debugging. A model that can read images and reason across them has a better shot at helping with modern development tasks.\u003C\u002Fp>\u003Cp>Alibaba says the model performs strongly on visual-language benchmarks and reaches parity with \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fnews\u002Fclaude-sonnet-4-5\" target=\"_blank\" rel=\"noopener\">Claude Sonnet 4.5\u003C\u002Fa> on many of them, with some wins in spatial tasks. The company highlights RefCOCO at 92.0 and ODInW13 at 50.8, which are the kinds of numbers that matter when a model is asked to identify or localize objects in images.\u003C\u002Fp>\u003Cblockquote>“We are committed to making AI accessible and useful for everyone.” — Sam Altman, OpenAI\u003C\u002Fblockquote>\u003Cp>That quote is from OpenAI, not Alibaba, but it captures the same pressure shaping this release: model makers now need to prove that access, cost, and utility can coexist. Open-weight systems are no longer side projects. They are part of the main competition.\u003C\u002Fp>\u003Cp>For developers, the practical takeaway is more interesting than the benchmark chatter. If a model can inspect a screenshot, reason about the UI, and help patch the code behind it, that cuts a lot of back-and-forth between IDE, browser, and terminal.\u003C\u002Fp>\u003Ch2>Tooling compatibility is the real story\u003C\u002Fh2>\u003Cp>One of the smartest details in this release is protocol compatibility. Alibaba says Qwen API supports the Anthropic API format, which means tools built for \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fclaude-code\" target=\"_blank\" rel=\"noopener\">Claude Code\u003C\u002Fa> can work with Qwen-backed endpoints. That is a big deal because the friction in model adoption often lives in tooling, not benchmarks.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776643433903-xh7o.png\" alt=\"Qwen3.6-35B-A3B opens a new open-source coding lane\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>The model also plugs into \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fopencrawl\u002Fopencrawl\" target=\"_blank\" rel=\"noopener\">OpenClaw\u003C\u002Fa>, \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FQwenLM\u002Fqwen-code\" target=\"_blank\" rel=\"noopener\">Qwen Code\u003C\u002Fa>, and Claude Code. In other words, this is not a model you admire from a distance. It is one you can actually drop into workflows.\u003C\u002Fp>\u003Cul>\u003Cli>\u003Ca href=\"https:\u002F\u002Fchat.qwen.ai\u002F\" target=\"_blank\" rel=\"noopener\">Qwen Studio\u003C\u002Fa> for direct interaction\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fwww.alibabacloud.com\u002Fproduct\u002Fmodel-studio\" target=\"_blank\" rel=\"noopener\">Alibaba Cloud Model Studio\u003C\u002Fa> for API access\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FQwenLM\u002Fqwen-code\" target=\"_blank\" rel=\"noopener\">Qwen Code\u003C\u002Fa> for terminal workflows\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fclaude-code\" target=\"_blank\" rel=\"noopener\">Claude Code\u003C\u002Fa> compatibility through Anthropic-style APIs\u003C\u002Fli>\u003C\u002Ful>\u003Cp>There is also a useful feature called preserve_thinking, which keeps prior reasoning context across turns. For agentic tasks, that can matter more than a small benchmark gain because agents fail when they lose track of what they were doing two steps ago.\u003C\u002Fp>\u003Cp>That kind of design choice tells you who this model is for: people building coding agents, not just chatting with a general assistant. The release is less about a demo and more about putting a usable backend under real developer tools.\u003C\u002Fp>\u003Ch2>How it compares in practice\u003C\u002Fh2>\u003Cp>The most notable comparison is between active parameters and outcome quality. Qwen3.6-35B-A3B activates only about 3 billion parameters, yet Alibaba says it outperforms the denser \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3.5-27B\" target=\"_blank\" rel=\"noopener\">Qwen3.5-27B\u003C\u002Fa> on several programming benchmarks. If that holds up in independent testing, it is a strong argument for sparse models in agentic coding.\u003C\u002Fp>\u003Cp>Here is the practical comparison developers will care about:\u003C\u002Fp>\u003Cul>\u003Cli>Qwen3.6-35B-A3B: 35B total, 3B active, open weights\u003C\u002Fli>\u003Cli>Qwen3.5-35B-A3B: older direct predecessor, larger active footprint in practice\u003C\u002Fli>\u003Cli>Qwen3.5-27B: denser model with 27B parameters\u003C\u002Fli>\u003Cli>Claude Sonnet 4.5: stronger closed-model benchmark reference for multimodal work\u003C\u002Fli>\u003C\u002Ful>\u003Cp>If you run models locally or through hosted inference, active parameters affect throughput, latency, and cost more than the headline total count. That is why a 35B MoE model with 3B active parameters can matter more than a dense 27B model in day-to-day use.\u003C\u002Fp>\u003Cp>The open-weight part matters just as much. You can inspect the model, deploy it in your own stack, and avoid depending on a single vendor’s interface. For teams building internal coding assistants, that flexibility often matters more than a few benchmark points.\u003C\u002Fp>\u003Ch2>What this means for developers\u003C\u002Fh2>\u003Cp>This release is a sign that open models are getting more serious about agentic coding, and that the gap between open and closed tooling is shrinking in the places developers feel most: terminal workflows, API compatibility, and multimodal context. If Qwen3.6-35B-A3B holds up under broader use, it could become a default choice for teams that want Claude Code-style workflows without tying everything to one provider.\u003C\u002Fp>\u003Cp>The next thing I would watch is independent evaluation on real coding tasks, especially multi-step repository edits and visual debugging. Benchmarks are useful, but the real test is whether the model can keep state, follow instructions, and make clean changes inside a live codebase.\u003C\u002Fp>\u003Cp>My bet: the most important adoption metric will not be raw benchmark rank. It will be how often teams swap it into existing agent stacks because the API shape and open weights make that easy. If that happens, Qwen3.6-35B-A3B will matter less as a single model and more as a template for how open coding agents should be built.\u003C\u002Fp>","Qwen3.6-35B-A3B ships with 35B total params, 3B active params, and Anthropic API compatibility for Claude Code workflows.","zhuanlan.zhihu.com","https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F2028415749698385113",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776643430778-2if7.png",[13,14,15,16,17],"Qwen3.6-35B-A3B","MoE model","agentic coding","Claude Code","open-weight model","en",0,false,"2026-04-20T00:03:37.697765+00:00","2026-04-20T00:03:37.671+00:00","done","599fd29d-1933-4102-a71c-7203ad0d1440","qwen36-35b-a3b-open-source-coding-model-en","model-release","1c99e395-4b38-4793-9604-1de54b9f2897","published",[],{"id":27,"slug":31,"title":32,"language":33},"qwen36-35b-a3b-open-source-coding-model-zh","Qwen3.6-35B-A3B 打開開源寫碼新路線","zh",[35,41,47,53,59,65],{"id":36,"slug":37,"title":38,"cover_image":39,"image_url":39,"created_at":40,"category":26},"9245be69-14ae-4bf3-91ef-d2366b08e460","claude-design-anthropic-launch","Claude Design Launches: Anthropic's AI Design Tool Enters Beta","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776605423840-xgho.png","2026-04-19T12:48:27.45788+00:00",{"id":42,"slug":43,"title":44,"cover_image":45,"image_url":45,"created_at":46,"category":26},"67c09a9c-ca15-45fd-926b-948e3d91827e","gemini-app-release-notes-latest-updates-en","Geminiの最新アップデート総まとめ","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776514190078-juel.png","2026-04-18T12:09:32.166891+00:00",{"id":48,"slug":49,"title":50,"cover_image":51,"image_url":51,"created_at":52,"category":26},"928b50fb-6d24-4229-b88a-cb3caa66a6e8","linux-7-0-rust-ai-bug-finding-en","Linux 7.0 lands with Rust and AI-finding bugs","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776427617312-y00q.png","2026-04-17T12:06:36.898067+00:00",{"id":54,"slug":55,"title":56,"cover_image":57,"image_url":57,"created_at":58,"category":26},"c1fac97f-de34-4254-b62e-eddcab4b6ef3","openai-limits-gpt-54-cyber-trusted-firms-en","OpenAI Limits GPT-5.4-Cyber to Trusted Firms","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776297833412-wlma.png","2026-04-16T00:03:29.403078+00:00",{"id":60,"slug":61,"title":62,"cover_image":63,"image_url":63,"created_at":64,"category":26},"bd3ea20a-829f-4c46-90f3-dc75d961ca01","openai-launches-gpt-54-cyber-defense-work-en","OpenAI launches GPT-5.4-Cyber for defense work","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776254645112-5qsk.png","2026-04-15T12:03:43.901089+00:00",{"id":66,"slug":67,"title":68,"cover_image":69,"image_url":69,"created_at":70,"category":26},"cb45188a-2e6e-4ac7-95f0-39cbd2f7d7a2","gpt-5-4-benchmarks-2026-scores-rankings-en","GPT-5.4 Scores 97.6 in Knowledge Benchmarks","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776082204490-nq2r.png","2026-04-13T12:09:40.792366+00:00",[72,77,82,87,92,97,102,107,112,117],{"id":73,"slug":74,"title":75,"created_at":76},"d4cffde7-9b50-4cc7-bb68-8bc9e3b15477","nvidia-rubin-ai-supercomputer-en","NVIDIA Unveils Rubin: A Leap in AI Supercomputing","2026-03-25T16:24:35.155565+00:00",{"id":78,"slug":79,"title":80,"created_at":81},"eab919b9-fbac-4048-89fc-afad6749ccef","google-gemini-ai-innovations-2026-en","Google's AI Leap with Gemini Innovations in 2026","2026-03-25T16:27:18.841838+00:00",{"id":83,"slug":84,"title":85,"created_at":86},"5f5cfc67-3384-4816-a8f6-19e44d90113d","gap-google-gemini-ai-checkout-en","Gap Teams Up with Google Gemini for AI-Driven Checkout","2026-03-25T16:27:46.483272+00:00",{"id":88,"slug":89,"title":90,"created_at":91},"f6d04567-47f6-49ec-804c-52e61ab91225","ai-model-release-wave-march-2026-en","Navigating the AI Model Release Wave of March 2026","2026-03-25T16:28:45.409716+00:00",{"id":93,"slug":94,"title":95,"created_at":96},"895c150c-569e-4fdf-939d-dade785c990e","small-language-models-transform-ai-en","Small Language Models: Llama 3.2 and Phi-3 Transform AI","2026-03-25T16:30:26.688313+00:00",{"id":98,"slug":99,"title":100,"created_at":101},"38eb1d26-d961-4fd3-ae12-9c4089680f5f","midjourney-v8-alpha-features-pricing-en","Midjourney V8 Alpha: A Deep Dive into Its Features and Pricing","2026-03-26T01:25:36.387587+00:00",{"id":103,"slug":104,"title":105,"created_at":106},"bf36bb9e-3444-4fb8-ab19-0df6bc9d8271","rag-2026-indispensable-ai-bridge-en","RAG in 2026: The Indispensable AI Bridge","2026-03-26T01:28:34.472046+00:00",{"id":108,"slug":109,"title":110,"created_at":111},"60881d6d-2310-44ef-b1fb-7f98e9dd2f0e","xiaomi-mimo-trio-agents-robots-voice-en","Xiaomi’s MiMo trio targets agents, robots, and voice","2026-03-28T03:05:08.899895+00:00",{"id":113,"slug":114,"title":115,"created_at":116},"f063d8d1-41d1-4de4-8ebc-6c40511b9369","xiaomi-mimo-v2-pro-1t-moe-agents-en","Xiaomi MiMo-V2-Pro: 1T MoE Model for Agents","2026-03-28T03:06:19.238032+00:00",{"id":118,"slug":119,"title":120,"created_at":121},"a1379e9a-6785-4ff5-9b0a-8cff55f8264f","cursor-composer-2-started-from-kimi-en","Cursor’s Composer 2 started from Kimi","2026-03-28T03:11:59.132398+00:00"]