[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-qwen36-35b-a3b-open-source-agentic-coding-en":3,"tags-qwen36-35b-a3b-open-source-agentic-coding-en":29,"related-lang-qwen36-35b-a3b-open-source-agentic-coding-en":30,"related-posts-qwen36-35b-a3b-open-source-agentic-coding-en":34,"series-model-release-fede422a-0961-4b9a-a0ce-215f0b1e18b3":53},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":10,"x_posted_at":10},"fede422a-0961-4b9a-a0ce-215f0b1e18b3","Qwen3.6-35B-A3B Opens Up for Agentic Coding","\u003Cp>\u003Ca href=\"https:\u002F\u002Fqwenlm.github.io\u002F\" target=\"_blank\" rel=\"noopener\">Qwen\u003C\u002Fa> just opened the door on a model that looks small on paper and punchy in practice: \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3.6-35B-A3B\" target=\"_blank\" rel=\"noopener\">Qwen3.6-35B-A3B\u003C\u002Fa>, a sparse MoE model with 35 billion total parameters and only 3 billion active per token. That active-parameter number matters because it is the part that actually does the work on each step, so the model can stay lighter than dense peers while still aiming high on coding and reasoning.\u003C\u002Fp>\u003Cp>What makes this release worth attention is the target. 
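\u003C\u002Fp>\u003Cp>Before looking at the target, the sparse design itself is easy to picture. Below is a minimal toy sketch of top-k expert routing; the shapes, expert count, and k value are made up for illustration and are not Qwen's actual architecture details:\u003C\u002Fp>

```python
import numpy as np

# Toy sketch of sparse MoE routing, NOT Qwen's actual architecture:
# a router scores E experts for each token, keeps the top-k, and only
# those experts run. All shapes and the value of k are made up here.
rng = np.random.default_rng(0)
E, k, d = 8, 2, 16                      # experts, active per token, hidden size
router_w = rng.normal(size=(d, E))      # router projection
experts = [rng.normal(size=(d, d)) for _ in range(E)]

def moe_forward(x):
    """Route one token's hidden state through its top-k experts."""
    logits = x @ router_w               # score every expert: shape (E,)
    top = np.argsort(logits)[-k:]       # indices of the k highest scores
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                # softmax over the selected experts
    # Only k of the E expert matmuls execute: the "active parameter" saving.
    out = sum(g * (x @ experts[i]) for g, i in zip(gates, top))
    return out, top

x = rng.normal(size=d)
y, active = moe_forward(x)
print(f"{len(active)} of {E} experts ran for this token")
```

\u003Cp>The point of the sketch is the cost profile: per token, only k of E expert blocks execute, which is how a model with 35B total parameters can run with a 3B-active footprint.\u003C\u002Fp>\u003Cp>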
Qwen says the model is tuned for agentic coding, multimodal reasoning, and tool use, and it is already available in \u003Ca href=\"https:\u002F\u002Fchat.qwen.ai\u002F\" target=\"_blank\" rel=\"noopener\">Qwen Studio\u003C\u002Fa>, on \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3.6-35B-A3B\" target=\"_blank\" rel=\"noopener\">Hugging Face\u003C\u002Fa>, and on \u003Ca href=\"https:\u002F\u002Fmodelscope.cn\u002Fmodels\u002FQwen\u002FQwen3.6-35B-A3B\" target=\"_blank\" rel=\"noopener\">ModelScope\u003C\u002Fa>. For developers, the interesting question is simple: can a 3B-active model behave like a much larger one when it is inside an agent loop?\u003C\u002Fp>\u003Ch2>A sparse model built for agent loops\u003C\u002Fh2>\u003Cp>Qwen3.6-35B-A3B is a mixture-of-experts model, which means it routes each token through selected expert blocks instead of waking up the whole network. That design usually trades some simplicity for efficiency, and here the balance is clear: 35B total parameters, 3B active, and an explicit push toward coding tasks that involve planning, tool calls, and multi-step edits.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776600415348-wrfu.png\" alt=\"Qwen3.6-35B-A3B Opens Up for Agentic Coding\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>The company says the model improves on \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3.5-35B-A3B\" target=\"_blank\" rel=\"noopener\">Qwen3.5-35B-A3B\u003C\u002Fa> and can compete with denser models such as \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\u002FQwen3.5-27B\" target=\"_blank\" rel=\"noopener\">Qwen3.5-27B\u003C\u002Fa> and \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fgoogle\u002Fgemma-3-27b-it\" target=\"_blank\" rel=\"noopener\">Gemma 3 27B\u003C\u002Fa> in several 
benchmarks. That is a strong claim, but the more interesting part is that Qwen is pairing the model with practical access paths rather than treating it like a lab-only release.\u003C\u002Fp>\u003Cul>\u003Cli>Total parameters: 35B\u003C\u002Fli>\u003Cli>Active parameters per token: 3B\u003C\u002Fli>\u003Cli>Model type: sparse MoE\u003C\u002Fli>\u003Cli>Availability: Qwen Studio, Hugging Face, ModelScope\u003C\u002Fli>\u003Cli>API naming in \u003Ca href=\"https:\u002F\u002Fwww.alibabacloud.com\u002Fproduct\u002Fmodel-studio\" target=\"_blank\" rel=\"noopener\">Alibaba Cloud Model Studio\u003C\u002Fa>: qwen3.6-flash\u003C\u002Fli>\u003Cli>Modes: thinking and non-thinking\u003C\u002Fli>\u003C\u002Ful>\u003Cp>That last point matters for agent work. A model that can switch between thinking and non-thinking modes gives teams a way to separate fast chat from slower, more deliberate planning. In practice, that can reduce latency for ordinary prompts while keeping deeper reasoning available when the task needs it.\u003C\u002Fp>\u003Ch2>Why the benchmark claims are interesting\u003C\u002Fh2>\u003Cp>Qwen says Qwen3.6-35B-A3B outperforms its direct predecessor on agentic coding and reasoning, and that it beats the denser 27B sibling on several code benchmarks despite activating only 3B parameters per token. If those results hold in real developer workflows, the model could matter for teams that want lower inference cost without giving up too much capability.\u003C\u002Fp>\u003Cp>The multimodal numbers are the other headline. Qwen reports that the model matches or exceeds \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fnews\u002Fclaude-sonnet-4-5\" target=\"_blank\" rel=\"noopener\">Claude Sonnet 4.5\u003C\u002Fa> on most visual-language benchmarks, with especially strong spatial results: RefCOCO at 92.0 and ODInW13 at 50.8. 
Those are not casual bragging rights; they suggest the model is useful for UI understanding, image-grounded editing, and agent workflows that need to inspect screenshots or diagrams.\u003C\u002Fp>\u003Cblockquote>“The model can achieve strong agentic coding and reasoning performance with only 3B active parameters.” — Qwen release post on Zhihu\u003C\u002Fblockquote>\u003Cp>That quote is worth reading literally. Qwen is not saying the model is the biggest or the most expensive. It is saying the opposite: a smaller active footprint can still do serious work if the routing, training, and task alignment are good enough.\u003C\u002Fp>\u003Cp>For developers, the benchmark story matters in a practical way. If a model with 3B active parameters can keep up with larger dense competitors on code tasks, then local deployment, faster iteration, and lower serving cost all become more realistic. That is especially relevant for teams building coding agents that call tools repeatedly and spend a lot of time in intermediate reasoning steps.\u003C\u002Fp>\u003Ch2>How to use it in real tools\u003C\u002Fh2>\u003Cp>Qwen3.6-35B-A3B is already wired into several developer workflows. Qwen says it can integrate with \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FQwenLM\u002Fqwen-code\" target=\"_blank\" rel=\"noopener\">Qwen Code\u003C\u002Fa>, \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FAll-Hands-AI\u002FOpenHands\" target=\"_blank\" rel=\"noopener\">OpenClaw\u003C\u002Fa> (formerly Moltbot\u002FClawdbot), and \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fanthropics\u002Fclaude-code\" target=\"_blank\" rel=\"noopener\">Claude Code\u003C\u002Fa> through Anthropic-compatible API support. 
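\u003C\u002Fp>\u003Cp>Because the surface is OpenAI- and Anthropic-compatible, an existing client mostly needs a different base URL and model id. A minimal sketch that only builds the request body, assuming the qwen3.6-flash model id named in the release; the endpoint URL is a placeholder, and the preserve_thinking flag is shown as a hypothetical request field based on Qwen's description of the option:\u003C\u002Fp>

```python
import json

# Sketch of an OpenAI-style chat-completions request body for the model.
# "qwen3.6-flash" is the Model Studio API name from the release; the URL
# below is a PLACEHOLDER assumption, and nothing is sent over the network.
BASE_URL = "https://example.invalid/v1/chat/completions"  # placeholder

payload = {
    "model": "qwen3.6-flash",
    "messages": [
        {"role": "system", "content": "You are a coding agent."},
        {"role": "user", "content": "Refactor utils.py and explain each edit."},
    ],
    # Hypothetical use of the preserve_thinking option the release
    # describes for keeping reasoning traces in multi-turn agent history.
    "preserve_thinking": True,
}

body = json.dumps(payload)
print(body[:60])
```

\u003Cp>Sending it is then whatever HTTP or SDK client the team already points at OpenAI-style endpoints today.\u003C\u002Fp>\u003Cp>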
That is a smart distribution strategy because it meets developers where they already work: terminal tools, agent shells, and existing API clients.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776600425818-eto1.png\" alt=\"Qwen3.6-35B-A3B Opens Up for Agentic Coding\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>The API side is also more flexible than a one-off demo endpoint. Qwen says \u003Ca href=\"https:\u002F\u002Fwww.alibabacloud.com\u002Fproduct\u002Fmodel-studio\" target=\"_blank\" rel=\"noopener\">Alibaba Cloud Model Studio\u003C\u002Fa> supports OpenAI-style chat completions and responses APIs, plus Anthropic-style interfaces. It also adds a preserve_thinking option, which keeps prior reasoning traces in the message history for agent tasks. That is exactly the sort of feature that matters when a model has to remember a plan across multiple tool calls.\u003C\u002Fp>\u003Cul>\u003Cli>OpenAI-compatible chat completions and responses APIs\u003C\u002Fli>\u003Cli>Anthropic-compatible API interface\u003C\u002Fli>\u003Cli>preserve_thinking for multi-turn agent memory\u003C\u002Fli>\u003Cli>Terminal-first workflows through Qwen Code and Claude Code\u003C\u002Fli>\u003Cli>Local weight downloads for offline or self-hosted use\u003C\u002Fli>\u003C\u002Ful>\u003Cp>There is a real developer advantage here. If a team already has scripts for OpenAI-compatible endpoints, the migration path is easier. If they use Claude Code, the Anthropic-compatible layer lowers friction. If they want full control, the open weights on Hugging Face and ModelScope make local testing possible without waiting for a hosted product team to bless the workflow.\u003C\u002Fp>\u003Ch2>What this says about open models now\u003C\u002Fh2>\u003Cp>The most interesting part of this release is not that Qwen launched another large model. 
It is that the company is making a case for sparse models as practical agent engines, especially when the task is coding. That matters because agentic software does not behave like a single prompt-response chat. It needs memory, tool use, retries, and the ability to keep a plan alive across multiple turns.\u003C\u002Fp>\u003Cp>Qwen3.6-35B-A3B also shows how open models are getting more specialized without becoming narrow. The release combines code strength, multimodal reasoning, and flexible deployment options in a single package. For teams choosing between a dense 27B model and a sparse 35B MoE model, the tradeoff is no longer just parameter count. It is whether the model can stay useful when it is embedded in an actual workflow.\u003C\u002Fp>\u003Cp>If Qwen’s benchmark claims survive wider community testing, the next round of adoption will likely come from teams building coding assistants, screenshot-aware agents, and internal automation tools that need decent reasoning without a huge serving bill. The key question now is whether developers will see the same behavior outside the benchmark suite, in messy repos and half-finished tickets where agent tools usually earn their keep.\u003C\u002Fp>\u003Cp>My read: the next release cycle will be judged less by raw scale and more by how well models like Qwen3.6-35B-A3B keep context, call tools, and recover from mistakes. 
If you are building an agent today, this is a model worth benchmarking against your own workloads rather than your favorite leaderboard screenshot.\u003C\u002Fp>","Qwen3.6-35B-A3B packs 35B total params, 3B active, and stronger agentic coding than Qwen3.5-35B-A3B.","zhuanlan.zhihu.com","https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F2028227606244245698",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776600415348-wrfu.png",[13,14,15,16,17],"Qwen3.6-35B-A3B","MoE","agentic coding","open source model","multimodal reasoning","en",0,false,"2026-04-19T12:06:35.156729+00:00","2026-04-19T12:06:35.009+00:00","done","347a9c61-9e33-4c2b-a6ff-433c4eac1e3e","qwen36-35b-a3b-open-source-agentic-coding-en","model-release","e205910b-f3c7-45bc-9f9d-65119cce411a","published",[],{"id":27,"slug":31,"title":32,"language":33},"qwen36-35b-a3b-open-source-agentic-coding-zh","Qwen3.6-35B-A3B 打開 Agentic Co…","zh",[35,41,47],{"id":36,"slug":37,"title":38,"cover_image":39,"image_url":39,"created_at":40,"category":26},"9245be69-14ae-4bf3-91ef-d2366b08e460","claude-design-anthropic-launch","Claude Design Launches: Anthropic's AI Design Tool Enters Beta","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776605423840-xgho.png","2026-04-19T12:48:27.45788+00:00",{"id":42,"slug":43,"title":44,"cover_image":45,"image_url":45,"created_at":46,"category":26},"67c09a9c-ca15-45fd-926b-948e3d91827e","gemini-app-release-notes-latest-updates-en","Geminiの最新アップデート総まとめ","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776514190078-juel.png","2026-04-18T12:09:32.166891+00:00",{"id":48,"slug":49,"title":50,"cover_image":51,"image_url":51,"created_at":52,"category":26},"928b50fb-6d24-4229-b88a-cb3caa66a6e8","linux-7-0-rust-ai-bug-finding-en","Linux 7.0 lands with Rust and AI-finding 
bugs","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776427617312-y00q.png","2026-04-17T12:06:36.898067+00:00",[54,59,64,69,74,79,84,89,94,99],{"id":55,"slug":56,"title":57,"created_at":58},"d4cffde7-9b50-4cc7-bb68-8bc9e3b15477","nvidia-rubin-ai-supercomputer-en","NVIDIA Unveils Rubin: A Leap in AI Supercomputing","2026-03-25T16:24:35.155565+00:00",{"id":60,"slug":61,"title":62,"created_at":63},"eab919b9-fbac-4048-89fc-afad6749ccef","google-gemini-ai-innovations-2026-en","Google's AI Leap with Gemini Innovations in 2026","2026-03-25T16:27:18.841838+00:00",{"id":65,"slug":66,"title":67,"created_at":68},"5f5cfc67-3384-4816-a8f6-19e44d90113d","gap-google-gemini-ai-checkout-en","Gap Teams Up with Google Gemini for AI-Driven Checkout","2026-03-25T16:27:46.483272+00:00",{"id":70,"slug":71,"title":72,"created_at":73},"f6d04567-47f6-49ec-804c-52e61ab91225","ai-model-release-wave-march-2026-en","Navigating the AI Model Release Wave of March 2026","2026-03-25T16:28:45.409716+00:00",{"id":75,"slug":76,"title":77,"created_at":78},"895c150c-569e-4fdf-939d-dade785c990e","small-language-models-transform-ai-en","Small Language Models: Llama 3.2 and Phi-3 Transform AI","2026-03-25T16:30:26.688313+00:00",{"id":80,"slug":81,"title":82,"created_at":83},"38eb1d26-d961-4fd3-ae12-9c4089680f5f","midjourney-v8-alpha-features-pricing-en","Midjourney V8 Alpha: A Deep Dive into Its Features and Pricing","2026-03-26T01:25:36.387587+00:00",{"id":85,"slug":86,"title":87,"created_at":88},"bf36bb9e-3444-4fb8-ab19-0df6bc9d8271","rag-2026-indispensable-ai-bridge-en","RAG in 2026: The Indispensable AI Bridge","2026-03-26T01:28:34.472046+00:00",{"id":90,"slug":91,"title":92,"created_at":93},"60881d6d-2310-44ef-b1fb-7f98e9dd2f0e","xiaomi-mimo-trio-agents-robots-voice-en","Xiaomi’s MiMo trio targets agents, robots, and 
voice","2026-03-28T03:05:08.899895+00:00",{"id":95,"slug":96,"title":97,"created_at":98},"f063d8d1-41d1-4de4-8ebc-6c40511b9369","xiaomi-mimo-v2-pro-1t-moe-agents-en","Xiaomi MiMo-V2-Pro: 1T MoE Model for Agents","2026-03-28T03:06:19.238032+00:00",{"id":100,"slug":101,"title":102,"created_at":103},"a1379e9a-6785-4ff5-9b0a-8cff55f8264f","cursor-composer-2-started-from-kimi-en","Cursor’s Composer 2 started from Kimi","2026-03-28T03:11:59.132398+00:00"]