[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-claude-opus-4-7-release-workflow-vision-en":3,"tags-claude-opus-4-7-release-workflow-vision-en":29,"related-lang-claude-opus-4-7-release-workflow-vision-en":30,"related-posts-claude-opus-4-7-release-workflow-vision-en":34,"series-model-release-2ab61916-02e3-47f5-8131-9d69cb770f03":71},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":10,"x_posted_at":10},"2ab61916-02e3-47f5-8131-9d69cb770f03","Claude Opus 4.7 Released: Better at Getting Things Done","\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002F\" target=\"_blank\" rel=\"noopener\">Anthropic\u003C\u002Fa> has released \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fnews\u002Fclaude-opus-4-7\" target=\"_blank\" rel=\"noopener\">Claude Opus 4.7\u003C\u002Fa>, and the focus of this upgrade is clear: complex task execution, high-resolution visual understanding, and stability across long-chain workflows. The official positioning is equally direct: the model targets scenarios where work actually has to get finished, not just answered eloquently.\u003C\u002Fp>\u003Cp>The most notable shift in this update is that it puts “can the model do the work” ahead of “can the model chat.” For developers, analysts, legal teams, and researchers, that shift matters more than any single benchmark run, because it directly affects deliverable quality, rework counts, and context-management costs.\u003C\u002Fp>\u003Cp>If you routinely ask a model to edit code, read screenshots, organize materials, or build slide decks, Opus 4.7 is not the kind of point release you can skim past. It reads like an upgrade built for office scenarios and agentic workflows, and the trade-off is concrete: higher-resolution inputs and longer outputs both burn through tokens faster.\u003C\u002Fp>\u003Ch2>This upgrade is not about better chat\u003C\u002Fh2>\u003Cp>Anthropic centers Opus 4.7 on advanced software engineering, long-horizon task execution, and stricter instruction following. Put simply, the model no longer just answers questions; it behaves more like an executor that follows a plan through to completion.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776859438392-675v.png\" alt=\"Claude Opus 4.7 Released: Better at Getting Things Done\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>In the official API 
notes, it is described as one of the strongest generally available models, particularly suited to complex reasoning and agentic coding. The direction is unmistakable: the frontier of model competition has moved from “does the answer sound human” to “did the work get done.”\u003C\u002Fp>\u003Cp>From a product standpoint, this means users should hit fewer cases where the first half of an answer is excellent and the second half drifts off course. For long-document rewriting, cross-file organization, and code review, stability matters more than polished phrasing.\u003C\u002Fp>\u003Cul>\u003Cli>The \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fnews\u002Fclaude-opus-4-7\" target=\"_blank\" rel=\"noopener\">Claude Opus 4.7 release page\u003C\u002Fa> emphasizes complex tasks and long-chain execution\u003C\u002Fli>\u003Cli>On SWE-bench Multilingual, Opus 4.7 scores 80.5%, up from 77.8% for Opus 4.6\u003C\u002Fli>\u003Cli>On GraphWalks BFS 1M, Opus 4.7 improves from 41.2% to 58.6%\u003C\u002Fli>\u003Cli>On Vending-Bench 2, Opus 4.7 ends with a balance of $10,937 versus $8,018 for Opus 4.6\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>Vision gets a serious catch-up\u003C\u002Fh2>\u003Cp>The change everyday users will notice first is that it reads images in finer detail. Anthropic says Opus 4.7 accepts images up to 2576 pixels on the long edge, roughly 3.75 megapixels, well above earlier versions. For dense screenshots, complex charts, flow diagrams, and product mockups, that headroom is genuinely useful.\u003C\u002Fp>\u003Cp>Many earlier models tended to miss small text, buttons, and local structure in high-resolution interfaces. The shift with Opus 4.7 is from “can see” to “can see clearly.” That matters especially for Computer Use, where UI elements often occupy only a tiny fraction of the frame.\u003C\u002Fp>\u003Cp>On ScreenSpot-Pro, Opus 4.7 also stands out. At low resolution without tools it scores 69.0%, versus 57.7% for Opus 4.6. At high resolution, Opus 4.7 reaches 79.5% without tools and climbs to 87.6% with tool calls.\u003C\u002Fp>\u003Cblockquote>“The future is already here — it’s just not very evenly distributed.” — William Gibson\u003C\u002Fblockquote>\u003Cp>The line fits today's model upgrades well. For some people, AI is still a chat tool; for others, it is already taking over screenshot analysis, UI grounding, and document triage. Opus 4.7 pushes that dividing line forward another step.\u003C\u002Fp>\u003Ch2>Against old rivals, the gap turns concrete\u003C\u002Fh2>\u003Cp>Judged only against its own lineage, Opus 4.7 is merely a bit stronger than Opus 4.6. Set it beside peer models and the gap is easier to see. The \u003Ca href=\"https:\u002F\u002Fartificialanalysis.ai\u002Fevals\u002Fgdpval-aa\" target=\"_blank\" rel=\"noopener\">GDPval-AA\u003C\u002Fa> evaluation from Artificial Analysis, built on the \u003Ca href=\"https:\u002F\u002Fopenai.com\u002Findex\u002Fgdpval\u002F\" target=\"_blank\" rel=\"noopener\">OpenAI GDPval\u003C\u002Fa> dataset, covers 44 knowledge-work occupations across 9 industries, with tasks written by practitioners averaging 14 years of experience.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776859438929-639x.png\" 
alt=\"Claude Opus 4.7 Released: Better at Getting Things Done\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>In that evaluation, Opus 4.7 posts an Elo of 1753, against 1619 for Opus 4.6, 1674 for \u003Ca href=\"https:\u002F\u002Fopenai.com\u002Findex\u002Fgpt-5-4\u002F\" target=\"_blank\" rel=\"noopener\">GPT-5.4\u003C\u002Fa>, and 1314 for \u003Ca href=\"https:\u002F\u002Fwww.google.com\u002Fintl\u002Fen\u002Fai\u002Fgemini\u002F\" target=\"_blank\" rel=\"noopener\">Gemini 3.1 Pro\u003C\u002Fa>. The takeaway is blunt: Opus 4.7 has pulled ahead of models that write plausibly but cannot finish the job.\u003C\u002Fp>\u003Cp>In enterprise document reasoning, the gap is starker still. Databricks' \u003Ca href=\"https:\u002F\u002Fwww.databricks.com\u002Fblog\u002Fofficeqa-pro-benchmark\" target=\"_blank\" rel=\"noopener\">OfficeQA Pro\u003C\u002Fa> tests against nearly 100 years of U.S. Treasury bulletins: a corpus of 89,000 PDF pages and 26 million figures. Opus 4.7 scores 80.6% here, against 57.1% for Opus 4.6, 51.1% for GPT-5.4, and 42.9% for Gemini 3.1 Pro.\u003C\u002Fp>\u003Cul>\u003Cli>GDPval-AA: Opus 4.7 1753, GPT-5.4 1674, Gemini 3.1 Pro 1314\u003C\u002Fli>\u003Cli>OfficeQA Pro: Opus 4.7 80.6%, Opus 4.6 57.1%, GPT-5.4 51.1%\u003C\u002Fli>\u003Cli>Structural Biology: Opus 4.7 74.0%, Opus 4.6 30.9%\u003C\u002Fli>\u003Cli>SWE-bench Multimodal: Opus 4.7 34.5%, Opus 4.6 27.1%\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>Cost and safety remain unavoidable topics\u003C\u002Fh2>\u003Cp>Opus 4.7's gains are not free. Anthropic is explicit that higher-resolution images consume more tokens, that the new tokenizer yields more tokens for the same input, and that output grows under high Effort mode. For individual users, that means quotas may run out faster; for teams and API users, it is a real cost variable.\u003C\u002Fp>\u003Cp>Safety is the other point that cannot be ignored. A week before launch, Anthropic published \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fnews\u002Fproject-glasswing\" target=\"_blank\" rel=\"noopener\">Project Glasswing\u003C\u002Fa>, a discussion of the risks and benefits of frontier models in cybersecurity. Opus 4.7 is the first model publicly deployed under that framework, and it ships with guardrails that automatically detect and block high-risk cybersecurity requests.\u003C\u002Fp>\u003Cp>In safety evaluations, its overall profile is close to that of Opus 4.6, with stronger honesty and better resistance to prompt injection, though a few sub-metrics show small fluctuations. Anthropic's stance is clear: this is not a release that erases every risk, but one that pushes capability forward while continuing to tighten the boundaries.\u003C\u002Fp>\u003Cp>For people who actually pay, these details matter more than the marketing copy. What you are buying is not simply “smarter,” but “gets more work done, and consumes more resources doing it.”\u003C\u002Fp>\u003Ch2>Conclusion: whose workflow changes first?\u003C\u002Fh2>\u003Cp>The first people to feel the difference in Opus 4.7 
will most likely not be casual chat users, but those who handle code, spreadsheets, screenshots, documents, and long task flows every day. Its value lies not in more stylish answers, but in less drift, less rework, and less human supervision.\u003C\u002Fp>\u003Cp>I would frame this update as a fine-tuning of how people work: if your process already relies on the model for first drafts, proofreading, and material synthesis, Opus 4.7 makes that pipeline more trustworthy; if you only ask an occasional question, the difference may not feel dramatic.\u003C\u002Fp>\u003Cp>What is worth watching next is not whether it keeps posting higher scores, but whether enterprises actually hand it more of their intermediate steps. In other words, the question is no longer “can the model write,” but “can it finish the writing inside your pipeline with fewer mistakes.”\u003C\u002Fp>","Anthropic releases Claude Opus 4.7: stronger on long tasks, visual understanding, and coding workflows, but heavier on token consumption.","zhuanlan.zhihu.com","https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F2028396026466247335",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776859438392-675v.png",[13,14,15,16,17],"Claude Opus 4.7","Anthropic","computer use","vision","agentic coding","en",0,false,"2026-04-22T12:03:39.271461+00:00","2026-04-22T12:03:39.08+00:00","done","74314d17-f561-4c1f-a48b-36c51c645aca","claude-opus-4-7-release-workflow-vision-en","model-release","97f9c411-2849-4a70-9d1e-3bba0dde23bf","published",[],{"id":27,"slug":31,"title":32,"language":33},"claude-opus-4-7-release-workflow-vision-zh","Claude Opus 4.7 上線：更會做事了","zh",[35,41,47,53,59,65],{"id":36,"slug":37,"title":38,"cover_image":39,"image_url":39,"created_at":40,"category":26},"26e34d8d-efd4-4253-a791-cca1b1803567","qwen36-35b-a3b-open-source-coding-model-en","Qwen3.6-35B-A3B opens a new open-source coding lane","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776643430778-2if7.png","2026-04-20T00:03:37.697765+00:00",{"id":42,"slug":43,"title":44,"cover_image":45,"image_url":45,"created_at":46,"category":26},"9245be69-14ae-4bf3-91ef-d2366b08e460","claude-design-anthropic-launch","Claude Design Launches: Anthropic's AI Design Tool Enters 
Beta","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776605423840-xgho.png","2026-04-19T12:48:27.45788+00:00",{"id":48,"slug":49,"title":50,"cover_image":51,"image_url":51,"created_at":52,"category":26},"67c09a9c-ca15-45fd-926b-948e3d91827e","gemini-app-release-notes-latest-updates-en","Geminiの最新アップデート総まとめ","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776514190078-juel.png","2026-04-18T12:09:32.166891+00:00",{"id":54,"slug":55,"title":56,"cover_image":57,"image_url":57,"created_at":58,"category":26},"928b50fb-6d24-4229-b88a-cb3caa66a6e8","linux-7-0-rust-ai-bug-finding-en","Linux 7.0 lands with Rust and AI-finding bugs","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776427617312-y00q.png","2026-04-17T12:06:36.898067+00:00",{"id":60,"slug":61,"title":62,"cover_image":63,"image_url":63,"created_at":64,"category":26},"c1fac97f-de34-4254-b62e-eddcab4b6ef3","openai-limits-gpt-54-cyber-trusted-firms-en","OpenAI Limits GPT-5.4-Cyber to Trusted Firms","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776297833412-wlma.png","2026-04-16T00:03:29.403078+00:00",{"id":66,"slug":67,"title":68,"cover_image":69,"image_url":69,"created_at":70,"category":26},"bd3ea20a-829f-4c46-90f3-dc75d961ca01","openai-launches-gpt-54-cyber-defense-work-en","OpenAI launches GPT-5.4-Cyber for defense work","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776254645112-5qsk.png","2026-04-15T12:03:43.901089+00:00",[72,77,82,87,92,97,102,107,112,117],{"id":73,"slug":74,"title":75,"created_at":76},"d4cffde7-9b50-4cc7-bb68-8bc9e3b15477","nvidia-rubin-ai-supercomputer-en","NVIDIA Unveils Rubin: A Leap in AI 
Supercomputing","2026-03-25T16:24:35.155565+00:00",{"id":78,"slug":79,"title":80,"created_at":81},"eab919b9-fbac-4048-89fc-afad6749ccef","google-gemini-ai-innovations-2026-en","Google's AI Leap with Gemini Innovations in 2026","2026-03-25T16:27:18.841838+00:00",{"id":83,"slug":84,"title":85,"created_at":86},"5f5cfc67-3384-4816-a8f6-19e44d90113d","gap-google-gemini-ai-checkout-en","Gap Teams Up with Google Gemini for AI-Driven Checkout","2026-03-25T16:27:46.483272+00:00",{"id":88,"slug":89,"title":90,"created_at":91},"f6d04567-47f6-49ec-804c-52e61ab91225","ai-model-release-wave-march-2026-en","Navigating the AI Model Release Wave of March 2026","2026-03-25T16:28:45.409716+00:00",{"id":93,"slug":94,"title":95,"created_at":96},"895c150c-569e-4fdf-939d-dade785c990e","small-language-models-transform-ai-en","Small Language Models: Llama 3.2 and Phi-3 Transform AI","2026-03-25T16:30:26.688313+00:00",{"id":98,"slug":99,"title":100,"created_at":101},"38eb1d26-d961-4fd3-ae12-9c4089680f5f","midjourney-v8-alpha-features-pricing-en","Midjourney V8 Alpha: A Deep Dive into Its Features and Pricing","2026-03-26T01:25:36.387587+00:00",{"id":103,"slug":104,"title":105,"created_at":106},"bf36bb9e-3444-4fb8-ab19-0df6bc9d8271","rag-2026-indispensable-ai-bridge-en","RAG in 2026: The Indispensable AI Bridge","2026-03-26T01:28:34.472046+00:00",{"id":108,"slug":109,"title":110,"created_at":111},"60881d6d-2310-44ef-b1fb-7f98e9dd2f0e","xiaomi-mimo-trio-agents-robots-voice-en","Xiaomi’s MiMo trio targets agents, robots, and voice","2026-03-28T03:05:08.899895+00:00",{"id":113,"slug":114,"title":115,"created_at":116},"f063d8d1-41d1-4de4-8ebc-6c40511b9369","xiaomi-mimo-v2-pro-1t-moe-agents-en","Xiaomi MiMo-V2-Pro: 1T MoE Model for Agents","2026-03-28T03:06:19.238032+00:00",{"id":118,"slug":119,"title":120,"created_at":121},"a1379e9a-6785-4ff5-9b0a-8cff55f8264f","cursor-composer-2-started-from-kimi-en","Cursor’s Composer 2 started from Kimi","2026-03-28T03:11:59.132398+00:00"]