<h1>Claude Opus 4.7 Launches: Better at Getting Things Done</h1>
<p><em>Summary: Anthropic releases Claude Opus 4.7, strengthening long-horizon tasks, visual understanding, and coding workflows, at the price of higher token consumption. Source: <a href="https://zhuanlan.zhihu.com/p/2028396026466247335" target="_blank" rel="noopener">zhuanlan.zhihu.com</a>, published 2026-04-22.</em></p>
<p><a href="https://www.anthropic.com/" target="_blank" rel="noopener">Anthropic</a> has released <a href="https://www.anthropic.com/news/claude-opus-4-7" target="_blank" rel="noopener">Claude Opus 4.7</a>. This release isn't about smoother chat; it raises the bar on long-horizon tasks, visual understanding, and workflow stability. The official framing is blunt: this version is about getting things finished.</p>
<p>Frankly, an upgrade like this matters most to developers. When you use it to modify code, read screenshots, or assemble reports, a small gain in accuracy can save a whole round of rework. For enterprises, the difference isn't prestige; it's work hours.</p>
<p>The cost is just as real, though. Higher-resolution input and longer outputs both push token consumption up. If you're an API user, the bill will remember this for you.</p>
<h2>The point this time isn't better chat</h2>
<p>Anthropic centers Opus 4.7 on advanced software engineering, long-running tasks, and strict instruction following. That means the model isn't just supposed to hand back one polished answer; it has to carry every step through to the end.</p>
<figure class="my-6"><img src="https://xxdpdyhzhpamafnrdkyq.supabase.co/storage/v1/object/public/covers/inline-1776859436510-gf5y.png" alt="Claude Opus 4.7 Launches: Better at Getting Things Done" class="rounded-xl w-full" loading="lazy" /></figure>
<p>The direction makes sense. Most people no longer lack a model that can talk; they lack one that can finish. For document triage, cross-file edits, and research summaries, a model that wanders off halfway through is genuinely painful.</p>
<p>If you've used earlier versions of Claude, you know the pattern: steady in the first half, drifting in the second. Opus 4.7 aims squarely at that drop-off in long-chain tasks.</p>
<ul><li>The <a href="https://www.anthropic.com/news/claude-opus-4-7" target="_blank" rel="noopener">official release page</a> leads with long tasks and agentic workflows</li><li>SWE-bench Multilingual: 80.5%</li><li>GraphWalks BFS 1M: 58.6%</li><li>Vending-Bench 2: final balance of $10,937</li></ul>
<h2>Vision got a serious upgrade</h2>
<p>The other practical improvement is finer-grained vision. Anthropic says Opus 4.7 accepts image input up to 2576 pixels on the long side, roughly 3.75 megapixels. For dense screenshots, charts, flow diagrams, and UI mockups, that matters.</p>
<p>Many models used to miss small text, buttons, and local structure the moment they hit a high-resolution UI. This release is less about "can see it" and more about "can see it clearly", which makes a real difference in Computer Use scenarios.</p>
<p>If you work in product design, QA, or front-end debugging, this capability is genuinely useful, because the job isn't just reading the image; it's pulling actionable operating cues out of it.</p>
<blockquote>“The future is already here — it’s just not very evenly distributed.” — William Gibson</blockquote>
<p>The line fits <a href="/news/ai-papers-of-the-week-ml-paper-roundup-zh">AI</a> well. Some people are still chatting with models; others are already using them to read screenshots, locate fields, and organize tables. Opus 4.7 nudges that line forward again.</p>
<h2>Against the competition, the gap is getting concrete</h2>
<p>Viewed in isolation, Opus 4.7 is simply a notch above Opus 4.6. Put it in a same-tier comparison and the gap becomes clear. <a href="https://artificialanalysis.ai/evals/gdpval-aa" target="_blank" rel="noopener">Artificial Analysis's GDPval-AA</a> evaluation spans 44 knowledge-work occupations across 9 industries, with tasks contributed by practitioners averaging 14 years of experience.</p>
<figure class="my-6"><img src="https://xxdpdyhzhpamafnrdkyq.supabase.co/storage/v1/object/public/covers/inline-1776859440187-hq79.png" alt="Claude Opus 4.7 Launches: Better at Getting Things Done" class="rounded-xl w-full" loading="lazy" /></figure>
<p>In that evaluation, Opus 4.7 scores an Elo of 1753. Opus 4.6 sits at 1619, <a href="https://openai.com/index/gpt-5-4/" target="_blank" rel="noopener">GPT-5.4</a> at 1674, and <a href="https://www.google.com/intl/en/ai/gemini/" target="_blank" rel="noopener">Gemini 3.1 Pro</a> at 1314. The numbers are blunt: Opus 4.7 has pulled away from models that only write pretty sentences.</p>
<p>The gap on enterprise document reasoning is even wider. <a href="https://www.databricks.com/blog/officeqa-pro-benchmark" target="_blank" rel="noopener">Databricks OfficeQA Pro</a> tests against nearly 100 years of US Treasury bulletins: 89,000 pages of PDF and 26 million numbers. Questions like that demand patience and careful context management.</p>
<ul><li>GDPval-AA: Opus 4.7 1753, GPT-5.4 1674, Gemini 3.1 Pro 1314</li><li>OfficeQA Pro: Opus 4.7 80.6%, Opus 4.6 57.1%</li><li>Structural Biology: Opus 4.7 74.0%, Opus 4.6 30.9%</li><li>SWE-bench Multimodal: Opus 4.7 34.5%, Opus 4.6 27.1%</li></ul>
<h2>Cost and safety still can't be skipped</h2>
<p>Opus 4.7 is not a free upgrade. Anthropic says plainly that high-resolution images consume more tokens, and that the new tokenizer may turn the same input into more tokens. In high-effort mode, outputs run longer as well.</p>
<p>For individual users that's a quota problem. For teams and API users it's a cost problem. If you run a workflow a few hundred times a day, a small per-call token difference is very noticeable by month's end.</p>
<p>The <a href="/news/safe-continual-rl-changing-environments-zh">safety</a> side can't be ignored either. Before the release, Anthropic discussed <a href="https://www.anthropic.com/news/project-glasswing" target="_blank" rel="noopener">Project Glasswing</a>, on the cybersecurity risks and benefits of frontier models. Opus 4.7 also ships with guardrails that automatically detect and block high-risk cybersecurity requests.</p>
<p>So this release isn't only pushing capability; it's also tightening the <a href="/news/edge-of-stability-generalization-zh">boundaries</a>. Honestly, that beats empty slogans.</p>
<h2>Who will feel this first?</h2>
<p>The first to notice likely won't be people who only chat, but people who handle code, spreadsheets, screenshots, and documents all day, because the value here isn't literary flair; it's fewer derailments, less rework, and less human supervision.</p>
<p>If your process already has the model draft first and a human review after, Opus 4.7 will feel like an assistant that can reliably take over the middle of the job. It won't dazzle every time, but it may screw up less often.</p>
<p>What's actually worth watching next is enterprise adoption. The question is no longer whether the model can write, but whether it can sit inside your workflow and obediently finish the writing.</p>
<p>If you're a developer, test two things right now: whether long context stays stable, and how high the token cost runs. Those two answers will decide whether it goes into your production pipeline.</p>
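The cost discussion above notes that high-resolution images eat more tokens, and that a screenshot-heavy workflow will feel this on the bill. A minimal budgeting sketch, assuming Anthropic's published rule of thumb for earlier Claude models (tokens ≈ width × height / 750) still roughly holds at Opus 4.7's new 2576 px long-side ceiling; the helper names and the per-million-token price are hypothetical placeholders, not Opus 4.7's actual pricing:

```python
def estimate_image_tokens(width_px: int, height_px: int) -> int:
    # Anthropic's published approximation for earlier Claude models:
    # tokens ~= (width * height) / 750. Whether Opus 4.7 keeps this exact
    # formula at its 2576 px limit is an assumption, not confirmed here.
    return (width_px * height_px) // 750


def monthly_image_token_spend(images_per_day: int, width_px: int, height_px: int,
                              usd_per_million_input_tokens: float,
                              days: int = 30) -> float:
    # Hypothetical budgeting helper: input-token spend in USD over `days`
    # days for a workflow that sends fixed-size screenshots each day.
    tokens = estimate_image_tokens(width_px, height_px) * images_per_day * days
    return tokens / 1_000_000 * usd_per_million_input_tokens


if __name__ == "__main__":
    # A 2576x1456 capture (~3.75 MP, the article's stated ceiling) comes to
    # roughly 5,000 tokens under this approximation, about 2.7x a smaller
    # 1568x886 capture, which is why the article warns API users to watch bills.
    print(estimate_image_tokens(2576, 1456))               # 5000
    print(estimate_image_tokens(1568, 886))                # 1852
    print(monthly_image_token_spend(100, 2576, 1456, 3.0)) # 45.0 at a placeholder $3/Mtok
```

The point of the sketch is the scaling, not the exact dollars: per-image token cost grows linearly with pixel count, so moving from a ~1.4 MP to a ~3.75 MP input nearly triples the image portion of the bill before any text tokens are counted.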