[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-parallel-sft-code-rl-cross-language-transfer-zh":3,"tags-parallel-sft-code-rl-cross-language-transfer-zh":31,"related-lang-parallel-sft-code-rl-cross-language-transfer-zh":32,"related-posts-parallel-sft-code-rl-cross-language-transfer-zh":36,"series-research-b418bc8d-86c6-44d6-93f0-e26473db9649":73},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":19,"translated_content":10,"views":20,"is_premium":21,"created_at":22,"updated_at":22,"cover_image":11,"published_at":23,"rewrite_status":24,"rewrite_error":10,"rewritten_from_id":25,"slug":26,"category":27,"related_article_id":28,"status":29,"google_indexed_at":30,"x_posted_at":10},"b418bc8d-86c6-44d6-93f0-e26473db9649","Parallel-SFT 讓 code RL 更會跨語言","\u003Cp>很多 code model 在 Python、C++ 看起來很強，一換到低資源程式語言就掉速。\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.20835\">Parallel-SFT: Improving Zero-Shot Cross-Programming-Language Transfer for Code RL\u003C\u002Fa> 這篇論文想處理的，就是這個落差。作者的判斷不是「程式能力只屬於某一種語言」，而是現有訓練流程沒有把這些能力好好推向可轉移的表示。\u003C\u002Fp>\u003Cp>它的核心想法很直接：如果模型在 RL 之前，就先看過多種語言寫出的等價程式，或許能先學到比較語言無關的內部表徵。這樣一來，後面的 reinforcement learning 不會只把能力鎖在來源語言，而是更有機會往其他語言擴散。\u003C\u002Fp>\u003Ch2>這篇論文要解的痛點\u003C\u002Fh2>\u003Cp>這篇研究聚焦在 zero-shot cross-programming-language transfer for code RL。白話一點，就是先在某個來源語言上做 code generation 的強化學習，再看模型能不能把 RL 帶來的好處，直接轉到其他目標語言，而且不用再針對目標語言額外做 RL。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776924588963-c6d5.png\" alt=\"Parallel-SFT 讓 code RL 更會跨語言\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>這件事很重要，因為現實世界的程式語言分布很不平均。常見語言像 Python、C++ 資料多、模型也比較熟；但很多低資源語言資料少，效果通常就差一截。論文把這個問題視為資料與訓練設定的組合問題：模型不是不會寫程式，而是它看到的訓練訊號太偏向少數語言。\u003C\u002Fp>\u003Cp>作者也指出一個關鍵現象：在 Llama-3.1 上，針對來源程式語言做 
RL，並不會自動讓其他目標語言一起變好，甚至可能讓表現退步。也就是說，RL 的收益未必會自然跨語言傳遞，這正是 Parallel-SFT 想修補的缺口。\u003C\u002Fp>\u003Ch2>Parallel-SFT 到底怎麼做\u003C\u002Fh2>\u003Cp>這個方法不是直接改 RL 本身，而是先改 RL 前面的 supervised fine-tuning。作者的假設是：如果 SFT 階段就讓模型建立比較能跨語言泛化的初始化，後面的 RL 才比較容易把能力帶到別的語言。\u003C\u002Fp>\u003Cp>Parallel-SFT 的做法是把「parallel programs」混進 SFT 資料裡。這些程式在功能上等價，但分別用多種程式語言實作。模型不再只看單一語言版本，而是同時看見同一個任務在不同語法外觀下的對應關係。\u003C\u002Fp>\u003Cp>這個設計很像在幫模型建立「語意對齊」：不要先把每個語言當成獨立技能，而是先讓模型意識到，底層做的事情其實相同，只是表達方式不同。論文的主張也不是模型因此變成某種通用編譯器式表示，而是這種 SFT 初始化，能讓後續 RL 的轉移性更好。\u003C\u002Fp>\u003Cp>所以，Parallel-SFT 不是一個新的 RL 演算法。它比較像是前置訓練策略，目標是把模型帶到一個比較適合跨語言轉移的起點，再交給 RL 去放大效果。\u003C\u002Fp>\u003Ch2>論文實際證明了什麼\u003C\u002Fh2>\u003Cp>摘要裡最明確的結果，是方向性的：作者在 Parallel-SFT 模型上做 RL 後，觀察到對未見過的程式語言有更好的泛化，優於基準設定。不過摘要沒有公開完整 benchmark 細節，所以這裡沒有數字可直接對照。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776924604704-bx3r.png\" alt=\"Parallel-SFT 讓 code RL 更會跨語言\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>論文還做了內部表示分析。作者指出，Parallel-SFT 會讓 latent space 更偏向功能導向，也就是說，不同語言但功能相同的程式，會在表示空間裡更靠近。作者認為，這種更緊密的聚類，可能就是它能提升 RL 後跨語言轉移的原因之一。\u003C\u002Fp>\u003Cp>這點很值得注意。因為如果改善只是來自某種表面上的微調技巧，那影響可能很脆弱；但如果模型真的開始用「這段程式在做什麼」來組織表示，而不是只看語法長相，那跨語言任務就比較有機會受益。對 code RL 來說，這是很合理的方向。\u003C\u002Fp>\u003Cp>不過，摘要能支持的證據也就到這裡。它告訴我們方法有效，但沒有交代完整評估任務、提升幅度、涵蓋哪些語言、或不同模型尺寸與 RL 目標是否都一致受益。\u003C\u002Fp>\u003Ch2>對開發者有什麼意義\u003C\u002Fh2>\u003Cp>如果你在做 code model、coding assistant，或是需要多語言支援的 agent，這篇論文的訊息很直接：不要只盯著 RL 配方，SFT 的初始化可能同樣關鍵。很多人會把重點放在 reward、rollout、policy update，但這篇工作提醒你，模型一開始學到的表示方式，會影響後面 RL 
的收益能不能跨語言延伸。\u003C\u002Fp>\u003Cp>對資料資源有限的團隊來說，這也提供一個可操作的思路：如果目標語言資料少，也許可以先用多語言等價程式把共享語意教進去，再進行 RL。這不代表問題就消失，但至少能把來源語言的監督訊號，轉成比較可轉移的形式。\u003C\u002Fp>\u003Cul>\u003Cli>用對齊的多語言實作，教模型共享語意，而不只是記住語法。\u003C\u002Fli>\u003Cli>不要假設某個語言上的 RL 成果，會自然複製到其他語言。\u003C\u002Fli>\u003Cli>把 SFT 視為表示塑形，不只是 instruction following。\u003C\u002Fli>\u003Cli>若要支援長尾程式語言，前置資料設計可能比後段優化更重要。\u003C\u002Fli>\u003C\u002Ful>\u003Cp>從工程角度看，這篇論文也在提醒一件事：如果你的系統要跨格式、跨方言、跨語言泛化，先讓模型看見平行樣本，可能比直接把優化火力開大更穩。這種做法不一定最炫，但常常更實用。\u003C\u002Fp>\u003Ch2>限制與還沒回答的問題\u003C\u002Fh2>\u003Cp>這篇摘要的限制也很明顯。它沒有說明用了多少種程式語言，也沒有列出來源語言和目標語言是哪些。各語言家族之間是否都能同樣受益，摘要裡也看不出來。\u003C\u002Fp>\u003Cp>另一個現實限制是，Parallel programs 本身不容易取得。對低資源語言來說，功能等價、又彼此對齊的程式資料可能比一般訓練資料更稀缺。換句話說，這個方法雖然概念清楚，但資料建置本身可能就是門檻。\u003C\u002Fp>\u003Cp>此外，作者的表示分析很有說服力，但還不能算鐵證。功能相近的程式在 latent space 更靠近，確實和更好的轉移能力一致，但這不代表已經完全證明因果關係。作者目前的說法比較像是提出一個合理機制，後續還需要更多驗證。\u003C\u002Fp>\u003Cp>即使如此，這篇工作仍然有一個很實際的提醒：code RL 的成敗，不只看 reward 怎麼設，也不只看 rollout 多漂亮。若你想讓模型真的跨語言，可能得先在 RL 之前，把它的語意表示往「功能」而不是「語法」的方向推。\u003C\u002Fp>","Parallel-SFT 用多語言等價程式做 SFT，想讓後續 code RL 的零樣本跨語言轉移更穩，特別是低資源程式語言。","arxiv.org","https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.20835",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776924588963-c6d5.png",[13,14,15,16,17,18],"Parallel-SFT","code RL","zero-shot transfer","cross-programming-language transfer","SFT","latent space","zh",0,false,"2026-04-23T06:09:32.299476+00:00","2026-04-23T06:09:32.275+00:00","done","dd4b6a20-4847-4923-83be-3330fd2ba51c","parallel-sft-code-rl-cross-language-transfer-zh","research","0e7d8f32-289f-4117-861c-6feb9bd2eb29","published","2026-04-23T09:00:09.13+00:00",[],{"id":28,"slug":33,"title":34,"language":35},"parallel-sft-code-rl-cross-language-transfer-en","Parallel-SFT aims to make code RL transfer 
better","en",[37,43,49,55,61,67],{"id":38,"slug":39,"title":40,"cover_image":41,"image_url":41,"created_at":42,"category":27},"7ec4baa4-f0af-441e-a97d-56f81a2ca854","avise-ai-security-evaluation-framework-zh","AVISE 模組化測 AI 安全漏洞","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776924771424-kztu.png","2026-04-23T06:12:30.770582+00:00",{"id":44,"slug":45,"title":46,"cover_image":47,"image_url":47,"created_at":48,"category":27},"0274c95d-bf59-405b-a4fd-425f4bb39368","speechparaling-bench-paralinguistic-speech-generation-zh","SpeechParaling-Bench盯住語氣細節","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776924234553-lme6.png","2026-04-23T06:03:38.74229+00:00",{"id":50,"slug":51,"title":52,"cover_image":53,"image_url":53,"created_at":54,"category":27},"947e3be0-2b4b-4719-90d1-ddd1ac80f18a","safe-continual-rl-changing-environments-zh","安全持續學習還沒解題","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776838196623-anqk.png","2026-04-22T06:09:32.609993+00:00",{"id":56,"slug":57,"title":58,"cover_image":59,"image_url":59,"created_at":60,"category":27},"3823f95c-b999-49c7-8ebb-6533799afe82","random-neural-nets-fluctuations-phase-transitions-zh","隨機神經網路的三態漲落相變","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776838016911-ba0a.png","2026-04-22T06:06:36.386094+00:00",{"id":62,"slug":63,"title":64,"cover_image":65,"image_url":65,"created_at":66,"category":27},"1b8be06a-85ea-4cd1-a3c7-ffccdc3eefd5","edge-of-stability-generalization-zh","邊界不穩定為何反而更會泛化","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776837839747-ism8.png","2026-04-22T06:03:36.116147+00:00",{"id":68,"slug":69,"title":70,"cover_image":71,"image_url":71,"created_a
t":72,"category":27},"7a04d752-3f1a-4df7-b7c5-8bcb1e69c565","bounded-ratio-reinforcement-learning-ppo-zh","BRRL 重新定義 PPO 剪裁目標","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776751794578-t5j7.png","2026-04-21T06:09:39.661696+00:00",[74,79,84,89,94,99,104,109,114,119],{"id":75,"slug":76,"title":77,"created_at":78},"f18dbadb-8c59-4723-84a4-6ad22746c77a","deepmind-bets-on-continuous-learning-ai-2026-zh","DeepMind 押注 2026 連續學習 AI","2026-03-26T08:16:02.367355+00:00",{"id":80,"slug":81,"title":82,"created_at":83},"f4a106cb-02a6-4508-8f39-9720a0a93cee","ml-papers-of-the-week-github-research-desk-zh","每週 ML 論文清單，為何紅到 GitHub","2026-03-27T01:11:39.284175+00:00",{"id":85,"slug":86,"title":87,"created_at":88},"c4f807ca-4e5f-47f1-a48c-961cf3fc44dc","ai-ml-conferences-to-watch-in-2026-zh","2026 AI 研討會投稿時程整理","2026-03-27T01:51:53.874432+00:00",{"id":90,"slug":91,"title":92,"created_at":93},"9f50561b-aebd-46ba-94a8-363198aa7091","openclaw-agents-manipulated-self-sabotage-zh","OpenClaw Agent 會自己搞砸自己","2026-03-28T03:03:18.786425+00:00",{"id":95,"slug":96,"title":97,"created_at":98},"11f22e92-7066-4978-a544-31f5f2156ec6","vega-learning-to-drive-with-natural-language-instructions-zh","Vega：使用自然語言指示進行自駕車控制","2026-03-28T14:54:04.847912+00:00",{"id":100,"slug":101,"title":102,"created_at":103},"a4c7cfec-8d0e-4fec-93cf-1b9699a530b8","drive-my-way-en-zh","Drive My Way：個性化自駕車風格的實現","2026-03-28T14:54:26.207495+00:00",{"id":105,"slug":106,"title":107,"created_at":108},"dec02f89-fd39-41ba-8e4d-11ede93a536d","training-knowledge-bases-with-writeback-rag-zh","用 WriteBack-RAG 
強化知識庫提升檢索效能","2026-03-28T14:54:45.775606+00:00",{"id":110,"slug":111,"title":112,"created_at":113},"3886be5c-a137-40cc-b9e2-0bf18430c002","packforcing-efficient-long-video-generation-method-zh","PackForcing：短影片訓練也能生成長影片","2026-03-28T14:55:02.688141+00:00",{"id":115,"slug":116,"title":117,"created_at":118},"72b90667-d930-4cc9-8ced-aaa0f8968d44","pixelsmile-toward-fine-grained-facial-expression-editing-zh","PixelSmile：提升精細臉部表情編輯的新方法","2026-03-28T14:55:20.678181+00:00",{"id":120,"slug":121,"title":122,"created_at":123},"cf046742-efb2-4753-aef9-caed5da5e32e","adaptive-block-scaled-data-types-zh","IF4：神經網路量化的聰明選擇","2026-03-31T06:00:36.990273+00:00"]