[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-random-neural-nets-fluctuations-phase-transitions-en":3,"tags-random-neural-nets-fluctuations-phase-transitions-en":30,"related-lang-random-neural-nets-fluctuations-phase-transitions-en":31,"related-posts-random-neural-nets-fluctuations-phase-transitions-en":35,"series-research-ee3a99cb-0f1f-42b8-9bcf-9ac32ecc6770":72},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":29,"x_posted_at":10},"ee3a99cb-0f1f-42b8-9bcf-9ac32ecc6770","Random Neural Nets Show Phase-Shifted Fluctuations","\u003Cp>Most neural network writeups focus on training, accuracy, or scaling laws. This paper looks somewhere more mathematical but still relevant to anyone building or analyzing deep models: what happens to functionals of the output of an infinitely-wide random neural network as depth grows. The answer is that the fluctuations do not all behave the same way. They can fall into one of three limiting regimes, depending on the fixed points of the network’s covariance function.\u003C\u002Fp>\u003Cp>The paper, \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.19738\">Phase Transitions in the Fluctuations of Functionals of Random Neural Networks\u003C\u002Fa>, is about a depth-driven change in behavior that shows up in the statistics of random neural network outputs on the d-dimensional sphere. The practical takeaway is not a new model to deploy, but a clearer map of when deep random networks behave like Gaussian objects, when they converge to the same functional of a limiting Gaussian field, and when they land in a non-Gaussian limit in a Wiener chaos.\u003C\u002Fp>\u003Ch2>What problem this paper is trying to fix\u003C\u002Fh2>\u003Cp>Random neural networks are often used as a theoretical lens for understanding how depth and width shape signal propagation. In the infinitely-wide limit, the output becomes Gaussian, which makes analysis tractable. But once you start looking at functionals of that output across increasing depth, the picture gets more complicated.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776838027807-14qw.png\" alt=\"Random Neural Nets Show Phase-Shifted Fluctuations\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>The paper focuses on sequences of functionals of the Gaussian output of an infinitely-wide random neural network defined on the d-dimensional sphere. The core question is: as depth increases, what kind of limiting distribution do these functionals have? That matters because it tells you whether the network’s deep behavior is stable, Gaussian-like, or governed by a more exotic non-Gaussian regime.\u003C\u002Fp>\u003Cp>Instead of treating depth as just another hyperparameter, the authors frame it as a source of phase transitions. In other words, the same random-network setup can produce qualitatively different asymptotic behavior depending on the structure of the covariance map driving the network.\u003C\u002Fp>\u003Ch2>How the method works in plain English\u003C\u002Fh2>\u003Cp>The technical engine of the paper is the covariance function associated with the random network and the iterative operator built from it. The authors show that the long-depth behavior is controlled by the fixed points of this operator, and by whether those fixed points are stable or unstable.\u003C\u002Fp>\u003Cp>That fixed-point structure is the real organizing principle here. As depth increases, the covariance evolves through repeated application of the operator. The nature of the fixed points determines which regime the fluctuations end up in. The paper identifies three distinct outcomes:\u003C\u002Fp>\u003Cul>\u003Cli>convergence to the same functional of a limiting Gaussian field,\u003C\u002Fli>\u003Cli>convergence to a Gaussian distribution,\u003C\u002Fli>\u003Cli>convergence to a distribution in the Qth Wiener chaos.\u003C\u002Fli>\u003C\u002Ful>\u003Cp>To prove these results, the authors use several classical tools from probability theory and stochastic analysis: Hermite expansions, the Diagram Formula, and Stein-Malliavin techniques. What is new in this setting is not the toolbox itself, but how it is combined with the fixed-point analysis of the covariance iteration. The paper’s main conceptual move is to tie asymptotic distributional behavior to the geometry and stability of the covariance map’s fixed points.\u003C\u002Fp>\u003Cp>For developers, the plain-English translation is: if you model a very wide random network and keep increasing depth, the output statistics do not just “smooth out” in one universal way. The depth dynamics can push you into different probabilistic regimes, and the transition between them is structured rather than accidental.\u003C\u002Fp>\u003Ch2>What the paper actually shows\u003C\u002Fh2>\u003Cp>The abstract states that the authors establish both central and non-central limit theorems for sequences of functionals of the Gaussian output. That means they prove not only Gaussian convergence in some cases, but also non-Gaussian convergence in others.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776838014397-t24n.png\" alt=\"Random Neural Nets Show Phase-Shifted Fluctuations\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>Importantly, the abstract does not provide benchmark numbers, simulation results, or empirical comparisons. So there are no accuracy percentages, runtime figures, or dataset metrics to report here. This is a theory paper, and its contribution is mathematical structure rather than measured performance.\u003C\u002Fp>\u003Cp>The key result is the existence of three limiting regimes, determined by the fixed-point structure of the covariance operator. The paper says the asymptotic behavior as depth increases depends crucially on those fixed points. That is the phase-transition story: change the stability landscape, and you change the limiting law.\u003C\u002Fp>\u003Cp>Another important point is that the limiting non-Gaussian regime is not just “some weird distribution.” The abstract specifies convergence to a distribution in the Qth Wiener chaos. For readers used to Gaussian-process theory, that signals a very specific family of higher-order stochastic limits rather than an arbitrary departure from normality.\u003C\u002Fp>\u003Ch2>Why developers and ML engineers should care\u003C\u002Fh2>\u003Cp>Even though this is not an applied ML paper, it gives useful intuition for anyone working with deep random models, kernel limits, or mean-field-style analyses. If you rely on infinite-width approximations, random initialization theory, or depth-as-dynamics reasoning, the paper says the limiting behavior can depend sharply on the covariance fixed points rather than on depth alone.\u003C\u002Fp>\u003Cp>That matters because it suggests there may be regimes where standard Gaussian intuition is valid, and other regimes where it is not. If you are analyzing stability, feature propagation, or the asymptotic statistics of network outputs, this kind of result tells you what kinds of limit theorems you are actually allowed to use.\u003C\u002Fp>\u003Cp>It also reinforces a broader lesson from modern theory work on neural networks: the relevant question is often not just “how wide is the network?” but “what iterative structure does the architecture induce, and what are its fixed points?” In this paper, that structure is the covariance operator. In other settings, it might be a kernel map, an activation-induced recursion, or another dynamical system in function space.\u003C\u002Fp>\u003Ch2>Limitations and open questions\u003C\u002Fh2>\u003Cp>The biggest limitation is also the paper’s scope: it is about an infinitely-wide random neural network on the d-dimensional sphere. That is a clean theoretical setting, but it is not the same as a finite-width trained model on real data. The abstract does not claim direct empirical validation on practical tasks.\u003C\u002Fp>\u003Cp>Because the source material is only the abstract, we also do not get the exact assumptions on the network architecture, the precise class of functionals studied, or the detailed conditions under which each limiting regime appears. Those details likely matter if you want to translate the theorem into a concrete modeling checklist.\u003C\u002Fp>\u003Cp>Another open question is how robust these phase transitions are outside the specific spherical setting. The abstract does not say whether the same fixed-point-driven picture extends to other domains, other activations, or non-Gaussian weight initializations. That is the sort of thing a practitioner would want to know before treating the result as a design rule.\u003C\u002Fp>\u003Cp>Still, the paper is valuable because it makes the asymptotic story more precise. Instead of assuming deep random networks all wash out to the same Gaussian behavior, it shows that the covariance fixed points can split the world into multiple regimes. For engineers who work with theoretical approximations, that is a useful warning: the limit you get depends on the dynamics you build in.\u003C\u002Fp>\u003Cp>In short, this is a clean example of how a mathematically structured random network can exhibit phase transitions in its fluctuations. The result is not a product feature, but it is the kind of theory that helps explain when deep-network approximations are stable, when they are Gaussian, and when they are not.\u003C\u002Fp>","A theory paper shows depth-driven phase transitions in random neural network functionals, with three distinct limiting regimes.","arxiv.org","https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.19738",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776838027807-14qw.png",[13,14,15,16,17],"random neural networks","central limit theorem","Wiener chaos","covariance fixed points","deep learning theory","en",0,false,"2026-04-22T06:06:36.679543+00:00","2026-04-22T06:06:36.653+00:00","done","55afdd64-b1a6-4d89-a322-5327edbb35bb","random-neural-nets-fluctuations-phase-transitions-en","research","3823f95c-b999-49c7-8ebb-6533799afe82","published","2026-04-22T09:00:14.378+00:00",[],{"id":27,"slug":32,"title":33,"language":34},"random-neural-nets-fluctuations-phase-transitions-zh","隨機神經網路的三態漲落相變","zh",[36,42,48,54,60,66],{"id":37,"slug":38,"title":39,"cover_image":40,"image_url":40,"created_at":41,"category":26},"89d74343-03a7-4325-88e0-14029dab320d","safe-continual-rl-changing-environments-en","Safe Continual RL for Changing Real-World Systems","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776838195882-6v8v.png","2026-04-22T06:09:33.432376+00:00",{"id":43,"slug":44,"title":45,"cover_image":46,"image_url":46,"created_at":47,"category":26},"7fb8a4e6-2e67-41e8-8631-a9b482935aea","edge-of-stability-generalization-en","Why “edge of stability” can help generalization","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776837837398-ubbj.png","2026-04-22T06:03:36.883776+00:00",{"id":49,"slug":50,"title":51,"cover_image":52,"image_url":52,"created_at":53,"category":26},"19f116fd-02dd-4a7d-9638-75a3bb70cae2","bounded-ratio-reinforcement-learning-ppo-en","Bounded Ratio RL Reframes PPO's Clipped Objective","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776751796218-p4in.png","2026-04-21T06:09:40.318224+00:00",{"id":55,"slug":56,"title":57,"cover_image":58,"image_url":58,"created_at":59,"category":26},"c1aac50e-0c41-471c-946e-329652f04565","sessa-attention-inside-state-space-memory-en","Sessa puts attention inside state-space memory","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776751621598-1d0l.png","2026-04-21T06:06:37.564074+00:00",{"id":61,"slug":62,"title":63,"cover_image":64,"image_url":64,"created_at":65,"category":26},"2ff3b7ca-c656-4814-9057-0457055b9263","mathnet-benchmark-math-reasoning-retrieval-en","MathNet Benchmarks Math Reasoning and Retrieval","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776751434139-coi2.png","2026-04-21T06:03:38.96902+00:00",{"id":67,"slug":68,"title":69,"cover_image":70,"image_url":70,"created_at":71,"category":26},"c49960e7-31c4-4734-9bc4-5aa5fdeb5b63","prompt-engineering-becoming-infrastructure-en","Prompt Engineering Is Becoming Infrastructure","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776742221209-noob.png","2026-04-21T00:15:43.249018+00:00",[73,78,83,88,93,98,103,108,113,118],{"id":74,"slug":75,"title":76,"created_at":77},"a2715e72-1fe8-41b3-abb1-d0cf1f710189","ai-predictions-2026-big-changes-en","AI Predictions for 2026: Brace for Big Changes","2026-03-26T01:25:07.788356+00:00",{"id":79,"slug":80,"title":81,"created_at":82},"8404bd7b-4c2f-4109-9ec4-baf29d88af2b","ml-papers-of-the-week-github-research-desk-en","ML Papers of the Week Turns GitHub Into a Research Desk","2026-03-27T01:11:39.480259+00:00",{"id":84,"slug":85,"title":86,"created_at":87},"87897a94-8065-4464-a016-1f23e89e17cc","ai-ml-conferences-to-watch-in-2026-en","AI\u002FML Conferences to Watch in 2026","2026-03-27T01:51:54.184108+00:00",{"id":89,"slug":90,"title":91,"created_at":92},"6f1987cf-25f3-47a4-b3e6-db0997695be8","openclaw-agents-manipulated-self-sabotage-en","OpenClaw Agents Can Be Manipulated Into Failure","2026-03-28T03:03:18.899465+00:00",{"id":94,"slug":95,"title":96,"created_at":97},"a53571ad-735a-4178-9f93-cb09b699d99c","vega-driving-language-instructions-en","Vega: Driving with Natural Language Instructions","2026-03-28T14:54:04.698882+00:00",{"id":99,"slug":100,"title":101,"created_at":102},"a34581d6-f36e-46da-88bb-582fb3e7425c","personalizing-autonomous-driving-styles-en","Drive My Way: Personalizing Autonomous Driving Styles","2026-03-28T14:54:26.148181+00:00",{"id":104,"slug":105,"title":106,"created_at":107},"2bc1ad7f-26ce-4f02-9885-803b35fd229d","training-knowledge-bases-writeback-rag-en","Training Knowledge Bases with WriteBack-RAG","2026-03-28T14:54:45.643433+00:00",{"id":109,"slug":110,"title":111,"created_at":112},"71adc507-3c54-4605-bbe2-c966acd6187e","packforcing-long-video-generation-en","PackForcing: Efficient Long-Video Generation Method","2026-03-28T14:55:02.646943+00:00",{"id":114,"slug":115,"title":116,"created_at":117},"675942ef-b9ec-4c5f-a997-381250b6eacb","pixelsmile-facial-expression-editing-en","PixelSmile Framework Enhances Facial Expression Editing","2026-03-28T14:55:20.633463+00:00",{"id":119,"slug":120,"title":121,"created_at":122},"6954fa2b-8b66-4839-884b-e46f89fa1bc3","adaptive-block-scaled-data-types-en","IF4: Smarter 4-Bit Quantization That Adapts to Your Data","2026-03-31T06:00:36.65963+00:00"]