I thought I knew how neural networks learned. Train them, watch the loss go down, save checkpoints every epoch. Standard workflow. Then I measured training dynamics at 5-step intervals instead of at epoch level, and everything I thought I knew fell apart.
The question that started this journey: does a neural network's capacity expand during training, or is it fixed from initialization? Until 2019, we all assumed the answer was obvious: parameters are fixed, so capacity must be fixed too. But Ansuini et al. found something that shouldn't be possible: the effective representational dimensionality increases during training. Yang et al. confirmed it in 2024.
This changes everything. If the learning space expands while the network learns, how can we mechanistically understand what it is actually doing?
High-Frequency Training Checkpoints
When we train a DNN for 10,000 steps, we typically set up checkpoints every 100 or 200 steps. Measuring at 5-step intervals generates far more data than is convenient to manage, but these high-frequency checkpoints reveal very valuable information about how a DNN learns.
High-frequency checkpoints provide information about:
- Whether early training errors can be recovered from (they usually can't)
- Why some architectures work and others fail
- When interpretability analysis should happen (spoiler: much earlier than we thought)
- How to design better training approaches
During an applied research project, I measured DNN training at high resolution: every 5 steps instead of every 100 or 500. I used a basic MLP architecture with the same dataset I have been using for the last 10 years.
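As a rough illustration of what this looks like in practice, here is a minimal PyTorch-style sketch of a training loop that measures every 5 steps. The model, the synthetic batches, and the choice to log a cheap activation statistic instead of a full checkpoint are placeholder assumptions of mine, not the exact setup used in the project.

```python
import torch
import torch.nn as nn

# Placeholder model and data: a small MLP on synthetic batches stands in
# for the actual architecture and dataset used in the experiments.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

MEASURE_EVERY = 5   # high resolution: every 5 steps instead of every 100-500
history = []        # (step, loss, activation statistic) records for later analysis

for step in range(10_000):
    x = torch.randn(128, 32)                 # synthetic batch
    y = torch.randint(0, 10, (128,))
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if step % MEASURE_EVERY == 0:
        # Log lightweight measurements rather than full checkpoints, so that
        # ~2,000 measurement points stay easy to store and analyze.
        with torch.no_grad():
            acts = model[0](x)               # first-layer activations on this batch
        history.append((step, loss.item(), acts.std().item()))
```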
Figure 1: Training metrics with rolling statistics. Image by author.
The results were surprising. Deep neural networks, even simple architectures, expand their effective parameter space during training. I had assumed this space was predetermined by the architecture itself. Instead, DNNs undergo discrete transitions: small jumps that increase the effective dimensionality of their learning space.

Figure 2: Activation effective dimensionality across training reveals a distinct developmental window. Image by author.
Figure 2 tracks activation effective dimensionality across training. The transitions concentrate in the first 25% of training and are hidden at larger checkpoint intervals (100-1,000 steps); high-frequency checkpointing every 5 steps was needed to detect most of them. The curve also shows an interesting behavior. The initial collapse reflects loss-landscape restructuring, where random initialization gives way to task-aligned structure. Then comes an expansion phase with gradual dimensionality growth. Between steps 2,000 and 3,000 there is a stabilization that reflects the architectural capacity limits of the DNN.
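The article does not spell out which estimator produces these curves; a common choice for activation effective dimensionality is the participation ratio of the activation covariance spectrum, so the sketch below assumes that definition.

```python
import numpy as np

def effective_dimensionality(activations: np.ndarray) -> float:
    """Participation ratio of the activation covariance spectrum.

    activations: (n_samples, n_units) array collected at one checkpoint.
    Returns a value between 1 and n_units; higher means the representation
    is spread over more directions.
    """
    centered = activations - activations.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (len(centered) - 1)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)   # guard tiny negatives
    return float(eigvals.sum() ** 2 / (np.square(eigvals).sum() + 1e-12))

# Sanity checks: isotropic random activations spread over most directions,
# while rank-1 activations collapse toward 1.
rng = np.random.default_rng(0)
print(effective_dimensionality(rng.normal(size=(512, 64))))                           # roughly 55-60
print(effective_dimensionality(np.outer(rng.normal(size=512), rng.normal(size=64))))  # ~1
```

Applied to the activations logged at each 5-step checkpoint, this gives one value per measurement point, producing the kind of trace shown in the figure.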

This changes how we should think about DNN training, interpretability, and architecture design.
Exploration vs. Expansion
Consider the following two scenarios:
| Scenario A: Fixed Capacity (Exploration) | Scenario B: Expanding Capacity (Innovation) |
| --- | --- |
| Your network starts with a fixed representational capacity. Training explores different regions of this predetermined space. It's like navigating a map that exists from the start: early training just means "haven't found the good region yet." | Your network starts with minimal capacity. Training creates representational structures. It's like building roads while traveling: each road enables new destinations. Early training establishes what becomes learnable later. |
Which is it?
The question matters because, if capacity expands, early training isn't recoverable: you can't just "train longer" to fix early mistakes. It also means interpretability has a timeline in which features form in sequence, and understanding that sequence is crucial. Furthermore, architecture design becomes about expansion rate, not just final capacity. Finally, critical periods exist: if we miss the window, we miss the capability.
When We Need High-Frequency Checkpoints
Expansion vs. Exploration

As Figures 2 and 3 show, high-frequency sampling reveals interesting information. We can identify three distinct phases:
| Phase 1: Collapse (steps 0-300). The network restructures away from random initialization. Dimensionality drops sharply as the loss landscape is reshaped around the task. This isn't learning yet; it's preparation for learning. |
| Phase 2: Expansion (steps 300-5,000). Dimensionality climbs steadily. This is capacity expansion: the network is building representational structures. Simple features enable complex features, which in turn enable higher-order features. |
| Phase 3: Stabilization (steps 5,000-8,000). Growth plateaus as architectural constraints bind. The network refines what it has rather than building new capacity. |
This plot shows expansion, not exploration. The network at step 5,000 can represent functions that were impossible at step 300, because the structures they rely on did not yet exist.
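One way to make the phase boundaries and "discrete jumps" concrete is to flag step-to-step changes in the dimensionality trace that stand out against a rolling noise estimate. The sketch below does that; the window and threshold are illustrative choices, not the values used for the figures.

```python
import numpy as np

def detect_jumps(dim_trace: np.ndarray, window: int = 20, z_thresh: float = 4.0):
    """Return indices where the dimensionality trace jumps.

    A step counts as a jump when its change exceeds z_thresh times the
    standard deviation of the previous `window` step-to-step changes.
    """
    diffs = np.diff(dim_trace)
    jumps = []
    for i in range(window, len(diffs)):
        local_std = diffs[i - window:i].std() + 1e-12
        if abs(diffs[i]) > z_thresh * local_std:
            jumps.append(i + 1)   # index into dim_trace
    return jumps

# Toy usage: a noisy ramp with two injected jumps at indices 300 and 700.
rng = np.random.default_rng(1)
trace = np.linspace(5.0, 12.0, 1000) + 0.02 * rng.normal(size=1000)
trace[300:] += 0.8
trace[700:] += 0.6
print(detect_jumps(trace))   # expected output: [300, 700]
```

Counting how many flagged indices fall before a given step is then enough to reproduce summaries like "two thirds of the transitions happen in the first 25% of training."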
Capacity Expands, Parameters Don't

Weight-space dimensionality stays nearly constant (9.72-9.79), with only one detected "jump" across 8,000 steps. Image by author.
Comparing activation and weight spaces under high-frequency sampling shows that the two follow different dynamics. The activation space shows roughly 85 discrete jumps (including Gaussian noise); the weight space shows just one, for the same network and the same training run. This confirms that the network at step 8,000 computes functions that were inaccessible at step 500, despite an identical parameter count. It is the clearest evidence for expansion.
DNNs innovate by generating new options in parameter space during training in order to represent complex tasks.
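The same estimator can be applied to both spaces at every checkpoint: once to a probe batch of activations and once to a layer's weight matrix. The sketch below shows one way to do this; treating the rows of the weight matrix as samples is my assumption about how to group the weights, not necessarily how the figures were produced.

```python
import numpy as np
import torch
import torch.nn as nn

def participation_ratio(matrix: np.ndarray) -> float:
    """(sum of covariance eigenvalues)^2 / (sum of squared eigenvalues)."""
    centered = matrix - matrix.mean(axis=0, keepdims=True)
    eig = np.clip(np.linalg.eigvalsh(centered.T @ centered), 0.0, None)
    return float(eig.sum() ** 2 / (np.square(eig).sum() + 1e-12))

layer = nn.Linear(32, 64)
probe = torch.randn(512, 32)   # fixed probe batch reused at every checkpoint

# Weight space: treat the (out_features, in_features) matrix rows as samples.
weight_dim = participation_ratio(layer.weight.detach().cpu().numpy())

# Activation space: dimensionality of the layer's responses to the probe batch.
with torch.no_grad():
    acts = torch.relu(layer(probe)).cpu().numpy()
activation_dim = participation_ratio(acts)

print(f"weight-space dim: {weight_dim:.2f}, activation-space dim: {activation_dim:.2f}")
```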
Transitions Are Fast and Early
We've seen how high-frequency sampling reveals many more transitions; low-frequency checkpointing would miss nearly all of them. These transitions concentrate early: two thirds of all transitions happen within the first 2,000 steps, just 25% of total training time. This means that if we want to understand which features form and when, we need to look at steps 0-2,000, not at convergence. By step 5,000, the story is over.
Expansion Couples to Optimization
Looking again at Figure 3, we see that as loss decreases, dimensionality expands. The network doesn't simplify as it learns; it becomes more complex. Dimensionality correlates strongly with loss (ρ = -0.951) and moderately with gradient magnitude (ρ = -0.701). This might seem counterintuitive: improved performance correlates with expanded rather than compressed representations. We might expect networks to find simpler, more compressed representations as they learn. Instead, they expand into higher-dimensional spaces.
Why?
A possible explanation is that complex tasks require complex representations. The network doesn't find a simpler explanation; it builds the representational structures needed to separate classes, recognize patterns, and generalize.
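For readers who want to compute the same numbers from their own logs: ρ here is a Spearman rank correlation between the dimensionality trace and the loss or gradient-magnitude trace. The sketch below uses synthetic traces purely to show the computation, not to reproduce the reported values.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-ins for the logged traces (one value per 5-step measurement).
rng = np.random.default_rng(0)
steps = np.arange(0, 8_000, 5)
loss_trace = np.exp(-steps / 2_500) + 0.01 * rng.normal(size=len(steps))
dim_trace = 6.0 + 4.0 * (1 - np.exp(-steps / 2_500)) + 0.05 * rng.normal(size=len(steps))
grad_norm_trace = np.exp(-steps / 4_000) + 0.05 * rng.normal(size=len(steps))

rho_loss, _ = spearmanr(dim_trace, loss_trace)
rho_grad, _ = spearmanr(dim_trace, grad_norm_trace)
print(f"dim vs loss: {rho_loss:.3f}, dim vs gradient norm: {rho_grad:.3f}")
# Both come out strongly negative for traces shaped like these, matching the
# sign (not the exact values) of the correlations reported above.
```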
Practical Deployment
We've seen a different way to understand and debug DNN training, in any domain.
If we know when features form during training, we can analyze them as they crystallize, rather than reverse-engineering a black box afterward.
In real deployment scenarios, we can monitor representational dimensionality in real time, detect when expansion phases occur, and run interpretability analyses at each transition point. This tells us precisely when our network is building new representational structures, and when it has finished. The measurement technique is architecture-agnostic: it works whether you're training CNNs for vision, transformers for language, RL agents for control, or multimodal models for cross-domain tasks.
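Before the concrete examples, here is what "monitoring dimensionality in real time" can look like in code: a small callback that estimates activation dimensionality on a fixed probe batch and flags stalled expansion. The probe batch, measurement interval, and stall criterion are illustrative choices of mine, not part of the published method.

```python
import numpy as np
import torch
import torch.nn as nn

class DimensionalityMonitor:
    """Tracks activation effective dimensionality on a fixed probe batch."""

    def __init__(self, model, probe_batch, every=5, stall_window=200):
        self.model, self.probe, self.every = model, probe_batch, every
        self.stall_window = stall_window
        self.trace = []   # list of (step, effective_dim)

    @staticmethod
    def _effective_dim(acts: np.ndarray) -> float:
        centered = acts - acts.mean(axis=0, keepdims=True)
        eig = np.clip(np.linalg.eigvalsh(centered.T @ centered), 0.0, None)
        return float(eig.sum() ** 2 / (np.square(eig).sum() + 1e-12))

    def step(self, step: int):
        if step % self.every:
            return
        with torch.no_grad():
            acts = self.model(self.probe).cpu().numpy()
        self.trace.append((step, self._effective_dim(acts)))
        # Flag a stalled expansion: no net growth over the last stall_window steps.
        recent = [d for s, d in self.trace if s > step - self.stall_window]
        if len(recent) > 5 and recent[-1] <= recent[0]:
            print(f"[step {step}] expansion stalled: {recent[0]:.2f} -> {recent[-1]:.2f}")

# Usage with a placeholder model and probe batch; call monitor.step(step)
# once per optimization step inside the training loop.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
monitor = DimensionalityMonitor(model, torch.randn(256, 32))
monitor.step(0)
```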
| Example 1: Intervention experiments that map causal dependencies. Disrupt training during specific windows and measure which downstream capabilities are lost. If corrupting data during steps 2,000-5,000 permanently damages texture recognition but the same corruption at step 6,000 has no effect, you have found when texture features crystallize and what they depend on. This works identically for object recognition in vision models, syntactic structure in language models, or state discrimination in RL agents. A sketch of this protocol follows these examples. |
| Example 2: For production deployment, continuous dimensionality monitoring catches representational problems during training, while you can still fix them. If layers stop expanding, you have architectural bottlenecks. If expansion becomes erratic, you have instability. If early layers saturate while late layers fail to expand, you have information-flow problems. Standard loss curves won't show these issues until it's too late; dimensionality monitoring surfaces them immediately. |
| Example 3: The architecture-design implications are equally practical. Measure expansion dynamics across the first 5-10% of training for candidate architectures. Select for clean phase transitions and structured bottom-up expansion. These networks aren't just more performant; they're fundamentally more interpretable, because features form in a clear sequence rather than in tangled simultaneity. |
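As promised in Example 1, here is a hedged sketch of the windowed-intervention protocol: corrupt labels only inside a chosen step window, then compare end-of-training capabilities across windows. The corruption type (random labels) and the window boundaries are arbitrary illustrations.

```python
import torch

def maybe_corrupt(labels: torch.Tensor, step: int, window: tuple,
                  num_classes: int = 10) -> torch.Tensor:
    """Return random labels inside the intervention window, originals otherwise."""
    start, end = window
    if start <= step < end:
        return torch.randint(0, num_classes, labels.shape)
    return labels

# Protocol: run otherwise-identical training jobs, each corrupting a different
# window, then compare which capabilities are permanently degraded at the end.
intervention_windows = [(2_000, 5_000), (6_000, 7_000)]

labels = torch.randint(0, 10, (128,))
corrupted = maybe_corrupt(labels, step=3_000, window=intervention_windows[0])
print((corrupted != labels).float().mean())   # most labels replaced inside the window
```

If the run corrupted during the early window ends up permanently worse on the targeted capability while the later window does not, the damage localizes that feature's formation to the early window, as described above.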
What's Next
So we've established that networks expand their representational space during training, that we can measure these transitions at high resolution, and that this opens new approaches to interpretability and intervention. The natural question: can you apply this to your own work?
I'm releasing the complete measurement infrastructure as open source. It includes validated implementations for MLPs, CNNs, ResNets, Transformers, and Vision Transformers, with hooks for custom architectures.
Everything runs with three lines added to your training loop.
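I have not checked the exact ndtracker API here, so treat the class and method names below as hypothetical placeholders that only illustrate the shape of the integration; the real three lines are in the repository quickstart.

```python
# Hypothetical integration sketch: class and method names are placeholders,
# not the verified ndtracker API; see the repository quickstart for the real calls.
import torch
import torch.nn as nn
from ndtracker import NDTracker                       # (1) import the tracker

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(model.parameters())
tracker = NDTracker(model, every=5)                   # (2) attach it to the model

for step in range(1_000):
    x, y = torch.randn(128, 32), torch.randint(0, 10, (128,))
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    tracker.update(step)                              # (3) record dimensionality
```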

The GitHub repository provides templates for the experiments discussed above: feature-formation mapping, intervention protocols, cross-architecture transfer prediction, and production monitoring setups. The measurement methodology is validated. What matters now is what you discover when you apply it to your domain.
Try it:
pip install ndtracker
Quickstart, instructions, and examples are in the repository: Neural Dimensionality Tracker (NDT)
The code is production-ready. The protocols are documented. The questions are open. I want to see what you find when you measure your own training dynamics at high resolution, whatever the context and the architecture.
You can share your results, open issues with your findings, or just ⭐️ the repo if this changes how you think about training. Remember: the interpretability timeline exists across all neural architectures.
Javier Marín | LinkedIn | Twitter
References & Further Reading
- Achille, A., Rovere, M., & Soatto, S. (2019). Critical learning periods in deep networks. In International Conference on Learning Representations (ICLR). https://openreview.net/forum?id=BkeStsCcKQ
- Frankle, J., Dziugaite, G. K., Roy, D. M., & Carbin, M. (2020). Linear mode connectivity and the lottery ticket hypothesis. In Proceedings of the 37th International Conference on Machine Learning (pp. 3259-3269). PMLR. https://proceedings.mlr.press/v119/frankle20a.html
- Ansuini, A., Laio, A., Macke, J. H., & Zoccolan, D. (2019). Intrinsic dimension of data representations in deep neural networks. In Advances in Neural Information Processing Systems (Vol. 32, pp. 6109-6119). https://proceedings.neurips.cc/paper/2019/hash/cfcce0621b49c983991ead4c3d4d3b6b-Abstract.html
- Yang, J., Zhao, Y., & Zhu, Q. (2024). ε-rank and the staircase phenomenon: New insights into neural network training dynamics. arXiv preprint arXiv:2412.05144. https://arxiv.org/abs/2412.05144
- Olah, C., Mordvintsev, A., & Schubert, L. (2017). Feature visualization. Distill, 2(11), e7. https://doi.org/10.23915/distill.00007
- Elhage, N., Nanda, N., Olsson, C., Henighan, T., Joseph, N., Mann, B., Askell, A., Bai, Y., Chen, A., Conerly, T., DasSarma, N., Drain, D., Ganguli, D., Hatfield-Dodds, Z., Hernandez, D., Jones, A., Kernion, J., Lovitt, L., Ndousse, K., Amodei, D., Brown, T., Clark, J., Kaplan, J., McCandlish, S., & Olah, C. (2021). A mathematical framework for transformer circuits. Transformer Circuits Thread. https://transformer-circuits.pub/2021/framework/index.html
