A Neurodynamic Model of Tonality

The code in this repository implements a neurodynamic model of tonal stability based on coupled nonlinear oscillators and the Gradient Frequency Neural Network (GrFNN) framework. The model follows the approach described in A Neurodynamic Account of Musical Tonality (Large et al., 2016), in which tonal structure emerges from resonant interactions among neural oscillators.

Overview

The dynamical simulation models a network of oscillatory units tuned to musical frequencies spanning two octaves. When driven by tonal musical input (e.g., melodies or chord sequences), the network learns connections that support stable patterns of activity. After a tonal stimulus is heard, the network settles into a stable state in which individual oscillator amplitudes are non-zero, and the pattern of relative amplitudes reflects perceived tonal stability.
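The settling behavior described above can be sketched in miniature with a single forced Hopf oscillator. This Python sketch is a simplified stand-in for the full canonical GrFNN oscillator, written in the frame co-rotating with the drive (so the drive appears as a constant); the parameter values are illustrative, not the repository's:

```python
# Minimal sketch: one oscillatory unit with amplitude dynamics
#   dz/dt = z*(alpha + beta*|z|^2) + F
# in the frame co-rotating with a sinusoidal drive of strength F.
# Simplified stand-in for the canonical GrFNN oscillator; alpha,
# beta, and F are illustrative values, not the model's.
def settle(alpha=-0.1, beta=-1.0, F=0.2, z0=0.0, dt=0.01, steps=5000):
    z = complex(z0)
    for _ in range(steps):  # forward-Euler integration
        z += dt * (z * (alpha + beta * abs(z) ** 2) + F)
    return abs(z)

# Driven, the amplitude settles to a stable non-zero steady state;
# undriven (F = 0), it decays back toward zero.
driven = settle()
undriven = settle(F=0.0, z0=0.3)
```

Because alpha < 0 makes the unit damped, a non-zero steady-state amplitude here signals resonance with the input rather than self-sustained oscillation.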

Key principles

  • Mode-locking dynamics: Oscillators interact via resonant frequency ratios (e.g., 2:1, 3:2), producing stable patterns of synchronization.
  • Dynamical stability as tonality: The steady-state amplitudes of oscillators correspond to perceived tonal stability.
  • Plasticity and learning: Network connections adapt through Hebbian-like learning, allowing the system to encode musical structure over time.
  • Cross-cultural generality: The model captures tonal hierarchies across different musical systems by relying on intrinsic dynamical properties and learned connection strengths rather than culture-specific rules.
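The plasticity principle can be illustrated with a Hebbian-like rule of the kind analyzed by Hoppensteadt and Izhikevich (1996b): a complex-valued connection grows when the two units it links are coherently active, and decays otherwise. The parameters in this Python sketch (`lam`, `mu`, `kappa`) are illustrative, not those in setParameters.m:

```python
# Hebbian-like update for a complex connection c between units i and j:
#   dc/dt = c*(lam + mu*|c|^2) + kappa * z_i * conj(z_j)
# lam < 0 makes unused connections decay; the correlation term
# z_i * conj(z_j) strengthens c when both units are active in phase.
def learn(zi, zj, lam=-0.5, mu=-1.0, kappa=1.0, dt=0.01, steps=2000):
    c = 0j
    for _ in range(steps):  # forward-Euler integration
        c += dt * (c * (lam + mu * abs(c) ** 2)
                   + kappa * zi * complex(zj).conjugate())
    return abs(c)

# Co-active units build a stable connection; a silent partner
# leaves the connection at zero.
strong = learn(zi=0.5 + 0j, zj=0.5 + 0j)
weak = learn(zi=0.5 + 0j, zj=0j)
```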

The theoretical foundation and empirical validation are described in the accompanying papers, which demonstrate that these dynamics predict probe-tone stability judgments across Western and Hindustani musical contexts.

Repository Structure

  • setParameters.m
    Defines model parameters, including oscillator frequencies, coupling structure, and learning rates.
  • stimulusMake.m (from GrFNN toolbox)
    Generates stimulus signals used to drive the oscillator network.
  • stimulusTrain.m
    Trains the network on musical input (e.g., chord progressions), allowing synaptic weights to stabilize.
  • stimulusTest.m
    Tests the trained network on new stimuli and computes resulting activation patterns.
  • threeFreqMatsAll.m
    Constructs coupling matrices for higher-order (three-frequency) interactions.

Requirements

  • MATLAB (tested on recent versions)
  • GrFNN Toolbox

Available at: https://musicdynamicslab.uconn.edu/home/multimedia/grfnn-toolbox/

Basic Usage

  1. Add the GrFNN toolbox to your MATLAB path.
  2. Run setParameters.m to initialize model parameters.
  3. Train the network using a cadence in C Major:
    >> networkTrain
  4. Test the model on the three different input sequences (incomplete cadence, C Major melody, Rag Bilaval):
    >> networkTest
  5. Generate movies similar to those in the supplementary materials of Large et al. (2016):
    >> networkAnimate

The output consists of oscillator amplitudes over time. Final steady-state amplitudes are compared with a tonal stability profile.
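Such a comparison can be as simple as correlating the 12 steady-state amplitudes (one per pitch class) with the Krumhansl and Kessler (1982) probe-tone profile. The sketch below assumes this plain Pearson-correlation approach; the demo amplitude pattern is hypothetical, not actual model output:

```python
import numpy as np

# Krumhansl & Kessler (1982) C major probe-tone ratings
# for pitch classes C, C#, D, ..., B.
KK_MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                     2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

def stability_fit(amplitudes):
    """Pearson correlation between 12 steady-state oscillator
    amplitudes (one per pitch class) and the probe-tone profile."""
    return np.corrcoef(amplitudes, KK_MAJOR)[0, 1]

# Hypothetical amplitude pattern peaked on tonic and fifth
# (illustration only, not model output):
demo = np.array([1.0, 0.2, 0.4, 0.2, 0.6, 0.5,
                 0.2, 0.8, 0.2, 0.4, 0.2, 0.3])
```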

Interpretation

The model predicts that tones forming simple integer ratios with the tonic (e.g., octave, fifth) achieve greater dynamical stability. These stability patterns closely match empirical probe-tone ratings in human listeners, supporting a neurodynamic basis for tonal perception.
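Mode-locking tolerates small detuning, which is why these simple ratios remain relevant under equal temperament: the just intervals sit only a few cents from their 12-TET neighbors. A quick check (interval names and ratios are standard music-theoretic values, not taken from the repository):

```python
import math

def cents_off(p, q, semitones):
    """Deviation (in cents) of the just ratio p/q from the
    equal-tempered interval of the given number of semitones."""
    return 1200 * math.log2(p / q) - 100 * semitones

# Octave (2:1) is exact in 12-TET; the fifth (3:2) is off by about
# 2 cents; the major third (5:4) is off by about 14 cents.
octave = cents_off(2, 1, 12)
fifth = cents_off(3, 2, 7)
third = cents_off(5, 4, 4)
```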

Notes

  • The implementation emphasizes clarity and reproducibility rather than computational optimization.
  • The current code uses equal-tempered tuning; alternative tuning systems can be explored by modifying oscillator frequencies.
  • The model can be extended to include richer stimuli, more realistic network architectures, or different coupling types.
  • This implementation is only a proof of concept; many open questions remain:
      • Could this network be trained to function as a key-finding algorithm?
      • What about different modes?
      • What about different rāgs?
  • These questions are not trivial and may require different assumptions about parameter regimes, which can alter the network's behavior qualitatively. See Additional References below to learn about multifrequency oscillator networks and how their behaviors depend on parameters.
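As a starting point for the tuning experiments mentioned in the notes, a two-octave equal-tempered frequency gradient can be generated as follows. The choice of f0 = 261.63 Hz (middle C) is illustrative; the repository's actual frequencies are defined in setParameters.m:

```python
import numpy as np

def gradient_freqs(f0=261.63, steps_per_octave=12, octaves=2):
    """Equal-tempered oscillator frequencies spanning the given
    number of octaves: f_k = f0 * 2**(k / steps_per_octave)."""
    k = np.arange(octaves * steps_per_octave + 1)
    return f0 * 2.0 ** (k / steps_per_octave)

freqs = gradient_freqs()  # 25 frequencies over two octaves
# Alternative tunings: replace the 2**(k/12) rule with, e.g.,
# just-intonation ratios relative to f0.
```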

Citation

If you use this code, please cite:

Large, E. W. (2010). Dynamics of musical tonality. In R. Huys & V. K. Jirsa (Eds.), Nonlinear Dynamics in Human Behavior. Springer.

Large, E. W., Kim, J. C., Flaig, N. K., Bharucha, J. J., & Krumhansl, C. L. (2016). A neurodynamic account of musical tonality. Music Perception, 33(3), 319–331.

Additional References

Cartwright, J. H. E., Gonzalez, D. L., & Piro, O. (1999). Universality in three-frequency resonances. Physical Review E, 59(3), 2902–2906.

Hoppensteadt, F. C., & Izhikevich, E. M. (1996a). Synaptic organizations and dynamical properties of weakly connected neural oscillators I: Analysis of a canonical model. Biological Cybernetics, 75, 117–127.

Hoppensteadt, F. C., & Izhikevich, E. M. (1996b). Synaptic organizations and dynamical properties of weakly connected neural oscillators II. Learning phase information. Biological Cybernetics, 75(2), 129–135.

Kim, J. C. (2017). A dynamical model of pitch memory provides an improved basis for implied harmony estimation. Frontiers in Psychology, 8, 666.

Krumhansl, C. L. (1990). Cognitive Foundations of Musical Pitch. Oxford University Press.

Krumhansl, C. L., & Kessler, E. J. (1982). Tracing the dynamic changes in perceived tonal organization in a spatial representation of musical keys. Psychological Review, 89(4), 334.

Large, E. W. (2011). Musical tonality, neural resonance and Hebbian learning. In C. Agon, E. Amiot, M. Andreatta, G. Assayag, J. Bresson, & J. Mandereau (Eds.), Mathematics and Computation in Music (pp. 115–125). Berlin: Springer-Verlag.

Large, E. W., Almonte, F., & Velasco, M. (2010). A canonical model for gradient frequency neural networks. Physica D: Nonlinear Phenomena, 239(12), 905–911. https://doi.org/10.1016/j.physd.2009.11.015

Kim, J. C., & Large, E. W. (2015). Signal processing in periodically forced gradient frequency neural networks. Frontiers in Computational Neuroscience, 9, 152. https://doi.org/10.3389/fncom.2015.00152

Kim, J. C., & Large, E. W. (2019). Mode locking in periodically forced gradient frequency neural networks. Physical Review E, 99(2), 022421. https://doi.org/10.1103/physreve.99.022421

Kim, J. C., & Large, E. W. (2021). Multifrequency Hebbian plasticity in coupled neural oscillators. Biological Cybernetics, 115(1), 43–57.