Nonlinear Dynamics


It’s been a while since I worked on this topic, so I might have forgotten some details. During sophomore year, I was obsessed with the idea that nonlinear dynamics were the key to developing intelligence, thanks to a book called Sync: The Emerging Science of Spontaneous Order by Steven Strogatz. The easiest way to explain what the subject is about is to give an example of one of its interesting phenomena. In nature, fireflies blink their lights at a steady rate, but when a bunch of fireflies group together, they start to synchronize with their neighbors until the whole swarm flashes in unison. I made a simulation that demonstrates this.
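The simulation itself isn’t reproduced here, but the effect can be sketched with a Kuramoto-style phase model (my choice for illustration; the original simulation may have worked differently): each firefly is an oscillator whose phase gets nudged toward the phases of the others, and an order parameter measures how synchronized the swarm is.

```python
import numpy as np

def kuramoto_step(phases, coupling, dt=0.01, omega=1.0):
    """One Euler step of the Kuramoto model: each oscillator's phase
    is nudged toward the phases of all the others."""
    n = len(phases)
    # pairwise interaction: sin(phase_j - phase_i) pulls oscillator i toward j
    diffs = np.sin(phases[None, :] - phases[:, None])
    dphi = omega + (coupling / n) * diffs.sum(axis=1)
    return (phases + dt * dphi) % (2 * np.pi)

def order_parameter(phases):
    """Magnitude of the mean phase vector: ~0 = incoherent, 1 = fully synced."""
    return abs(np.exp(1j * phases).mean())

rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, 50)   # 50 "fireflies", random initial phases
r0 = order_parameter(phases)
for _ in range(5000):
    phases = kuramoto_step(phases, coupling=2.0)
print(f"coherence before: {r0:.2f}, after: {order_parameter(phases):.2f}")
```

Starting from random phases, the coherence climbs toward 1 as the oscillators lock together, which is exactly the firefly behavior described above.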

In pursuit of the topic, I worked with Professor Mikhail Rabinovich, studying a model he had developed to describe a phenomenon he called the Stable Heteroclinic Channel ^fn1. The model follows a cyclical pattern: the state moves between three metastates, and within each metastate it cycles between three sub-states unique to that metastate. In effect, the model encodes a sequence in its parameters, and the question was whether something similar could explain how a dynamic system like the brain encodes memory.

[Graph] The x axis is not aligned on the bottom, but this graph shows the activation of each of the three states within the three metastates, labeled X1, X2, X3. The top graph shows the activation of the metastates, labeled Y, and the top-left graph depicts a projection of the state of the model into 3D space. Mikhail called these metastates “chunks”. They can even go one level higher, forming what he called a super chunk. [Graph]

Here is the set of equations that form this model:

$$\frac{dX_i(t)}{dt} = X_i(t)\,F\!\left(\sigma_i(S_k) - \sum_{j=1}^{N}\rho_{ij}X_j(t)\right) + X_i(t)\,\nu_i(t), \qquad i = 1,\ldots,N$$
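As a rough illustration of how a trajectory moves through such a channel, here is a minimal sketch of the Lotka–Volterra form above with F taken as the identity and the noise term ν_i(t) dropped. The σ and ρ values are illustrative May–Leonard-style numbers of my own choosing, not the parameterization from the project.

```python
import numpy as np

# Illustrative parameters: each state inhibits its successor more weakly
# than it is inhibited, which produces heteroclinic cycling. These are
# NOT the project's parameterization.
sigma = np.ones(3)
rho = np.array([[1.0, 0.8, 1.3],
                [1.3, 1.0, 0.8],
                [0.8, 1.3, 1.0]])

def dX(X):
    # dX_i/dt = X_i * (sigma_i - sum_j rho_ij X_j), i.e. F = identity, no noise
    return X * (sigma - rho @ X)

X = np.array([0.9, 0.05, 0.05])            # start near the first state
dt, steps = 0.01, 40000
traj = np.empty((steps, 3))
for t in range(steps):
    X = np.maximum(X + dt * dX(X), 1e-12)  # Euler step, clipped to stay positive
    traj[t] = X

# The trajectory lingers near one saddle (one X_i dominant), then
# switches to the next: the cyclical pattern described above.
print(np.argmax(traj[::5000], axis=1))     # which state dominates over time
```

Plotting `traj` shows the characteristic staircase: each state takes its turn being active, with the residence times stretching out over time, which matches the slowing-down behavior discussed below.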

Mikhail had parameterized the model and wanted me to study his parameterization. [Graph] However, I did not find much of interest, so I turned to the model itself and analyzed the eigenvalues and eigenvectors of the Jacobian at the stable states. I found that there was an error in the parameterization Mikhail had given me, and with this corrected I was able to determine the parameters for which the model is stable. This parameterization can be found in src/params.py. With it, I was able to demonstrate a 6-chunk model, larger than any in the original paper. [Graph] I also found that the model slowed down quickly for most parameters, often not even reaching the second chunk. In fact, even the original parameters would eventually slow to a stop over time.
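The kind of analysis involved can be sketched as follows (with illustrative parameters, not the corrected set in src/params.py): evaluate the Jacobian of the noise-free model at the fixed point where only one state is active, and read off its eigenvalues. For a heteroclinic channel, each such point should be a saddle with exactly one unstable direction, the exit toward the next state, and the channel is stable when contraction along the incoming directions outweighs the expansion along the exit.

```python
import numpy as np

def jacobian(X, sigma, rho):
    # derivative of f_i = X_i * (sigma_i - sum_l rho_il X_l) w.r.t. X_j:
    # delta_ij * (sigma_i - sum_l rho_il X_l) - X_i * rho_ij
    return np.diag(sigma - rho @ X) - np.diag(X) @ rho

def saddle(k, sigma, rho):
    # fixed point with only state k active: X_k = sigma_k / rho_kk
    X = np.zeros(len(sigma))
    X[k] = sigma[k] / rho[k, k]
    return X

sigma = np.ones(3)
rho = np.array([[1.0, 0.8, 1.3],   # illustrative values, not the
                [1.3, 1.0, 0.8],   # corrected parameterization
                [0.8, 1.3, 1.0]])

for k in range(3):
    eig = np.sort(np.linalg.eigvals(jacobian(saddle(k, sigma, rho), sigma, rho)).real)
    # expect one positive eigenvalue (the exit direction) and the rest negative
    print(k, eig)
```

Sweeping this check over candidate parameters is one way to map out the region where every state is a proper saddle, which is the kind of condition the corrected parameterization had to satisfy.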

However, my investigation stopped there: something happened to Mikhail, and I was not able to reach him for a couple of months. The pause made me reflect on the field as a whole, and I realized that the models being developed were poorly suited to developing intelligence because they were so hard to control.