A Clustering Neuronal Network

For a graduate neuronal networks class I took last semester at NYU, my project was to implement some kind of neuronal network. Given my interest in clustering, I chose to replicate Martí and Rinzel's (2013) feature categorization/clustering network.

Here I show an in-browser demo of their clustering network, as well as a small modification I made to it for anomaly detection. It's kind of fun to play around with. Most of the parameters in Martí and Rinzel's paper can be modified here to see what effects they have.

The network is an example of a continuous attractor network: instead of imagining a discrete set of cells connected to each other, you imagine a continuum of cells. In some sense the math becomes easier, because you can use an integral instead of a sum. In this JavaScript demo, however, I use discrete cells (you can increase the number to however many you like; it just makes the simulation slower).

Cells in this network are connected in a ring, and you can imagine each cell as being sensitive to a particular orientation of a line, just as certain visual neurons respond to particular orientations. What this network tries to cluster is the different orientations that are input to it over time. For example, if the network sees a bunch of nearly flat lines (around $\theta = 0$) over a short period of time, the cells around $\theta = 0$ will begin to maintain a high firing rate. If only a few flat lines are presented, though, the cells will forget them and not form a "cluster".

A ring of cells, taken from Martí and Rinzel (2013)

Nearby cells in the ring are connected to each other: neighboring cells tend to excite each other, slightly more distant cells tend to inhibit each other, and even more distant cells have almost no influence. The general shape of the connectivity kernel is what's called a Mexican hat (here approximated with a difference of von Mises functions, the circular analogue of a difference of Gaussians). The general form of this kernel is

\begin{align} J(\theta) &= J_E(\theta) - J_I(\theta)\\ &= j_E \frac{\exp\left(m_E \cos(2\theta)\right)}{I_0(m_E)} - j_I \frac{\exp\left(m_I \cos(2\theta)\right)}{I_0(m_I)} \end{align}

where "$E$" is for excitation and "$I$" for inhibition, with $m$ controlling the narrowness and $j$ the height of the Mexican hat. "$\theta$" here is the angle difference between two cells. You can mess around with the parameters in the "Kernel" section below to see the effect of different values.
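For concreteness, here's how the kernel might look in JavaScript (a sketch: the function and parameter names are mine, and $I_0$, the modified Bessel function of the first kind, is computed from its standard power series):

```javascript
// Modified Bessel function of the first kind, order 0, via its
// power series: I0(x) = sum_k (x/2)^(2k) / (k!)^2.
function besselI0(x) {
  let sum = 1;
  let term = 1;
  for (let k = 1; k < 30; k++) {
    term *= (x * x) / (4 * k * k); // ratio of consecutive series terms
    sum += term;
  }
  return sum;
}

// Mexican-hat connectivity kernel J(theta): a difference of von Mises
// bumps, where theta is the angle difference between two cells.
function kernel(theta, { jE, mE, jI, mI }) {
  const excite = jE * Math.exp(mE * Math.cos(2 * theta)) / besselI0(mE);
  const inhibit = jI * Math.exp(mI * Math.cos(2 * theta)) / besselI0(mI);
  return excite - inhibit;
}
```

With $m_E > m_I$, the excitatory bump is narrower than the inhibitory one, which produces the hat shape: positive near $\theta = 0$ and negative further out.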

There are two variables that we simulate over time: $s(\theta, t)$, the synaptic activation, and $r(\theta, t)$, the firing rate. For anomaly detection, I add a third variable, $y(\theta, t)$. \begin{align} \tau \frac{\partial}{\partial t}s(\theta, t) &= -s(\theta, t) + r(\theta, t)\\ r(\theta, t) &= \Phi \left[ \frac{1}{\pi} \int_{-\pi/2}^{\pi/2} J(\theta - \theta')s(\theta', t)\, d\theta' + I(\theta, t)\right]\\ y(\theta, t) &= \Phi(I(\theta, t)) \cdot (1 - s(\theta, t)) \end{align}

Here, $\Phi(x)$ is a sigmoid function for converting a current to a firing rate (between 0 and 1). Its equation is \begin{align} \Phi(x) &= \frac{1}{1 + \exp(-\beta[x - x_0])} \end{align} and can be modified in the "Current-to-Rate" section.
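Discretized, the $s$ dynamics reduce to a forward-Euler loop. A minimal sketch (function and parameter names are mine; `J` here is a precomputed N×N matrix of kernel values between cell pairs):

```javascript
// Sigmoid current-to-rate function, as in the text.
function phi(x, beta, x0) {
  return 1 / (1 + Math.exp(-beta * (x - x0)));
}

// One Euler step of  tau ds/dt = -s + r  on a ring of N discrete cells.
// J[i][j] is the kernel value between cells i and j, I[i] the input
// current to cell i.
function eulerStep(s, J, I, { tau, dt, beta, x0 }) {
  const N = s.length;
  const r = s.map((_, i) => {
    // (1/pi) * integral of J(theta_i - theta') s(theta') dtheta'
    // with dtheta' = pi/N reduces to a simple average over cells.
    let current = 0;
    for (let j = 0; j < N; j++) current += J[i][j] * s[j];
    return phi(current / N + I[i], beta, x0);
  });
  return s.map((si, i) => si + (dt / tau) * (r[i] - si));
}
```

Calling `eulerStep` repeatedly, with inputs switched on and off over time, gives the trajectories plotted in the demo.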

The third section, "Excitation", describes how cells respond to stimuli; it basically controls how sensitive cells are to nearby orientations. It has two parameters, $m_s$ and $I_s$, that determine the narrowness and the amount of current a cell receives in response to a nearby orientation. You can modify the inputs to the network by clicking directly on the graph to add more input points. Modifying the "Excitation" section changes how wide and strong each click input is.
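I don't reproduce the paper's exact input normalization here, but a click input plausibly takes the same von Mises shape as the kernel components. This sketch assumes that form, normalized so the peak current is exactly $I_s$ (an assumption of mine, not the paper's definition):

```javascript
// Input current to a cell tuned to `theta` from a stimulus at
// `thetaStim`: an assumed von Mises bump, width set by ms, peak by Is.
function inputCurrent(theta, thetaStim, ms, Is) {
  // cos(...) - 1 is 0 at the stimulus orientation, so the peak is Is.
  return Is * Math.exp(ms * (Math.cos(2 * (theta - thetaStim)) - 1));
}
```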

Other parameters that can be modified include the length and number of time steps, the number of cells simulated in the network, and the time constant $\tau$.

On the x-axis of the graphs below is time. The y-axis ranges from $-\pi/2$ to $\pi/2$, and represents the orientation of different cells in the network.

This network is fairly interesting: given the right settings, it can detect a number of clusters of different sizes. With the "anomaly detector" addition, we use the cluster detection to suppress non-novel stimuli. A couple of potential problems with this network, though, are that clusters can drift over time and will stay activated indefinitely (which may be problematic for anomaly detection).



Speed-reading PDFs with jetzt and pdf2htmlEX

I'm a big fan of the rapid serial visual presentation (RSVP) method of speed-reading. The basic premise is that words are presented one at a time in the same location in rapid succession; the idea is that your reading speed is largely limited by how quickly you can move your eyes. By quickly presenting words in the same place, you don't have to move your eyes, and can read much more quickly.

I've used a bunch of tools for speed reading in the past. One was "dictator", a standalone application. I used others before it, but dictator was the best and really the only one I can remember. I used to use it to read PDFs, but that meant copying an entire page, pasting it into dictator, and repeating for each page. This got somewhat annoying.

Later, I found jetzt, a Chrome plugin that mimics Spritz, a commercial RSVP reader. jetzt is great for reading webpages. The only problem is that it doesn't work on PDFs.

To get around this, we can convert PDFs to HTML files using the fantastic pdf2htmlEX. Installing it on Debian is as easy as sudo apt-get install pdf2htmlex.

Because jetzt is a Chrome plugin, all the magic really happens in a JavaScript file and a CSS file. By embedding these into our converted HTML file, we can add speed-reading capabilities to our PDF!

Here's an example with Alice in Wonderland from Project Gutenberg. Just press "r" on a page to speed read it. It'll look something like the image below. You can speed up or slow down the reading rate using the up and down arrow keys, respectively.

Speed reading Alice in Wonderland with jetzt

Converting PDFs is fairly straightforward with this bash script (shown below), which automatically adds a link to readPDF.js to the converted HTML file. It uses rawgit as a CDN for the following Gist, so you won't even need a local copy of readPDF.js.
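The bash script itself isn't reproduced here, but the injection step is simple enough to sketch in Node (my own illustration, not the Gist's script; pass whatever URL or local path to readPDF.js you actually use):

```javascript
// Inject a script tag for the jetzt loader into a
// pdf2htmlEX-converted page. scriptUrl points at readPDF.js
// (a local copy or a CDN URL -- supply your own).
function injectSpeedReader(html, scriptUrl) {
  const tag = `<script src="${scriptUrl}"></script>`;
  // Put the tag just before </body> if present, otherwise append it.
  return html.includes('</body>')
    ? html.replace('</body>', tag + '\n</body>')
    : html + '\n' + tag;
}
```

Wiring this up with `fs.readFileSync`/`fs.writeFileSync` on the pdf2htmlEX output should give you a page that loads jetzt's reader.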

I currently use this as a method to skim books and scientific articles. Reading without it now seems super tedious. =P

Hey, they got rapid serial visual presentation. And here I am moving my eyes like a sucker.

The Utility of Social Preferences in JavaScript

This post just contains a fun example of utility indifference curves in a dictator scenario. Basically, I'm just following Charness and Rabin (2002). The article is an interesting one, showing that people may be more concerned with social welfare (a weighted sum of everyone's payoff) than difference averse (disliking inequality between people's payoffs).

Take the example of a Player B who, in a dictator game, has to decide how to split money between himself and Player A. One model for the utility of a split is the following (where $\pi_a$ is the amount paid to Player A, and $\pi_b$ the amount paid to Player B):

$U_B(\pi_a, \pi_b) = (\rho r + \sigma s) \pi_a + (1 - \rho r - \sigma s) \pi_b$

where

$r = 1$ if $\pi_b > \pi_a$, and $r = 0$ otherwise.

$s = 1$ if $\pi_b < \pi_a$, and $s = 0$ otherwise.
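In JavaScript, this model is just the two indicator variables plus a weighted sum (the function name is mine):

```javascript
// Player B's utility for a split (piA to Player A, piB to Player B),
// following the Charness-Rabin form above.
function utilityB(piA, piB, rho, sigma) {
  const r = piB > piA ? 1 : 0;   // B is ahead
  const s = piB < piA ? 1 : 0;   // B is behind
  const w = rho * r + sigma * s; // weight on Player A's payoff
  return w * piA + (1 - w) * piB;
}
```

For example, with $\rho = 0.5$ and $\sigma = 0.2$, a $(\pi_a, \pi_b) = (2, 10)$ split gives B utility $0.5 \cdot 2 + 0.5 \cdot 10 = 6$.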

In the graph below, the x-axis represents your payoff, with more payoff toward the right. The y-axis represents the other player's payoff, with higher values toward the top. Greater utility values are darker, while lower ones are lighter. (Yes, I should probably add axis labels and a legend, but it's kind of hard to align them with CSS. =P)

You can tweak the utility equation below, as well as the sliders for the values of $\rho$ and $\sigma$, and the graph will update in real time.


Press the buttons below to generate random examples of competitive, difference averse, or social welfare preferences.

Competitive preferences happen when $\sigma \leq \rho \leq 0$. With competitive preferences, people prefer their own payoff to be high compared to the other player's.

Difference aversion happens when $\sigma < 0 < \rho < 1$. This means that a player both prefers more money and prefers their payoff to be equal to their counterpart's.

Social welfare preferences happen when $1 \geq \rho \geq \sigma > 0$. With social welfare preferences, people prefer money both for themselves and for the other player, but prefer more for themselves when they are behind the other player than when they're ahead.
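The three parameter regions above can be encoded directly (a sketch; the function name is mine, and boundary combinations outside all three regions fall through to 'other'):

```javascript
// Classify (rho, sigma) into the preference types described above.
function preferenceType(rho, sigma) {
  if (sigma <= rho && rho <= 0) return 'competitive';
  if (sigma < 0 && 0 < rho && rho < 1) return 'difference averse';
  if (sigma > 0 && rho >= sigma && rho <= 1) return 'social welfare';
  return 'other';
}
```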