Cache-Friendly Spiking Neural Networks Via Constrained Synapses

REPO: https://codeberg.org/0x177/LSNN

I have been thinking about ways to simulate spiking neural networks besides the typical implementations (particularly custom hardware), and found it very inconvenient that spiking neural networks usually allow general, $n^2$ connectivity. Obviously, you could just disallow it and the problem would be solved, but I decided to experiment with a spiking neural network implementation that keeps memory as contiguous as possible, just to have a reference implementation in my head.

The current method is to do this:

First, model neurons as points in three-dimensional space, and sort them along the Z-order curve (Morton codes). This keeps physically nearby points close together in memory; it is not a strict guarantee for every pair, but it preserves spatial locality well.
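A minimal sketch of this sorting step, assuming coordinates have already been normalized to $[0, 1]^3$ and are quantized to 10 bits per axis; the function names are illustrative, not taken from the repository:

```rust
/// Insert two zero bits after each of the 10 low bits of `x`
/// (the standard Morton "part" step).
fn part_1by2(x: u32) -> u32 {
    let mut x = x & 0x0000_03ff;
    x = (x ^ (x << 16)) & 0xff00_00ff;
    x = (x ^ (x << 8)) & 0x0300_f00f;
    x = (x ^ (x << 4)) & 0x030c_30c3;
    x = (x ^ (x << 2)) & 0x0924_9249;
    x
}

/// Interleave three 10-bit coordinates into a 30-bit Morton code.
fn morton3(x: u32, y: u32, z: u32) -> u32 {
    part_1by2(x) | (part_1by2(y) << 1) | (part_1by2(z) << 2)
}

/// Quantize positions (assumed normalized to [0, 1]) to a 10-bit grid and
/// sort neurons by Morton code so spatial neighbours end up nearby in memory.
fn sort_by_morton(positions: &mut [[f32; 3]]) {
    positions.sort_by_key(|p| {
        let q = |v: f32| ((v.clamp(0.0, 1.0) * 1023.0) as u32).min(1023);
        morton3(q(p[0]), q(p[1]), q(p[2]))
    });
}
```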

Then, connect each neuron to every other neuron within a given radius, $r$. Sort the list of connections lexicographically: by the index of the presynaptic neuron first, breaking ties by the index of the postsynaptic neuron. For efficiency, create a new array, $b$, of $n$ empty arrays, where $n$ is the number of neurons. Then, for each connection from presynaptic neuron $i$ to postsynaptic neuron $j$, append $j$ to $b_i$. If this is done sequentially, no further sorting is required.
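A brute-force sketch of this connection-building step, under the same assumptions as above (Morton-sorted positions); `build_buckets` and its signature are my own, not the repository's API:

```rust
/// For each presynaptic neuron i, collect the indices of all postsynaptic
/// neurons within `radius`. Because i and j are visited in increasing order,
/// each bucket b[i] comes out already sorted and needs no extra pass.
fn build_buckets(positions: &[[f32; 3]], radius: f32) -> Vec<Vec<usize>> {
    let n = positions.len();
    let r2 = radius * radius;
    let mut b: Vec<Vec<usize>> = vec![Vec::new(); n];
    for i in 0..n {
        for j in 0..n {
            if i == j {
                continue;
            }
            let d2: f32 = (0..3)
                .map(|k| (positions[i][k] - positions[j][k]).powi(2))
                .sum();
            if d2 <= r2 {
                b[i].push(j); // appended in increasing j, hence sorted
            }
        }
    }
    b
}
```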

To measure how contiguous this array of connections is, I will define the anti-contiguity of an array of indices $a$ as the maximum absolute difference between consecutive elements, $\max_i |a_{i+1} - a_i|$ (for example, $[3, 4, 9]$ has anti-contiguity $5$). The following is a plot of the anti-contiguity of each array in $b$, for a random arrangement of 25 neurons in a unit sphere:

[Plot: anti-contiguity of each array in $b$, for 25 neurons placed at random in a unit sphere]
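For reference, a small helper that computes this measure for one index array, directly from the definition above; the name `anti_contiguity` is just illustrative. Mapping it over every bucket in $b$ gives the values plotted above.

```rust
/// Maximum absolute difference between consecutive elements;
/// 0 for arrays with fewer than two elements.
fn anti_contiguity(indices: &[usize]) -> usize {
    indices
        .windows(2)
        .map(|w| w[0].abs_diff(w[1]))
        .max()
        .unwrap_or(0)
}
```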

Then, initialize the synapses from the list of connections, and use $b$ to drive efficient, cache-friendly iteration during STDP and spike propagation.
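To illustrate what that iteration might look like, here is a hedged sketch of spike propagation driven by $b$; the `Network` struct, its fields, and the flat per-bucket weight layout are assumptions for the example, not the actual types in the repository.

```rust
struct Network {
    b: Vec<Vec<usize>>,      // postsynaptic indices per presynaptic neuron
    weights: Vec<Vec<f32>>,  // weights[i][k] pairs with b[i][k]
    input_current: Vec<f32>, // accumulated synaptic input per neuron
}

impl Network {
    /// Deliver spikes from every neuron that fired this step.
    fn propagate(&mut self, fired: &[usize]) {
        for &i in fired {
            // b[i] is sorted and, thanks to the Morton ordering, its indices
            // are mostly close together, so writes into input_current sweep
            // forward through memory instead of jumping around.
            for (k, &j) in self.b[i].iter().enumerate() {
                self.input_current[j] += self.weights[i][k];
            }
        }
    }
}
```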