Cache-Friendly Spiking Neural Networks Via Constrained Synapses

REPO: https://codeberg.org/0x177/LSNN

Typical sparse SNNs allow general connectivity between neurons, i.e. a synapse may exist between any presynaptic neuron and any postsynaptic neuron. In the brain, however, the majority of synapses are local: the postsynaptic neuron lies within a short distance of the presynaptic neuron.

Note: long-range synapses (white matter) are still very important, and an extension of this implementation would be required to cover them. However, that is out of scope for this repo.

To exploit this, I propose the following:

First, model neurons as points in three-dimensional space and sort them along the Z-order curve (Morton codes). This tends to keep physically nearby points close together in memory.
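Below is a minimal Rust sketch of this sorting step. It assumes neuron positions lie in the unit cube and quantizes each axis to 10 bits; the function names are illustrative, not taken from the repo.

```rust
/// Spread the low 10 bits of `x` so two zero bits separate each original bit.
fn expand_bits(x: u32) -> u32 {
    let mut x = x & 0x3ff; // keep 10 bits per axis (30-bit code total)
    x = (x | (x << 16)) & 0x0300_00ff;
    x = (x | (x << 8)) & 0x0300_f00f;
    x = (x | (x << 4)) & 0x030c_30c3;
    x = (x | (x << 2)) & 0x0924_9249;
    x
}

/// 30-bit Morton code for a point assumed to lie in [0, 1)^3.
fn morton3(p: [f32; 3]) -> u32 {
    let quantize = |v: f32| (v.clamp(0.0, 0.999_999) * 1024.0) as u32;
    (expand_bits(quantize(p[0])) << 2)
        | (expand_bits(quantize(p[1])) << 1)
        | expand_bits(quantize(p[2]))
}

/// Sort neuron positions in place along the Z-order curve.
fn sort_by_morton(positions: &mut [[f32; 3]]) {
    positions.sort_by_key(|&p| morton3(p));
}
```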

Then, connect each neuron to every other neuron within a given radius $r$. Sort the list of connections lexicographically: first by the index of the presynaptic neuron, then, to break ties, by the index of the postsynaptic neuron. In practice this sort comes for free: create an array $b$ of $n$ empty arrays, where $n$ is the number of neurons, and for each connection from presynaptic neuron $i$ to postsynaptic neuron $j$, append $j$ to $b_i$. If connections are generated by scanning $i$ and $j$ in increasing order, no further sorting is required.
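A minimal sketch of this connection-building step, assuming positions have already been sorted by Morton code. The brute-force $O(n^2)$ neighbour search is for illustration only (the repo may prune candidates differently), and the names are hypothetical.

```rust
/// For each presynaptic neuron i, collect the indices of all neurons within
/// radius `r` (excluding i itself) into b[i]. Because i and j are both scanned
/// in increasing order, each b[i] comes out sorted without an extra pass.
fn build_connections(positions: &[[f32; 3]], r: f32) -> Vec<Vec<usize>> {
    let r2 = r * r;
    let mut b = vec![Vec::new(); positions.len()];
    for (i, p) in positions.iter().enumerate() {
        for (j, q) in positions.iter().enumerate() {
            if i == j {
                continue;
            }
            // Squared Euclidean distance, compared against r^2 to avoid a sqrt.
            let d2: f32 = (0..3).map(|k| (p[k] - q[k]) * (p[k] - q[k])).sum();
            if d2 <= r2 {
                b[i].push(j); // j ascends, so b[i] stays sorted
            }
        }
    }
    b
}
```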

To measure how contiguous these connection lists are, we define the anti-contiguity of an array of indices as the maximum absolute difference between adjacent elements of the array. The following is a plot of the anti-contiguity of each array in $b$, for a random arrangement of 25 neurons in a unit sphere:

[Plot: anti-contiguity of each $b_i$ for 25 neurons placed randomly in a unit sphere]
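For reference, a small helper matching the definition above (hypothetical name, not part of the repo):

```rust
/// Anti-contiguity of an index list: the maximum gap between adjacent entries,
/// or 0 for lists with fewer than two entries.
fn anti_contiguity(indices: &[usize]) -> usize {
    indices
        .windows(2)
        .map(|w| w[1].abs_diff(w[0]))
        .max()
        .unwrap_or(0)
}
```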

Then, we initialize the synapses from the list of connections and use $b$ to keep iteration efficient and cache-friendly during STDP and spike propagation.
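Below is a minimal sketch of how $b$ might drive spike propagation, assuming simple current-based synapses with one weight per connection; the struct and field names are illustrative and not the repo's API.

```rust
struct Network {
    /// post[i]: postsynaptic indices of neuron i, sorted ascending (the array b).
    post: Vec<Vec<usize>>,
    /// weights[i][k]: weight of the synapse from i to post[i][k].
    weights: Vec<Vec<f32>>,
    /// Input current accumulated for each neuron this step.
    input: Vec<f32>,
}

impl Network {
    /// Deliver the spikes fired this step by adding each synapse's weight to
    /// the postsynaptic neuron's input current.
    fn propagate(&mut self, spiked: &[usize]) {
        for &i in spiked {
            for (k, &j) in self.post[i].iter().enumerate() {
                self.input[j] += self.weights[i][k];
            }
        }
    }
}
```

Because each $b_i$ holds only spatially nearby neurons, and the Morton sort keeps those neurons close together in index space, the writes into `input` touch a small range of memory per presynaptic spike.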

Hopefully, I will soon write a PDF that goes into more detail about this approach.