SNNs more closely resemble the function of biological neurons and are a natural fit for temporally changing inputs. I decided to teach myself Rust at the same time I learned about these, so I built one from scratch, trying to mimic the results of this paper (or rather a follow-up paper in which they change the inhibition pattern, leading to behavior similar to a self-organizing map; I can’t find the link to said paper right now…).
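To give a flavor of what "resembling biological neurons" means in code: the usual starting point is a leaky integrate-and-fire (LIF) neuron, where membrane potential decays toward rest and a spike fires on crossing a threshold. This is my own minimal sketch, not the model from the paper (which uses more detailed conductance-based synapses); all constants here are illustrative toy values.

```rust
// Minimal leaky integrate-and-fire neuron — a toy sketch with made-up
// constants, not the paper's exact model.
struct LifNeuron {
    v: f64,        // membrane potential (mV)
    v_rest: f64,   // resting potential
    v_thresh: f64, // spike threshold
    v_reset: f64,  // post-spike reset potential
    tau: f64,      // membrane time constant (ms)
}

impl LifNeuron {
    fn new() -> Self {
        LifNeuron { v: -65.0, v_rest: -65.0, v_thresh: -52.0, v_reset: -65.0, tau: 100.0 }
    }

    // Advance one timestep of `dt` ms with input current `i`; return true on a spike.
    fn step(&mut self, i: f64, dt: f64) -> bool {
        // Euler integration of dv/dt = (v_rest - v + i) / tau
        self.v += dt * (self.v_rest - self.v + i) / self.tau;
        if self.v >= self.v_thresh {
            self.v = self.v_reset;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut n = LifNeuron::new();
    let mut spikes = 0;
    // Drive the neuron with a constant current for 500 simulated ms.
    for _ in 0..1000 {
        if n.step(20.0, 0.5) {
            spikes += 1;
        }
    }
    println!("spikes: {}", spikes);
}
```

The temporal aspect is the whole point: the neuron's state depends on the timing of its inputs, which is why SNNs handle time-varying signals so naturally.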
After building that net I had some ideas about how to improve symbol recognition. This led me down a massive rabbit hole about how vision is processed in the brain, which eventually spiraled out to the function and structure of the hippocampus and back to the neocortex, where I’m now focusing on mimicking the behavior and structure of cortical minicolumns.
The main benefit of SNNs over ANNs is also a detriment: the neurons are meant to run in parallel. This means it’s blazing fast if you have neuromorphic hardware, but it’s incredibly slow and computationally intense if you try to simulate it on a typical machine with von Neumann architecture.
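To make the cost concrete, here is a toy sketch (my own illustration, not the net from above) of what a CPU has to do each timestep: visit every neuron and sum input over every synapse, one after another. Neuromorphic hardware dedicates a physical unit per neuron, so these updates happen simultaneously; a von Neumann machine serializes all of them.

```rust
// Run `steps` timesteps of a dense n-neuron toy network and count how many
// synapse evaluations the CPU performs — all serialized. Constants are toy values.
fn simulate(n: usize, steps: usize) -> u64 {
    let w = vec![vec![0.05f64; n]; n]; // dense all-to-all weights
    let mut v = vec![0.0f64; n];       // membrane potentials
    let mut spiked = vec![false; n];
    let (dt, tau, thresh) = (1.0f64, 20.0f64, 1.0f64);
    let mut synapse_ops: u64 = 0;

    for _ in 0..steps {
        let prev = spiked.clone();
        for i in 0..n {
            // Sum input from every presynaptic neuron that spiked last step:
            // O(n) work per neuron, O(n^2) per timestep.
            let input: f64 = (0..n).filter(|&j| prev[j]).map(|j| w[j][i]).sum();
            synapse_ops += n as u64;
            v[i] += dt * (-v[i] / tau + 0.08) + input; // leak + constant drive
            spiked[i] = v[i] >= thresh;
            if spiked[i] {
                v[i] = 0.0;
            }
        }
    }
    synapse_ops
}

fn main() {
    // 200 neurons * 200 synapses * 100 steps = 4,000,000 serialized synapse visits.
    println!("synapse evaluations: {}", simulate(200, 100));
}
```

And that is a tiny network at a coarse timestep; biological-scale neuron counts and sub-millisecond resolution blow this up fast.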
Can you elaborate on that first paragraph? I’m interested.