Summary: Neural silencing periods are not a disadvantage representing biological limitations, but rather an advantage for temporal sequence identification.
Source: Bar-Ilan University
The brain is composed of many billions of neurons which communicate with each other. Each neuron collects its many inputs and transmits a spike to the neurons it connects to. The dynamics of such large and highly interconnected neural networks is the basis of all higher-order brain functions.
In an article published today in the journal Scientific Reports, a group of scientists has experimentally demonstrated that there are frequent periods of silence in which a neuron fails to respond to its inputs. Unlike electronic devices, which are fast and reliable, the brain is composed of unreliable neurons.
“A logic-gate always gives the same output to the same input, otherwise electronic devices like cellphones and computers, which are composed of many billions of interconnected logic-gates, wouldn’t function well,” said Prof. Ido Kanter, of Bar-Ilan University’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the study.
“Compare the unreliability of the brain to a computer or cellphone: one time your computer answers 1+1=2 and at other times 1+1=5, or dialing 7 on your cellphone many times can result in 4 or 9. Silencing periods would appear to be a major disadvantage of the brain, but our latest findings have shown otherwise.”
Contrary to what one might think, Kanter and team have demonstrated that neuronal silencing periods are not a disadvantage representing biological limitations, but rather an advantage for temporal sequence identification.
“Assume you would like to remember a phone number, 0765…,” said Yuval Meir, a co-author of the study.
“Neurons which were active when the digit 0 was presented might be silenced when the next digit 7 is presented, for example. Consequently, each digit is trained on a different dynamically created sub-network, and this silencing mechanism enables our brain to identify sequences efficiently.”
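The idea of dynamically created sub-networks can be illustrated with a toy model. The sketch below is an assumption-laden illustration, not the model from the paper: each neuron is given a random response strength to each digit, and any neuron that spikes is silenced for the next presentation, so consecutive digits are handled by different groups of neurons.

```python
import random

class SilencingLayer:
    """Toy layer of spiking neurons with post-spike silencing.

    Illustrative sketch only: the per-digit response strengths are random
    (a hypothetical tuning), and a neuron that spikes is silenced for the
    next `silence_steps` presentations.
    """

    def __init__(self, n_neurons, n_symbols=10, silence_steps=1,
                 threshold=0.5, seed=0):
        rng = random.Random(seed)
        # hypothetical tuning: how strongly each neuron responds to each digit
        self.response = [[rng.random() for _ in range(n_symbols)]
                         for _ in range(n_neurons)]
        self.silence_steps = silence_steps
        self.threshold = threshold
        self.silenced_until = [0] * n_neurons  # timestep each neuron wakes up
        self.t = 0

    def step(self, symbol):
        """Present one digit; return the set of neurons that spike.
        Spiking neurons are silenced for the following presentations."""
        active = set()
        for i, row in enumerate(self.response):
            if self.t < self.silenced_until[i]:
                continue  # still in its silencing period
            if row[symbol] > self.threshold:
                active.add(i)
                self.silenced_until[i] = self.t + 1 + self.silence_steps
        self.t += 1
        return active

# present the digits of "0765" one after another
layer = SilencingLayer(n_neurons=30)
for digit in (0, 7, 6, 5):
    print(digit, sorted(layer.step(digit)))
```

With `silence_steps=1`, any neuron active for one digit is guaranteed to be silent for the next, so each successive digit is processed by a disjoint sub-network, which is the property the researchers exploit for sequence identification.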
The brain silencing mechanism is a proposed source for a new AI mechanism, and in addition has been demonstrated as the origin for a new type of cryptosystem for handwriting recognition at automated teller machines (ATMs).
This cryptosystem allows users to write their personal identification number (PIN) on an electronic board rather than typing it into the ATM.
The sequence identification developed by Kanter and team, based on neuronal silencing periods, is capable of identifying not only the correct PIN but also the user’s personal handwriting style and the timing with which each digit of the PIN is written on the board. These added features act as safeguards against stolen cards, even if a thief knows the user’s PIN.
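The verification idea can be sketched as a simple two-factor check. Everything below is a hypothetical stand-in for the paper's scheme: the function name, the use of per-digit timestamps, and the tolerance value are all assumptions chosen for illustration.

```python
def verify_entry(entered_digits, entered_times,
                 enrolled_digits, enrolled_times, tol=0.25):
    """Hypothetical two-factor check: the PIN itself plus the moment each
    digit is written, standing in for the handwriting-style features the
    article describes. `tol` is an assumed timing tolerance in seconds."""
    if entered_digits != enrolled_digits:
        return False  # wrong PIN
    # even with the correct PIN, the writing rhythm must match enrollment
    return all(abs(a - b) <= tol
               for a, b in zip(entered_times, enrolled_times))

# correct PIN written with the enrolled rhythm -> accepted
print(verify_entry([0, 7, 6, 5], [0.0, 0.9, 1.8, 2.6],
                   [0, 7, 6, 5], [0.0, 1.0, 1.8, 2.5]))  # True
# correct PIN but a different rhythm (e.g. a thief) -> rejected
print(verify_entry([0, 7, 6, 5], [0.0, 2.0, 4.0, 6.0],
                   [0, 7, 6, 5], [0.0, 1.0, 1.8, 2.5]))  # False
```

The design point this illustrates is the article's claim: knowing the PIN alone is not enough, because the temporal signature of how it is written acts as an additional credential.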
This latest research by Kanter and team shows that it is not always beneficial to eliminate the unreliability of stuttering neurons in the brain, because these silencing periods have advantages for higher brain functions.
About this neuroscience and AI research news
Author: Elana Oberlander
Source: Bar-Ilan University
Contact: Elana Oberlander – Bar-Ilan University
Image: The image is in the public domain
Original Research: Open access.
“Brain inspired neuronal silencing mechanism to enable reliable sequence identification” by Ido Kanter et al. Scientific Reports
Brain inspired neuronal silencing mechanism to enable reliable sequence identification
Real-time sequence identification is a core use-case of artificial neural networks (ANNs), ranging from recognizing temporal events to identifying verification codes. Existing methods apply recurrent neural networks, which suffer from training difficulties; however, performing this function without feedback loops remains a challenge.
Here, we present an experimental neuronal long-term plasticity mechanism for high-precision feedforward sequence identification networks (ID-nets) without feedback loops, wherein input objects have a given order and timing.
This mechanism temporarily silences neurons following their recent spiking activity.
Therefore, transitory objects act on different dynamically created feedforward sub-networks. ID-nets are demonstrated to reliably identify 10 handwritten digit sequences, and are generalized to deep convolutional ANNs with continuous activation nodes trained on image sequences.
Counterintuitively, their classification performance, even with a limited number of training examples, is high for sequences but low for individual objects.
ID-nets are also implemented for writer-dependent recognition, and suggested as a cryptographic tool for encrypted authentication. The presented mechanism opens new horizons for advanced ANN algorithms.