Neural networks are patterned after our brain. So, understanding neural networks should be just as easy as understanding ourselves!
And just as hard.
There is a feedback loop in the work done in the field of artificial intelligence. The inspiration for Artificial Neural Networks (ANN) is the mammalian brain. And through our work with ANNs, we have learned more about how our brain works.
The building blocks of the brain comprise axons, dendrites, and synapses (see figure 1), arranged in a rhizomatic formation (complex interconnections, as opposed to a linear ordering). We can gain a lot of insight into the workings of ANNs by considering a few details of how our brain works. Or at least, how we think it works!
Let us trace the process your body is going through as you read this on your screen. Light emitted from the display (i.e. input) enters your eye, where it stimulates receptors (dendrites) of neural cells on the retina. This stimulation changes the ratio of sodium and potassium ions in the cellular solution of the neural cell, which discharges like a capacitor. Each cell has some positive threshold electric potential; when this is exceeded, the cell transmits an electric pulse to the brain (this is akin to an activation threshold in ANNs).
Inside the brain, each neuron's axon branches into tiny terminals, also called synaptic terminals. The brain has more than 10 billion neurons with around 60 trillion synaptic terminals (Shepherd, 1990). (As an aside, these enormous numbers more than make up for the slower information relay rate compared to silicon: on the order of 10⁻³ s vs. 10⁻⁹ s per event. As a result, the brain is more energy-efficient than your desktop, using about 10⁻¹⁶ joules per operation per second vs. 10⁻⁶ joules for a microprocessor (Faggin, 1991).) The transmitted impulse triggers the release of neurotransmitters across the junction between neuron terminals (the synapse). These chemicals travel across the fluid in the brain, triggering an electrical impulse at an adjacent neuron, and a chain reaction starts.
Your life experiences have conditioned these neurotransmissions such that specific areas of the brain are activated in a particular pattern every time you see a combination of letters. That is why you can read and understand.
The previous paragraphs have implications for ANN architecture and operation. They present ideas like weighting input vectors, activation thresholds, parallelism and high interconnectivity, and learning through feedback. These are ideas that we will delve into further in future posts.
There are two important takeaways here. One is that at a basic level, the brain has a relatively straightforward input-output process. Neurons themselves act as simple switches that turn on or off depending on an activation threshold level and the strength of the incoming electrochemical impulse. It is truly remarkable that when neurons are put together in a network (i.e. the neocortex) and given time and training, these uncomplicated biological cells evolve into the incomprehensible genius of Mozart, Da Vinci, and Einstein.
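The neuron-as-switch idea above can be sketched in a few lines of code. This is a minimal illustration, not a model of real biology: the weights and threshold are made-up values standing in for the strength of incoming impulses and the cell's firing threshold.

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs exceeds the
    threshold, else stay off (return 0), like the electric-potential
    threshold described above."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Example: two inputs with equal weights and a threshold of 1.5
# makes this simple switch behave like a logical AND gate.
print(neuron([1, 1], [1.0, 1.0], 1.5))  # 1 (fires)
print(neuron([1, 0], [1.0, 1.0], 1.5))  # 0 (stays off)
```

Each unit really is this uncomplicated; the interesting behavior only emerges when many of them are wired together and the weights are tuned by training.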
The other takeaway is the awesome power of a fault-tolerant parallel processing network. We can write algorithms that emulate biological neurons, and use these to design networks patterned after the neural physiology found in the brain. Given the superior algorithmic processing speed, accuracy, and memory of modern-day computers, it makes sense that ANNs can do a better job of learning and executing certain specialized tasks.