Early Era of Artificial Neural Networks

Vtantravahi
Image Source: Author Edit

Birds inspired us to fly, burdock plants inspired the creation of Velcro, and countless other inventions were inspired by nature. It seems only logical, then, to look to the brain's architecture for inspiration when building intelligent systems. That is the idea that led to artificial neural networks (ANNs). However, just as planes were inspired by birds yet don't flap their wings, ANNs have gradually evolved to become quite different from their biological cousins.

ANNs are at the heart of Deep Learning. They have been around us doing amazing things: they are versatile, scalable, and well suited to very large datasets and complex Machine Learning tasks, such as clustering photos by identity (Google Photos), powering web search and voice assistants (Google, Siri, Alexa, Cortana, etc.), recommending videos based on your interests (Netflix, YouTube), and, more recently, teaching robots to replicate human actions (from hotel service robots to Boston Dynamics).

AI Powered Apps

From Biological Neurons to Artificial Neurons

Astonishingly, ANNs have been around for quite a while. They were first introduced back in 1943 by the neurophysiologist Warren McCulloch and the mathematician Walter Pitts in their paper “A Logical Calculus of the Ideas Immanent in Nervous Activity”.

Image Source: www.semanticscholar.org

In that paper, they presented a simplified computational model of how biological neurons might work together in animal brains to perform complex computations using propositional logic. This was the first neural network architecture, and many others have since evolved from this research.
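To make this concrete, here is a minimal sketch (the function name, weights, and thresholds are my own illustration, not notation from the paper) of such a threshold unit reproducing simple logic gates in Python:

# A McCulloch-Pitts style unit: binary inputs, fixed weights, and a hard threshold.
def mcculloch_pitts(inputs, weights, threshold):
    # Fire (return 1) only if the weighted sum of the binary inputs reaches the threshold.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND gate: both inputs must be active to reach a threshold of 2.
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # 1
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))  # 0

# OR gate: a single active input is enough to reach a threshold of 1.
print(mcculloch_pitts([0, 1], [1, 1], threshold=1))  # 1

With different weights and thresholds, the same unit can express other propositional-logic operations, which is exactly the kind of computation McCulloch and Pitts described.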

The early successes of ANNs up to the 1960s led to a widespread belief that we would soon have truly intelligent machines. When it became clear that this promise would go unfulfilled, funding went elsewhere and AI entered a long winter that lasted until the 1980s, when new architectures were invented and new training techniques were developed. Progress remained slow, and by the 1990s other Machine Learning algorithms had emerged that often gave better results and rested on stronger theoretical foundations than neural networks, pushing ANNs back into the dark. We are now experiencing a new era of AI, and this time may differ from the previous ones due to:

  • Huge amounts of data available to train complex architectures that often outperform classical ML algorithms.
  • A tremendous increase in computing power (CPU → GPU → TPU).
  • Improved training techniques and a plethora of high-level APIs (Keras, TensorFlow, PyTorch, etc.).
  • Greater funding for ANN research, driven by their progress across many fields.

Biological Neuron

Image Source: www.cs.toronto.edu

Before we get into artificial neural nets, let's first look at biological neurons. A biological neuron is an unusual-looking cell found mostly in the cerebral cortex of animals and humans. It is composed of a cell body containing the nucleus and most of the cell's complex components, many branching extensions called dendrites, and one long extension called the axon. Near its extremity, the axon splits into multiple branches called telodendria (axon terminals), and at the tips of these branches are minuscule structures called synaptic terminals, or simply synapses, which connect to the dendrites of other neurons. A biological neuron receives short electrical impulses from other neurons; when it receives enough of them within a few milliseconds, it fires its own signals to the neurons it is connected to. Neurons are organised into vast, complex networks of billions of cells, where a single neuron can connect to thousands of others, allowing the brain to perform complex computations in the snap of a finger. These neurons are often arranged in consecutive layers, as seen below.

Image Source : Stack Exchange

Artificial Neuron

An artificial neuron is a mathematical function conceived as a model of a biological neuron. Artificial neurons are the elementary units of an artificial neural network.

So, the question is: how do we replicate a biological neuron? Let's break the process into steps and understand the working unit of a neuron. We begin with signals, which can be a thought or a piece of data. In computation, we express them mathematically with variables, such as:

Inputs → x1, x2, x3, …, xn

We know that a signal excites a neuron to a different degree depending on which connection it arrives through. Computationally, this is captured by a weight associated with each input, expressing how much that input contributes to the neuron's decision.

Inputs and Weights

Now, a biological neuron accumulates all of this incoming information as electrical charge. An artificial neuron tries to behave in a similar way by summing up all the weighted inputs coming from other neurons; we call this the transfer function, and the resulting weighted sum is called the net input.

Artificial Neuron
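As a rough sketch (the input and weight values below are made up purely for illustration), the transfer function is just a plain weighted sum:

# Transfer function: the net input is the weighted sum of the incoming signals.
inputs  = [0.5, 0.3, 0.9]   # x1, x2, x3
weights = [0.8, 0.2, 0.4]   # w1, w2, w3

net_input = sum(x * w for x, w in zip(inputs, weights))
print(net_input)  # 0.5*0.8 + 0.3*0.2 + 0.9*0.4 = 0.82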

After the neuron has charged up, it releases electricity to the next neuron once a certain threshold has been exceeded. We can implement this computationally by introducing an activation function: it takes the net input, checks whether it has crossed a certain threshold, and if so fires a signal to the next neuron. This is referred to as neuron activation.

Artificial Neuron Unit
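Putting the two pieces together, here is a minimal sketch of a complete artificial neuron with a simple step activation (the threshold of 0.5 is an arbitrary value chosen for illustration):

# Step activation: fire only when the net input crosses the threshold.
def step_activation(net_input, threshold=0.5):
    return 1 if net_input >= threshold else 0

# A complete artificial neuron: weighted sum (transfer function) followed by activation.
def artificial_neuron(inputs, weights, threshold=0.5):
    net_input = sum(x * w for x, w in zip(inputs, weights))
    return step_activation(net_input, threshold)

print(artificial_neuron([0.5, 0.3, 0.9], [0.8, 0.2, 0.4]))  # 1, since 0.82 >= 0.5

In practice, modern networks replace the hard step with smooth activations such as the sigmoid or ReLU so that they can be trained with gradient-based methods.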

Closing Note

To sum up, an artificial neuron is a mathematical and computational model that tries to mimic biological neurons, and by combining millions of such neurons we can build complex systems and intelligent machines. It's time to thank McCulloch and Pitts for laying the path towards the complex systems that surround us from the start of the day, from automating our homes with voice commands to driving us on the road.

