
#6 - Artificial Intelligence: Digital Neural Network Architecture
Summary
Topic: AI Neural Network: A digital architecture to mimic the brain
Summary: AI neural networks mimic the neural network of the brain. But how do we build a digital neural network? What is its architecture? We present the basic components of such a technical solution.
No mathematical or scientific background is needed to read this paper.
Keywords: AI; Neural network; Machine Learning; Deep learning; Neuron; Synapses; Layers
Author: Sylvain LIÈGE
Note: This paper was NOT written by AI, although AI may have been used for research purposes.
Assumption: We assume that you have read our previous papers or at least are familiar with their topics.
1 Introduction
In our previous white papers, we introduced how Algebra allows us to convert the real world into computable data. We then explained how we use Differential Calculus to construct mathematical functions that predict the future based on the past. Then we presented how biological neural networks have been used as a model to create digital ones, in the hope of building a system that can somehow “think”. The remaining question at this point is: what does a digital neural network actually do to mimic thinking and give the illusion of it? This is what we will do in this paper: identify the mathematical foundations that structure and run a digital neural network.
We’ll focus on a classical neural network’s core structure, the foundation of even today’s massive AI systems.
There will be mathematics in this paper, but fear not, I’ll try to keep it to a minimum. As usual, I might simplify stuff to get the point across. This is not a lesson on mathematics.
2 Neural network principles
As we have established, neural networks are made of “neurons” that take information in, transform it, and send the transformed information out to other neurons. This is all well and nice, but… what is this incoming information? What is this transformation about? What is the outgoing information? How does a neuron decide to send information to the next neuron? When does this process stop? So, in short: how on earth are we going to do that?
If you remember, in our paper about Differential Calculus, we presented how mathematics uses functions to take information in, transform it and output new information. You will agree with me that this process sounds a lot like what we want our neurons to do.
So, what do we know about the brain’s neurons? The brain takes some info as input, like a smell molecule; it distributes that information in pieces to feed specialised neurons; the neurons then transform the information and pass the result to other neurons. Something important to notice here: the output is not always sent to every other connected neuron. So, the distribution of information itself is done cleverly, and the whole brain is not activated for every decision.

This is illustrated in the diagram above. The orange neurons are the activated ones and the red synapses are the ones used to convey information from one neuron to the next.
So, we have a list of problems to solve:
- What does a piece of “information” look like?
- What does a “transformation” look like?
- How do we decide whether the info is sent to other neurons or not?
- What does the output look like?
2.1 What information does a neuron process?
In real life, a neuron processes electrical impulses. Of course, it is much more complicated than that, but we can keep it at that for now.
In the digital world, even if ultimately we do process electrical signals, that is a very inconvenient way to think about it. We need something that we, humans, understand better and that computers understand well, too. And guess what that thing is! …yes, numbers! Numbers are easy for a computer to manipulate and easy for a human being to conceive. On top of that, maths is particularly good with numbers. It seems that we now have our weapons of choice: numbers and functions. Numbers will be our “information” and functions will be our transformations.
To keep things a bit abstract and simple to look at, we will name our numbers “x” and our functions “f”. f(x) is the transformation of x by f. This is no more complicated than the calculation of your time of arrival that we saw in the Differential Calculus paper, remember?
So, our new world now looks like this:

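To make this very concrete, here is a minimal sketch in Python of a neuron reduced to a single function. Everything in it is made up for illustration (the doubling is an arbitrary placeholder transformation); the only point is “a number goes in, a transformed number comes out”.

```python
# A digital "neuron" reduced to its simplest form: a function f
# that takes a number in (x) and sends a transformed number out (f(x)).
# The doubling below is an arbitrary placeholder transformation,
# only there to show "information in, transformed information out".

def f(x):
    return 2 * x   # transform the incoming information

x = 3.0            # the "information" entering the neuron
print(f(x))        # the transformed information leaving it: 6.0
```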
Something is missing here. Remember? Sometimes the information is sent to the next neuron and sometimes not. We need to add some sort of switch at the exit of each neuron that will turn the sending of information to the next neuron on or off. This switch’s job is to pass or block the information based on some criteria. If certain conditions are not met, then the info is blocked.
We will now add this switch to our architecture. For the sake of simplicity, we will consider an “all or nothing” situation, but as you can imagine, it is far more subtle than that in real life; I drastically simplify. By the way, this switch has a name in AI: it is called an “activation function”. As its name indicates, it activates the next neuron or not. In practice these functions are not black or white; they have various degrees of finesse. The activation function decides the output’s strength (fully on, off, or somewhere in between), mimicking how brain neurons selectively signal. But for this paper we can still call our switch an activation function, a basic one, but it works.

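Here is a minimal sketch of such an all-or-nothing switch, under a simple assumed rule: if the value coming out of the neuron is above some threshold, it is passed on; otherwise it is blocked. The threshold and the transformation are arbitrary, chosen only for illustration.

```python
# A basic "all or nothing" activation function (a step function):
# the information only gets through if it is strong enough.

THRESHOLD = 1.0        # arbitrary cutoff, for illustration only

def activation(value):
    if value > THRESHOLD:
        return value   # switch on: pass the information along
    return 0.0         # switch off: block the information

def f(x):
    return 2 * x       # the neuron's transformation, as before

print(activation(f(0.4)))   # 0.8 is below the threshold -> 0.0 (blocked)
print(activation(f(3.0)))   # 6.0 is above the threshold -> 6.0 (passed on)
```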
We now have one last concept to cover, and we are done with our digital network architecture.
In the brain, the synapses don’t simply pass the information to the next neuron without doing anything. Synapses are not just lazy electrical cables. They modulate the strength of the information going through them. In other words, they can amplify the information, leave it untouched or reduce it, making it more or less important to the next neuron. This has a name: postsynaptic potential (PSP). You don’t have to remember that, though.
So, what this means for us is that we are missing one element in our digital neural network: a way to make the information more or less important. Lucky us, the solution is very simple: we will add a multiplier to the synapse, which we will call a “weight”. If this weight is less than 1, it reduces the importance of the information; if it is greater than 1, it amplifies it. For convenience, we will name this weight w.
We now have our final digital neural network looking like this.

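Putting the three ingredients together (a weight w on the incoming synapse, the neuron’s transformation, and the activation switch), a single digital neuron can be sketched as below, with two of them chained so the output of one feeds the next. All the numbers are invented for illustration; in a real network the weights are adjusted during training, which is the topic of the next paper.

```python
# One complete digital neuron, as described above:
#   1. the incoming information x is scaled by the synapse weight w,
#   2. the neuron applies its transformation,
#   3. the activation function decides whether the result is passed on.
# Every number here is arbitrary; a real network learns its weights.

THRESHOLD = 1.0

def activation(value):
    return value if value > THRESHOLD else 0.0

def neuron(x, w):
    weighted = w * x                  # the synapse amplifies or reduces the info
    transformed = weighted + 0.5      # a placeholder transformation
    return activation(transformed)    # the switch lets it through or not

# A tiny chain of two neurons: the output of one is the input of the next.
out1 = neuron(2.0, w=0.3)    # first neuron, with a weak synapse
out2 = neuron(out1, w=1.5)   # second neuron, with an amplifying synapse
print(out1, out2)
```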
Of course, as you can imagine, a brain is vastly more complex than that. But nonetheless, this mimics its structure in a fairly decent way.
3 Now what?
The next steps will be to make this whole architecture work in practice. This will be the topic for our next paper.
4 Where is the Intelligence?
Since we have only built an architecture that mimics the brain’s structure, I cannot answer this question yet. Is this architecture going to produce intelligence? It would be nice, right? We have in place a digital brain with a neural network structure using mathematics as its engine. So far, we do not have any “intelligence”, but we do hope to create some. We’ll find out more in our next paper.

Sylvain LIÈGE PhD.