#11 AI: Fixing the Training Gone Wrong

AI: Fixing the Training Gone Wrong

Building on Paper #10’s AI training pitfalls—underfitting (too lazy), overfitting (too rigid), high bias (skewed guesses), and high variance (wild swings)—this paper offers practical fixes for our smell detector. We explore three levers: boosting network capacity, extending training with more epochs, and enriching data for smarter learning.
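The three levers can be sketched as tunable knobs. This is a minimal illustration, not code from the paper; the names (`hidden_units`, `augment`) and the noise-based data-enrichment trick are assumptions chosen for the example.

```python
import random

# Hypothetical knobs for fixing a model that trains poorly.
config = {
    "hidden_units": 16,  # lever 1: more capacity helps underfitting
    "epochs": 500,       # lever 2: longer training, more passes over the data
    "augment": True,     # lever 3: enrich the data set
}

def augment(samples, noise=0.05):
    """Lever 3 sketch: enrich the data by adding slightly
    perturbed copies of each (input, label) sample."""
    extra = [(x + random.uniform(-noise, noise), y) for x, y in samples]
    return samples + extra

data = [(0.1, 0), (0.9, 1)]
enriched = augment(data)  # twice as many samples, labels unchanged
```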

#10 AI Training Going Wrong

AI Training Going Wrong

This paper explores why the model might fail in practice: underfitting (too simplistic), overfitting (too rigid), and the underlying issues of bias and variance. Through examples, we show how underfitting leads to random guesses, while overfitting causes oversensitivity. We introduce bias (consistent errors) and variance (prediction variability).
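Bias and variance can be made concrete with a few lines of arithmetic. The sketch below is illustrative only; the prediction lists are invented numbers, not data from the paper.

```python
def bias_and_variance(predictions, target):
    """Bias: how far the average prediction sits from the true value.
    Variance: how much individual predictions swing around their own mean."""
    mean_pred = sum(predictions) / len(predictions)
    bias = mean_pred - target
    variance = sum((p - mean_pred) ** 2 for p in predictions) / len(predictions)
    return bias, variance

# Hypothetical repeated predictions for a sample whose true value is 10.
biased = [12.0, 12.1, 11.9, 12.0]  # consistent errors: high bias, low variance
jumpy = [6.0, 14.0, 7.0, 13.0]     # wild swings: low bias, high variance
```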

#9 AI Training & Back Propagation

AI Training & Back Propagation

AI Training & Back Propagation – In order to use a digital neural network, we need to train it. In this paper we present how to “train” one using supervised training and backpropagation. By comparing the model’s output with the value that we know to be correct, we can tune the parameters and make the network solve the problem at hand.
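The compare-and-tune loop can be sketched with a single weight trained by gradient descent. This is a simplified illustration, not the paper's implementation; the learning rate, epoch count, and sample data are assumptions.

```python
def train(samples, epochs=200, lr=0.1):
    """Supervised training sketch: run a forward pass, compare the
    prediction with the known-correct target, and nudge the weight
    against the gradient of the squared error (backpropagation)."""
    w = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x           # forward pass
            error = pred - target  # compare with the known-correct value
            w -= lr * error * x    # gradient of 0.5 * error**2 w.r.t. w
    return w

# Hypothetical samples generated by the rule y = 2x; training
# should recover a weight close to 2.
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```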

#8 AI Forward Propagation

AI Forward Propagation

AI Forward Propagation – AI neural networks mimic the neural network of the brain. In this paper we present what happens inside a digital neural network from data entry to result. We study the various mathematical steps in their simplest form to give an overall understanding of the inner mechanisms. The end-to-end process is called forward propagation.
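The data-entry-to-result flow can be sketched as a chain of weighted sums and activations. This is a minimal illustration, not the paper's code; the network shape, weights, and the choice of a sigmoid activation are assumptions.

```python
import math

def sigmoid(x):
    # Squash a value into (0, 1); a common activation function.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, layers):
    """Forward propagation: for each layer, every neuron computes a
    weighted sum of the previous layer's outputs plus a bias, then
    passes the result through the activation function."""
    activations = inputs
    for weights, biases in layers:
        activations = [
            sigmoid(sum(w * a for w, a in zip(row, activations)) + b)
            for row, b in zip(weights, biases)
        ]
    return activations

# Hypothetical network: 2 inputs -> 2 hidden neurons -> 1 output.
layers = [
    ([[0.5, -0.3], [0.8, 0.2]], [0.1, -0.1]),  # hidden layer
    ([[1.0, -1.0]], [0.0]),                    # output layer
]
output = forward([1.0, 0.5], layers)  # a single value between 0 and 1
```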