Artificial Neuron • Weights, Bias, Activation Functions • Forward & Backward Propagation • Loss Functions (MSE, Cross-Entropy)
Neural Networks are the core of Deep Learning. They work like tiny digital brains that learn patterns from data. In this chapter, you will learn how an artificial neuron works, how learning happens inside a network, and why these models are so powerful.
An illustration is also included to help you visualize a neuron.
1. Artificial Neuron
An artificial neuron is the smallest unit of a neural network. It behaves like a simple decision-making system. It receives inputs, processes them, and produces an output.
Artificial Neuron Diagram
(Inputs → Neuron → Output)
How the Neuron Works
An artificial neuron does four main things (a short code sketch follows this list):
- Receives inputs (like hours studied, sleep time, etc.)
- Multiplies each input with a weight (importance)
- Adds a bias (adjustment)
- Passes the result through an activation function to generate an output
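To make this concrete, here is a minimal NumPy sketch of a single neuron. The input values, weights, and bias below are made up for illustration, and sigmoid is just one possible activation choice:

import numpy as np

def sigmoid(z):
    # squashes any number into the range (0, 1)
    return 1 / (1 + np.exp(-z))

# hypothetical inputs: hours studied, attention level, practice tests
inputs = np.array([6.0, 0.8, 3.0])
weights = np.array([0.5, 1.2, 0.3])  # learned importance of each input
bias = -2.0                          # learned adjustment

z = np.dot(inputs, weights) + bias   # weighted sum plus bias
output = sigmoid(z)                  # activation turns z into a "pass" probability
print("Neuron output:", output)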
Real-Life Example
Imagine predicting whether a student will pass or fail:
- Inputs: hours studied, attention level, number of practice tests
- Neuron learns which inputs are most important
- Output: "Pass" or "Fail"
2. Weights, Bias, and Activation Functions
Weights
- Every input has a weight.
- A weight tells the neuron how important that input is.
- The larger the weight (in absolute value), the more that input influences the output.
Bias
- Bias is an extra value added to the weighted sum.
- It shifts the neuron's output up or down, giving the network more flexibility to fit the data.
Activation Functions
Activation functions decide if the neuron should "activate" or not.
They help the network learn complex, non-linear patterns. A short code sketch of the common functions follows the list below.
Common Activation Functions
- ReLU – fast and used in most modern networks
- Sigmoid – squashes values into (0, 1); common for two-class problems
- Tanh – like sigmoid but zero-centered, with outputs in (-1, 1)
- Softmax – turns raw scores into probabilities when choosing among many classes
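Here is a minimal sketch of these four functions in NumPy. These are illustrative implementations, not the exact versions a framework like Keras uses internally:

import numpy as np

def relu(z):
    return np.maximum(0, z)      # negatives become 0, positives pass through

def sigmoid(z):
    return 1 / (1 + np.exp(-z))  # output in (0, 1)

def tanh(z):
    return np.tanh(z)            # output in (-1, 1), zero-centered

def softmax(z):
    e = np.exp(z - np.max(z))    # subtract max for numerical stability
    return e / e.sum()           # outputs sum to 1, like probabilities

print(softmax(np.array([2.0, 1.0, 0.1])))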
3. Forward and Backward Propagation
Neural networks learn through two important processes.
A. Forward Propagation
This is when the data flows from input → neuron → output.
Simple Explanation
- You give the network some input
- It performs calculations using weights and bias
- It produces an output or prediction
Example: The network predicts whether a picture is of a cat or dog.
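As a rough sketch of the same idea, here is a forward pass through a tiny two-layer network in NumPy. The shapes and random weights are made up for illustration; a real network would have learned them from data:

import numpy as np

rng = np.random.default_rng(0)

x = rng.random(4)                           # 4 input features (e.g. image statistics)
W1, b1 = rng.random((3, 4)), rng.random(3)  # hidden layer: 3 neurons
W2, b2 = rng.random((2, 3)), rng.random(2)  # output layer: 2 scores (cat, dog)

h = np.maximum(0, W1 @ x + b1)              # hidden layer with ReLU
scores = W2 @ h + b2                        # raw score for each class
print("Class scores:", scores)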
B. Backward Propagation (Backpropagation)
This is how the network corrects its mistakes; a tiny numeric sketch follows the steps below.
How It Works
- The prediction is compared with the correct answer
- The network calculates the error
- The error flows backward
- Weights and bias are adjusted to improve accuracy
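Here is a minimal numeric sketch of one backpropagation step for a single linear neuron with a squared-error loss. All the numbers, including the learning rate, are made up for illustration:

# one neuron: prediction = w * x + b
x, y_true = 1.5, 1.0
w, b = 0.8, 0.1
learning_rate = 0.1

y_pred = w * x + b           # forward pass
error = y_pred - y_true      # how wrong the prediction is

# gradients of the squared error 0.5 * error**2
grad_w = error * x
grad_b = error

w -= learning_rate * grad_w  # adjust weight against the gradient
b -= learning_rate * grad_b  # adjust bias the same way
print("Updated w, b:", w, b)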
Real-Life Analogy
A student solves a math problem → gets it wrong → teacher explains the mistake → student improves.
This is exactly how backpropagation teaches networks.
4. Loss Functions
Loss functions measure how wrong the network is.
Lower loss means better learning.
A. Mean Squared Error (MSE)
Used in regression problems (predicting numbers).
Example
Predicting the price of a house (see the sketch after this list):
- Large gap between predicted and actual price → high MSE
- Small gap → low MSE
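A quick numeric sketch with made-up house prices shows how the gap drives MSE:

import numpy as np

y_true = np.array([250_000.0, 300_000.0])  # actual prices (hypothetical)
y_pred = np.array([240_000.0, 320_000.0])  # model predictions (hypothetical)

mse = np.mean((y_true - y_pred) ** 2)      # average of squared differences
print("MSE:", mse)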
B. Cross-Entropy Loss
Used in classification problems (choosing categories).
Example
Cat vs. Dog classification
Cross-entropy measures how confidently the model picks the correct category.
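A short sketch with made-up probabilities shows how confidence changes the loss. The correct class is the first one in both cases:

import numpy as np

y_true = np.array([1, 0])                   # "cat" is the correct class

confident = np.array([0.9, 0.1])            # model is sure it's a cat
unsure = np.array([0.6, 0.4])               # model is hesitant

print(-np.sum(y_true * np.log(confident)))  # ≈ 0.105 (low loss)
print(-np.sum(y_true * np.log(unsure)))     # ≈ 0.511 (higher loss)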
5. Code Examples (Python + TensorFlow/Keras)
These examples help you understand how neural networks work in real code.
A. Creating a Simple Neural Network
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# Simple neural network for classification
model = Sequential([
    Dense(8, activation='relu', input_shape=(4,)),  # hidden layer
    Dense(3, activation='softmax')                  # output layer
])

model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy']
)

model.summary()

B. Forward Propagation Example
import numpy as np
# Input features
x = np.array([2.0, 1.0, 0.5])
# Weights and bias
weights = np.array([0.4, 0.3, 0.1])
bias = 0.2
# Forward propagation
output = np.dot(x, weights) + bias
print("Neuron output before activation:", output)C. Activation Function Example
import numpy as np
def relu(x):
    return np.maximum(0, x)

value = -3.5
print("ReLU output:", relu(value))

D. Loss Function Example
import numpy as np
# Actual value and predicted value
y_true = np.array([1, 0, 0]) # Correct class
y_pred = np.array([0.7, 0.2, 0.1]) # Model prediction
# Cross-entropy loss; in practice, clip y_pred to avoid log(0)
loss = -np.sum(y_true * np.log(y_pred))
print("Cross-entropy loss:", loss)

6. Practice Exercises (With Answers)
Q1. What does an artificial neuron do?
Answer:
It takes inputs, multiplies them by weights, adds bias, applies activation, and produces an output.
Q2. Why do we need activation functions?
Answer:
They allow the network to learn complex, non-linear patterns.
Q3. What type of loss function is used for classification problems?
Answer:
Cross-Entropy Loss.
Q4. What is forward propagation?
Answer:
The process of sending inputs through the network to get an output.
Q5. What is the main purpose of backpropagation?
Answer:
To reduce errors by adjusting weights and bias.