Setup¶

We begin by importing the necessary libraries and defining a utility function that displays a countdown timer for in-class activities.

In [ ]:
import time
from IPython.display import display, HTML

def countdown_timer(minutes, seconds):
    display(HTML(f"""
    <p id="timer" style="font-size:300px; color: red;"></p>
    <script>
    var minutes = {minutes};
    var seconds = {seconds};
    var timer = document.getElementById("timer");
    function updateTimer() {{
        if (seconds < 0) {{
            minutes -= 1;
            seconds = 59;
        }}
        if (minutes < 0) {{
            timer.innerHTML = "Time's up!";
        }} else {{
            var minutesDisplay = minutes.toString().padStart(2, '0');
            var secondsDisplay = seconds.toString().padStart(2, '0');
            timer.innerHTML = minutesDisplay + ":" + secondsDisplay;
            seconds -= 1;
            setTimeout(updateTimer, 1000);
        }}
    }}
    updateTimer();
    </script>
    """))

# Run the countdown timer for 1 minute and 0 seconds
countdown_timer(1, 0)

Course Logistics¶

Please remember to fill out the course evaluations to provide feedback.

Your feedback is highly valued and helps continuously improve the curriculum:

  1. Midterm Review Feedback: Google Form
  2. Course Evaluations Feedback: NYU Course Feedback Form

Imports¶

We import foundational libraries such as NumPy, Pandas, and Matplotlib, along with TensorFlow for deep learning, scikit-learn for classical machine learning utilities, and Optuna for hyperparameter optimization.

In [ ]:
%%capture
!pip install optuna
In [ ]:
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import warnings
import seaborn as sns
import optuna

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification, load_iris
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (
    mean_squared_error,
    accuracy_score,
    precision_score,
    recall_score,
    f1_score,
    confusion_matrix,
    ConfusionMatrixDisplay,
    roc_curve,
    auc
)

# Ignore warnings and list physical GPUs to check whether one is available
warnings.filterwarnings("ignore")
tf.config.list_physical_devices('GPU')
Out[ ]:
[]
In [ ]:
# Checking GPU
tf.debugging.set_log_device_placement(True)

a=tf.constant([[1.0,2.0,3.0],[4.0,5.0,6.0]])
b=tf.constant([[1.0,2.0],[3.0,4.0],[5.0,6.0]])
c=tf.matmul(a,b)

print(c)
Executing op _EagerConst in device /job:localhost/replica:0/task:0/device:CPU:0
Executing op _EagerConst in device /job:localhost/replica:0/task:0/device:CPU:0
Executing op MatMul in device /job:localhost/replica:0/task:0/device:CPU:0
tf.Tensor(
[[22. 28.]
 [49. 64.]], shape=(2, 2), dtype=float32)

1. Introduction to Machine Learning¶

Machine learning provides the foundational mathematical and algorithmic framework upon which deep learning is built.

For a robust, intuitive review of foundational statistical and machine learning concepts, I recommend the excellent resource: StatQuest with Josh Starmer.

1.1 What is Machine Learning?¶

Machine learning (ML) is a branch of artificial intelligence focused on enabling computers to learn patterns from data and make predictions or decisions without explicit programming. Rather than following a deterministic set of rules, machine learning algorithms optimize a mathematical objective by observing data over time.

Core Paradigms:

  1. Supervised Learning:
    • The model learns from labeled training data, mapping inputs (features) to desired outputs (targets).
    • Applications: Classification (discrete categories, e.g., image recognition) and Regression (continuous values, e.g., bounding box coordinates).
  2. Unsupervised Learning:
    • The model learns from unlabeled data, tasked with finding hidden structural patterns.
    • Applications: Clustering, Dimensionality Reduction (e.g., PCA), and generative modeling.
  3. Reinforcement Learning:
    • An agent learns to make sequences of decisions in an environment to maximize a cumulative reward.
    • Applications: Robotics, autonomous navigation, and strategic game playing.
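
The contrast between the first two paradigms can be sketched on the same synthetic dataset. This is a minimal illustration using scikit-learn (the dataset, model choices, and parameters here are illustrative, not part of the course material):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic data: 200 samples, 4 features, 2 classes
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Supervised: learn a mapping from features X to known labels y
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised training accuracy:", clf.score(X, y))

# Unsupervised: find structure in X without ever seeing y
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Cluster sizes:", np.bincount(km.labels_))
```

The supervised model optimizes against the labels directly; the clustering model only groups points by feature-space proximity, so its cluster indices need not align with the true classes.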

1.2 Key Machine Learning Terminology¶

  1. Features ($x$): The input variables or quantifiable properties of the phenomena being observed (e.g., pixel intensities in an image).
  2. Labels ($y$): The ground-truth target outputs associated with a specific input in a supervised learning setting.
  3. Training Data: The subset of data explicitly used to optimize the model parameters.
  4. Validation Data: A held-out dataset used to evaluate the model during training, crucial for tuning hyperparameters and preventing overfitting.
  5. Test Data: A strictly isolated dataset used only once at the end to evaluate the final model's true generalization performance.
  6. Overfitting: When a model memorizes the training data—including its noise—resulting in high training accuracy but poor generalization to unseen data.
  7. Underfitting: When a model is too simple to capture the underlying structure of the data.
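
The train/validation/test protocol and overfitting can be made concrete with a short sketch. The split proportions and the unconstrained decision tree below are illustrative choices, not prescriptions:

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

# Noisy synthetic data: 20% of labels are randomly flipped
X, y = make_classification(n_samples=600, n_features=20, flip_y=0.2, random_state=0)

# 60/20/20 train/validation/test split (a common convention)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# An unconstrained tree memorizes the training set, noise included:
# near-perfect train accuracy, noticeably lower validation accuracy
deep = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train:", deep.score(X_train, y_train), "val:", deep.score(X_val, y_val))
```

The gap between training and validation accuracy is the practical signature of overfitting; the test set stays untouched until the final evaluation.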

2. Introduction to Deep Learning¶

Deep learning is a subset of machine learning utilizing artificial neural networks with multiple layers (hence "deep") to model complex, non-linear relationships in high-dimensional data.

2.1 Anatomy of a Neural Network¶

  1. Layers:

    • Input Layer: Receives the raw numerical data (e.g., flattened image pixels).
    • Hidden Layers: Intermediate layers consisting of artificial neurons. These layers extract increasingly abstract features. Non-linear activation functions (like ReLU) are applied here.
    • Output Layer: Produces the final prediction (e.g., a probability distribution over classes via a Softmax function).
  2. Forward Propagation:

    • The process of passing input data through the network. At each layer, the input is multiplied by a weight matrix, added to a bias vector, and passed through an activation function to produce the input for the next layer.
  3. Backpropagation (Backprop):

    • The core algorithm for training neural networks. It computes the gradient of the loss function with respect to every weight in the network using the chain rule of calculus.
    • These gradients are then used by an optimization algorithm (like Stochastic Gradient Descent) to update the weights and minimize the error.
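
Forward propagation and backpropagation can be written out by hand for a one-hidden-layer network. This NumPy sketch (shapes and initialization are illustrative; bias gradients are omitted for brevity) computes the forward pass, applies the chain rule layer by layer, and checks one analytic gradient against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer: x -> (W1, b1) -> ReLU -> (W2, b2) -> scalar output
x = rng.normal(size=(3,))
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
y_true = np.array([1.0])

# Forward propagation: affine map, non-linearity, affine map
z1 = W1 @ x + b1
h = np.maximum(z1, 0.0)          # ReLU activation
y_hat = W2 @ h + b2
loss = 0.5 * np.sum((y_hat - y_true) ** 2)

# Backpropagation: chain rule from the output back toward the input
d_yhat = y_hat - y_true          # dL/dy_hat for squared-error loss
dW2 = np.outer(d_yhat, h)        # dL/dW2
dh = W2.T @ d_yhat               # gradient flowing into the hidden layer
dz1 = dh * (z1 > 0)              # ReLU passes gradient only where z1 > 0
dW1 = np.outer(dz1, x)           # dL/dW1

# Sanity check: perturb one weight and compare with the analytic gradient
eps = 1e-6
W1_pert = W1.copy(); W1_pert[0, 0] += eps
h_p = np.maximum(W1_pert @ x + b1, 0.0)
loss_p = 0.5 * np.sum((W2 @ h_p + b2 - y_true) ** 2)
print(abs((loss_p - loss) / eps - dW1[0, 0]))  # small: finite diff agrees
```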

Why Deep Networks Work

  1. Layered Abstractions: Each layer in a deep network captures higher-level abstractions of the data. For instance, in an image classification network:
  • Early layers might detect edges or colors.
  • Deeper layers detect shapes, textures, or even whole objects.
  2. Representation of Complex Patterns: With more layers, networks can learn intricate data representations, making them adept at recognizing patterns that are challenging for traditional algorithms.

  3. Universal Approximation: With enough neurons and layers, neural networks can approximate virtually any function, enabling them to generalize well across diverse tasks.
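
As a small empirical illustration of approximation (not a proof), a modest multi-layer perceptron can fit a non-linear target like sin(2x). The architecture and iteration budget below are arbitrary choices for the sketch:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Target: a smooth non-linear function on [-3, 3]
X = np.linspace(-3, 3, 400).reshape(-1, 1)
y = np.sin(2 * X).ravel()

# Two hidden layers of 64 ReLU units; trained until the loss plateaus
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(X, y)
print("max abs error:", np.max(np.abs(mlp.predict(X) - y)))
```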

Example of Deep Learning: MNIST Prediction visualizer

Gradient Descent¶

In [ ]:
# Define a simple quadratic loss function and its derivative
def loss_function(x):
    return x**2 + 5  # A simple parabolic function with a minimum at x=0

def gradient(x):
    return 2*x  # Derivative of the function x^2 + 5 with respect to x

# Set initial parameters for gradient descent
x = 10  # Start point for gradient descent
learning_rate = 0.1  # Step size
iterations = 20  # Number of iterations

# Lists to store x values and loss values for plotting
x_values = [x]
loss_values = [loss_function(x)]

# Perform gradient descent
for i in range(iterations):
    grad = gradient(x)            # Compute the gradient at the current x
    x = x - learning_rate * grad   # Update x in the opposite direction of the gradient
    x_values.append(x)             # Store x for visualization
    loss_values.append(loss_function(x))  # Store the current loss
In [ ]:
# Plot the loss function
x_range = np.linspace(-10, 10, 200)
plt.plot(x_range, loss_function(x_range), label='Loss Function', color='blue')
plt.scatter(x_values, loss_values, color='red', label='Steps in Gradient Descent')
plt.title("Gradient Descent on a Simple Quadratic Loss Function\n"
          f"(True minimum loss: 5, Final loss: {loss_values[-1]:.4f})")
plt.xlabel("x")
plt.ylabel("Loss")
plt.legend()
plt.show()

Classification¶

Data preprocessing¶

In [ ]:
# Fetching Data
s='https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
# Assign descriptive column names so the data is easier to inspect
column_names=['sepal_length','sepal_width','petal_length','petal_width','class']

df=pd.read_csv(s,header=None,encoding='utf-8',names=column_names)
df.head()
Out[ ]:
sepal_length sepal_width petal_length petal_width class
0 5.1 3.5 1.4 0.2 Iris-setosa
1 4.9 3.0 1.4 0.2 Iris-setosa
2 4.7 3.2 1.3 0.2 Iris-setosa
3 4.6 3.1 1.5 0.2 Iris-setosa
4 5.0 3.6 1.4 0.2 Iris-setosa
In [ ]:
# Slicing data and class columns
X=df.iloc[:100,:4].values
y=df.iloc[:100,4].values

# Encode Iris-setosa as -1 and Iris-versicolor as +1
y=np.where(y=='Iris-setosa',-1,1)

# Train test split
X_train,X_test,y_train,y_test=train_test_split(X[:,:2],y,test_size= 0.2)

# Scatter plot
plt.scatter(X[:50,0],X[:50,1],color='red',marker='o',label='setosa')
plt.scatter(X[50:100,0],X[50:100,1],color='blue',marker='x',label='versicolor')
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
plt.legend(loc='upper left')
plt.show()
In [ ]:
X_train2,X_test2,y_train2,y_test2=train_test_split(X[:,:3],y,test_size= 0.2)

fig=plt.figure(figsize=(8,8))
ax=fig.add_subplot(111,projection='3d')
ax.scatter(X[:,0],X[:,1],X[:,2],c=y,cmap=plt.cm.Set1,zdir='x' )
ax.set_xlim(2.01,6.99)
ax.set_ylim(2.01,6.99)
ax.set_zlim(0.01,6.99)
ax.set_xlabel('Sepal length (cm)')
ax.set_ylabel('Sepal width (cm)')
ax.set_zlabel('Petal length (cm)')
Out[ ]:
Text(0.5, 0, 'Petal length (cm)')

Create a perceptron model¶

In [ ]:
class Perceptron:
    def __init__(self, input_length, eta=0.1, epochs=10):
        # Initialize perceptron parameters
        # input_length: number of features
        # eta: learning rate (default is 0.1)
        # epochs: number of training iterations (default is 10)
        self.input_length = input_length
        self.eta = eta
        self.epochs = epochs

        # Initialize weights with random values (including bias)
        # Length is input_length + 1 to account for the bias term
        self.wght = np.random.normal(0, 1, input_length + 1)

        # Array to store error at each epoch for analysis
        self.error = []

    def activation_func(self, x):
        # Activation function: binary threshold
        # Returns 1 if x > 0; otherwise, returns -1
        if x > 0:
            return 1
        else:
            return -1

    def train(self, X, Y):
        # Train the perceptron model
        # X: Input features
        # Y: Target labels

        # Initialize array to store error per sample
        error = np.zeros(len(X))
        t = 1  # Epoch counter

        # Training loop over specified number of epochs
        while t <= self.epochs:
            # Iterate over each training example
            for i in range(len(X)):
                # Calculate output using activation function on weighted sum of inputs
                out = self.activation_func(np.dot(self.wght[:-1], X[i]) + self.wght[-1])

                # Calculate error for current sample
                error[i] = Y[i] - out

                # Update weights based on error and learning rate (eta)
                self.wght[:-1] = self.wght[:-1] + self.eta * error[i] * X[i]  # Update weights
                self.wght[-1] = self.wght[-1] + self.eta * error[i]           # Update bias term

            # Calculate total error for the epoch
            E = np.sum(error ** 2)
            print(f"Epoch: {t}, Error: {E}")

            # Append error to list for analysis/plotting
            self.error.append(E)
            t += 1

        # Store trained weights for prediction use
        self.trained_wght = self.wght

    def predict(self, X, Y):
        # Predict labels for input X and calculate accuracy
        # X: Input features
        # Y: Actual labels for calculating accuracy

        # Generate predictions for each sample in X
        self.predicted = [self.activation_func(np.dot(self.trained_wght[:-1], x) + self.trained_wght[-1]) for x in X]

        # Calculate accuracy by comparing predictions to actual labels
        self.accuracy = (len(Y) - np.count_nonzero(self.predicted - Y)) / len(Y)

    def plot_2d(self, X, y, x_label, y_label):
        # Plot 2D decision boundary
        # X: Input features (for plotting data points)
        # y: Labels for coloring points
        # x_label, y_label: Labels for the x and y axes

        # Extract weights and bias for decision boundary calculation
        w1, w2, b = self.wght

        # Generate x values for decision boundary line
        x_values = np.linspace(X[:, 0].min(), X[:, 0].max(), 100)

        # Calculate corresponding y values for the decision boundary
        y_values = -(w1 / w2) * x_values - b / w2

        # Plot data points and decision boundary
        plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Set1)
        plt.plot(x_values, y_values, color='blue', linestyle='--', label='Decision Boundary')
        plt.xlabel(x_label)
        plt.ylabel(y_label)
        plt.legend()
        plt.show()

    def plot_3d(self,X,y,x_label,y_label,z_label):
        # Same as above method, just for 3D space
        w1,w2,w3,b=self.wght
        xx,yy=np.meshgrid(np.linspace(2.01,6.99,50),np.linspace(2.01,6.99,50))
        zz=-(w1*xx+w2*yy+b)/w3

        fig=plt.figure(figsize=(8,8))
        ax=fig.add_subplot(111,projection='3d')
        ax.scatter(X[:,0],X[:,1],X[:,2],c=y,cmap=plt.cm.Set1,zdir='x')
        ax.set_xlim(2.01,6.99)
        ax.set_ylim(2.01,6.99)
        ax.set_zlim(0.01,6.99)
        ax.set_xlabel(x_label)
        ax.set_ylabel(y_label)
        ax.set_zlabel(z_label)

        ax.plot_surface(xx,yy,zz,color='blue',alpha=0.3,label='Decision Boundary')
        # ax.legend()
        plt.show()

Training the model for 2D data¶

In [ ]:
ppn=Perceptron(input_length=X_train.shape[1],eta=0.1,epochs=1000)
ppn.train(X_train,y_train)
Epoch: 1, Error: 116.0
Epoch: 2, Error: 68.0
Epoch: 3, Error: 44.0
Epoch: 4, Error: 16.0
Epoch: 5, Error: 36.0
Epoch: 6, Error: 40.0
Epoch: 7, Error: 16.0
Epoch: 8, Error: 36.0
Epoch: 9, Error: 16.0
Epoch: 10, Error: 28.0
Epoch: 11, Error: 32.0
Epoch: 12, Error: 8.0
Epoch: 13, Error: 8.0
Epoch: 14, Error: 36.0
Epoch: 15, Error: 8.0
Epoch: 16, Error: 8.0
Epoch: 17, Error: 36.0
Epoch: 18, Error: 8.0
Epoch: 19, Error: 8.0
Epoch: 20, Error: 36.0
Epoch: 21, Error: 16.0
Epoch: 22, Error: 16.0
Epoch: 23, Error: 28.0
Epoch: 24, Error: 24.0
Epoch: 25, Error: 8.0
Epoch: 26, Error: 36.0
Epoch: 27, Error: 8.0
Epoch: 28, Error: 8.0
Epoch: 29, Error: 28.0
Epoch: 30, Error: 32.0
Epoch: 31, Error: 8.0
Epoch: 32, Error: 28.0
Epoch: 33, Error: 24.0
Epoch: 34, Error: 8.0
Epoch: 35, Error: 28.0
Epoch: 36, Error: 32.0
Epoch: 37, Error: 8.0
Epoch: 38, Error: 8.0
Epoch: 39, Error: 28.0
Epoch: 40, Error: 24.0
Epoch: 41, Error: 8.0
Epoch: 42, Error: 28.0
Epoch: 43, Error: 8.0
Epoch: 44, Error: 8.0
Epoch: 45, Error: 28.0
Epoch: 46, Error: 8.0
Epoch: 47, Error: 8.0
Epoch: 48, Error: 20.0
Epoch: 49, Error: 32.0
Epoch: 50, Error: 8.0
Epoch: 51, Error: 8.0
Epoch: 52, Error: 20.0
Epoch: 53, Error: 24.0
Epoch: 54, Error: 16.0
Epoch: 55, Error: 8.0
Epoch: 56, Error: 28.0
Epoch: 57, Error: 16.0
Epoch: 58, Error: 8.0
Epoch: 59, Error: 20.0
Epoch: 60, Error: 24.0
Epoch: 61, Error: 8.0
Epoch: 62, Error: 8.0
Epoch: 63, Error: 28.0
Epoch: 64, Error: 8.0
Epoch: 65, Error: 8.0
Epoch: 66, Error: 16.0
Epoch: 67, Error: 20.0
Epoch: 68, Error: 8.0
Epoch: 69, Error: 8.0
Epoch: 70, Error: 16.0
Epoch: 71, Error: 20.0
Epoch: 72, Error: 8.0
Epoch: 73, Error: 8.0
Epoch: 74, Error: 16.0
Epoch: 75, Error: 20.0
Epoch: 76, Error: 16.0
Epoch: 77, Error: 8.0
Epoch: 78, Error: 20.0
Epoch: 79, Error: 16.0
Epoch: 80, Error: 8.0
Epoch: 81, Error: 8.0
Epoch: 82, Error: 8.0
Epoch: 83, Error: 20.0
Epoch: 84, Error: 16.0
Epoch: 85, Error: 8.0
Epoch: 86, Error: 16.0
Epoch: 87, Error: 28.0
Epoch: 88, Error: 16.0
Epoch: 89, Error: 8.0
Epoch: 90, Error: 16.0
Epoch: 91, Error: 20.0
Epoch: 92, Error: 8.0
Epoch: 93, Error: 8.0
Epoch: 94, Error: 8.0
Epoch: 95, Error: 20.0
Epoch: 96, Error: 16.0
Epoch: 97, Error: 8.0
Epoch: 98, Error: 20.0
Epoch: 99, Error: 16.0
Epoch: 100, Error: 8.0
Epoch: 101, Error: 16.0
Epoch: 102, Error: 20.0
Epoch: 103, Error: 8.0
Epoch: 104, Error: 8.0
Epoch: 105, Error: 8.0
Epoch: 106, Error: 20.0
Epoch: 107, Error: 16.0
Epoch: 108, Error: 8.0
Epoch: 109, Error: 20.0
Epoch: 110, Error: 16.0
Epoch: 111, Error: 8.0
Epoch: 112, Error: 16.0
Epoch: 113, Error: 20.0
Epoch: 114, Error: 8.0
Epoch: 115, Error: 8.0
Epoch: 116, Error: 8.0
Epoch: 117, Error: 20.0
Epoch: 118, Error: 8.0
Epoch: 119, Error: 8.0
Epoch: 120, Error: 8.0
Epoch: 121, Error: 20.0
Epoch: 122, Error: 16.0
Epoch: 123, Error: 8.0
Epoch: 124, Error: 16.0
Epoch: 125, Error: 28.0
Epoch: 126, Error: 8.0
Epoch: 127, Error: 8.0
Epoch: 128, Error: 8.0
Epoch: 129, Error: 20.0
Epoch: 130, Error: 16.0
Epoch: 131, Error: 8.0
Epoch: 132, Error: 20.0
Epoch: 133, Error: 24.0
Epoch: 134, Error: 16.0
Epoch: 135, Error: 8.0
Epoch: 136, Error: 20.0
Epoch: 137, Error: 24.0
Epoch: 138, Error: 8.0
Epoch: 139, Error: 8.0
Epoch: 140, Error: 8.0
Epoch: 141, Error: 20.0
Epoch: 142, Error: 8.0
Epoch: 143, Error: 8.0
Epoch: 144, Error: 8.0
Epoch: 145, Error: 20.0
Epoch: 146, Error: 16.0
Epoch: 147, Error: 8.0
Epoch: 148, Error: 8.0
Epoch: 149, Error: 8.0
Epoch: 150, Error: 20.0
Epoch: 151, Error: 16.0
Epoch: 152, Error: 8.0
Epoch: 153, Error: 8.0
Epoch: 154, Error: 8.0
Epoch: 155, Error: 20.0
Epoch: 156, Error: 16.0
Epoch: 157, Error: 16.0
Epoch: 158, Error: 8.0
Epoch: 159, Error: 8.0
Epoch: 160, Error: 8.0
Epoch: 161, Error: 20.0
Epoch: 162, Error: 16.0
Epoch: 163, Error: 8.0
Epoch: 164, Error: 8.0
Epoch: 165, Error: 8.0
Epoch: 166, Error: 20.0
Epoch: 167, Error: 16.0
Epoch: 168, Error: 16.0
Epoch: 169, Error: 8.0
Epoch: 170, Error: 8.0
Epoch: 171, Error: 8.0
Epoch: 172, Error: 20.0
Epoch: 173, Error: 16.0
Epoch: 174, Error: 8.0
Epoch: 175, Error: 8.0
Epoch: 176, Error: 8.0
Epoch: 177, Error: 20.0
Epoch: 178, Error: 16.0
Epoch: 179, Error: 16.0
Epoch: 180, Error: 8.0
Epoch: 181, Error: 8.0
Epoch: 182, Error: 8.0
Epoch: 183, Error: 20.0
Epoch: 184, Error: 16.0
Epoch: 185, Error: 8.0
Epoch: 186, Error: 8.0
Epoch: 187, Error: 8.0
Epoch: 188, Error: 20.0
Epoch: 189, Error: 16.0
Epoch: 190, Error: 16.0
Epoch: 191, Error: 8.0
Epoch: 192, Error: 8.0
Epoch: 193, Error: 8.0
Epoch: 194, Error: 20.0
Epoch: 195, Error: 16.0
Epoch: 196, Error: 8.0
Epoch: 197, Error: 8.0
Epoch: 198, Error: 8.0
Epoch: 199, Error: 20.0
Epoch: 200, Error: 16.0
Epoch: 201, Error: 16.0
Epoch: 202, Error: 8.0
Epoch: 203, Error: 8.0
Epoch: 204, Error: 8.0
Epoch: 205, Error: 20.0
Epoch: 206, Error: 16.0
Epoch: 207, Error: 8.0
Epoch: 208, Error: 8.0
Epoch: 209, Error: 8.0
Epoch: 210, Error: 20.0
Epoch: 211, Error: 16.0
Epoch: 212, Error: 16.0
Epoch: 213, Error: 8.0
Epoch: 214, Error: 8.0
Epoch: 215, Error: 8.0
Epoch: 216, Error: 20.0
Epoch: 217, Error: 16.0
Epoch: 218, Error: 8.0
Epoch: 219, Error: 8.0
Epoch: 220, Error: 8.0
Epoch: 221, Error: 20.0
Epoch: 222, Error: 16.0
Epoch: 223, Error: 16.0
Epoch: 224, Error: 8.0
Epoch: 225, Error: 8.0
Epoch: 226, Error: 8.0
Epoch: 227, Error: 20.0
Epoch: 228, Error: 16.0
Epoch: 229, Error: 8.0
Epoch: 230, Error: 8.0
Epoch: 231, Error: 8.0
Epoch: 232, Error: 20.0
Epoch: 233, Error: 16.0
Epoch: 234, Error: 16.0
Epoch: 235, Error: 8.0
Epoch: 236, Error: 8.0
Epoch: 237, Error: 8.0
Epoch: 238, Error: 20.0
Epoch: 239, Error: 16.0
Epoch: 240, Error: 8.0
Epoch: 241, Error: 8.0
Epoch: 242, Error: 8.0
Epoch: 243, Error: 20.0
Epoch: 244, Error: 16.0
Epoch: 245, Error: 8.0
Epoch: 246, Error: 8.0
Epoch: 247, Error: 8.0
Epoch: 248, Error: 8.0
Epoch: 249, Error: 20.0
Epoch: 250, Error: 8.0
Epoch: 251, Error: 8.0
Epoch: 252, Error: 8.0
Epoch: 253, Error: 8.0
Epoch: 254, Error: 20.0
Epoch: 255, Error: 8.0
Epoch: 256, Error: 8.0
Epoch: 257, Error: 8.0
Epoch: 258, Error: 20.0
Epoch: 259, Error: 16.0
Epoch: 260, Error: 16.0
Epoch: 261, Error: 8.0
Epoch: 262, Error: 8.0
Epoch: 263, Error: 8.0
Epoch: 264, Error: 20.0
Epoch: 265, Error: 16.0
Epoch: 266, Error: 8.0
Epoch: 267, Error: 8.0
Epoch: 268, Error: 8.0
Epoch: 269, Error: 8.0
Epoch: 270, Error: 20.0
Epoch: 271, Error: 8.0
Epoch: 272, Error: 8.0
Epoch: 273, Error: 8.0
Epoch: 274, Error: 20.0
Epoch: 275, Error: 16.0
Epoch: 276, Error: 8.0
Epoch: 277, Error: 8.0
Epoch: 278, Error: 8.0
Epoch: 279, Error: 20.0
Epoch: 280, Error: 16.0
Epoch: 281, Error: 8.0
Epoch: 282, Error: 8.0
Epoch: 283, Error: 8.0
Epoch: 284, Error: 8.0
Epoch: 285, Error: 20.0
Epoch: 286, Error: 8.0
Epoch: 287, Error: 8.0
Epoch: 288, Error: 8.0
Epoch: 289, Error: 8.0
Epoch: 290, Error: 20.0
Epoch: 291, Error: 8.0
Epoch: 292, Error: 8.0
Epoch: 293, Error: 8.0
Epoch: 294, Error: 20.0
Epoch: 295, Error: 16.0
Epoch: 296, Error: 8.0
Epoch: 297, Error: 8.0
Epoch: 298, Error: 8.0
Epoch: 299, Error: 20.0
Epoch: 300, Error: 16.0
Epoch: 301, Error: 16.0
Epoch: 302, Error: 8.0
Epoch: 303, Error: 8.0
Epoch: 304, Error: 8.0
Epoch: 305, Error: 20.0
Epoch: 306, Error: 16.0
Epoch: 307, Error: 8.0
Epoch: 308, Error: 8.0
Epoch: 309, Error: 8.0
Epoch: 310, Error: 20.0
Epoch: 311, Error: 16.0
Epoch: 312, Error: 8.0
Epoch: 313, Error: 8.0
Epoch: 314, Error: 8.0
Epoch: 315, Error: 8.0
Epoch: 316, Error: 20.0
Epoch: 317, Error: 16.0
Epoch: 318, Error: 16.0
Epoch: 319, Error: 16.0
Epoch: 320, Error: 8.0
Epoch: 321, Error: 8.0
Epoch: 322, Error: 8.0
Epoch: 323, Error: 8.0
Epoch: 324, Error: 20.0
Epoch: 325, Error: 16.0
Epoch: 326, Error: 16.0
Epoch: 327, Error: 8.0
Epoch: 328, Error: 8.0
Epoch: 329, Error: 8.0
Epoch: 330, Error: 8.0
Epoch: 331, Error: 20.0
Epoch: 332, Error: 16.0
Epoch: 333, Error: 8.0
Epoch: 334, Error: 8.0
Epoch: 335, Error: 8.0
Epoch: 336, Error: 8.0
Epoch: 337, Error: 20.0
Epoch: 338, Error: 16.0
Epoch: 339, Error: 16.0
Epoch: 340, Error: 8.0
Epoch: 341, Error: 8.0
Epoch: 342, Error: 8.0
Epoch: 343, Error: 8.0
Epoch: 344, Error: 20.0
Epoch: 345, Error: 8.0
Epoch: 346, Error: 8.0
Epoch: 347, Error: 8.0
Epoch: 348, Error: 8.0
Epoch: 349, Error: 20.0
Epoch: 350, Error: 8.0
Epoch: 351, Error: 8.0
Epoch: 352, Error: 8.0
Epoch: 353, Error: 8.0
Epoch: 354, Error: 20.0
Epoch: 355, Error: 8.0
Epoch: 356, Error: 8.0
Epoch: 357, Error: 8.0
Epoch: 358, Error: 8.0
Epoch: 359, Error: 20.0
Epoch: 360, Error: 16.0
Epoch: 361, Error: 16.0
Epoch: 362, Error: 16.0
Epoch: 363, Error: 8.0
Epoch: 364, Error: 8.0
Epoch: 365, Error: 8.0
Epoch: 366, Error: 8.0
Epoch: 367, Error: 20.0
Epoch: 368, Error: 16.0
Epoch: 369, Error: 16.0
Epoch: 370, Error: 8.0
Epoch: 371, Error: 8.0
Epoch: 372, Error: 8.0
Epoch: 373, Error: 8.0
Epoch: 374, Error: 20.0
Epoch: 375, Error: 8.0
Epoch: 376, Error: 8.0
Epoch: 377, Error: 8.0
Epoch: 378, Error: 8.0
Epoch: 379, Error: 20.0
Epoch: 380, Error: 16.0
Epoch: 381, Error: 8.0
Epoch: 382, Error: 8.0
Epoch: 383, Error: 8.0
Epoch: 384, Error: 8.0
Epoch: 385, Error: 20.0
Epoch: 386, Error: 16.0
Epoch: 387, Error: 16.0
Epoch: 388, Error: 8.0
Epoch: 389, Error: 8.0
Epoch: 390, Error: 8.0
Epoch: 391, Error: 8.0
Epoch: 392, Error: 20.0
Epoch: 393, Error: 8.0
Epoch: 394, Error: 8.0
Epoch: 395, Error: 8.0
Epoch: 396, Error: 8.0
Epoch: 397, Error: 20.0
Epoch: 398, Error: 8.0
Epoch: 399, Error: 8.0
Epoch: 400, Error: 8.0
Epoch: 401, Error: 8.0
Epoch: 402, Error: 20.0
Epoch: 403, Error: 16.0
Epoch: 404, Error: 16.0
Epoch: 405, Error: 16.0
Epoch: 406, Error: 8.0
Epoch: 407, Error: 8.0
Epoch: 408, Error: 8.0
Epoch: 409, Error: 8.0
Epoch: 410, Error: 20.0
Epoch: 411, Error: 16.0
Epoch: 412, Error: 16.0
Epoch: 413, Error: 8.0
Epoch: 414, Error: 8.0
Epoch: 415, Error: 8.0
Epoch: 416, Error: 8.0
Epoch: 417, Error: 20.0
Epoch: 418, Error: 8.0
Epoch: 419, Error: 8.0
Epoch: 420, Error: 8.0
Epoch: 421, Error: 8.0
Epoch: 422, Error: 20.0
Epoch: 423, Error: 8.0
Epoch: 424, Error: 8.0
Epoch: 425, Error: 8.0
Epoch: 426, Error: 8.0
Epoch: 427, Error: 20.0
Epoch: 428, Error: 8.0
Epoch: 429, Error: 16.0
Epoch: 430, Error: 8.0
Epoch: 431, Error: 8.0
Epoch: 432, Error: 8.0
Epoch: 433, Error: 8.0
Epoch: 434, Error: 20.0
Epoch: 435, Error: 8.0
Epoch: 436, Error: 8.0
Epoch: 437, Error: 8.0
Epoch: 438, Error: 8.0
Epoch: 439, Error: 20.0
Epoch: 440, Error: 8.0
Epoch: 441, Error: 16.0
Epoch: 442, Error: 8.0
Epoch: 443, Error: 8.0
Epoch: 444, Error: 8.0
Epoch: 445, Error: 8.0
Epoch: 446, Error: 12.0
Epoch: 447, Error: 16.0
Epoch: 448, Error: 8.0
Epoch: 449, Error: 16.0
Epoch: 450, Error: 8.0
Epoch: 451, Error: 16.0
Epoch: 452, Error: 8.0
Epoch: 453, Error: 16.0
Epoch: 454, Error: 8.0
Epoch: 455, Error: 16.0
Epoch: 456, Error: 16.0
Epoch: 457, Error: 8.0
Epoch: 458, Error: 8.0
Epoch: 459, Error: 8.0
Epoch: 460, Error: 8.0
Epoch: 461, Error: 12.0
Epoch: 462, Error: 16.0
Epoch: 463, Error: 8.0
Epoch: 464, Error: 16.0
Epoch: 465, Error: 8.0
Epoch: 466, Error: 8.0
Epoch: 467, Error: 8.0
Epoch: 468, Error: 8.0
Epoch: 469, Error: 20.0
Epoch: 470, Error: 8.0
Epoch: 471, Error: 8.0
Epoch: 472, Error: 8.0
Epoch: 473, Error: 8.0
Epoch: 474, Error: 20.0
Epoch: 475, Error: 8.0
Epoch: 476, Error: 16.0
Epoch: 477, Error: 8.0
Epoch: 478, Error: 8.0
Epoch: 479, Error: 8.0
Epoch: 480, Error: 8.0
Epoch: 481, Error: 20.0
Epoch: 482, Error: 8.0
Epoch: 483, Error: 8.0
Epoch: 484, Error: 8.0
Epoch: 485, Error: 8.0
Epoch: 486, Error: 12.0
Epoch: 487, Error: 16.0
Epoch: 488, Error: 8.0
Epoch: 489, Error: 8.0
Epoch: 490, Error: 8.0
Epoch: 491, Error: 16.0
Epoch: 492, Error: 8.0
Epoch: 493, Error: 8.0
Epoch: 494, Error: 8.0
Epoch: 495, Error: 8.0
Epoch: 496, Error: 12.0
Epoch: 497, Error: 8.0
Epoch: 498, Error: 8.0
Epoch: 499, Error: 16.0
Epoch: 500, Error: 8.0
Epoch: 501, Error: 8.0
Epoch: 502, Error: 8.0
Epoch: 503, Error: 8.0
Epoch: 504, Error: 12.0
Epoch: 505, Error: 8.0
Epoch: 506, Error: 8.0
Epoch: 507, Error: 16.0
Epoch: 508, Error: 8.0
Epoch: 509, Error: 8.0
Epoch: 510, Error: 8.0
Epoch: 511, Error: 8.0
Epoch: 512, Error: 8.0
Epoch: 513, Error: 8.0
Epoch: 514, Error: 12.0
Epoch: 515, Error: 8.0
Epoch: 516, Error: 8.0
Epoch: 517, Error: 8.0
Epoch: 518, Error: 8.0
Epoch: 519, Error: 8.0
Epoch: 520, Error: 12.0
Epoch: 521, Error: 8.0
Epoch: 522, Error: 8.0
Epoch: 523, Error: 8.0
Epoch: 524, Error: 8.0
Epoch: 525, Error: 8.0
Epoch: 526, Error: 8.0
Epoch: 527, Error: 8.0
Epoch: 528, Error: 8.0
Epoch: 529, Error: 12.0
Epoch: 530, Error: 8.0
Epoch: 531, Error: 8.0
Epoch: 532, Error: 8.0
Epoch: 533, Error: 8.0
Epoch: 534, Error: 8.0
Epoch: 535, Error: 8.0
Epoch: 536, Error: 8.0
Epoch: 537, Error: 8.0
Epoch: 538, Error: 8.0
Epoch: 539, Error: 8.0
Epoch: 540, Error: 12.0
Epoch: 541, Error: 8.0
Epoch: 542, Error: 8.0
Epoch: 543, Error: 8.0
Epoch: 544, Error: 8.0
Epoch: 545, Error: 8.0
Epoch: 546, Error: 8.0
Epoch: 547, Error: 8.0
Epoch: 548, Error: 8.0
Epoch: 549, Error: 12.0
Epoch: 550, Error: 8.0
Epoch: 551, Error: 8.0
Epoch: 552, Error: 8.0
Epoch: 553, Error: 8.0
Epoch: 554, Error: 8.0
Epoch: 555, Error: 12.0
Epoch: 556, Error: 8.0
Epoch: 557, Error: 8.0
Epoch: 558, Error: 8.0
Epoch: 559, Error: 8.0
Epoch: 560, Error: 8.0
Epoch: 561, Error: 8.0
Epoch: 562, Error: 8.0
Epoch: 563, Error: 8.0
Epoch: 564, Error: 8.0
Epoch: 565, Error: 8.0
Epoch: 566, Error: 8.0
Epoch: 567, Error: 8.0
Epoch: 568, Error: 8.0
Epoch: 569, Error: 12.0
Epoch: 570, Error: 8.0
Epoch: 571, Error: 8.0
Epoch: 572, Error: 8.0
Epoch: 573, Error: 8.0
Epoch: 574, Error: 8.0
Epoch: 575, Error: 12.0
Epoch: 576, Error: 8.0
Epoch: 577, Error: 8.0
Epoch: 578, Error: 8.0
Epoch: 579, Error: 8.0
Epoch: 580, Error: 8.0
Epoch: 581, Error: 8.0
Epoch: 582, Error: 8.0
Epoch: 583, Error: 8.0
Epoch: 584, Error: 8.0
Epoch: 585, Error: 8.0
Epoch: 586, Error: 8.0
Epoch: 587, Error: 8.0
Epoch: 588, Error: 8.0
Epoch: 589, Error: 12.0
Epoch: 590, Error: 8.0
Epoch: 591, Error: 8.0
Epoch: 592, Error: 8.0
Epoch: 593, Error: 8.0
Epoch: 594, Error: 8.0
Epoch: 595, Error: 12.0
Epoch: 596, Error: 8.0
Epoch: 597, Error: 8.0
Epoch: 598, Error: 8.0
Epoch: 599, Error: 8.0
Epoch: 600, Error: 8.0
Epoch: 601, Error: 12.0
Epoch: 602, Error: 8.0
Epoch: 603, Error: 8.0
Epoch: 604, Error: 8.0
Epoch: 605, Error: 8.0
Epoch: 606, Error: 8.0
Epoch: 607, Error: 8.0
Epoch: 608, Error: 8.0
Epoch: 609, Error: 8.0
Epoch: 610, Error: 8.0
Epoch: 611, Error: 8.0
Epoch: 612, Error: 8.0
Epoch: 613, Error: 8.0
Epoch: 614, Error: 8.0
Epoch: 615, Error: 12.0
Epoch: 616, Error: 8.0
Epoch: 617, Error: 8.0
Epoch: 618, Error: 8.0
Epoch: 619, Error: 8.0
Epoch: 620, Error: 8.0
Epoch: 621, Error: 12.0
Epoch: 622, Error: 8.0
Epoch: 623, Error: 8.0
Epoch: 624, Error: 8.0
Epoch: 625, Error: 8.0
Epoch: 626, Error: 8.0
Epoch: 627, Error: 12.0
Epoch: 628, Error: 8.0
Epoch: 629, Error: 8.0
Epoch: 630, Error: 8.0
Epoch: 631, Error: 8.0
Epoch: 632, Error: 8.0
Epoch: 633, Error: 8.0
Epoch: 634, Error: 8.0
Epoch: 635, Error: 8.0
Epoch: 636, Error: 8.0
Epoch: 637, Error: 8.0
Epoch: 638, Error: 8.0
Epoch: 639, Error: 8.0
Epoch: 640, Error: 8.0
Epoch: 641, Error: 12.0
Epoch: 642, Error: 8.0
Epoch: 643, Error: 8.0
Epoch: 644, Error: 8.0
Epoch: 645, Error: 8.0
Epoch: 646, Error: 8.0
Epoch: 647, Error: 12.0
Epoch: 648, Error: 8.0
Epoch: 649, Error: 8.0
Epoch: 650, Error: 8.0
Epoch: 651, Error: 8.0
Epoch: 652, Error: 0.0
Epoch: 653, Error: 0.0
Epoch: 654, Error: 0.0
Epoch: 655, Error: 0.0
Epoch: 656, Error: 0.0
Epoch: 657, Error: 0.0
Epoch: 658, Error: 0.0
Epoch: 659, Error: 0.0
Epoch: 660, Error: 0.0
Epoch: 661, Error: 0.0
Epoch: 662, Error: 0.0
Epoch: 663, Error: 0.0
Epoch: 664, Error: 0.0
Epoch: 665, Error: 0.0
Epoch: 666, Error: 0.0
Epoch: 667, Error: 0.0
Epoch: 668, Error: 0.0
Epoch: 669, Error: 0.0
Epoch: 670, Error: 0.0
Epoch: 671, Error: 0.0
Epoch: 672, Error: 0.0
Epoch: 673, Error: 0.0
Epoch: 674, Error: 0.0
... (epochs 675–999 omitted; Error remains 0.0)
Epoch: 1000, Error: 0.0
In [ ]:
ppn.predict(X_test, y_test)
ppn.accuracy
Out[ ]:
1.0
In [ ]:
ppn.plot_2d(X, y, "Sepal length (cm)", "Petal length (cm)")
[Plot: perceptron decision regions in 2D, sepal length vs. petal length]

Training the model for 3D data¶

In [ ]:
ppn2 = Perceptron(input_length=X_train2.shape[1], eta=0.2, epochs=1000)
ppn2.train(X_train2, y_train2)
In [ ]:
ppn2.predict(X_test2, y_test2)
ppn2.accuracy
Out[ ]:
1.0
In [ ]:
ppn2.plot_3d(X, y, "Sepal length (cm)", "Sepal width (cm)", "Petal length (cm)")
[Plot: perceptron decision boundary in 3D over sepal length, sepal width, and petal length]

Regression¶

In [ ]:
# Set a random seed for reproducibility
np.random.seed(42)

Linear Regression¶

In [ ]:
# Generate synthetic linear data
X_linear = 2 * np.random.rand(100, 1)
y_linear = 4 + 3 * X_linear + np.random.randn(100, 1)  # y = 4 + 3x + noise

# Fit linear regression model
linear_model = LinearRegression()
linear_model.fit(X_linear, y_linear)
Out[ ]:
LinearRegression()
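As a sanity check, the fitted coefficients can be recovered in closed form from the normal equations, $\hat{\theta} = (X^\top X)^{-1} X^\top y$, without scikit-learn. The sketch below regenerates the same data with the same seed and solves the least-squares problem directly with `np.linalg.lstsq`; the estimates will be close to the true values (4, 3) but not exactly equal, because of the added noise.

```python
import numpy as np

# Regenerate the same synthetic data as above
np.random.seed(42)
X_linear = 2 * np.random.rand(100, 1)
y_linear = 4 + 3 * X_linear + np.random.randn(100, 1)

# Augment with a bias column and solve the least-squares problem
Xb = np.hstack([np.ones((100, 1)), X_linear])
theta, *_ = np.linalg.lstsq(Xb, y_linear, rcond=None)

print(theta.ravel())  # [intercept, slope], close to the true [4, 3]
```

Because scikit-learn's `LinearRegression` solves the same least-squares problem, `theta` matches `linear_model.intercept_` and `linear_model.coef_` from the cell above.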
In [ ]:
# Predict values
X_new = np.array([[0], [2]])
y_predict = linear_model.predict(X_new)

# Plotting Linear Regression
plt.figure(figsize=(12, 5))

plt.subplot(1, 2, 1)
plt.scatter(X_linear, y_linear, color='blue', label='Data Points')
plt.plot(X_new, y_predict, color='red', linewidth=2, label='Fitted Line')
plt.title('Linear Regression')
plt.xlabel('X')
plt.ylabel('y')
plt.legend()
Out[ ]:
<matplotlib.legend.Legend at 0x79bdc0291c30>
[Plot: synthetic data points with the fitted regression line]

Logistic Regression¶

In [ ]:
# Generate synthetic data for logistic regression
X_logistic = np.random.randn(100, 2)  # Two features
y_logistic = (X_logistic[:, 0] + X_logistic[:, 1] > 0).astype(int)  # Binary labels

# Fit logistic regression model
logistic_model = LogisticRegression()
logistic_model.fit(X_logistic, y_logistic)
Out[ ]:
LogisticRegression()
In [ ]:
# Plotting Logistic Regression
plt.subplot(1, 2, 2)
plt.scatter(X_logistic[:, 0], X_logistic[:, 1], c=y_logistic, cmap='bwr', edgecolor='k', s=50)
plt.title('Logistic Regression')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')

# Create a grid to plot decision boundary
xx, yy = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
Z = logistic_model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.3, cmap='bwr')

plt.tight_layout()
plt.show()
[Plot: logistic regression scatter with shaded decision regions]

5. Final Evaluation¶

After training, we evaluate the model's predictions against the ground truth using standard classification metrics and an ROC curve.

Theory

In machine learning, evaluating the performance of your models is crucial to understand how well they make predictions. Various metrics provide insights into different aspects of model performance. Below are some commonly used evaluation metrics:

  1. Accuracy
  • Definition: Accuracy is the ratio of correctly predicted instances to the total instances in the dataset.
  • Formula:
    $$ \text{Accuracy} = \frac{\text{True Positives} + \text{True Negatives}}{\text{Total Instances}} $$
  • Usage: Accuracy is useful when the classes are balanced; however, it can be misleading in the presence of class imbalance.
  2. Precision
  • Definition: Precision indicates the quality of the positive predictions. It measures the proportion of true positive predictions to the total predicted positives.
  • Formula:
    $$ \text{Precision} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}} $$
  • Usage: Precision is crucial when the cost of false positives is high. For example, in email spam detection, a false positive (legitimate email marked as spam) may be more damaging than missing a spam email.
  3. Recall (Sensitivity)
  • Definition: Recall measures the model's ability to identify all relevant instances. It represents the proportion of true positives to the total actual positives.
  • Formula:
    $$ \text{Recall} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Negatives}} $$
  • Usage: Recall is important when the cost of false negatives is high. For example, in medical diagnoses, failing to identify a disease (false negative) can have serious consequences.
  4. F1-Score
  • Definition: The F1-score is the harmonic mean of precision and recall. It provides a balance between the two metrics and is particularly useful for imbalanced classes.
  • Formula:
    $$ \text{F1-Score} = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} $$
  • Usage: The F1-score is a better measure than accuracy for imbalanced datasets, as it accounts for both false positives and false negatives.
  5. Confusion Matrix
  • Definition: A confusion matrix is a table used to describe the performance of a classification model. It shows the counts of true positive, false positive, true negative, and false negative predictions.

  • Structure:

    |                 | Predicted Positive  | Predicted Negative  |
    |-----------------|---------------------|---------------------|
    | Actual Positive | True Positive (TP)  | False Negative (FN) |
    | Actual Negative | False Positive (FP) | True Negative (TN)  |

  • Usage: The confusion matrix provides insights into which classes are being confused by the model, allowing for better error analysis.

  6. ROC Curve (Receiver Operating Characteristic Curve)
  • Definition: The ROC curve is a graphical representation of a classifier's performance across different thresholds. It plots the true positive rate (sensitivity) against the false positive rate (1-specificity).
  • Area Under the Curve (AUC): The area under the ROC curve (AUC) provides a single measure of performance. AUC values range from 0 to 1, with 1 being a perfect model and 0.5 indicating a model with no discriminative power.
  • Usage: The ROC curve is useful for evaluating binary classifiers, particularly when the class distribution is imbalanced. It helps visualize the trade-off between sensitivity and specificity.
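The four formulas above can be verified with a tiny worked example. Using hypothetical confusion-matrix counts (not taken from the model trained below), each metric reduces to simple arithmetic; note that the F1-score also simplifies to the equivalent form 2·TP / (2·TP + FP + FN).

```python
# Hypothetical counts for illustration only
tp, fp, fn, tn = 50, 10, 5, 35

accuracy  = (tp + tn) / (tp + tn + fp + fn)   # 85 / 100 = 0.85
precision = tp / (tp + fp)                    # 50 / 60  ≈ 0.833
recall    = tp / (tp + fn)                    # 50 / 55  ≈ 0.909
f1        = 2 * precision * recall / (precision + recall)
# f1 equals 2*tp / (2*tp + fp + fn) = 100 / 115 ≈ 0.870

print(f"{accuracy:.3f} {precision:.3f} {recall:.3f} {f1:.3f}")
```

Notice that precision and recall pull in opposite directions here (high recall, lower precision), and the harmonic mean F1 lands between them, closer to the smaller of the two.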

Code¶

In [ ]:
# Create a synthetic binary classification dataset
X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, random_state=42)

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Train a Logistic Regression model
model = LogisticRegression()
model.fit(X_train, y_train)
Out[ ]:
LogisticRegression()
In [ ]:
# Make predictions
y_pred = model.predict(X_test)

# Calculate evaluation metrics
accuracy = accuracy_score(y_test, y_pred)
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)

# Calculate ROC curve and AUC
fpr, tpr, thresholds = roc_curve(y_test, model.predict_proba(X_test)[:, 1])
roc_auc = auc(fpr, tpr)

# Print evaluation metrics
print("Evaluation Metrics:")
print(f"Accuracy: {accuracy:.2f}")
print(f"Precision: {precision:.2f}")
print(f"Recall: {recall:.2f}")
print(f"F1 Score: {f1:.2f}")
print("\nConfusion Matrix:")
print(conf_matrix)
Evaluation Metrics:
Accuracy: 0.85
Precision: 0.88
Recall: 0.83
F1 Score: 0.85

Confusion Matrix:
[[127  18]
 [ 27 128]]
In [ ]:
# Plot ROC curve
plt.figure(figsize=(10, 6))
plt.plot(fpr, tpr, color='blue', label=f'ROC Curve (AUC = {roc_auc:.2f})')
plt.plot([0, 1], [0, 1], color='red', linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic (ROC) Curve')
plt.legend(loc='lower right')
plt.show()
[Plot: ROC curve with diagonal reference line]

Hyperparameter Optimization¶

Why is Hyperparameter Optimization Needed?

In machine learning, hyperparameters are the parameters that are set before the learning process begins. These parameters can significantly affect the performance of the model. For instance, in a decision tree, hyperparameters like the maximum depth of the tree, minimum samples split, and others can lead to different performance outcomes.

  • Model Performance: Hyperparameter optimization helps to find the best set of hyperparameters that improve the model's performance on unseen data.
  • Avoid Overfitting: Proper tuning of hyperparameters can prevent overfitting and ensure the model generalizes well to new data.
  • Efficient Training: Well-chosen hyperparameters can reduce training time and resources, leading to faster convergence.
  • Automation: Finding the optimal hyperparameters often requires extensive experimentation and can be time-consuming, so automating the search is crucial for efficient model training.

Optuna is an open-source hyperparameter optimization framework designed for machine learning. It automates the process of searching for the best hyperparameters using various optimization algorithms.

In [ ]:
# Load dataset
iris = load_iris()
X, y = iris.data, iris.target

# Split dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
In [ ]:
# Define the objective function for optimization
def objective(trial):
    # Suggest hyperparameters
    n_estimators = trial.suggest_int('n_estimators', 50, 200)
    max_depth = trial.suggest_int('max_depth', 1, 20)
    min_samples_split = trial.suggest_int('min_samples_split', 2, 10)

    # Create and train the model
    model = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth, min_samples_split=min_samples_split, random_state=42)
    model.fit(X_train, y_train)

    # Make predictions and evaluate the model
    y_pred = model.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)

    return accuracy
In [ ]:
# Create a study object and optimize the objective function
study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=10)
[I 2024-10-31 17:20:44,372] A new study created in memory with name: no-name-3cccf89d-6668-46d6-9cf1-5665927399bb
[I 2024-10-31 17:20:45,066] Trial 0 finished with value: 1.0 and parameters: {'n_estimators': 95, 'max_depth': 1, 'min_samples_split': 5}. Best is trial 0 with value: 1.0.
[I 2024-10-31 17:20:46,123] Trial 1 finished with value: 1.0 and parameters: {'n_estimators': 186, 'max_depth': 6, 'min_samples_split': 3}. Best is trial 0 with value: 1.0.
[I 2024-10-31 17:20:47,692] Trial 2 finished with value: 1.0 and parameters: {'n_estimators': 197, 'max_depth': 1, 'min_samples_split': 8}. Best is trial 0 with value: 1.0.
[I 2024-10-31 17:20:48,363] Trial 3 finished with value: 1.0 and parameters: {'n_estimators': 193, 'max_depth': 10, 'min_samples_split': 2}. Best is trial 0 with value: 1.0.
[I 2024-10-31 17:20:48,815] Trial 4 finished with value: 1.0 and parameters: {'n_estimators': 140, 'max_depth': 17, 'min_samples_split': 6}. Best is trial 0 with value: 1.0.
[I 2024-10-31 17:20:49,260] Trial 5 finished with value: 1.0 and parameters: {'n_estimators': 117, 'max_depth': 17, 'min_samples_split': 7}. Best is trial 0 with value: 1.0.
[I 2024-10-31 17:20:49,950] Trial 6 finished with value: 1.0 and parameters: {'n_estimators': 152, 'max_depth': 12, 'min_samples_split': 4}. Best is trial 0 with value: 1.0.
[I 2024-10-31 17:20:50,332] Trial 7 finished with value: 1.0 and parameters: {'n_estimators': 98, 'max_depth': 12, 'min_samples_split': 3}. Best is trial 0 with value: 1.0.
[I 2024-10-31 17:20:50,601] Trial 8 finished with value: 1.0 and parameters: {'n_estimators': 85, 'max_depth': 1, 'min_samples_split': 6}. Best is trial 0 with value: 1.0.
[I 2024-10-31 17:20:51,043] Trial 9 finished with value: 1.0 and parameters: {'n_estimators': 124, 'max_depth': 13, 'min_samples_split': 4}. Best is trial 0 with value: 1.0.
In [ ]:
# Print the best hyperparameters
print("Best hyperparameters: ", study.best_params)
print("Best accuracy: ", study.best_value)
Best hyperparameters:  {'n_estimators': 95, 'max_depth': 1, 'min_samples_split': 5}
Best accuracy:  1.0
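Once the study finishes, the winning configuration is typically refit on the training data to produce the final model. The sketch below hard-codes the best parameters printed above so it runs standalone; in a live session you would pass `study.best_params` directly.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Same data and split as the Optuna study above
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

# Hard-coded from the study output; use study.best_params in a live session
best_params = {'n_estimators': 95, 'max_depth': 1, 'min_samples_split': 5}

final_model = RandomForestClassifier(**best_params, random_state=42)
final_model.fit(X_train, y_train)

acc = accuracy_score(y_test, final_model.predict(X_test))
print(f"Test accuracy with best hyperparameters: {acc:.3f}")
```

Note that because every trial here scored 1.0 (Iris is an easy dataset), the "best" trial is simply the first one; on harder problems the trial accuracies would differ and more trials would be worthwhile.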