Exercises Guide
Labs
Labs are ungraded and serve as the hands-on companion to each lecture unit. Each lab is a short, focused session that helps you build intuition for the theoretical concepts introduced in class by working directly with code. You will run, visualize, and experiment with the ideas so that by the time you reach a homework assignment, the underlying mechanics feel familiar.
Labs marked as TBD have tentative topics that may be added, removed, or reordered as the course progresses.
Lab 1: Introduction to Machine Learning
Lab 2: Linear Models
- Lab 2-1: Linear Regression via Gradient Descent
- Lab 2-2: Logistic Regression via Gradient Descent
- Lab 2-3: Classification Metrics and ROC Curves
- Lab 2-4: Softmax Regression via Gradient Descent
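To give a flavor of what Lab 2-1 covers, here is a minimal sketch of linear regression fit by gradient descent in NumPy. The data, learning rate, and step count below are illustrative choices, not taken from the lab itself:

```python
import numpy as np

# Synthetic data: y = 3x + 2 plus a little noise (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * X[:, 0] + 2.0 + 0.1 * rng.standard_normal(100)

w, b = 0.0, 0.0      # parameters, initialized at zero
lr = 0.1             # learning rate
for _ in range(500):
    y_hat = w * X[:, 0] + b
    err = y_hat - y
    # Gradients of the mean squared error with respect to w and b
    grad_w = 2.0 * np.mean(err * X[:, 0])
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # should approach the true slope 3 and intercept 2
```

The lab explores the same loop with visualizations of the loss surface and the effect of the learning rate.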
Lab 3: Multi-Layer Perceptron (TBD)
- Lab 3-1: MLP Forward/Backward Pass and Numerical Gradient Checking
- Lab 3-2: Vanishing Gradient: Sigmoid vs. ReLU
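The core idea behind Lab 3-2 fits in a few lines: the sigmoid's derivative never exceeds 0.25, so gradients shrink multiplicatively through stacked sigmoid layers. The 10-layer chain below is a hypothetical best case for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# d(sigmoid)/dx = s * (1 - s), maximized at x = 0 where it equals 0.25
s = sigmoid(0.0)
local_grad = s * (1 - s)
print(local_grad)        # 0.25

# Through 10 stacked sigmoid layers, even the best case multiplies to 0.25**10
print(local_grad ** 10)  # ~9.5e-7: the gradient has all but vanished
```

ReLU avoids this because its derivative is exactly 1 on the active side, which is why the lab contrasts the two.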
Lab 4: Model Validation (TBD)
- Lab 4-1: From NumPy to PyTorch (Dataset, DataLoader, TensorBoard)
- Lab 4-2: Autograd and Automatic Differentiation
- Lab 4-3: K-Fold Cross-Validation with PyTorch DataLoader
- Lab 4-4: Custom Dataset and DataLoader
- Lab 4-5: Data Leakage Demo
Lab 5: Model Generalization (TBD)
- Lab 5-1: Overfitting and Underfitting via MLP Capacity
- Lab 5-2: Regularization (Dropout, Weight Decay, BatchNorm)
- Lab 5-3: Early Stopping and Model Checkpointing
Lab 6: Convolutional Neural Networks (TBD)
- Lab 6-1: CNN Architecture Design and Parameter Counting
- Lab 6-2: Effect of Kernel Size, Stride, and Padding on Feature Maps
- Lab 6-3: Visualizing Learned First-Layer Filters
Lab 8: Recurrent Neural Networks (TBD)
- Lab 8-1: RNN vs. GRU vs. LSTM on Variable-Length Sequences
- Lab 8-2: Character-Level Language Model and Text Generation
- Lab 8-3: Gradient Norm Tracking and Vanishing Gradients in RNNs
Lab 9: Attention Mechanisms (TBD)
- Lab 9-1: Self-Attention from Scratch with NumPy
- Lab 9-2: Vision Transformer Input Pipeline (Patch Embedding and Positional Encoding)
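As a preview of Lab 9-1, a single-head scaled dot-product self-attention layer can be sketched in a few lines of NumPy. The weight shapes and dimensions here are illustrative, not the lab's actual configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # (seq_len, d_k)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))              # 5 tokens, d_model = 8
Wq, Wk, Wv = (rng.standard_normal((8, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```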
Lab 10: Generative Models (TBD)
- Lab 10-1: Autoencoder Latent Space Visualization on MNIST
- Lab 10-2: VAE Latent Space Interpolation
- Lab 10-3: GAN Training Dynamics
Lab 11: Representations (TBD)
- Lab 11-1: Fine-Tuning vs. Feature Extraction with Pretrained ResNet
- Lab 11-2: High-Dimensional Feature Visualization and Clustering
- Lab 11-3: Cross-Entropy Loss vs. Triplet Loss Feature Space
Lab 12: Support Vector Machines and Kernels (TBD)
- Lab 12-1: Kernel Visualization and Hyperparameter Effects (C and gamma)
Lab 13: Decision Trees (TBD)
- Lab 13-1: Decision Tree Implementation from Scratch (Gini/Entropy Split)
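The split criterion at the heart of Lab 13-1 is simple enough to sketch here; the function below is a hypothetical NumPy implementation of Gini impurity, not the lab's code:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array: 1 - sum over classes of p_k**2."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

print(gini(np.array([0, 0, 1, 1])))  # 0.5: maximally impure binary node
print(gini(np.array([1, 1, 1, 1])))  # 0.0: pure node, nothing to split
```

A tree grower evaluates candidate splits by the weighted impurity of the resulting child nodes and picks the split that reduces it most.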
Lab 14: Ensemble Methods (TBD)
- Lab 14-1: Bagging vs. Boosting: Bias-Variance Tradeoff and Feature Importance
Lab 15: Bayesian Learning (TBD)
- Lab 15-1: Naive Bayes Spam Classifier from Scratch with NumPy
What You Will Receive
For each lab you will receive a single Jupyter notebook:
labXX.ipynb: A guided notebook with explanations and complete, runnable code for you to explore and experiment with.
Some labs also include small data files or image assets placed in a shared data/ folder at the top level of the course repository.
Workflow
Step 1: Open the Lab Notebook
Launch the notebook using JupyterLab or Google Colab. Each lab notebook opens with a brief motivation section explaining what you are about to explore and why it matters in the context of machine learning.
Always run notebook cells from top to bottom. If you skip a cell or run them out of order, later cells may fail because variables or imports from earlier cells are missing.
Step 2: Run and Observe
Unlike the homework assignments, labs do not require you to write code from scratch or fill in TODO blocks. Instead, complete and working code is provided for you. Your primary task is to run the cells, read the accompanying markdown explanations, and deeply understand how the mathematical equations from the lecture translate into NumPy operations or PyTorch layers. Pay close attention to how the data shapes transform at each step.
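For example, a single MLP layer written in NumPy makes the shape transformations explicit; the dimensions below are illustrative, not from any particular lab:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 784))        # batch of 32 flattened 28x28 images
W1 = rng.standard_normal((784, 128)) * 0.01
b1 = np.zeros(128)

# Hidden layer: (32, 784) @ (784, 128) + (128,) broadcasts to (32, 128)
H = np.maximum(0, X @ W1 + b1)            # ReLU activation
print(X.shape, "->", H.shape)             # (32, 784) -> (32, 128)
```

Tracing shapes like this at every cell is exactly the habit the labs aim to build.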
Step 3: Experiment Freely
Labs are exploratory playgrounds. Once you have run the provided code and observed its default output, you are highly encouraged to tinker with it.
Try changing hyperparameters (like learning rates, batch sizes, or network depth), altering architectures, or even breaking things on purpose to see what kind of error messages PyTorch throws. Tinkering freely is one of the best ways to build intuition before tackling the homework.
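For instance, one easy thing to break on purpose is a layer's input dimension; PyTorch raises a shape-mismatch error whose wording is worth learning to read. The sizes below are made up for illustration:

```python
import torch
import torch.nn as nn

layer = nn.Linear(in_features=128, out_features=10)
x = torch.randn(32, 64)  # wrong on purpose: the layer expects 128 input features

try:
    layer(x)
except RuntimeError as e:
    # The message names the mismatched matrix shapes, e.g.
    # "mat1 and mat2 shapes cannot be multiplied (32x64 and 128x10)"
    print(type(e).__name__, e)
```

Reading the shapes in the message backwards to the offending line is a skill that pays off constantly in the homework.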