From 3c0a72c8bd3c7099cab457d18ca75e05c7eb9cb8 Mon Sep 17 00:00:00 2001 From: anasbekheit Date: Sun, 3 Dec 2023 01:38:02 +0200 Subject: [PATCH] [BUG-FIX][Intro-prac] Swapped jnp.equal with jnp.isclose. --- practicals/Introduction_to_ML_using_JAX.ipynb | 7856 +++++++++-------- 1 file changed, 3933 insertions(+), 3923 deletions(-) diff --git a/practicals/Introduction_to_ML_using_JAX.ipynb b/practicals/Introduction_to_ML_using_JAX.ipynb index 004f824..38175d8 100644 --- a/practicals/Introduction_to_ML_using_JAX.ipynb +++ b/practicals/Introduction_to_ML_using_JAX.ipynb @@ -1,3924 +1,3934 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "m2s4kN_QPQVe" - }, - "source": [ - "# **Intro to ML using JAX**\n", - "\n", - "\n", - "\n", - "\n", - "\"Open\n", - "\n", - "ยฉ Deep Learning Indaba 2022. Apache License 2.0.\n", - "\n", - "**Authors:** Kale-ab Tessera\n", - "\n", - "**Reviewers:** Javier Antoran, James Allingham, Ruan van der Merwe, \n", - "Sebastian Bodenstein, Laurence Midgley, Joao Guilherme and Elan van Biljon. \n", - "\n", - "**Introduction:** \n", - "\n", - "In this tutorial, we will learn about JAX, a new machine learning framework that has taken deep learning research by storm! JAX is praised for its speed, and we will learn how to achieve these speedups, using core concepts in JAX, such as automatic differentiation (`grad`), parallelization (`pmap`), vectorization (`vmap`), just-in-time compilation (`jit`), and more. We will then use what we have learned to implement Linear Regression effectively while learning some of the fundamentals of optimization.\n", - "\n", - "**Topics:** \n", - "\n", - "Content: `Numerical Computing` , `Supervised Learning` \n", - "Level: `Beginner`\n", - "\n", - "\n", - "**Aims/Learning Objectives:**\n", - "\n", - "- Learn the basics of JAX and its similarities and differences with NumPy.\n", - "- Learn how to use JAX transforms - `jit`, `grad`, `vmap`, and `pmap`.\n", - "- Learn the basics of optimization and how to implement effective training procedures using [Haiku](https://github.com/deepmind/dm-haiku) and [Optax](https://github.com/deepmind/optax). \n", - "\n", - "**Prerequisites:**\n", - "\n", - "- Basic knowledge of [NumPy](https://github.com/numpy/numpy).\n", - "- Basic knowledge of [functional programming](https://en.wikipedia.org/wiki/Functional_programming). 
\n", - "\n", - "**Outline:** \n", - "\n", - ">[Part 1 - Basics of JAX](#scrollTo=Enx0WUr8tIPf)\n", - "\n", - ">>[1.1 From NumPy โžก Jax - Beginner](#scrollTo=-ZUp8i37dFbU)\n", - "\n", - ">>>[JAX and NumPy - Similarities ๐Ÿค](#scrollTo=CbOEYsWQ6tHv)\n", - "\n", - ">>>[JAX and NumPy - Differences โŒ](#scrollTo=lg4__l4A7yqc)\n", - "\n", - ">>[1.2 Acceleration in JAX ๐Ÿš€ - Beginner, Intermediate, Advanced](#scrollTo=TSj972IWxTo2)\n", - "\n", - ">>>[JAX is backend Agnostic - Beginner](#scrollTo=_bQ9QqT-yKbs)\n", - "\n", - ">>>[JAX Transformations - Beginner, Intermediate, Advanced](#scrollTo=JM_08mXEBRIK)\n", - "\n", - ">>>>[Basic JAX Transformations - jit and grad - Beginner](#scrollTo=cOGuGWtLmP7n)\n", - "\n", - ">>>>[Pure Functions ๐Ÿ’ก - Beginner](#scrollTo=fT56qxXzTVKZ)\n", - "\n", - ">>>>[More Advanced Transforms - vmap and pmap - Intermediate, Advanced](#scrollTo=tvBzh8wiGuLf)\n", - "\n", - "\n", - ">[Part 2 - From Linear to Non-Linear Regression](#scrollTo=aB0503xgmSFh)\n", - "\n", - ">>[2.1 Linear Regression - ๐Ÿ“ˆ Beginner](#scrollTo=XrWSN-zaWAhJ)\n", - "\n", - ">>>[Regression Toy Example - Housing Prices](#scrollTo=AcyM6XRj1cDz)\n", - "\n", - ">>>[Optimization by Trial-and-Error](#scrollTo=vnoEkgimTQ6V)\n", - "\n", - ">>>[Loss Function](#scrollTo=oLGAp30ZDnJ5)\n", - "\n", - ">>>[Gradient descent: No more tuning parameters by hand!](#scrollTo=fg5Hi4783Gus)\n", - "\n", - "\n", - ">>[2.2 From Linear to Polynomial Regression - Intermediate](#scrollTo=Ao93xuXGJhLh)\n", - "\n", - ">>>[Under-fitting](#scrollTo=CcXjMKi0Znr6)\n", - "\n", - ">>>[Over-fitting](#scrollTo=uwwajy30U9fX)\n", - "\n", - ">>[2.3 Training Models Using Haiku and Optax - Beginner](#scrollTo=sAtms17jtCOU)\n", - "\n", - ">>>[Haiku](#scrollTo=exuVety_bFhQ)\n", - "\n", - ">>>[Optax](#scrollTo=_3h034w5bWn6)\n", - "\n", - ">>>[Full Training Loop Using Haiku and Optax ๐Ÿง™](#scrollTo=7IaqVuRPg3ER)\n", - "\n", - ">[Conclusion](#scrollTo=fV3YG7QOZD-B)\n", - "\n", - ">[Appendix:](#scrollTo=XrRoSqlxfi7f)\n", - "\n", - ">>[Derivation of partial derivatives for exercise 2.4.](#scrollTo=9OH9H7ndfuyQ)\n", - "\n", - ">[Feedback](#scrollTo=o1ndpYE50BpG)\n", - "\n", - "\n", - "**Before you start:**\n", - "\n", - "For this practical, you will need to use a GPU to speed up training. To do this, go to the \"Runtime\" menu in Colab, select \"Change runtime type\" and then in the popup menu, choose \"GPU\" in the \"Hardware accelerator\" box.\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "6EqhIg1odqg0" - }, - "source": [ - "## Installation and Imports" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "4boGA9rYdt9l", - "cellView": "form" - }, - "outputs": [], - "source": [ - "## Install and import anything required. Capture hides the output from the cell.\n", - "# @title Install and import required packages. 
(Run Cell)\n", - "\n", - "import subprocess\n", - "import os\n", - "\n", - "# Based on https://stackoverflow.com/questions/67504079/how-to-check-if-an-nvidia-gpu-is-available-on-my-system\n", - "try:\n", - " subprocess.check_output('nvidia-smi')\n", - " print(\"a GPU is connected.\")\n", - "except Exception: \n", - " # TPU or CPU\n", - " if \"COLAB_TPU_ADDR\" in os.environ and os.environ[\"COLAB_TPU_ADDR\"]:\n", - " print(\"A TPU is connected.\")\n", - " import jax.tools.colab_tpu\n", - " jax.tools.colab_tpu.setup_tpu()\n", - " else:\n", - " print(\"Only CPU accelerator is connected.\")\n", - " # x8 cpu devices - number of (emulated) host devices\n", - " os.environ[\"XLA_FLAGS\"] = \"--xla_force_host_platform_device_count=8\"\n", - "import jax\n", - "import jax.numpy as jnp\n", - "from jax import grad, jit, vmap, pmap\n", - "\n", - "import matplotlib.pyplot as plt\n", - "import numpy as np" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "YQe1CfDyrkdL", - "cellView": "form" - }, - "outputs": [], - "source": [ - "# @title Helper Functions. (Run Cell)\n", - "import copy\n", - "from typing import Dict\n", - "\n", - "\n", - "def plot_performance(data: Dict, title: str):\n", - " runs = list(data.keys())\n", - " time = list(data.values())\n", - "\n", - " # creating the bar plot\n", - " plt.bar(runs, time, width=0.35)\n", - "\n", - " plt.xlabel(\"Implementation\")\n", - " plt.ylabel(\"Average time taken (in s)\")\n", - " plt.title(title)\n", - " plt.show()\n", - "\n", - " best_perf_key = min(data, key=data.get)\n", - " all_runs_key = copy.copy(runs)\n", - "\n", - " # all_runs_key_except_best\n", - " all_runs_key.remove(best_perf_key)\n", - "\n", - " for k in all_runs_key:\n", - " print(\n", - " f\"{best_perf_key} was {round((data[k]/data[best_perf_key]),2)} times faster than {k} !!!\"\n", - " )" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "yFzjRHUsUQqq", - "cellView": "form" - }, - "outputs": [], - "source": [ - "# @title Check the device you are using (Run Cell)\n", - "print(f\"Num devices: {jax.device_count()}\")\n", - "print(f\" Devices: {jax.devices()}\")" - ] - }, - { - "cell_type": "markdown", - "source": [ - "Text Cell below creates a LaTeX Macro to be used in math equations. " - ], - "metadata": { - "id": "0RGo-mOedEV8" - } - }, - { - "cell_type": "markdown", - "source": [ - "$$\n", - "\\newcommand{\\because}[1]{&& \\triangleright \\textrm{#1}}\n", - "$$" - ], - "metadata": { - "id": "blMNBku0dB8h" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Enx0WUr8tIPf" - }, - "source": [ - "# **Part 1 - Basics of JAX**\n", - "\n", - "**What is JAX?**\n", - "\n", - "[JAX](https://jax.readthedocs.io/en/latest/index.html) is a python package for writing composable numerical transformations. It leverages [Autograd](https://github.com/hips/autograd) and [XLA](https://www.tensorflow.org/xla) (Accelerated Linear Algebra), to achieve high-performance numerical computing, which is particularly relevant in machine learning.\n", - "\n", - "It provides functionality such as automatic differentiation (`grad`), parallelization (`pmap`), vectorization (`vmap`), just-in-time compilation (`jit`), and more. These transforms operate on [pure functions](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#pure-functions), so JAX encourages a **functional programming** paradigm. Furthermore, the use of XLA allows one to target different kinds of accelerators (CPU, GPU and TPU), without code changes. 
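\n",
- "\n",
- "A minimal added sketch (toy names `loss`, `w` and `x` are chosen for illustration, not taken from this notebook): the transforms compose freely on a pure function, and the resulting call runs unchanged on CPU, GPU or TPU.\n",
- "\n",
- "```python\n",
- "import jax\n",
- "import jax.numpy as jnp\n",
- "\n",
- "def loss(w, x):\n",
- "    # a tiny pure function: all inputs via parameters, result via return\n",
- "    return jnp.sum((x * w) ** 2)\n",
- "\n",
- "w = jnp.array(2.0)\n",
- "x = jnp.arange(3.0)\n",
- "\n",
- "grad_loss = jax.jit(jax.grad(loss))  # grad and jit compose\n",
- "print(grad_loss(w, x))               # the same call works on any backend\n",
- "```\n",
- "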
\n", - "\n", - "JAX is different from frameworks such as PyTorch or Tensorflow (TF). It is more low-level and minimalistic. JAX simply offers a set of primitives (simple operations) like `jit` and `vmap`, and relies on other libraries for other things e.g. using the data loader from PyTorch or TF. Due to JAX's simplicity, it is commonly used with higher-level neural network libraries such as [Haiku](https://github.com/deepmind/dm-haiku) or [Flax](https://github.com/google/flax). (Imagine writing complicated architectures using a NumPy-like interface alone! ๐Ÿ˜ฎ ) " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "-ZUp8i37dFbU" - }, - "source": [ - "## **1.1 From NumPy โžก Jax** - `Beginner`\n", - "\n", - " " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "CbOEYsWQ6tHv" - }, - "source": [ - "### JAX and NumPy - Similarities ๐Ÿค\n", - "\n", - "The main similarity between JAX and NumPy is that they share a similar interface and often, JAX and NumPy arrays can be used interchangeably. " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "McStJC-l3qsG" - }, - "source": [ - "#### Similiar Interface" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "KbYfoaujT2F7" - }, - "source": [ - "Let's plot the sine functions using NumPy." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "sgRLq58OTz1t" - }, - "outputs": [], - "source": [ - "# 100 linearly spaced numbers from -np.pi to np.pi\n", - "x = np.linspace(-np.pi, np.pi, 100)\n", - "\n", - "# the function, which is y = sin(x) here\n", - "y = np.sin(x)\n", - "\n", - "# plot the functions\n", - "plt.plot(x, y, \"b\", label=\"y=sin(x)\")\n", - "\n", - "plt.legend(loc=\"upper left\")\n", - "\n", - "# show the plot\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "XCEnlC-PU3ps" - }, - "source": [ - "Now using jax. We already imported `jax.numpy` as `jnp` in the first cell." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "kRQf2mNRTlt3" - }, - "outputs": [], - "source": [ - "# 100 linearly spaced numbers from -jnp.pi to jnp.pi\n", - "x = jnp.linspace(-jnp.pi, jnp.pi, 100)\n", - "\n", - "# the function, which is y = sin(x) here\n", - "y = jnp.sin(x)\n", - "\n", - "# plot the functions\n", - "plt.plot(x, y, \"b\", label=\"y=sin(x)\")\n", - "\n", - "plt.legend(loc=\"upper left\")\n", - "\n", - "# show the plot\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "wuNscwHeV_dn" - }, - "source": [ - "**Exercise 1.1 - Code Task:** Can you plot the cosine function using `jnp`?" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "5svZFPUCQNsG" - }, - "outputs": [], - "source": [ - "# Plot Cosine using jnp. 
(UPDATE ME)\n", - "\n", - "# 100 linearly spaced numbers\n", - "# UPDATE ME\n", - "x = ...\n", - "\n", - "# UPDATE ME\n", - "y = ...\n", - "\n", - "\n", - "# plot the functions\n", - "plt.plot(x, y, \"b\", label=\"y=cos(x)\")\n", - "\n", - "plt.legend(loc=\"upper left\")\n", - "\n", - "# show the plot\n", - "plt.show()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "m4AVrGzy6JWR", - "cellView": "form" - }, - "outputs": [], - "source": [ - "# @title Answer to code task (Try not to peek until you've given it a good try!')\n", - "# 100 linearly spaced numbers\n", - "x = jnp.linspace(-jnp.pi, jnp.pi, 100)\n", - "\n", - "y = jnp.cos(x)\n", - "\n", - "# plot the functions\n", - "plt.plot(x, y, \"b\", label=\"y=cos(x)\")\n", - "\n", - "plt.legend(loc=\"upper left\")\n", - "\n", - "# show the plot\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "lg4__l4A7yqc" - }, - "source": [ - "### JAX and NumPy - Differences โŒ \n", - "\n", - "Although JAX and NumPy have some similarities, they do have some important differences:\n", - "- Jax arrays are **immutable** (they can't be modified after they are created).\n", - "- The way they handle **randomness** -- JAX handles randomness explicitly." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "dPbOnhE4ZSTi" - }, - "source": [ - "#### JAX arrays are immutable, while NumPy arrays are not.\n", - "\n", - "JAX and NumPy arrays are often interchangeable, **but** Jax arrays are **immutable** (they can't be modified after they are created). Allowing mutations makes transforms difficult and violates conditions for [pure functions](https://en.wikipedia.org/wiki/Pure_function).\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "source": [ - "Let's see this in practice by changing the number at the beginning of an array. " - ], - "metadata": { - "id": "Vdfb1wtd-GkF" - } - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "7r-Los6YZR-f" - }, - "outputs": [], - "source": [ - "# NumPy: mutable arrays\n", - "x = np.arange(10)\n", - "x[0] = 10\n", - "print(x)" - ] - }, - { - "cell_type": "markdown", - "source": [ - "Let's try this in JAX." - ], - "metadata": { - "id": "8Y23OWjE_BDA" - } - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "OxjkKpqAZxWo" - }, - "outputs": [], - "source": [ - "# JAX: immutable arrays\n", - "# Should raise an error.\n", - "try:\n", - " x = jnp.arange(10)\n", - " x[0] = 10\n", - "except Exception as e:\n", - " print(\"Exception {}\".format(e))" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "VoWT5RBUagW8" - }, - "source": [ - "So it fails! We can't mutate a JAX array once it has been created. To update JAX arrays, we need to use [helper functions](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html) that return an updated copy of the JAX array. \n", - "\n", - "Instead of doing this `x[idx] = y`, we need to do this `x = x.at[idx].set(y)`. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "qJYxkh4qagwO" - }, - "outputs": [], - "source": [ - "x = jnp.arange(10)\n", - "new_x = x.at[0].set(10)\n", - "print(f\" new_x: {new_x} original x: {x}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Ut0meCGB5qD0" - }, - "source": [ - "Note here that `new_x` is a copy and that the original `x` is unchanged. 
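",
- "\n",
- "A small additional sketch (not part of the exercise): the same `at` property supports other out-of-place updates, such as `add` and updates over slices.\n",
- "\n",
- "```python\n",
- "x = jnp.arange(10)\n",
- "print(x.at[0].add(5))    # copy of x with 5 added at index 0\n",
- "print(x.at[2:5].set(0))  # slice updates work the same way\n",
- "print(x)                 # x itself is never modified\n",
- "```\n",
- "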
" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "oAH4c_smdGQU" - }, - "source": [ - "#### Randomness in NumPy vs JAX \n", - "\n", - "JAX is more explicit in Pseudo Random Number Generation (PRNG) than NumPy and other libraries (such as TensorFlow or PyTorch). [PRNG](https://en.wikipedia.org/wiki/Pseudorandom_number_generator) is the process of algorithmically generating a sequence of numbers, which *approximate* the properties of a sequence of random numbers. \n", - "\n", - "Let's see the differences in how JAX and NumPy generate random numbers." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Q2m376Ethf8m" - }, - "source": [ - "##### In Numpy, PRNG is based on a global `state`.\n", - "\n", - "Let's set the initial seed." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "-0t3sjxzdgmP" - }, - "outputs": [], - "source": [ - "# Set random seed\n", - "np.random.seed(42)\n", - "prng_state = np.random.get_state()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "QKVz5atZMMOV" - }, - "outputs": [], - "source": [ - "# @title Helper function to compare prng keys (Run Cell)\n", - "def is_prng_state_the_same(prng_1, prng_2):\n", - " \"\"\"Helper function to compare two prng keys.\"\"\"\n", - " # concat all elements in prng tuple\n", - " list_prng_data_equal = [(a == b) for a, b in zip(prng_1, prng_2)]\n", - " # stack all elements together\n", - " list_prng_data_equal = np.hstack(list_prng_data_equal)\n", - " # check if all elements are the same\n", - " is_prng_equal = all(list_prng_data_equal)\n", - " return is_prng_equal" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "nloZ9abah3J3" - }, - "source": [ - "Let's take a few samples from a Gaussian (normal) Distribution and check if PRNG keys/global state change." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "aiUcfX7iSenY" - }, - "outputs": [], - "source": [ - "print(\n", - " f\"sample 1 = {np.random.normal()} Did prng state change: {not is_prng_state_the_same(prng_state,np.random.get_state())}\"\n", - ")\n", - "prng_state = np.random.get_state()\n", - "print(\n", - " f\"sample 2 = {np.random.normal()} Did prng state change: {not is_prng_state_the_same(prng_state,np.random.get_state())}\"\n", - ")\n", - "prng_state = np.random.get_state()\n", - "print(\n", - " f\"sample 3 = {np.random.normal()} Did prng state change: {not is_prng_state_the_same(prng_state,np.random.get_state())}\"\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "nuHkW6V4iLa9" - }, - "source": [ - "Numpy's global random state is updated every time a random number is generated, so *sample 1 != sample 2 != sample 3*. \n", - "\n", - "Having the state automatically updated, makes it difficult to handle randomness in a **reproducible** way across different threads, processes and devices. " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "lGDU6ckKkzqL" - }, - "source": [ - "##### In JAX, PRNG is explicit.\n", - "\n", - "In JAX, for each random number generation, you need to explicitly pass in a random key/state." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "6oKdk5CSmD-f" - }, - "source": [ - "Passing the same state/key results in the same number being generated. This is generally undesirable." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "Y-6B0hjtlTmd" - }, - "outputs": [], - "source": [ - "from jax import random\n", - "\n", - "key = random.PRNGKey(42)\n", - "print(f\"sample 1 = {random.normal(key)}\")\n", - "print(f\"sample 2 = {random.normal(key)}\")\n", - "print(f\"sample 3 = {random.normal(key)}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "l0KcwEbZqIaQ" - }, - "source": [ - "To generate different and independent samples, you need to manually **split** the keys. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "v-7BhY0MmEhI" - }, - "outputs": [], - "source": [ - "from jax import random\n", - "\n", - "key = random.PRNGKey(42)\n", - "print(f\"sample 1 = {random.normal(key)}\")\n", - "\n", - "# We split the key -> new key and subkey\n", - "new_key, subkey = random.split(key)\n", - "\n", - "# We use the subkey immediately and keep the new key for future splits.\n", - "# It doesn't really matter which key we keep and which one we use immediately.\n", - "print(f\"sample 2 = {random.normal(subkey)}\")\n", - "\n", - "# We split the new key -> new key2 and subkey\n", - "new_key2, subkey = random.split(new_key)\n", - "print(f\"sample 3 = {random.normal(subkey)}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "2VnTDptmuk-i" - }, - "source": [ - "By using JAX, we can more easily reproduce random number generation in parallel across threads, processes, or even devices by explicitly passing and keeping track of the prng key (without relying on a global state that automatically gets updated). For more details on PRNG in JAX, you can read more [here](https://jax.readthedocs.io/en/latest/jep/263-prng.html). " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "TSj972IWxTo2" - }, - "source": [ - "## **1.2 Acceleration in JAX** ๐Ÿš€ - `Beginner`, `Intermediate`, `Advanced`\n", - "\n", - "JAX leverages Autograd and XLA for accelerating numerical computation. The use of Autograd allows for automatic differentiation (`grad`), while XLA allows JAX to run on multiple accelerators/backends and run transforms like `jit` and `pmap`. JAX also allows you to use `vmap` for automatic vectorization. " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "_bQ9QqT-yKbs" - }, - "source": [ - "### JAX is backend Agnostic - `Beginner`\n", - "\n", - "Using JAX, you can run the same code on different backends/AI accelerators (e.g. CPU/GPU/TPU), **with no changes in code** (no more `.to(device)` - from frameworks like PyTorch). This means we can easily run linear algebra operations directly on GPU/TPU." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "4PbcFsfAibBu" - }, - "source": [ - "**Multiplying Matrices**\n", - "\n", - "Dot products are a common operation in numerical computing and a central part of modern deep learning. They are defined over [vectors](https://en.wikipedia.org/wiki/Coordinate_vector), which can loosely be thought of as a list of multiple scalers (single values). \n", - "\n", - "Formally, given two vectors $\\boldsymbol{x}$,$\\boldsymbol{y}$ $\\in R^n$, their dot product is defined as:\n", - "\n", - "
$\boldsymbol{x}^{\top} \boldsymbol{y}=\sum_{i=1}^{n} x_{i} y_{i}$
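",
- "\n",
- "For example, with $\boldsymbol{x}=(1,2,3)$ and $\boldsymbol{y}=(4,5,6)$: $\boldsymbol{x}^{\top} \boldsymbol{y} = 1 \cdot 4 + 2 \cdot 5 + 3 \cdot 6 = 32$.\n",
- "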
" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "AY1RsVkXaokP" - }, - "source": [ - "Dot Product in NumPy (will run on cpu)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "yj59KkD_HDOs" - }, - "outputs": [], - "source": [ - "size = 1000\n", - "x = np.random.normal(size=(size, size))\n", - "y = np.random.normal(size=(size, size))\n", - "numpy_time = %timeit -o -n 10 a_np = np.dot(y,x.T)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "6c_kl-u0KPVY" - }, - "source": [ - "Dot Product using JAX (will run on current runtime - e.g. GPU)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "PHRcHK86KO3w" - }, - "outputs": [], - "source": [ - "size = 1000\n", - "key1, key2 = jax.random.split(jax.random.PRNGKey(42), num=2)\n", - "x = jax.random.normal(key1, shape=(size, size))\n", - "y = jax.random.normal(key2, shape=(size, size))\n", - "jax_time = %timeit -o -n 10 jnp.dot(y, x.T).block_until_ready()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "LMTSpEG3TNah" - }, - "source": [ - "\n", - "> When timing JAX functions, we use `.block_until_ready()` because JAX uses [asynchronous dispatch](https://jax.readthedocs.io/en/latest/async_dispatch.html#async-dispatch). This means JAX doesn't wait for the operation to complete before returning control to your code. To fairly compute the time taken for JAX operations, we therefore block until the operation is done.\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "S3vwh6Q724gn" - }, - "source": [ - "How much faster was the dot product in JAX (Using GPU)?" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "UkASX9p34A1D" - }, - "outputs": [], - "source": [ - "np_average_time = np.mean(numpy_time.all_runs)\n", - "jax_average_time = np.mean(jax_time.all_runs)\n", - "data = {\"numpy\": np_average_time, \"jax\": jax_average_time}\n", - "\n", - "plot_performance(data, title=\"Average time taken per framework to run dot product\")" - ] - }, - { - "cell_type": "markdown", - "source": [ - "JAX not running much faster? -> Re-run the JAX cell. \n", - "> \"Keep in mind that the first time you run JAX code, it will be slower because it is being compiled. T*his is true even if you donโ€™t use jit in your own code, because JAXโ€™s builtin functions are also jit compiled*.\" - [JAX Docs](https://jax.readthedocs.io/en/latest/faq.html#benchmarking-jax-code).\n", - "\n", - "If you are running on an accelerator, you should see a considerable performance benefit of using JAX, without making any changes to your code! \n", - "\n", - "\n", - "\n", - "\n", - "\n" - ], - "metadata": { - "id": "X6Rv_OQgBOqr" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "JM_08mXEBRIK" - }, - "source": [ - "### JAX Transformations - `Beginner`, `Intermediate`, `Advanced`\n", - "\n", - "JAX transforms (e.g. jit, grad, vmap, pmap) first convert python functions into an intermediate language called *jaxpr*. Transforms are then applied to this jaxpr representation.\n", - "\n", - "JAX generates jaxpr, in a process known as **tracing**. During tracing, function inputs are wrapped by a tracer object and then JAX records all operations (including regular python code) that occur during the function call. These recorded operations are used to reconstruct the function. \n", - "\n", - "Any python side-effects are not recorded during tracing. 
JAX transforms and compilations are designed to work only with **pure functions**. For more on tracing and jaxpr, you can read [here](https://jax.readthedocs.io/en/latest/jaxpr.html).\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "cOGuGWtLmP7n" - }, - "source": [ - "#### Basic JAX Transformations - `jit` and `grad` - `Beginner`\n", - "\n", - "In this section, we will explore two basic JAX transforms: \n", - "- jit (Just-in-time compilation) - compiles and caches JAX Python functions so that they can be run efficiently on XLA to *speed up function calls*.\n", - "- grad - *Automatically* compute *gradients* of functions." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "QsJE_U-ZzVol" - }, - "source": [ - "##### jit\n", - "\n", - "Jax dispatches operations to accelerators one at a time. If we have repeated operations, we can use `jit` to compile the function the first time it is called, then subsequent calls will be [cached](https://en.wikipedia.org/wiki/Cache_(computing) (save the compiled version so that it doesn't need to be recompiled everytime we call it). " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "uIYsqIp_-Dly" - }, - "source": [ - "Let's compile [ReLU (Rectified Linear Unit)](https://arxiv.org/abs/1803.08375), a popular activation function in deep learning. \n", - "\n", - "ReLU is defined as follows:\n", - "
$f(x)=\max(0,x)$
\n", - "\n", - "It can be visualized as follows:\n", - "\n", - "
\n", - "\n", - "
,\n", - "\n", - "where $x$ is the input to the function and $y$ is output of ReLU.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Vm-bN9sQETLV" - }, - "source": [ - "$$f(x)=\\max (0, x)=\\left\\{\\begin{array}{l}x_{i} \\text { if } x_{i}>0 \\\\ 0 \\text { if } x_{i}<=0\\end{array}\\right.$$" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "dFiuu3BFAKdY" - }, - "source": [ - "**Exercise 1.2 - Code Task:** Complete the ReLU implementation below using standard python." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "1_qMJJbs-Cbe" - }, - "outputs": [], - "source": [ - "# Implement ReLU.\n", - "def relu(x):\n", - " if x > 0:\n", - " return\n", - " # TODO Implement me!\n", - " else:\n", - " return\n", - " # TODO Implement me!" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "zCobLakM1esy" - }, - "outputs": [], - "source": [ - "# @title Run to test your function.\n", - "\n", - "\n", - "def plot_relu(relu_function):\n", - " max_int = 5\n", - " # Generete 100 evenly spaced points from -max_int to max_int\n", - " x = np.linspace(-max_int, max_int, 1000)\n", - " y = np.array([relu_function(xi) for xi in x])\n", - " plt.plot(x, y, label=\"ReLU\")\n", - " plt.legend(loc=\"upper left\")\n", - " plt.xticks(np.arange(min(x), max(x) + 1, 1))\n", - " plt.show()\n", - "\n", - "\n", - "def check_relu_function(relu_function):\n", - " # Generete 100 evenly spaced points from -100 to -1\n", - " x = np.linspace(-100, -1, 100)\n", - " y = np.array([relu_function(xi) for xi in x])\n", - " assert (y == 0).all()\n", - "\n", - " # Check if x == 0\n", - " x = 0\n", - " y = relu_function(x)\n", - " assert y == 0\n", - "\n", - " # Generete 100 evenly spaced points from 0 to 100\n", - " x = np.linspace(0, 100, 100)\n", - " y = np.array([relu_function(xi) for xi in x])\n", - " assert np.allclose(x, y)\n", - "\n", - " print(\"Your ReLU function is correct!\")\n", - "\n", - "\n", - "check_relu_function(relu)\n", - "plot_relu(relu)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "Kken6_XvDdOK", - "cellView": "form" - }, - "outputs": [], - "source": [ - "# @title Answer to code task (Try not to peek until you've given it a good try!')\n", - "def relu(x):\n", - " if x > 0:\n", - " return x\n", - " else:\n", - " return 0\n", - "\n", - "\n", - "check_relu_function(relu)\n", - "plot_relu(relu)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "2mgIAyE2Fx3O" - }, - "source": [ - "Let's try to `jit` this function to speed up compilation and then try to call it." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "4YDkiNlRF6jn" - }, - "outputs": [], - "source": [ - "relu_jit = jax.jit(relu)\n", - "\n", - "key = jax.random.PRNGKey(42)\n", - "# Gen 1000000 random numbers and pass them to relu\n", - "num_random_numbers = 1000000\n", - "x = jax.random.normal(key, (num_random_numbers,))\n", - "\n", - "# Should raise an error.\n", - "try:\n", - " relu_jit(x)\n", - "except Exception as e:\n", - " print(\"Exception {}\".format(e))" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "y7q33C4pHOQW" - }, - "source": [ - "**Why does this fail?**\n", - "\n", - "\n", - "> As mentioned above, JAX transforms first converts python functions into an intermediate language called *jaxpr*. 
Jaxpr only captures what is executed on the parameters given to it during tracing, so this means during conditional calls, jaxpr only considers the branch taken.\n", - "> \n", - "> When jit-compiling a function, we want to compile and cache a version of the function that can handle multiple different argument types (so we don't have to recompile for each function evaluation). For example, when we compile a function on an array `jnp.array([1., 2., 3.], jnp.float32)`, we would likely also want to use the compiled function for `jnp.array([4., 5., 6.], jnp.float32)`. \n", - "> \n", - "> To achieve this, JAX traces your code based on abstract values. The default abstraction level is a ShapedArray - array that has a fixed size and dtype, for example, if we trace a function using `ShapedArray((3,), jnp.float32)`, it can be reused for any concrete array of size 3, and float32 dtype. \n", - "> \n", - "> This does come with some challenges. Tracing that relies on concrete values becomes tricky and sometimes results in `ConcretizationTypeError` as in the ReLU function above. Furthermore, when tracing a function with conditional statements (\"if ...\"), JAX doesn't know which branch to take when tracing and so tracing can't occur.\n", - "\n", - "**TLDR**: JAX tracing doesn't work well with conditional statements (\"if ...\"). \n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "uLswU8aMEQ9K" - }, - "source": [ - "To solve this, we have two options:\n", - "- Use static arguments to make sure JAX traces on a concrete value level - this is not ideal if you need to retrace a lot. Example - bottom of this [section](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#python-control-flow-jit).\n", - "- Use builtin JAX condition flow primitives such as [`lax.cond`](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.cond.html) or [`jnp.where`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.where.html). " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "SX8k4R7daBpP" - }, - "source": [ - "**Exercise 1.3 - Code Task** : Let's convert our ReLU function above to work with jit.\n", - "\n", - "**Useful methods:** [`jnp.where`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.where.html) (or [`jnp.maximum`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.maximum.html), if you prefer.) " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "p-4mXLwqaK-b" - }, - "outputs": [], - "source": [ - "# Implement a jittable ReLU\n", - "def relu(x):\n", - " # TODO Implement ME!\n", - " return ..." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "B5fq_QRoaaG5", - "cellView": "form" - }, - "outputs": [], - "source": [ - "# @title Run to test your function.\n", - "check_relu_function(relu)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "XLtBaplGxlS3", - "cellView": "form" - }, - "outputs": [], - "source": [ - "# @title Answer to code task (Try not to peek until you've given it a good try!')\n", - "def relu(x):\n", - " return jnp.where(x > 0, x, 0)\n", - " # Another option - return jnp.maximum(x,0)\n", - "\n", - "\n", - "check_relu_function(relu)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "KYogDOCLiLXN", - "cellView": "form" - }, - "outputs": [], - "source": [ - "# @title Now let's see the performance benefit of using jit! 
(Run me)\n", - "\n", - "# jit our function\n", - "relu_jit = jax.jit(relu)\n", - "\n", - "# generate random input\n", - "key = jax.random.PRNGKey(42)\n", - "num_random_numbers = 1000000\n", - "x = jax.random.normal(key, (num_random_numbers,))\n", - "\n", - "# time normal jit function\n", - "jax_time = %timeit -o -n 10 relu(x).block_until_ready()\n", - "\n", - "# Warm up/Compile - first run for jitted function\n", - "relu_jit(x).block_until_ready()\n", - "\n", - "# time jitted function\n", - "jax_jit_time = %timeit -o -n 10 relu_jit(x).block_until_ready()\n", - "\n", - "# Let's plot the performance difference\n", - "jax_avg_time = np.mean(jax_time.all_runs)\n", - "jax_jit_avg_time = np.mean(jax_jit_time.all_runs)\n", - "data = {\"JAX (no jit)\": jax_avg_time, \"JAX (with jit)\": jax_jit_avg_time}\n", - "\n", - "plot_performance(data, title=\"Average time taken for ReLU function\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "dxq-z-xzs40s" - }, - "source": [ - "##### grad\n", - "\n", - "`grad` is used to automatically compute the gradient of a function in JAX. It can be applied to Python and NumPy functions, which means you can differentiate through loops, branches, recursion, and closures. \n", - "\n", - "`grad` takes in a function `f` and returns a function. If `f` is a mathematical function $f$, then `grad(f)` corresponds to $f'$ (Lagrange's notation), with `grad(f)(x)` corresponding to $f'(x)$.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "C49R8EOs-GHe" - }, - "source": [ - "Let's take a simple function $f(x)=6x^4-9x+4$" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "lUMepl6J-dQP" - }, - "outputs": [], - "source": [ - "f = lambda x: 6 * x**4 - 9 * x + 4" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "9ayvrkpiBiu4" - }, - "source": [ - "We can compute the gradient of this function - $f'(x)$ and evaluate it at $x=3$." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "YNm9hS2S-vJk" - }, - "outputs": [], - "source": [ - "dfdx = grad(f)\n", - "dfdx_3 = dfdx(3.0)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "UcRUywsnF3LZ" - }, - "source": [ - "**Exercise 1.4 - Math Task**: Can you calculate $f'(2)$ by hand?" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "PybYK6NEFWrD", - "cellView": "form" - }, - "outputs": [], - "source": [ - "answer = 0 # @param {type:\"integer\"}\n", - "\n", - "dfdx_2 = dfdx(2.0)\n", - "\n", - "assert (\n", - " answer == dfdx_2\n", - "), \"Incorrect answer, hint https://en.wikipedia.org/wiki/Power_rule#Statement_of_the_power_rule\"\n", - "\n", - "print(\"Nice, you got the correct answer!\")" - ] - }, - { - "cell_type": "code", - "source": [ - "# @title Answer to math task (Try not to run until you've given it a good try!') \n", - "%%latex \n", - "\\begin{aligned}\n", - "f(x) & = 6x^4-9x+4 \\\\\n", - "f'(x) & = 24x^3 -9 && \\triangleright \\textrm{Power Rule.} \\\\ \n", - "f'(2) & = 24(2)^3 -9 = 183 && \\triangleright \\textrm{Substituting x=2} \\\\\n", - "\\end{aligned}" - ], - "metadata": { - "id": "CAwlhxIlRPp9", - "cellView": "form" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "wcB5ZjojH67Q" - }, - "source": [ - "We can also chain `grad` to calculate higher order deratives. 
\n", - "\n", - "We can calculate $f'''(x)$ as follows:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "013SFq7BE54W" - }, - "outputs": [], - "source": [ - "d3dx = grad(grad(grad(f)))" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "7_r9VQGoIsa6" - }, - "source": [ - "**Exercise 1.5 - Math Task**: How about $f'''(2)$ by hand?" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "WZUArv4TInPg" - }, - "outputs": [], - "source": [ - "answer = 0 # @param {type:\"integer\"}\n", - "\n", - "d3dx_2 = d3dx(2.0)\n", - "\n", - "assert answer == d3dx_2, \"Incorrect answer, hint ...\"\n", - "\n", - "print(\"Nice, you got the correct answer!\")" - ] - }, - { - "cell_type": "code", - "source": [ - "# @title Answer to math task (Try not to run until you've given it a good try!') \n", - "%%latex \n", - "\n", - "\\begin{aligned}\n", - "f(x) & = 6x^4-9x+4 \\\\\n", - "f'(x) & = 24x^3 -9 && \\triangleright \\textrm{Power Rule.} \\\\\n", - "f''(x) & = 72x^2 && \\triangleright \\textrm{Power Rule.} \\\\\n", - "f'''(x) & = 144x && \\triangleright \\textrm{Power Rule.} \\\\\n", - "f'''(2) & = 144(2)=288 && \\triangleright \\textrm{Substituting x=2} \\\\ \n", - "\\end{aligned}" - ], - "metadata": { - "id": "TCC7SkH8MMVk", - "cellView": "form" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "c3QgJNU9XYyz" - }, - "source": [ - "Another useful method is `value_and_grad`, where we can get the value ($f(x)$) and gradient ($f'(x)$). " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "x3zeSv6gXuyd" - }, - "outputs": [], - "source": [ - "from jax import value_and_grad\n", - "\n", - "f_x, dy_dx = value_and_grad(f)(2.0)\n", - "print(f\"f(x): {f_x} fโ€ฒ(x): {dy_dx} \")" - ] - }, - { - "cell_type": "markdown", - "source": [ - "> For partial derivatives, you need to use the [`argnums`](https://jax.readthedocs.io/en/latest/_autosummary/jax.grad.html) param to specify which variables you want to differentiate with respect to. \n", - "\n" - ], - "metadata": { - "id": "_vUr-B6gSxnu" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "MktOLPnwvnH3" - }, - "source": [ - "**Exercise 1.6 - Group Task:** Chat with neighbour/think about how JAX's automatic differentiation compares to other libraries such as Pytorch or Tensorflow. " - ] - }, - { - "cell_type": "markdown", - "source": [ - "Another useful application related to `grad` is when you want your `grad` function to return auxiliary (extra) data, that you don't want differentiated. You can use the `has_aux` parameter to do this (example in \"Auxiliary data\" section in [here](https://github.com/google/jax/blob/main/docs/jax-101/01-jax-basics.ipynb))." - ], - "metadata": { - "id": "rvXlE7z02M2D" - } - }, - { - "cell_type": "markdown", - "source": [ - "#### Pure Functions ๐Ÿ’ก - `Beginner`\n", - "\n", - "So we have learned about `jit` and `grad`. Before we move on, let's make sure we understand [**pure functions**](https://en.wikipedia.org/wiki/Pure_function). \n", - "\n", - "JAX transformation and compilation are designed to work reliably on **pure functions**. These functions have the following properties:\n", - "1. All **input** data is passed through the **function's parameters**. \n", - "2. All **results** are output through the **function's return**. \n", - "3. The function always returns the same **result** if invoked with the **same inputs**. 
What if your function involves randomness? Pass in the random seed!\n", - "4. **No [side-effects](https://en.wikipedia.org/wiki/Side_effect_(computer_science))** - no mutation of non-local variables or input/output streams. \n", - "\n", - "Let's see what could happen if we don't stick to using pure functions." - ], - "metadata": { - "id": "fT56qxXzTVKZ" - } - }, - { - "cell_type": "markdown", - "source": [ - "##### Side Effects" - ], - "metadata": { - "id": "Mad7l7s0CtT1" - } - }, - { - "cell_type": "markdown", - "source": [ - "Let's call print within a function." - ], - "metadata": { - "id": "xkQWTE2Xe955" - } - }, - { - "cell_type": "code", - "source": [ - "def impure_print_side_effect(x):\n", - " print(\"Print me!\") # This is a side-effect\n", - " return x\n", - "\n", - "\n", - "# The side-effects appear during the first run\n", - "print(\"First call: \", jax.jit(impure_print_side_effect)(4.0))" - ], - "metadata": { - "id": "S9aeUdUoBmCg" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "As expected, the print statement is called.\n", - "\n", - "Let's call this function again. " - ], - "metadata": { - "id": "nu4rnyS7ox_L" - } - }, - { - "cell_type": "code", - "source": [ - "# Subsequent runs with parameters of same type and shape may not show the side-effect\n", - "# This is because JAX now invokes a cached compilation of the function\n", - "print(\"Second call: \", jax.jit(impure_print_side_effect)(5.0))" - ], - "metadata": { - "id": "-wnkIqAxfDeJ" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "Ah, no print statement! Since JAX cached the compilation of the function, `print()` calls will only happen during tracing and not every time the function is called. " - ], - "metadata": { - "id": "64rNvVnwo-eB" - } - }, - { - "cell_type": "code", - "source": [ - "# JAX re-runs the Python function when the type or shape of the argument changes\n", - "print(\n", - " \"Third call, different type: \", jax.jit(impure_print_side_effect)(jnp.array([5.0]))\n", - ")" - ], - "metadata": { - "id": "Mp_CkOL-o86t" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "In this case, we called the function with a different shaped object and so it triggered the re-tracing of the function and print was called again. " - ], - "metadata": { - "id": "XFogrIf5fbLU" - } - }, - { - "cell_type": "markdown", - "source": [ - "To print values in compiled functions, use [host callbacks](https://jax.readthedocs.io/en/latest/jax.experimental.host_callback.html?highlight=print#jax.experimental.host_callback.id_print)([example](https://github.com/google/jax/issues/196#issuecomment-1191155679)) or if your jax version>=0.3.16, you can use [`jax.debug.print`](https://jax.readthedocs.io/en/latest/debugging/print_breakpoint.html). \n" - ], - "metadata": { - "id": "pqV6_25GCxHL" - } - }, - { - "cell_type": "markdown", - "source": [ - "##### Globals" - ], - "metadata": { - "id": "EqL1-TGaC8Ir" - } - }, - { - "cell_type": "markdown", - "source": [ - "Using global variables can also lead to some undesired consequences!" 
- ], - "metadata": { - "id": "t8dzJog8tMe_" - } - }, - { - "cell_type": "code", - "source": [ - "g = 0.0\n", - "\n", - "\n", - "def impure_uses_globals(x):\n", - " return x + g\n", - "\n", - "\n", - "# JAX captures the value of the global during the first run\n", - "print(\"First call: \", jax.jit(impure_uses_globals)(4.0))" - ], - "metadata": { - "id": "vwAkKrDiCXO6" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "This prints 4, using the original value of `g`.\n", - "\n", - "Let's update `g` and call our function again." - ], - "metadata": { - "id": "pWNE8B5btcfc" - } - }, - { - "cell_type": "code", - "source": [ - "g = 10.0 # Update the global\n", - "\n", - "# Subsequent runs may silently use the cached value of the globals\n", - "print(\"Second call: \", jax.jit(impure_uses_globals)(4.0))" - ], - "metadata": { - "id": "mLMpdQZwtUEL" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "Even though we updated our global variable, this still prints 4, using the original value of `g`. This is because the value of `g` was cached." - ], - "metadata": { - "id": "o3-ygEx0tpBX" - } - }, - { - "cell_type": "code", - "source": [ - "# JAX re-runs the Python function when the type or shape of the argument changes\n", - "# This will end up reading the latest value of the global\n", - "print(\"Third call, different type: \", jax.jit(impure_uses_globals)(jnp.array([4.0])))" - ], - "metadata": { - "id": "LDecWNyktWDN" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "Similar to the side-effects example, re-tracing gets triggered when the shape of our input has changed. In this case, our function now uses the updated value of `g`." - ], - "metadata": { - "id": "3mIZaXOqt5ix" - } - }, - { - "cell_type": "markdown", - "source": [ - "Since the global variables are cached, it is still okay to use global **constants** inside jax functions." - ], - "metadata": { - "id": "aLis2BV04BQK" - } - }, - { - "cell_type": "markdown", - "source": [ - "#### JAX transforms <-> Pure Functions \n", - "In summary, JAX transforms should only be used with pure functions!" - ], - "metadata": { - "id": "JAbqUwp0uPta" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "tvBzh8wiGuLf" - }, - "source": [ - "#### More Advanced Transforms - `vmap` and `pmap` - `Intermediate`, `Advanced`\n", - "\n", - "JAX also provides transforms that allow you automatically vectorize (`vmap`) and parallelize (`pmap`) your code. " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "RCUB9YkCnCFb" - }, - "source": [ - "##### vmap - `Intermediate`\n", - "\n", - "vmap (Vectorizing map) automatically vectorizes your python functions. " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "e858lqfYKd4d" - }, - "source": [ - "Let's define a simple function that calculates the min and max of an input." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "-6qalyXgDsKB" - }, - "outputs": [], - "source": [ - "def min_max(x):\n", - " return jnp.array([jnp.min(x), jnp.max(x)])" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "muSIsUkgKlxh" - }, - "source": [ - "We can apply this function to the vector - `[0, 1, 2, 3, 4]` and get the min and max values." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "F5wIeGieKsWG" - }, - "outputs": [], - "source": [ - "x = jnp.arange(5)\n", - "min_max(x)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "_PkC7NnPLNXq" - }, - "source": [ - "What about if we want to apply this to a batch/list of vectors (i.e. calculate the min and max independently across multiple batches)? " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "hRngFfwCMHLd" - }, - "source": [ - "Let's create our batch - 3 vectors of size 5." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "EKuh459OD6jx" - }, - "outputs": [], - "source": [ - "batch_size = 3\n", - "batched_x = np.arange(15).reshape((batch_size, -1))\n", - "print(batched_x)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "hApYpVEvNS1y" - }, - "source": [ - "**Exercise 1.7 - Question**: What do you think would be the result if we passed batch_x into `min_max`?" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "gu6C3J0kMrtj", - "cellView": "form" - }, - "outputs": [], - "source": [ - "batch_min_max_output = [[0,4],[5,9],[10,14]] # @param [\"[[0,4],[5,9],[10,14]]\", \"[[0,10],[1,11],[2,12],[3,13],[4,14]]\", \"[0,14]\"] {type:\"raw\"}\n", - "\n", - "assert (batch_min_max_output == np.array(min_max(batched_x))).all(), \"Incorrect answer.\"\n", - "\n", - "print(\"Nice, you got the correct answer!\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "6K0weiHOOb8L" - }, - "source": [ - "So the above is not what we want. The `min` and `max` is applied across the entire batch, when we want the min and max per vector/mini-batch. \n", - "\n", - "We can also manually batch this by `jnp.stack` and a for loop, as follows:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "q8RdAqr8N-Fd" - }, - "outputs": [], - "source": [ - "@jit\n", - "def manual_batch_min_max_loop(batched_x):\n", - " min_max_result_list = []\n", - " for x in batched_x:\n", - " min_max_result_list.append(min_max(x))\n", - " return jnp.stack(min_max_result_list)\n", - "\n", - "\n", - "print(manual_batch_min_max_loop(batched_x))" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "jmu3VVtMR0GV" - }, - "source": [ - "Or, just manually updating the `axis` in `jnp.min` and `jnp.max`. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "lzxmORv-RcUg" - }, - "outputs": [], - "source": [ - "@jit\n", - "def manual_batch_min_max_axis(batched_x):\n", - " return jnp.array([jnp.min(batched_x, axis=1), jnp.max(batched_x, axis=1)]).T\n", - "\n", - "\n", - "print(manual_batch_min_max_axis(batched_x))" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "CetKYASUSE4Q" - }, - "source": [ - "These approaches both work, but we need to change our function to work with batches. We can't just run the same code across a batch of data.\n", - "\n", - "There is where `vmap` becomes useful! Using `vmap` we can write a function once, as if it is working on a single element, and then use `vmap` to automatically vectorize it! 
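",
- "\n",
- "A small aside (a sketch, not part of the original exercise): by default `vmap` maps over the leading axis of every argument; the `in_axes` argument chooses the mapped axis per argument, with `None` meaning that argument is broadcast rather than mapped.\n",
- "\n",
- "```python\n",
- "scale = lambda s, v: s * v\n",
- "batch = jnp.arange(6).reshape(3, 2)\n",
- "print(vmap(scale, in_axes=(None, 0))(2.0, batch))  # s is shared, rows are mapped\n",
- "```\n",
- "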
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "s2F8WUNQROkQ" - }, - "outputs": [], - "source": [ - "# define our vmap function using our original single vector function\n", - "@jit\n", - "def min_max_vmap(batched_x):\n", - " return vmap(min_max)(batched_x)\n", - "\n", - "\n", - "# Run it on a single vecor\n", - "## We add extra dimention in a single vector, shape changes from (5,) to (1,5), which makes the vmapping possible\n", - "x_with_leading_dim = jax.numpy.expand_dims(x, axis=0)\n", - "print(f\"Single vector: {min_max_vmap(x_with_leading_dim)}\")\n", - "\n", - "# Run it on batch of vectors\n", - "print(f\"Batch/list of vector:{min_max_vmap(batched_x)}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "-3bome92VRL6" - }, - "source": [ - "So this is really convenient, but what about performance? " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "O1Nb4uniUUor" - }, - "outputs": [], - "source": [ - "batched_x = np.arange(50000).reshape((500, 100))\n", - "\n", - "# Trace the functions with first call\n", - "manual_batch_min_max_loop(batched_x).block_until_ready()\n", - "manual_batch_min_max_axis(batched_x).block_until_ready()\n", - "min_max_vmap(batched_x).block_until_ready()\n", - "\n", - "min_max_forloop_time = %timeit -o -n 10 manual_batch_min_max_loop(batched_x).block_until_ready()\n", - "min_max_axis_time = %timeit -o -n 10 manual_batch_min_max_axis(batched_x).block_until_ready()\n", - "min_max_vmap_time = %timeit -o -n 10 min_max_vmap(batched_x).block_until_ready()\n", - "\n", - "print(\n", - " f\"Avg Times (lower is better) - Naive Implementation: {np.round(np.mean(min_max_forloop_time.all_runs),5)} Manually Vectorized: {np.round(np.mean(min_max_axis_time.all_runs),5)} Vmapped Function: {np.round(np.mean(min_max_vmap_time.all_runs),5)} \"\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "mYL758zCYsrR" - }, - "source": [ - "So `vmap` should be similar in performance to manually vectorized code (if everything is implemented well), and much better than naively vectorized code (i.e. for loops). " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "vAO9dOdrtiqI" - }, - "source": [ - "##### pmap - `Advanced`\n", - "\n", - "๐Ÿ’ก**For this subsection, please ensure that colab is using a `TPU` runtime. If no `TPU` runtimes are available, select `Harware Accelerator` - `None` for a cpu runtime.** \n", - "\n", - "Another JAX transform is `pmap`. `pmap` transforms a function written for one device, to a function that can run in parallel, across many devices. \n", - "\n", - "**Difference between `vmap` and `pmap`**:\n", - "\n", - "So both `pmap` and `vmap` transform a function to work over an array, but they differ in implementation. `vmap` adds an extra batch dimension to all the operations in a function, while `pmap` replicates the function and executes each replica on its own XLA device in parallel." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "gUYA277soR-0" - }, - "outputs": [], - "source": [ - "# @title Check the device you are using (Run Cell)\n", - "print(f\"Num devices: {jax.device_count()}\")\n", - "print(f\" Devices: {jax.devices()}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "6qhlBnLs6AYL" - }, - "source": [ - "Let's try and `pmap` a batch of dot products.\n", - "\n", - "Here is an illustration of how we would typically do this sequentially: \n", - "\n", - "[Source](https://www.assemblyai.com/blog/why-you-should-or-shouldnt-be-using-jax-in-2022/)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "fz1i2AwA5_7J" - }, - "outputs": [], - "source": [ - "# @title Illustration of Sequential Dot Product (Run me)\n", - "from IPython.display import HTML\n", - "\n", - "HTML(\n", - " ''\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "MTmWNFZ08f8n" - }, - "source": [ - "Here is the code implementation of this:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "GqTuMldJ9Uv5" - }, - "outputs": [], - "source": [ - "# Let's generate a batch of size 8, each with a matrix of size (500, 600)\n", - "\n", - "# Let create 8 keys, 1 for each batch\n", - "keys = jax.random.split(jax.random.PRNGKey(0), 8)\n", - "\n", - "# Let create our batches\n", - "mats = jnp.stack([jax.random.normal(key, (500, 600)) for key in keys])\n", - "\n", - "\n", - "def dot_product_sequential():\n", - " @jit\n", - " def avg_dot_prod(mats):\n", - " result = []\n", - " # Loop through batch and compute dp\n", - " for mat in mats:\n", - " # dot product between the a mat and mat.T (transposed version)\n", - " result.append(jnp.dot(mat, mat.T))\n", - " return jnp.stack(result)\n", - "\n", - " avg_dot_prod(mats).block_until_ready()\n", - "\n", - "\n", - "run_sequential = %timeit -o -n 5 dot_product_sequential()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "fBEtecJX-0AW" - }, - "source": [ - "Here is an illustration of how we would do this in parallel \n", - "\n", - "[Source](https://www.assemblyai.com/blog/why-you-should-or-shouldnt-be-using-jax-in-2022/)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "Uswxurmn-5oC", - "cellView": "form" - }, - "outputs": [], - "source": [ - "# @title Illustration of Parallel Dot Product (Run me)\n", - "from IPython.display import HTML\n", - "\n", - "HTML(\n", - " ''\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "sGsq8iTA_N9U" - }, - "source": [ - "Here is code implementation of batched dot products:" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "0ygFWDfQIoeC" - }, - "source": [ - "First, we will create `8` random matrices (one for each available tpu devices - colab tpu's have 8 available [devices](https://cloud.google.com/tpu/docs/system-architecture-tpu-vm) or the 8 cpu cores as we configured)." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "MZLMx06_K_qR" - }, - "outputs": [], - "source": [ - "# Let create 8 keys, 1 for each batch\n", - "keys = jax.random.split(jax.random.PRNGKey(0), 8)\n", - "\n", - "# Each replicated pmapped function get a different key\n", - "mats = pmap(lambda key: jax.random.normal(key, (500, 600)))(keys)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "6BkMsaOtLISj" - }, - "source": [ - "The leading dimension here needs to equal the dimension of available devices (since we are sending a batch to each device)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "gWrdv_2wLG4T" - }, - "outputs": [], - "source": [ - "print(mats.shape)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "HnqblcUsLaKZ" - }, - "source": [ - "Using `pmap` to generate the batches ensures these batches are of type `ShardedDeviceArray`. This is similar to an ndarray, except each batch/shared is stored in the memory of multiple devices, so they can be used in subsequent `pmap` operations without moving data around between devices (GPU/TPU) and hosts (cpu). " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "JAeaBCvcLQWg" - }, - "outputs": [], - "source": [ - "print(type(mats))" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "PVz0gOWG9pkr" - }, - "outputs": [], - "source": [ - "def dot_product_parallel():\n", - "\n", - " # Run a local matmul on each device in parallel (no data transfer)\n", - " result = pmap(lambda x: jnp.dot(x, x.T))(\n", - " mats\n", - " ).block_until_ready() # result.shape is (8, 5000, 5000)\n", - "\n", - "\n", - "run_parallel = %timeit -o -n 5 dot_product_parallel()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "64gfyF3ENQzU" - }, - "source": [ - "It is simple as that. Our dot product now runs in parallel across available devices (cpu, gpus or tpus). As we have more cores/devices, this code will automatically scale! " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "5qcQXSbANP_M", - "cellView": "form" - }, - "outputs": [], - "source": [ - "# @title Let's plot the performance difference (Run Cell)\n", - "\n", - "jax_parallel_time = np.mean(run_parallel.all_runs)\n", - "jax_seq_time = np.mean(run_sequential.all_runs)\n", - "\n", - "\n", - "data = {\"JAX (seq)\": jax_seq_time, \"JAX (parallel - pmap)\": jax_parallel_time}\n", - "\n", - "plot_performance(data, title=\"Average time taken for Seq vs Parallel Dot Product\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "-0j8iJRFUz6v" - }, - "source": [ - "For some problems, the speed can be directly proportional to the number of devices -- $Nx$ speed up for $N$ devices! \n", - "\n", - "We showed an example of using `pmap` for *pure* parallelism, where there is no communication between devices. JAX also has various operations for communication across distributed devices ( more on this [here](https://jax.readthedocs.io/en/latest/jax-101/06-parallelism.html#communication-between-devices).)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "aB0503xgmSFh" - }, - "source": [ - "# **Part 2 - From Linear to Non-Linear Regression**\n", - "\n", - "Now that we know some basics of JAX, we can build some simple models!\n", - "\n", - "We will start by learning the basics of Linear Regression and then move on to Polynomial Regression. 
Finally, we will show how we can use [Haiku](https://github.com/deepmind/dm-haiku) and [Optax](https://github.com/deepmind/optax) to make training our models simpler and more convenient. " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "XrWSN-zaWAhJ" - }, - "source": [ - "## **2.1 Linear Regression** - ๐Ÿ“ˆ `Beginner`\n", - "\n", - "With a long history spanning from the 19th century [[Gauss, 1809](https://cir.nii.ac.jp/crid/1573950399668535168), [Legendre, 1805](https://play.google.com/store/books/details?id=7C9RAAAAYAAJ&rdid=book-7C9RAAAAYAAJ&rdot=1)] , linear regression is one of the simplest and most popular methods for solving regression problems (problems where we are predicting a continuous variable). \n", - "\n", - "Linear regression aims to find a function $f$ that maps our **inputs $x$**, where $x \\in R^D$ (*$x$ is a real number of dimension $D$*), to the corresponding **output/target - $y$**, where $y \\in R^1$ (output is a single real number). \n", - "\n", - "Put simply, we are trying to model the relationship between one or more independent variables (our inputs - $x$) and our dependent variable (our output - $y$). In Machine Learning, we model this relationship so that we can make predictions.\n", - "\n", - "For simplicity, we will focus on simple Linear Regression, where we have a single input $x$ ($x \\in R^1$)." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "AcyM6XRj1cDz" - }, - "source": [ - "### Regression Toy Example - Housing Prices" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "15_2U2klS1ER" - }, - "source": [ - "Let's say we have a dataset of housing sizes (in $m^2$) and their prices (in 100 000s of Tunisian dinar - TND). \n", - "\n", - "|Size of House in $m^2$ (input - $x$) | Price (100 000s of TND) (output - $y$) \n", - "--- | --- | \n", - "|210|4|\n", - "|160|3.3|\n", - "|240|3.7|\n", - "|140|2.3|\n", - "|300|5.4|" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "i34mTepJBpha" - }, - "source": [ - "Let's build this simple dataset, with 5 elements." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "5zfvznFJ1bi4" - }, - "outputs": [], - "source": [ - "x_data_list = [210, 160, 240, 140, 300]\n", - "y_data_list = [4, 3.3, 3.7, 2.3, 5.4]" - ] - }, - { - "cell_type": "code", - "source": [ - "# @title Let's plot our dataset. 
(Run Cell)\n", - "def plot_basic_data(parameters_list=None, title=\"Observed data\", axis_pad=1):\n", - " xlim = [min(x_data_list) - axis_pad, max(x_data_list) + axis_pad]\n", - " ylim = [min(y_data_list) - axis_pad, max(y_data_list) + axis_pad]\n", - " fig, ax = plt.subplots()\n", - "\n", - " if parameters_list is not None:\n", - " x_pred = np.linspace(xlim[0], xlim[1], 100)\n", - " for parameters in parameters_list:\n", - " y_pred = parameters[0] + parameters[1] * x_pred\n", - " ax.plot(x_pred, y_pred, \":\", color=[1, 0.7, 0.6])\n", - "\n", - " parameters = parameters_list[-1]\n", - " y_pred = parameters[0] + parameters[1] * x_pred\n", - " ax.plot(x_pred, y_pred, \"-\", color=[1, 0, 0], lw=2)\n", - "\n", - " ax.plot(x_data_list, y_data_list, \"ob\")\n", - " ax.set(xlabel=\"Input x\", ylabel=\"Output y\", title=title, xlim=xlim, ylim=ylim)\n", - " ax.grid()\n", - "\n", - "\n", - "plot_basic_data()" - ], - "metadata": { - "cellView": "form", - "id": "uLB0Z3uGHGnV" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "vnoEkgimTQ6V" - }, - "source": [ - "### **Optimization by Trial-and-Error**\n", - "\n", - "Let's say we would like to predict these $y$ (outputs) values given the $x$ (inputs). \n", - "\n", - "We can start modeling this by using a simple linear function: \n", - "
\n", - "$f(x) = \\color{red}{w} x + \\color{red}{b}$\n", - "
\n", - "\n", - ", where $x$ is our inputs and $\\color{red}{b}$ and $\\color{red}{w}$ are our model parameters.\n", - "\n", - "Usually, we learn the model parameters, but let's try to find these parameters by hand!" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "A_8hyJrhdy6v" - }, - "outputs": [], - "source": [ - "# RUN ME\n", - "parameters_list = [] # Used to track which parameters were tried." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "FLvxEOBtWrSF" - }, - "source": [ - "**Exercise 2.1** \n", - "1. Move the two sliders below to set $\\color{red}{b}$ and $\\color{red}{w}$. \n", - "2. Is your $f(x)$ close to the blue data points? Can you find a better fit?\n", - "3. Repeat 1-2 until you have found a good enough fit. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "iYl7LM7kWYNG", - "cellView": "form" - }, - "outputs": [], - "source": [ - "# @title Choose model parameters. { run: \"auto\" }\n", - "b = 3 # @param {type:\"slider\", min:-5, max:5, step:1}\n", - "w = -0.03 # @param {type:\"slider\", min:-0.05, max:0.05, step:0.01}\n", - "print(\"Plotting line\", w, \"* x +\", b)\n", - "parameters = [b, w]\n", - "parameters_list.append(parameters)\n", - "plot_basic_data(\n", - " parameters_list, title=\"Observed data and my first predictions\", axis_pad=12\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "UCNWBHuBa9rj" - }, - "source": [ - "**Weights and Bias**\n", - "\n", - "What was the impact of changing $\\color{red}{b}$ and $\\color{red}{w}$?\n", - "\n", - "- $\\color{red}{w}$ is our weights. This represents the slope of our function.\n", - "- $\\color{red}{b}$ is our bias (also called the *intercept*). This is the value of our model when all features are zero ($x=0$). This shifts the line, without changing the slope." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "XfUfPrRGeG2B" - }, - "source": [ - "**You're a born optimizer!**" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "ubqjOzjTXuRw", - "cellView": "form" - }, - "outputs": [], - "source": [ - "# @title Let's plot the optimization trajectory you took. 
(Run Cell)\n", - "fig, ax = plt.subplots()\n", - "opt = {\n", - " \"head_width\": 0.01,\n", - " \"head_length\": 0.2,\n", - " \"length_includes_head\": True,\n", - " \"color\": \"r\",\n", - "}\n", - "if parameters_list is not None:\n", - " b_old = parameters_list[0][0]\n", - " w_old = parameters_list[0][1]\n", - " for i in range(1, len(parameters_list)):\n", - " b_next = parameters_list[i][0]\n", - " w_next = parameters_list[i][1]\n", - " ax.arrow(b_old, w_old, b_next - b_old, w_next - w_old, **opt)\n", - " b_old, w_old = b_next, w_next\n", - "\n", - " ax.scatter(b_old, w_old, s=200, marker=\"o\", color=\"y\")\n", - " bs = [parameters[0] for parameters in parameters_list]\n", - " ws = [parameters[1] for parameters in parameters_list]\n", - " ax.scatter(bs, ws, s=40, marker=\"o\", color=\"k\")\n", - "\n", - "ax.set(\n", - " xlabel=\"Bias b\",\n", - " ylabel=\"Weight w\",\n", - " title=\"My sequence of b's and w's\",\n", - " xlim=[-5, 5],\n", - " ylim=[-0.05, 0.05],\n", - ")\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Sqp1VK0KkLmF" - }, - "source": [ - "**Exercise 2.2 - Group Task**:\n", - "\n", - "*How did your neighbour do?*\n", - "- Did they change $\\color{red}{b}$ and $\\color{red}{w}$ with big steps or small steps each time?\n", - "- Did they start with small steps, and then progressed to bigger steps? Or the other way round? What about you?\n", - "- Did the magnitude of your previous steps influence your next choice? Why? Or why not?\n", - "- Did you all converge to roughly the same endpoint for $\\color{red}{b}$ and $\\color{red}{w}$, or did your sequences end up in different places?" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "oLGAp30ZDnJ5" - }, - "source": [ - "### **Loss Function**\n", - "\n", - "You tweaked $\\color{red}{b}$ and $\\color{red}{w}$ to find a good fit by hand. This isn't optimal (*imagine doing this for 10s to 1000s of parameters*), so we would like to automate this learning process. \n", - "\n", - "Before we discuss how to fit the model, we need to determine a measure of fitness, also referred to as a **loss function**. This loss quantifies the difference between the predictions that our model made ($f(x)$) and the true values/targets ($y$).\n", - "\n", - "When you manually adjusted your weights $\\color{red}{b}$ and $\\color{red}{w}$, you probably looked at how close each $f(x)$ was to the $y$ that it tries to predict.\n", - "Maybe you glanced at the distance from the red line to each of the blue dots, and imagined the average of the distances (marked in purple) below. If the average was small, your fit was good!\n", - "\n", - "\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "source": [ - "\n", - "> Notation Reminder:\n", - "- $x$ - our inputs.\n", - "- $f(x)$ or $\\hat{y}$ - our model predictions.\n", - "- $y$ - the value we are trying to predict/our targets. " - ], - "metadata": { - "id": "0i6mLJXV-lXQ" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "SFGMikcOgqOb" - }, - "source": [ - "#### **Formalizing the Loss Function**\n", - "\n", - "**Indexing**\n", - "\n", - "To formalize this notion, from the image above, let $x_1 = 1$, $x_2 = 2$, $x_3 = 3$... and let $y_1 = 3$, $y_2 = 2$, $y_3 = 3$... The blue dots are therefore a sequence of input-output $(x, y)$ pairs.\n", - "Assuming that the order of the data points doesn't matter, and $i = 1, ..., N$ (where $N=5$ in our case) indexes the data, e.g. $x_1,y_1$ refer to the input and output of the first element in our dataset (e.g. 
$x_1,y_1$ is (1,3) in the image). \n", - "\n", - "**Error**\n", - "\n", - "The green lines above, also known as **error** or **cost**, tell us the distance between the prediction and target value for a specific example (i.e how well the prediction matches the real data). A long line means that we have a large error and our prediction for that example is not optimal, while a short line indicates our prediction is close to the true label. \n", - "\n", - "In the image, the error is simply the distance between the true label and our model's prediction ( $y$ - $f(x)$), but there can be various formulations of the error term. A popular function is the squared error. \n", - "\n", - "Squared error can be formulated as follows: \n", - "
\n", - "$\\mathrm{error}(\\color{red}{b}, \\color{red}{w} ; x_i, y_i) = (y_i - \\underbrace{(\\color{red}{w} x_i + \\color{red}{b})}_{f(x_i)})^2$ \n", - "
\n", - "\n", - ", where $\\color{red}{b}$ and $\\color{red}{w}$ are our parameters, $x_i,y_i$ is the specific input, output pair that we are calculating the error for. \n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "JgPhoXMIL2eE" - }, - "source": [ - "**Exercise 2.3 - Code Task:** Implement Squared Error, using the formulae above. \n", - "\n", - "**Useful methods:** [`jnp.dot`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.dot.html)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "nSt4XdrrpQ0r" - }, - "outputs": [], - "source": [ - "def squared_error(b, w, x, y):\n", - " # first calculate f(x_i), also sometimes referred to as yhat\n", - " yhat = ...\n", - " # then calculate the squared error\n", - " error = ...\n", - " return error" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "pi3ysDp3OLsx" - }, - "outputs": [], - "source": [ - "# @title Check if answer is correct (Run me)!\n", - "\n", - "\n", - "def check_squared_error(squared_error):\n", - " b = 3.77\n", - " w = 0.05\n", - "\n", - " correct_error = [105.47291, 71.740906, 145.68492, 71.740906, 178.75693]\n", - "\n", - " for i in range(len(x_data_list)):\n", - " x_i = x_data_list[i]\n", - " y_i = y_data_list[i]\n", - " error = squared_error(b, w, x_i, y_i)\n", - " assert jnp.equal(\n", - " error, correct_error[i]\n", - " ), f\"Incorrect implementation. Value: {error} Expected Value: {correct_error[i]}. Parameters (b,w,x_i,y_i): {b,w,x_i,y_i} \"\n", - "\n", - " print(\"Implementation is correct!\")\n", - "\n", - "\n", - "check_squared_error(squared_error)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "qzqqVhRW3gY3", - "cellView": "form" - }, - "outputs": [], - "source": [ - "# @title Answer to code task (Try not to peek until you've given it a good try!')\n", - "def squared_error(b, w, x, y):\n", - " yhat = jnp.dot(w, x) + b\n", - " error = jnp.square(yhat - y)\n", - " return error\n", - "\n", - "\n", - "check_squared_error(squared_error)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "gzbNQ_Lz5SGX" - }, - "source": [ - "**Loss Function - Mean Squared Error**\n", - "\n", - "Now we have a way to quantify the error of our model per **example**. However, what we really care about is the quality of our model across our **entire training dataset**. Like there are many types of error functions, there are also many ways to quantify our loss across the whole dataset.\n", - "\n", - "A common loss function is **mean squared error (MSE)**, where we simply average the error across the training set. \n", - "\n", - "**MSE** is formulated as follows:\n", - "
\n", - "$\\mathrm{loss}(\\color{red}{b}, \\color{red}{w}) = \\frac{1}{ \\color{blue}{2}N} \\sum_{i=1}^N \\Big(y_i - \\underbrace{(\\color{red}{w} x_i + \\color{red}{b})}_{f(x_i)} \\Big)^2$, \n", - "
\n", - "\n", - "where $N$ is our number of training examples and $\\color{blue}{\\frac{1}{2}}$ is a constant factor that makes taking the derivative more convenient (more on this later).\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "vSoFrFx48vHL" - }, - "source": [ - "**Plot our loss**\n", - "\n", - "Let's code our loss function. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "0W9M7QubOEMM" - }, - "outputs": [], - "source": [ - "# MSE\n", - "def loss(b, w):\n", - " # init loss of size of b\n", - " loss = 0 * b\n", - " for x, y in zip(x_data_list, y_data_list):\n", - " loss += squared_error(b, w, x, y)\n", - " N = len(x_data_list)\n", - " return loss / (2 * (N))" - ] - }, - { - "cell_type": "markdown", - "source": [ - "Now that we have a loss function, we can plot the loss of our model, using the sequence of manually chosen values of $\\color{red}{b}$ and $\\color{red}{w}$ from above." - ], - "metadata": { - "id": "JsG0vQamfdJQ" - } - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "mSIbd-xtfU2S" - }, - "outputs": [], - "source": [ - "# @title Plot our Loss (Run Me)\n", - "from matplotlib import cm\n", - "\n", - "bs, ws = np.linspace(-5, 5, num=25), np.linspace(-0.05, 0.05, num=25)\n", - "b_grid, w_grid = np.meshgrid(bs, ws)\n", - "loss_grid = loss(b_grid, w_grid)\n", - "\n", - "\n", - "def plot_loss(parameters_list, title, show_stops=False):\n", - " fig, ax = plt.subplots(1, 2, figsize=(18, 8), subplot_kw={\"projection\": \"3d\"})\n", - " ax[0].view_init(10, -30)\n", - " ax[1].view_init(30, -30)\n", - "\n", - " if parameters_list is not None:\n", - " b_old = parameters_list[0][0]\n", - " w_old = parameters_list[0][1]\n", - " loss_old = loss(b_old, w_old)\n", - " ls = [loss_old]\n", - "\n", - " for i in range(1, len(parameters_list)):\n", - " b_next = parameters_list[i][0]\n", - " w_next = parameters_list[i][1]\n", - " loss_next = loss(b_next, w_next)\n", - " ls.append(loss_next)\n", - "\n", - " ax[0].plot(\n", - " [b_old, b_next],\n", - " [w_old, w_next],\n", - " [loss_old, loss_next],\n", - " color=\"red\",\n", - " alpha=0.8,\n", - " lw=2,\n", - " )\n", - " ax[1].plot(\n", - " [b_old, b_next],\n", - " [w_old, w_next],\n", - " [loss_old, loss_next],\n", - " color=\"red\",\n", - " alpha=0.8,\n", - " lw=2,\n", - " )\n", - " b_old, w_old, loss_old = b_next, w_next, loss_next\n", - "\n", - " if show_stops:\n", - " ax[0].scatter(b_old, w_old, loss_old, s=100, marker=\"o\", color=\"y\")\n", - " ax[1].scatter(b_old, w_old, loss_old, s=100, marker=\"o\", color=\"y\")\n", - " bs = [parameters[0] for parameters in parameters_list]\n", - " ws = [parameters[1] for parameters in parameters_list]\n", - " ax[0].scatter(bs, ws, ls, s=40, marker=\"o\", color=\"k\")\n", - " ax[1].scatter(bs, ws, ls, s=40, marker=\"o\", color=\"k\")\n", - " else:\n", - " ax[0].scatter(b_old, w_old, loss_old, s=40, marker=\"o\", color=\"k\")\n", - " ax[1].scatter(b_old, w_old, loss_old, s=40, marker=\"o\", color=\"k\")\n", - "\n", - " ax[0].plot_surface(\n", - " b_grid,\n", - " w_grid,\n", - " loss_grid,\n", - " cmap=cm.coolwarm,\n", - " linewidth=0,\n", - " alpha=0.4,\n", - " antialiased=False,\n", - " )\n", - " ax[1].plot_surface(\n", - " b_grid,\n", - " w_grid,\n", - " loss_grid,\n", - " cmap=cm.coolwarm,\n", - " linewidth=0,\n", - " alpha=0.4,\n", - " antialiased=False,\n", - " )\n", - " ax[0].set(xlabel=\"Bias b\", ylabel=\"Weight w\", zlabel=\"Loss\", title=title)\n", - " ax[1].set(xlabel=\"Bias b\", 
ylabel=\"Weight w\", zlabel=\"Loss\", title=title)\n", - " plt.show()\n", - "\n", - "\n", - "plot_loss(\n", - " parameters_list,\n", - " \"An example loss function and my sequence of b's and w's\",\n", - " show_stops=True,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Z41ZIC5W-Dip" - }, - "source": [ - "Your sequence of choices for $\\color{red}{b}$ and $\\color{red}{w}$ are also plotted on the $(\\color{red}{b}, \\color{red}{w})$ axis.\n", - "Does your sequence progressively move toward a parameter setting for which the loss function is small?\n", - "We plotted two views of the loss function, so that it is easier to see the minimum *and* the function." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "fg5Hi4783Gus" - }, - "source": [ - "### **Gradient descent: No more tuning parameters by hand!**\n", - "\n", - "When you manually tweaked $\\color{red}{b}$ and $\\color{red}{w}$, you tried to adjust your model to find a better fit. If you were an experienced manual parameter adjuster, you might even have adjusted the $\\color{red}{b}$ and $\\color{red}{w}$ so that the fit gets *better* with each adjustment.\n", - "\n", - "Gradient descent is a method that tries to minimize the loss function by iteratively updating our weights $\\color{red}{b}$ and $\\color{red}{w}$. How do we know how to update our weights? That is where **gradients** come in! The gradients of the weights tell us how to update their values in order to minimize our loss. \n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "8j7PKzZJumYX" - }, - "source": [ - "##### **Gradients** \n", - "Using our **loss**, we would like to know how to adjust $\\color{red}{b}$ **and** $\\color{red}{w}$ in order to minimize our loss. We can use partial derivatives and the chain rule to figure out how to update our parameters.\n", - "\n", - "> **Partial derivatives** are used when we have a function of several variables and we want to know how a function changes as a result of a specific variable. To calculate this, we take the derivative of the loss, with respect to one of those variables, with the others variables held constant. If we know this for all the variables in our loss function, we can update our parameters to decrease our loss. \n", - ">\n", - "> For example, for a function $f(x,y)$, $\\frac{\\partial{f}}{\\partial{x}}$ (*read partial derivative of $f$ with respect to $x$*), tells us how $f$ changes with respect to changes in $x$ and $\\frac{\\partial{f}}{\\partial{y}}$, tells us how $f$ changes with respect to changes in $y$. \n", - "\n", - "\n", - "> The **chain rule** tells us how to differentiate composite functions (functions of a functions/function within a function). The rule is as follows: $$\\frac{d}{d x}[f(g(x))]=f^{\\prime}(g(x)) g^{\\prime}(x)$$\n", - "\n", - "\n", - "You can read more here - [partial derivatives](https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/partial-derivative-and-gradient-articles/a/introduction-to-partial-derivatives), the [chain rule](https://www.khanacademy.org/math/ap-calculus-ab/ab-differentiation-2-new/ab-3-1a/a/chain-rule-review) and [practical on optimization](https://github.com/deep-learning-indaba/indaba-pracs-2019/blob/master/1b_build_tensorflow.ipynb).\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "_1EZcdHH2cE2" - }, - "source": [ - "**Exercise 2.4 - (Optional) Math Task:**\n", - "\n", - "Using our loss,\n", - "\n", - "
\n", - "$\\mathrm{loss}(\\color{red}{b}, \\color{red}{w}) = \\frac{1}{ \\color{blue}{2}N} \\sum_{i=1}^N \\Big(y_i - \\underbrace{(\\color{red}{w} x_i + \\color{red}{b})}_{f(x_i)} \\Big)^2$, \n", - "
\n", - "\n", - "Can you derive \n", - "$\\frac{\\partial \\mathcal{L}}{\\partial w}$ and $\\frac{\\partial \\mathcal{L}}{\\partial b}$ by hand? *For notation simplicity, we will refer to the loss $\\mathrm{loss}(\\color{red}{b}, \\color{red}{w})$ as $\\mathcal{L}$.*\n", - "\n", - "**Useful methods:** [Partial derivatives](https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/partial-derivative-and-gradient-articles/a/introduction-to-partial-derivatives), [Sum Rule](https://www.khanacademy.org/math/old-ap-calculus-ab/ab-derivative-rules/ab-basic-diff-rules/a/basic-differentiation-review) and the [chain rule](https://www.khanacademy.org/math/ap-calculus-ab/ab-differentiation-2-new/ab-3-1a/a/chain-rule-review). \n", - "\n", - "\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "source": [ - "**Answer to math task** - Once you have given it a try, you can see the full derivation [here](#scrollTo=9OH9H7ndfuyQ)." - ], - "metadata": { - "id": "ktpXf4w4g3Ag" - } - }, - { - "cell_type": "markdown", - "source": [ - "The two gradients we need are as follows:\n", - "\\begin{aligned}\n", - "&\\frac{\\partial \\mathcal{L}}{\\partial w}=\\frac{1}{N} \\sum_{i=1}^{N}\\left(f(x_i)-y_i\\right) x_i \\\\\n", - "&\\frac{\\partial \\mathcal{L}}{\\partial b}=\\frac{1}{N} \\sum_{i=1}^{N} f(x_i)-y_i\n", - "\\end{aligned}" - ], - "metadata": { - "id": "ODU1rQAemouO" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "BVbRhLNY0TvA" - }, - "source": [ - "In the code snippet below, we compute the two gradients using a for-loop over examples. This is just to illustrate how the gradient is computed. Very soon, we'll throw away the for-loop over data points and do it \"all at once\" in vectorized operations!" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "dAeEMynv3GaI" - }, - "outputs": [], - "source": [ - "def manual_grad(b, w):\n", - " grad_b = 0\n", - " grad_w = 0\n", - " for x, y in zip(x_data_list, y_data_list):\n", - " f = w * x + b\n", - " grad_b += f - y\n", - " grad_w += (f - y) * x\n", - " grad_b /= len(x_data_list)\n", - " grad_w /= len(x_data_list)\n", - " return grad_b, grad_w" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "RMt9Qlox28Oa" - }, - "source": [ - "##### **Gradient Descent** \n", - "\n", - "Not that we have the gradients, we can use gradient descent. The general idea is to start with an initial value/guess for the model weights and then repeatedly use the gradients to tweak the parameters $\\color{red}{b}$ and $\\color{red}{w}$ in the right direction. \n", - "\n", - "These updates can be formulated as follows:\n", - "\n", - "$$\\color{red}{b} \\leftarrow \\color{red}{b} - \\color{blue}{\\eta} \\frac{\\partial \\mathcal{L}}{\\partial \\color{red}{b}} $$ \n", - "\n", - "$$\\color{red}{w} \\leftarrow \\color{red}{w} - \\color{blue}{\\eta} \\frac{\\partial \\mathcal{L}}{\\partial \\color{red}{w}} $$ \n", - "\n", - ", where $\\color{blue}{\\eta}$ is the **learning rate** and just tells us how much we are going to scale the gradient before we use it to update our parameters:\n", - "are we going to try to walk downhill with big steps or small steps?" - ] - }, - { - "cell_type": "markdown", - "source": [ - "**Exercise 2.5**\n", - "1. Run the code snippet below, and note the $(\\color{red}{b}, \\color{red}{w})$ trajectory as we use the gradient to (try to) get to the minimum.\n", - "2. 
Adjust the starting values for $\\color{red}{b}$ or $\\color{red}{w}$ or the value of $\\color{blue}{\\eta}$ and see how the resulting trajectory to the minimum changes.\n", - "3. Can you find a setting for $\\color{blue}{\\eta}$ where things start spiraling out of control and the loss gets bigger and bigger (and not smaller)?\n", - "4. Can you find a setting for $\\color{blue}{\\eta}$ so that we're still far away from the minimum after `200` parameter update steps?\n", - "5. Play around with the `max_grad` variable. Do we always need this? What problem does this solve? (Hint: Trying printing the grads values with `max_grad = None`).\n", - "\n" - ], - "metadata": { - "id": "YsL-Goz8hTOb" - } - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "6AvZzHQx1AKM" - }, - "outputs": [], - "source": [ - "b = 0 # Change me! Try 2, 4\n", - "w = -0.05 # Change me! Try -1, 2\n", - "learning_rate = 0.01 # Change me! Try 0.1, 0.5, ...\n", - "max_grad = 1 # Change me! Try None, 10\n", - "\n", - "parameters_step_list = []\n", - "\n", - "for _ in range(200):\n", - " parameters_step_list.append([b, w])\n", - " grad_b, grad_w = manual_grad(b, w)\n", - " # Naive gradient value clipping - different from standard gradient clipping - which clips the gradient norm.\n", - " if max_grad:\n", - " grad_b = jnp.clip(grad_b, a_min=-max_grad, a_max=max_grad)\n", - " grad_w = jnp.clip(grad_w, a_min=-max_grad, a_max=max_grad)\n", - " b = b - learning_rate * grad_b\n", - " w = w - learning_rate * grad_w\n", - "\n", - "plot_loss(\n", - " parameters_step_list, \"A loss function, and minimizing it with gradient descent\"\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "3shLExakrzIW" - }, - "source": [ - "##### **Autodiff using JAX: No more manual gradients!**\n", - "\n", - "In the above example, we calculated the gradients by hand (`manual_grad`). Thanks to automatic differentiation, we don't have to do this! While you can probably derive and code the gradients of the loss function for our linear model without making a mistake somewhere, getting the gradients right for more complex models can be much more work. Much, much more work! \n", - "\n", - "We use JAX to do the automatic differentiation, using the `grad` function as follows:\n", - "```\n", - "auto_grad = jax.grad(loss_function, argnums=(0, 1))\n", - "```\n", - "\n", - "and call it in the same way as we called `manual_grad`. `argnums` tells JAX we want the partial derivative of our function with respect to the first 2 parameters." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "3WiF4oYi1xGK" - }, - "outputs": [], - "source": [ - "x = np.array(x_data_list)\n", - "y = np.array(y_data_list)\n", - "\n", - "\n", - "def loss_function(b, w):\n", - " f = w * x + b\n", - " errors = jnp.square(y - f)\n", - " # Instead of summing over individual data points in a for-loop, and then\n", - " # dividing to get the average, we do it in one go. No more for-loops!\n", - " return 1 / 2 * jnp.mean(errors)\n", - "\n", - "\n", - "# This is it! One line of code.\n", - "auto_grad = jax.grad(loss_function, argnums=(0, 1))\n", - "\n", - "# Let's see if it works. 
Does auto_grad match our manual version?\n", - "b, w = 2.5, 3.5\n", - "\n", - "grad_b_autograd, grad_w_autograd = auto_grad(b, w)\n", - "print(\"Autograd grad_b:\", grad_b_autograd, \" grad_w\", grad_w_autograd)\n", - "\n", - "grad_b_manual, grad_w_manual = manual_grad(b, w)\n", - "print(\"Manual gradients grad_b:\", grad_b_manual, \" grad_w\", grad_w_manual)\n", - "\n", - "# We use isclose, since the rounding is slightly different.\n", - "assert jnp.isclose(grad_b_autograd, grad_b_manual) and jnp.isclose(\n", - " grad_w_autograd, grad_w_manual\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "okaeUVNf347w" - }, - "source": [ - "Nice! So we can use automatic differentiation and we don't have to manually calculate gradients. " - ] - }, - { - "cell_type": "markdown", - "source": [ - "> **Gradient Descent vs Analytical Solution**\n", - ">\n", - "> So we used gradient descent to learn the weights for our linear model, but other options exist! For linear regression, there exists an [Analytical Solution](https://staff.fnwi.uva.nl/r.vandenboomgaard/MachineLearning/LectureNotes/Regression/LinearRegression/analytical_solution.html). This means we can calculate our weights directly in one step, without having to iterate using numerical methods like gradient descent.\n", - ">\n", - ">*Why use gradient descent then?*\n", - "- `More General` - Gradient Descent is a more general algorithm, that can be applied to problems where analytical solutions aren't feasible to calculate or don't exit e.g. neural networks. \n", - "- `Computational Complexity` - Even when a closed form solution is available, in some cases it may be faster to find the solution using gradient descent. Read more on this [here](https://stats.stackexchange.com/questions/278755/why-use-gradient-descent-for-linear-regression-when-a-closed-form-math-solution).\n" - ], - "metadata": { - "id": "uW5rnjwoVv0m" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "rK3RJPvAf4zm" - }, - "source": [ - "### **Assumptions**\n", - "\n", - "All models have assumptions. One assumption that we made is that our model is a *linear* model, i.e. that our best guess is for $y$ is with $f(x) = \\color{red}{w} x + \\color{red}{b}$. Is this assumption always valid for all kinds of data and datasets?\n", - "\n", - "> More assumptions for [simple linear regression](https://online.stat.psu.edu/stat500/lesson/9/9.2/9.2.3#paragraph--3265)." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Ao93xuXGJhLh" - }, - "source": [ - "## **2.2 From Linear to Polynomial Regression** - `Intermediate`\n", - "\n", - "So far we've looked at data that could be fitted fairly accurately with a single straight line. Despite its simplicity, linear regression tends to be very useful in practice, especially as a starting point in data analysis! However, there are cases where a linear fit is unsatisfying. \n", - "\n", - "Suppose our dataset looked like the following:\n", - "\n", - "\n", - "\n", - "How would we fit a model to this data? One possible option is to increase the complexity of our linear model by attempting to fit a higher-order polynomial, for example, a 4th-degree [polynomial](https://en.wikipedia.org/wiki/Polynomial):\n", - "$\\hat{y} = \\color{red}{w_4}x^4 + \\color{red}{w_3}x^3 + \\color{red}{w_2}x^2 + \\color{red}{w_1}x + \\color{red}{w_0}$. \n", - "\n", - "Do we have to derive a whole new algorithm? Luckily, not! 
We can still solve for the least squares parameters $\\color{red}{w_4}, \\color{red}{w_3}, \\color{red}{w_2}, \\color{red}{w_1}, \\color{red}{w_0}$ using the same techniques we used for fitting a line. \n", - "\n", - "Given the dataset $\\{(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)\\}$, we construct a *feature* matrix $\\mathbf{\\Phi}$ by expending original features, being careful to include terms corresponding to each power of $x$, as follows:\n", - "\n", - "$\\mathbf{\\Phi} =\n", - "\\begin{pmatrix}\n", - "x_1^4 & x_1^3 & x_1^2 & x_1 & 1 \\\\\n", - "x_2^4 & x_2^3 & x_2^2 & x_2 & 1 \\\\\n", - "\\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n", - "x_n^4 & x_n^3 & x_n^2 & x_n & 1\n", - "\\end{pmatrix}\n", - "$\n", - "\n", - "And just like before, our $\\mathbf{y}$ vector is $(y_1, y_2, ..., y_n)^\\mathsf{T}$\n", - "\n", - "Next, we fit a 4th-degree polynomial to our data and find that the fit is visually a lot better and captures the wave-like pattern of the data! \n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "XoSIWpUvKtlE" - }, - "outputs": [], - "source": [ - "# @title Polynomial Helper Functions (Run Me)\n", - "def generate_wave_like_dataset(min_x=-1, max_x=1, n=100):\n", - " xs = np.linspace(min_x, max_x, n)\n", - " ys = np.sin(5 * xs) + np.random.normal(size=len(xs), scale=0.1)\n", - " return xs, ys\n", - "\n", - "\n", - "def regression_analytical_solution(X, y):\n", - " return ((np.linalg.inv(X.T.dot(X))).dot(X.T)).dot(y)\n", - "\n", - "\n", - "def gradient_descent(X, y, learning_rate=0.01, num_steps=1000, debug=False):\n", - " report_every = num_steps // 10\n", - "\n", - " def loss(current_w, X, y):\n", - " y_hat = jnp.dot(X, current_w)\n", - " loss = jnp.mean((y_hat - y) ** 2)\n", - " return loss, y_hat\n", - "\n", - " loss_and_grad = jax.value_and_grad(loss, has_aux=True)\n", - " # Initialize the parameters\n", - " key = jax.random.PRNGKey(42)\n", - " w = jax.random.normal(key=key, shape=(X.shape[1],))\n", - "\n", - " # Run a a few steps of gradient descent\n", - " for i in range(num_steps):\n", - " (loss, y_hat), grad = loss_and_grad(w, X, ys)\n", - "\n", - " if i % report_every == 0:\n", - " if debug:\n", - " print(f\"Step {i}: w: {w}, Loss: {loss}, Grad: {grad}\")\n", - " else:\n", - " print(f\"Step {i}: Loss: {loss}\")\n", - "\n", - " w = w - learning_rate * grad\n", - "\n", - " return w\n", - "\n", - "\n", - "def plot_data(y_hat, xs, ys, title):\n", - " plt.figure()\n", - " plt.scatter(xs, ys, label=\"Data\")\n", - " plt.plot(xs, y_hat, \"r\", label=title)\n", - "\n", - " plt.title(title)\n", - " plt.xlabel(\"Input x\")\n", - " plt.ylabel(\"Output y\")\n", - " plt.legend();" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "CcXjMKi0Znr6" - }, - "source": [ - "### **Under-fitting**\n", - "Let's see how our linear model does on our new dataset." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "QmAWgBEIZh0X" - }, - "outputs": [], - "source": [ - "xs, ys = generate_wave_like_dataset(min_x=-1, max_x=1, n=25)\n", - "X = np.vstack([xs, np.ones(len(xs))]).T\n", - "w = regression_analytical_solution(X, ys)\n", - "y_hat = X.dot(w)\n", - "\n", - "plot_data(y_hat, xs, ys, \"Linear regression (analytic minimum)\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "pzlcvE8pZrYj" - }, - "source": [ - "Our linear model has missed the majority of the points in our dataset. 
This is also known as **under-fitting**, which is when our model is too simple to capture the relationship between the inputs and outputs." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "uwwajy30U9fX" - }, - "source": [ - "### **Over-fitting**\n", - "\n", - "Since our linear model was too simple, we can try a more complicated model.\n", - "\n", - "**Exercise 2.5 - Code Task**: Spend a couple of minutes selecting different parameters (by moving the sliders), to see the best loss you can get using polynomial regression. \n", - "\n", - "1. `degree` - Degree $n$ of a polynomial in this form - $\\hat{y} = \\color{red}{w_n}x^n +\\color{red}{w_{n-1}}x^{n-1}+ ... + \\color{red}{w_2}x^2 + \\color{red}{w_1}x + \\color{red}{w_0}$. \n", - "2. `num_steps` - The number of steps to running gradient descent for. \n", - "3. `learning_rate` - The learning rate used when updating the weights in gradient descent. \n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "eGrB9V66-P9L" - }, - "outputs": [], - "source": [ - "# @title Choose parameters. { run: \"auto\" }\n", - "degree = 3 # @param {type:\"slider\", min:1, max:10, step:1}\n", - "num_steps = 1500 # @param {type:\"slider\", min:1000, max:5000, step:500}\n", - "learning_rate = 0.1 # @param [\"0.2\",\"0.1\", \"0.01\"] {type:\"raw\"}\n", - "\n", - "\n", - "# def create_data_matrix(xs,degree=4):\n", - "# return np.vstack([[np.power(xs,pow) for pow in np.arange(degree)],np.ones(len(xs))]).T\n", - "\n", - "\n", - "def create_data_matrix(xs, degree=4):\n", - " pows = [np.power(xs, pow) for pow in np.arange(1, degree + 1)]\n", - " pows.reverse()\n", - " return np.vstack([pows, np.ones(len(xs))]).T\n", - "\n", - "\n", - "phi = create_data_matrix(xs, degree=degree)\n", - "\n", - "\n", - "w = gradient_descent(phi, ys, learning_rate=learning_rate, num_steps=num_steps)\n", - "y_hat = phi.dot(w)\n", - "\n", - "plot_data(y_hat, xs, ys, \"Polynomial regression (gradient descent steps)\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "tGcJv82aFiLc" - }, - "source": [ - "Let's see how a 10-th degree polynomial fits our data. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "EszayH6Q-z6_" - }, - "outputs": [], - "source": [ - "degree = 10\n", - "num_steps = 5000\n", - "learning_rate = 0.2\n", - "\n", - "\n", - "phi = create_data_matrix(xs, degree=degree)\n", - "w = gradient_descent(phi, ys, learning_rate=learning_rate, num_steps=num_steps)\n", - "y_hat = phi.dot(w)\n", - "\n", - "plot_data(y_hat, xs, ys, \"Polynomial regression (gradient descent steps)\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "o8SPF0UILmXW" - }, - "source": [ - "**What happens if we extend our predictions out a bit?**\n", - "\n", - "Our model fits the majority of the data! This sounds great, but let's see how our model handles new data sampled from the same **data generation process**! \n", - "\n", - "In the plot below we fill in some extra data points from the true function (in orange) for comparison, but bear in mind that these were not used to fit the regression model. We are **extrapolating** the model into a previously unseen region!" 
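Before looking at the extrapolation plot below, it can help to put numbers on the over-fitting story. The following is a minimal sketch (it assumes the helpers `generate_wave_like_dataset`, `create_data_matrix` and `regression_analytical_solution` defined earlier in this section are in scope) that compares the mean squared error on the training points with the error on freshly sampled points from the same data generating process, for a few polynomial degrees:

```python
import numpy as np

# Training data (same range as above) and a fresh sample from the same process.
train_xs, train_ys = generate_wave_like_dataset(min_x=-1, max_x=1, n=25)
fresh_xs, fresh_ys = generate_wave_like_dataset(min_x=-1, max_x=1, n=100)

for degree in (1, 4, 10):
    # Fit on the training set only (analytic least-squares solution).
    phi_train = create_data_matrix(train_xs, degree=degree)
    w = regression_analytical_solution(phi_train, train_ys)

    # Evaluate on both the training set and the fresh sample.
    phi_fresh = create_data_matrix(fresh_xs, degree=degree)
    train_mse = np.mean((phi_train.dot(w) - train_ys) ** 2)
    fresh_mse = np.mean((phi_fresh.dot(w) - fresh_ys) ** 2)
    print(f"degree {degree:2d}: train MSE = {train_mse:.3f}, fresh-data MSE = {fresh_mse:.3f}")
```

If the model is over-fitting, the training error keeps shrinking as the degree grows while the error on fresh data stops improving or gets worse. The extrapolation plot below makes the failure mode even more visible.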
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "Y2d5QywylwTK" - }, - "outputs": [], - "source": [ - "# Recover the analytic solution.\n", - "degree = 10\n", - "phi = create_data_matrix(xs, degree=degree)\n", - "w = regression_analytical_solution(phi, ys)\n", - "\n", - "# Extend the x's and y's.\n", - "more_xs, more_ys = generate_wave_like_dataset(min_x=-1.3, max_x=-1, n=20)\n", - "all_xs = np.concatenate([more_xs, xs])\n", - "all_ys = np.concatenate([more_ys, ys])\n", - "\n", - "# Get the design matrix for the extended data, so that we could make predictions\n", - "# for it.\n", - "phi = create_data_matrix(all_xs, degree=degree)\n", - "\n", - "# Note that we don't recompute w, we use the previously computed values that\n", - "# only saw x values in the range [0, 10]\n", - "y_hat = phi.dot(w)\n", - "\n", - "plt.scatter(xs, ys, label=\"Data\")\n", - "plt.scatter(more_xs, more_ys, label=\"Unseen Data\")\n", - "plt.plot(all_xs, y_hat, \"r\", label=\"Polynomial Regression\")\n", - "\n", - "plt.title(\"A wave-like dataset with the best-fit line\")\n", - "plt.xlabel(\"Input x\")\n", - "plt.ylabel(\"Output y\")\n", - "plt.legend()\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "V3ld4cRlPVGy" - }, - "source": [ - "We see that while the fit looks good in the blue region that the model was fitted on, the fit seems to diverge significantly in the orange region.\n", - "The model is able to **interpolate** well (fill in gaps in the region it was fitted), but it **extrapolates** (outside the fitted region) poorly.\n", - "This is a common concern with models in general, unless you can be sure that you have the correct *inductive biases* (assumptions about the data generating process) built into the model, you should be cautious about extrapolating from it.\n", - "\n", - "The fact that our model has very low training loss and high test loss (unseen data) is a sign of over-fitting. Over-fitting is when our models fits our training data, but fails to generalise to previously unseen data from the same data generating process. This is usually the result of the model having sufficient degrees of freedom to fit the noise in the training data. \n", - "\n" - ] - }, - { - "cell_type": "markdown", - "source": [ - "**Exercise 2.6 - Group Task** \n", - "\n", - "**What shall we do? Pause here!**\n", - "\n", - "Before progressing with this practical, take a moment to think about the problem. In machine learning, there are many practical approaches to getting a model that generalizes well. As you can guess, much theory is devoted to the problem too!\n", - "\n", - "With what you've seen so far, try to explain to your neighbour\n", - "\n", - "1. every factor that you can think of, that could cause a model to generalize poorly;\n", - "2. some ideas that you could think of to improve the model's fit to (unseen) data;\n", - "3. any underlying assumptions that you are making about unseen data.\n", - "\n", - "Don't proceed until you've had a solid discussion on the topic. If someone is tutoring this practical, they might contribute to the discussion!" - ], - "metadata": { - "id": "2feKuHJplo0U" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "sAtms17jtCOU" - }, - "source": [ - "## **2.3 Training Models Using Haiku and Optax** - `Beginner`\n", - "\n", - "For our Linear and Polynomial examples, we only used core JAX to keep track of and optimize our weights. 
This can be tedious, especially when dealing with larger models and when using more complicated optimization methods. \n", - "\n", - "Luckily, JAX has higher-level neural network libraries such as [Haiku](https://github.com/deepmind/dm-haiku) or [Flax](https://github.com/google/flax), which make building models more convenient, and libraries like [Optax](https://github.com/deepmind/optax), that make gradient processing and optimization more convenient. \n", - "\n", - "In this section, we will briefly go through how to use Haiku and Optax. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "0ySycQo7txoF" - }, - "outputs": [], - "source": [ - "%%capture\n", - "# @title Install Haiku and Optax. (Run Cell)\n", - "!pip install -U dm-haiku\n", - "!pip install -U optax\n", - "# For plotting.\n", - "!pip install livelossplot" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "exuVety_bFhQ" - }, - "source": [ - "### Haiku\n", - "\n", - "[Haiku](https://github.com/deepmind/dm-haiku) is JAX neural network library intended to be familiar to people used to object-oriented programming models (like PyTorch or Tensorflow), by making managing state simpler. \n", - "\n", - "Haiku modules are similar to standard python objects (they have references to their own parameters and functions). However, since JAX operates on *pure functions*, Haiku modules **cannot be directly instantiated**, but rather they need to be **wrapped into pure function transformations.**" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "9wvTzTi-YJTp" - }, - "source": [ - "Let's create a simple linear module." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "H_-3r49B-Orc" - }, - "outputs": [], - "source": [ - "import haiku as hk\n", - "\n", - "\n", - "class MyLinearModel(hk.Module):\n", - " def __init__(self, output_size, name=None):\n", - " super().__init__(name=name)\n", - " self.output_size = output_size\n", - "\n", - " def __call__(self, x):\n", - " j, k = x.shape[-1], self.output_size\n", - " w_init = hk.initializers.TruncatedNormal(1.0 / np.sqrt(j))\n", - " w = hk.get_parameter(\"w\", shape=[j, k], dtype=x.dtype, init=w_init)\n", - " b = hk.get_parameter(\"b\", shape=[k], dtype=x.dtype, init=jnp.ones)\n", - " return jnp.dot(x, w) + b" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "3WYb35ffYOSt" - }, - "source": [ - "And attempt to directly **instantiate** it." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "LuZy7pj9-b2m" - }, - "outputs": [], - "source": [ - "# Should raise an error.\n", - "try:\n", - " MyLinearModel(output_size=1)\n", - "except Exception as e:\n", - " print(\"Exception {}\".format(e))" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "-XGOeJCH-10P" - }, - "source": [ - "This fails since we are trying to **directly** instantiate `MyLinearModel`. 
Instead what we should do is wrap our model in a pure functional transform as follows: " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "d1yI7j2h_Esd" - }, - "outputs": [], - "source": [ - "def model_fn(x):\n", - " module = MyLinearModel(output_size=1)\n", - " return module(x)\n", - "\n", - "\n", - "model = hk.without_apply_rng(hk.transform(model_fn))" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "EZ24tXUiaHQa" - }, - "source": [ - "> We use `hk.without_apply_rng` since our model's *inference* (not initialization) is deterministic and hence has no use for a random key when calling `.apply`. \n", - "\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "3aWAc_f0BVFU" - }, - "outputs": [], - "source": [ - "model" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Lao8wS3tBjc3" - }, - "source": [ - "Our wrapper object has two methods: \n", - "- `init` - initialize the variables in the model and return these params. \n", - "- `apply` - run a forward pass through our data. " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "gTJcV6hjFh6u" - }, - "source": [ - "If we want to get the initial state of our module, we need to call `init` with an example input." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "nt0srU3rQlhL" - }, - "outputs": [], - "source": [ - "# input dimention we are considering\n", - "input_dim = 3\n", - "\n", - "example_x = jnp.arange(input_dim, dtype=jnp.float32)\n", - "rng_key = jax.random.PRNGKey(42)\n", - "\n", - "params = model.init(rng=rng_key, x=example_x)\n", - "print(params)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "VCoYMnZkGKOb" - }, - "source": [ - "We can now call the `apply` method as follows. Note we pass in the `params` variable that holds the current model weights. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "XA8n5cEMGVWC" - }, - "outputs": [], - "source": [ - "new_x = jnp.arange(input_dim, dtype=jnp.float32)\n", - "# example forward pass through our model\n", - "prediction = model.apply(params, new_x)\n", - "print(prediction)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "mmk2jcIHbRlS" - }, - "source": [ - "So that is it! Those are basics of using Haiku modules!" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "_3h034w5bWn6" - }, - "source": [ - "### Optax\n", - "\n", - "[Optax](https://github.com/deepmind/optax) is an optimization and gradient processing library in JAX. " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "CuWggGFEcdoy" - }, - "source": [ - "In our linear regression section, we manually updated the params of our model (e.g. `w = w - learning_rate * grad_w`). \n", - "\n", - "This wasn't too difficult in our simple case, but for more challenging optimizations, especially when chaining optimizations (e.g. clipping gradient norm and then applying an optimizer update), it becomes trickier to effectively and accurately implement these parameter updates. Luckily, Optax comes to the rescue here! " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "hvecjyZGelIV" - }, - "source": [ - "Here is a simple example of how you create and initialize an optimizer." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "zhqkLvtRe6zf" - }, - "outputs": [], - "source": [ - "import optax\n", - "\n", - "# create optim\n", - "learning_rate = 0.1\n", - "optimizer = optax.adam(learning_rate)\n", - "\n", - "# init optim\n", - "input_dim = 3\n", - "# init weights to pass to our optim\n", - "params = {\"w\": jnp.ones((input_dim,))}\n", - "\n", - "# Obtain the `opt_state` that contains statistics for the optimizer.\n", - "opt_state = optimizer.init(params)\n", - "print(opt_state)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Io4mLeIifxkX" - }, - "source": [ - "Once we have calculated the gradients, we pass them (`grads`) and the `opt_state` to our optimizer to get `updates` that should be applied to the current parameters and `new_opt_state`, which keeps track of the current state of the optimizer. \n", - "\n", - "```\n", - "updates, new_opt_state = optimizer.update(grads, opt_state)\n", - "params = optax.apply_updates(params, updates)\n", - "```" - ] - }, - { - "cell_type": "markdown", - "source": [ - "And that is the basics of Optax. " - ], - "metadata": { - "id": "4p1l2rUWpRZ7" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "7IaqVuRPg3ER" - }, - "source": [ - "### Full Training Loop Using Haiku and Optax ๐Ÿง™\n", - "\n", - "Here we show a full training loop, using Haiku and Optax. For convenience, we introduce structures like `TrainingState` and functions like `init`,`update` and `loss_fn`. Please read through to get comfortable with how you can effectively train JAX models." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "pqZZlOfNuMEn" - }, - "source": [ - "Here we define some helper functions. " - ] - }, - { - "cell_type": "code", - "source": [ - "from typing import Any, MutableMapping, NamedTuple, Tuple\n", - "import time\n", - "from sklearn import datasets\n", - "from sklearn.model_selection import train_test_split\n", - "import haiku as hk\n", - "import optax\n", - "import tensorflow as tf\n", - "import tensorflow_datasets as tfds\n", - "from livelossplot import PlotLosses\n", - "\n", - "# Convenient container for keeping track of training state.\n", - "class TrainingState(NamedTuple):\n", - " \"\"\"Container for the training state.\"\"\"\n", - "\n", - " params: hk.Params\n", - " opt_state: optax.OptState\n", - " step: jnp.DeviceArray\n", - "\n", - "\n", - "# function for our model (same as above)\n", - "def model_fn(x):\n", - " module = MyLinearModel(output_size=1)\n", - " return module(x).ravel()\n", - "\n", - "\n", - "# Load a simple dataset - diabetes (https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html)\n", - "# and convert to an iterator. 
Although it would be faster to use pure jnp arrays in this example,\n", - "# in practice for large datasets we use iterators.\n", - "# Read here https://www.tensorflow.org/guide/data_performance for best practices.\n", - "def load_dataset(seed, input_dim=3, train_batch_size=32, shuffule_train_data=True):\n", - " # Load the diabetes dataset\n", - " diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True)\n", - "\n", - " # Use only the first input_dim (3) features\n", - " diabetes_X = diabetes_X[:, :input_dim]\n", - "\n", - " X_train, X_test, y_train, y_test = train_test_split(\n", - " diabetes_X, diabetes_y, test_size=0.2, train_size=0.8, random_state=seed\n", - " )\n", - "\n", - " train_dataset = (\n", - " tf.data.Dataset.from_tensor_slices((X_train, y_train)).cache().repeat()\n", - " )\n", - " test_dataset = tf.data.Dataset.from_tensor_slices((X_test, y_test)).cache().repeat()\n", - "\n", - " if shuffule_train_data:\n", - " train_dataset = train_dataset.shuffle(10 * train_batch_size, seed=seed)\n", - "\n", - " train_dataset = train_dataset.batch(train_batch_size)\n", - " # Using full test dataset\n", - " test_dataset = test_dataset.batch(len(X_test))\n", - "\n", - " train_dataset = iter(tfds.as_numpy(train_dataset))\n", - " test_dataset = iter(tfds.as_numpy(test_dataset))\n", - " return train_dataset, test_dataset" - ], - "metadata": { - "id": "LY0t6C4OKzSK" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "Full training and evaluation loop." - ], - "metadata": { - "id": "-rGuA_Y4DHXA" - } - }, - { - "cell_type": "code", - "source": [ - "# first we retrive our model\n", - "model = hk.without_apply_rng(hk.transform(model_fn))\n", - "\n", - "# Then we create the optimiser - chain clipping by gradient norm and using Adam\n", - "learning_rate = 0.01\n", - "optimizer = optax.chain(\n", - " optax.clip_by_global_norm(0.5),\n", - " optax.adam(learning_rate=learning_rate),\n", - ")\n", - "\n", - "# define our loss function\n", - "def loss_fn(params, x, y_true):\n", - " y_pred = model.apply(params, x)\n", - " loss = (y_pred - y_true) ** 2\n", - " return jnp.mean(loss)\n", - "\n", - "\n", - "# Function to initialize our model and optimizer.\n", - "@jax.jit\n", - "def init(rng: jnp.ndarray, data) -> TrainingState:\n", - " \"\"\"\n", - " rng: jax prng seed.\n", - " data: Sample of the dataset used to get correct shape.\n", - " \"\"\"\n", - "\n", - " rng, init_rng = jax.random.split(rng)\n", - " initial_params = model.init(init_rng, data)\n", - " initial_opt_state = optimizer.init(initial_params)\n", - " return TrainingState(\n", - " params=initial_params,\n", - " opt_state=initial_opt_state,\n", - " step=np.array(0),\n", - " )\n", - "\n", - "\n", - "# Function to update our params and keep track of metrics\n", - "@jax.jit\n", - "def update(state: TrainingState, data):\n", - " X, y = data\n", - " loss_value, grads = jax.value_and_grad(loss_fn)(state.params, X, y)\n", - " updates, new_opt_state = optimizer.update(grads, state.opt_state)\n", - " new_params = optax.apply_updates(state.params, updates)\n", - "\n", - " new_state = TrainingState(\n", - " params=new_params,\n", - " opt_state=new_opt_state,\n", - " step=state.step + 1,\n", - " )\n", - " metrics = {\"train_loss\": loss_value, \"step\": state.step}\n", - " return new_state, metrics\n", - "\n", - "\n", - "# Function to evaluate our models\n", - "@jax.jit\n", - "def evaluate(params: hk.Params, test_dataset) -> jnp.ndarray:\n", - " # Here we simply use our loss func/mse to eval our 
models,\n", - " # but we can use diff functions for loss and evaluation,\n", - " # e.g. in classification we use Cross-entropy classification loss\n", - " # , but we use accuracy as an eval metric.\n", - " x_test, y_test_true = test_dataset\n", - " return loss_fn(params, x_test, y_test_true)\n", - "\n", - "\n", - "# We get our dataset\n", - "seed = 42\n", - "train_dataset, test_dataset = load_dataset(seed=seed, input_dim=10)\n", - "\n", - "# Initialise model params and optimiser;\n", - "rng = jax.random.PRNGKey(seed)\n", - "# We pass an example of the input to get the correct shapes\n", - "state = init(rng, next(train_dataset)[0])\n", - "\n", - "# Time our training\n", - "prev_time = time.time()\n", - "max_steps = 10**5\n", - "eval_every = 5000\n", - "metrics = {}\n", - "plotlosses = PlotLosses()\n", - "\n", - "# Training & evaluation loop.\n", - "for step in range(max_steps):\n", - " state, metrics = update(state, data=next(train_dataset))\n", - "\n", - " # Periodically evaluate on test set.\n", - " if step % eval_every == 0:\n", - " steps_per_sec = eval_every / (time.time() - prev_time)\n", - " prev_time = time.time()\n", - " test_loss = evaluate(state.params, next(test_dataset))\n", - " metrics.update({\"steps_per_sec\": steps_per_sec})\n", - " metrics.update({\"test_loss\": test_loss})\n", - " plotlosses.update(\n", - " {\n", - " \"train_loss\": jnp.mean(metrics[\"train_loss\"]),\n", - " }\n", - " )\n", - " plotlosses.update(\n", - " {\n", - " \"test_loss\": test_loss,\n", - " }\n", - " )\n", - " plotlosses.send()" - ], - "metadata": { - "id": "EsD62L4cUM9r" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "Please try and get comfortable with the above code since we will be using Haiku and Optax in other practicals. If you need assistance, please call a tutor!" - ], - "metadata": { - "id": "03woGcY0pxPb" - } - }, - { - "cell_type": "code", - "source": [ - "# @title Let's plot our predictions vs targets.\n", - "\n", - "X_test, y_test = next(test_dataset)\n", - "pred = model.apply(state.params, X_test)\n", - "\n", - "plt.figure(figsize=(7, 7))\n", - "plt.scatter(y_test, pred, c=\"crimson\")\n", - "\n", - "p1 = max(max(pred), max(y_test))\n", - "p2 = min(min(pred), min(y_test))\n", - "plt.plot([p1, p2], [p1, p2], \"b-\")\n", - "plt.xlabel(\"Actual Values\", fontsize=15)\n", - "plt.ylabel(\"Predictions\", fontsize=15)\n", - "plt.axis(\"equal\")\n", - "plt.show()" - ], - "metadata": { - "cellView": "form", - "id": "uU3aRT3p-QVY" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "So there is some correlation with our predictions and our actual targets. This shows that we are learning a useful model for our data." - ], - "metadata": { - "id": "qgFA9zHOBiLh" - } - }, - { - "cell_type": "markdown", - "source": [ - "You have officially trained a model end-to-end using the latest JAX techniques! ๐Ÿ”ฅ\n", - "\n", - "Although, we have only done simple Linear Regression in this tutorial, you have learned optimization techniques like gradient descent, which can apply to a variety of models! " - ], - "metadata": { - "id": "BMTbY9uv-lIk" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "fV3YG7QOZD-B" - }, - "source": [ - "# **Conclusion**\n", - "**Summary:**\n", - "- JAX combines Autograd and XLA to perform **accelerated** ๐Ÿš€ numerical computations. 
These computations are achieved using transforms such as `jit`,`grad`,`vmap` and `pmap`.\n", - "- JAX's `grad` function automatically calculates the gradients of your functions for you! \n", - "- Gradient descent is an effective algorithm to learn linear models, but also more complicated models, where analytical solutions don't exist. \n", - "- We need to be careful not to over-fit or under-fit on our datasets. \n", - "- Haiku and Optax make training JAX models more convenient. \n", - "\n", - "\n", - "**Next Steps:** \n", - "\n", - "- If you are interested in going deeper into Linear Regression, we have a Bayesian Linear Regression section in the [Bayesian Deep Learning Prac](https://github.com/deep-learning-indaba/indaba-pracs-2022/blob/main/practicals/Bayesian_Deep_Learning_Prac.ipynb).\n", - "- You can also adapt the model and dataset from the \"*Full Training Loop Using Haiku and Optax*\" section to train your custom models on custom datasets. \n", - "\n", - "\n", - "**References:** \n", - "\n", - "Part 1 \n", - "1. Various JAX [docs](https://jax.readthedocs.io/en/latest/) - specifically [quickstart](https://jax.readthedocs.io/en/latest/notebooks/quickstart.html), [common gotchas](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html), [jitting](\n", - "https://jax.readthedocs.io/en/latest/jax-101/02-jitting.html#), [random numbers](https://jax.readthedocs.io/en/latest/jax-101/05-random-numbers.html) and [pmap](https://jax.readthedocs.io/en/latest/jax-101/06-parallelism.html?highlight=pmap#). \n", - "2. http://matpalm.com/blog/ymxb_pod_slice/\n", - "3. https://roberttlange.github.io/posts/2020/03/blog-post-10/\n", - "4. [Machine Learning with JAX - From Zero to Hero | Tutorial #1](https://www.youtube.com/watch?v=SstuvS-tVc0). \n", - "\n", - "Part 2 \n", - "1. Parts of this section are adapted from [Deepmind's Regression Tutorial](https://github.com/deepmind/educational/blob/master/colabs/summer_schools/intro_to_regression.ipynb). \n", - "2. https://d2l.ai/chapter_linear-networks/linear-regression.html\n", - "3. https://www.cs.toronto.edu/~rgrosse/courses/csc411_f18/slides/lec06-slides.pdf\n", - "4. [Linear Regression Chapter - Mathematics for Machine Learning Book](https://mml-book.github.io/). \n", - "\n", - "\n", - "For other practicals from the Deep Learning Indaba, please visit [here](https://github.com/deep-learning-indaba/indaba-pracs-2022)." 
- ] - }, - { - "cell_type": "markdown", - "source": [ - "# **Appendix:** \n", - "\n" - ], - "metadata": { - "id": "XrRoSqlxfi7f" - } - }, - { - "cell_type": "markdown", - "source": [ - "## Derivation of partial derivatives for exercise 2.4.\n", - "\n", - "Derive $\\frac{\\partial \\mathcal{L}}{\\partial w}$:\n", - "\\begin{aligned}\n", - "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{ \\partial}{\\partial w} (\\frac{1}{2N} \\sum_{i=1}^N (y_i - (w x_i + b))^2) \\because{Definition of $\\mathcal{L}$} \\\\\n", - " \\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{2N} \\frac{ \\partial }{\\partial w} ( \\sum_{i=1}^N (y_i - (w x_i + b))^2) \\because{Constant multiple rule} \\\\\n", - "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{2N} \\sum_{i=1}^N \\frac{ \\partial }{\\partial w} (y_i - (w x_i + b))^2 \\because{Sum Rule - derivative of sum is sum of derivatives.} \\\\ \n", - "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{2N} \\sum_{i=1}^N 2 (y_i - (w x_i + b)) \\frac{ \\partial }{\\partial w}(y_i -(w x_i + b)) \\because{Power Rule + Chain Rule.} \\\\ \n", - "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{2N} \\sum_{i=1}^N 2 (y_i - (w x_i + b)) (-x_i) \\because{Compute derative.} \\\\ \n", - "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1(2)}{2N} \\sum_{i=1}^N (y_i - (w x_i + b)) (-x_i) \\because{Factor constant out of summation.} \\\\ \n", - "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{N} \\sum_{i=1}^N -y_ix_i + (w x_i + b)x_i \\because{Multiply brackets and simplify.} \\\\ \n", - "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{N} \\sum_{i=1}^N (-y_i + (w x_i + b))x_i \\because{Rewrite.} \\\\ \n", - "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{N} \\sum_{i=1}^N ((w x_i + b) -y_i )x_i \\because{Rewrite.} \\\\ \n", - "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{N} \\sum_{i=1}^N (f(x_i) -y_i )x_i \\because{Substitute in $f(x_i)$.} \\\\ \n", - "\\end{aligned}\n", - "\n", - "Derive $\\frac{\\partial \\mathcal{L}}{\\partial b}$:\n", - "\\begin{aligned}\n", - "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{ \\partial}{\\partial b} (\\frac{1}{2N} \\sum_{i=1}^N (y_i - (w x_i + b))^2) \\because{Definition of $\\mathcal{L}$} \\\\\n", - "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1}{2N} \\frac{ \\partial }{\\partial b} ( \\sum_{i=1}^N (y_i - (w x_i + b))^2) \\because{Constant multiple rule} \\\\\n", - "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1}{2N} \\sum_{i=1}^N \\frac{ \\partial }{\\partial b} (y_i - (w x_i + b))^2 \\because{Sum Rule - derivative of sum is sum of derivatives.} \\\\ \n", - "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1}{2N} \\sum_{i=1}^N 2 (y_i - (w x_i + b)) \\frac{ \\partial }{\\partial b}(y_i -(w x_i + b)) \\because{Power Rule + Chain Rule.} \\\\ \n", - "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1}{2N} \\sum_{i=1}^N 2 (y_i - (w x_i + b)) (-1) \\because{Compute derative.} \\\\ \n", - "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1(2)}{2N} \\sum_{i=1}^N (y_i - (w x_i + b)) (-1) \\because{Factor constant out of summation.} \\\\ \n", - "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1}{N} \\sum_{i=1}^N (-y_i + (w x_i + b)) \\because{Multiply brackets and simplify.} \\\\ \n", - "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1}{N} \\sum_{i=1}^N ((w x_i + b) -y_i ) \\because{Rewrite.} \\\\ \n", - "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1}{N} \\sum_{i=1}^N (f(x_i) -y_i ) 
\\because{Substitute in $f(x_i)$.} \\\\ \n", - "\\end{aligned}" - ], - "metadata": { - "id": "9OH9H7ndfuyQ" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "o1ndpYE50BpG" - }, - "source": [ - "# **Feedback**\n", - "\n", - "Please provide feedback that we can use to improve our practicals in the future." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "OIZvkhfRz9Jz", - "cellView": "form" - }, - "outputs": [], - "source": [ - "# @title Generate Feedback Form. (Run Cell)\n", - "from IPython.display import HTML\n", - "\n", - "HTML(\n", - " \"\"\"\n", - "\n", - "\"\"\"\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "oglV4kHMWnIN" - }, - "source": [ - "" - ] - } - ], - "metadata": { - "accelerator": "GPU", - "colab": { - "collapsed_sections": [ - "XrRoSqlxfi7f" - ], - "name": "Introduction_to_ML_using_JAX.ipynb", - "provenance": [] - }, - "gpuClass": "standard", - "kernelspec": { - "display_name": "Python 3", - "name": "python3" - }, - "language_info": { - "name": "python" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "m2s4kN_QPQVe" + }, + "source": [ + "# **Intro to ML using JAX**\n", + "\n", + "\n", + "\n", + "\n", + "\"Open\n", + "\n", + "ยฉ Deep Learning Indaba 2022. Apache License 2.0.\n", + "\n", + "**Authors:** Kale-ab Tessera\n", + "\n", + "**Reviewers:** Javier Antoran, James Allingham, Ruan van der Merwe, \n", + "Sebastian Bodenstein, Laurence Midgley, Joao Guilherme and Elan van Biljon. \n", + "\n", + "**Introduction:** \n", + "\n", + "In this tutorial, we will learn about JAX, a new machine learning framework that has taken deep learning research by storm! JAX is praised for its speed, and we will learn how to achieve these speedups, using core concepts in JAX, such as automatic differentiation (`grad`), parallelization (`pmap`), vectorization (`vmap`), just-in-time compilation (`jit`), and more. We will then use what we have learned to implement Linear Regression effectively while learning some of the fundamentals of optimization.\n", + "\n", + "**Topics:** \n", + "\n", + "Content: `Numerical Computing` , `Supervised Learning` \n", + "Level: `Beginner`\n", + "\n", + "\n", + "**Aims/Learning Objectives:**\n", + "\n", + "- Learn the basics of JAX and its similarities and differences with NumPy.\n", + "- Learn how to use JAX transforms - `jit`, `grad`, `vmap`, and `pmap`.\n", + "- Learn the basics of optimization and how to implement effective training procedures using [Haiku](https://github.com/deepmind/dm-haiku) and [Optax](https://github.com/deepmind/optax). \n", + "\n", + "**Prerequisites:**\n", + "\n", + "- Basic knowledge of [NumPy](https://github.com/numpy/numpy).\n", + "- Basic knowledge of [functional programming](https://en.wikipedia.org/wiki/Functional_programming). 
\n", + "\n", + "**Outline:** \n", + "\n", + ">[Part 1 - Basics of JAX](#scrollTo=Enx0WUr8tIPf)\n", + "\n", + ">>[1.1 From NumPy โžก Jax - Beginner](#scrollTo=-ZUp8i37dFbU)\n", + "\n", + ">>>[JAX and NumPy - Similarities ๐Ÿค](#scrollTo=CbOEYsWQ6tHv)\n", + "\n", + ">>>[JAX and NumPy - Differences โŒ](#scrollTo=lg4__l4A7yqc)\n", + "\n", + ">>[1.2 Acceleration in JAX ๐Ÿš€ - Beginner, Intermediate, Advanced](#scrollTo=TSj972IWxTo2)\n", + "\n", + ">>>[JAX is backend Agnostic - Beginner](#scrollTo=_bQ9QqT-yKbs)\n", + "\n", + ">>>[JAX Transformations - Beginner, Intermediate, Advanced](#scrollTo=JM_08mXEBRIK)\n", + "\n", + ">>>>[Basic JAX Transformations - jit and grad - Beginner](#scrollTo=cOGuGWtLmP7n)\n", + "\n", + ">>>>[Pure Functions ๐Ÿ’ก - Beginner](#scrollTo=fT56qxXzTVKZ)\n", + "\n", + ">>>>[More Advanced Transforms - vmap and pmap - Intermediate, Advanced](#scrollTo=tvBzh8wiGuLf)\n", + "\n", + "\n", + ">[Part 2 - From Linear to Non-Linear Regression](#scrollTo=aB0503xgmSFh)\n", + "\n", + ">>[2.1 Linear Regression - ๐Ÿ“ˆ Beginner](#scrollTo=XrWSN-zaWAhJ)\n", + "\n", + ">>>[Regression Toy Example - Housing Prices](#scrollTo=AcyM6XRj1cDz)\n", + "\n", + ">>>[Optimization by Trial-and-Error](#scrollTo=vnoEkgimTQ6V)\n", + "\n", + ">>>[Loss Function](#scrollTo=oLGAp30ZDnJ5)\n", + "\n", + ">>>[Gradient descent: No more tuning parameters by hand!](#scrollTo=fg5Hi4783Gus)\n", + "\n", + "\n", + ">>[2.2 From Linear to Polynomial Regression - Intermediate](#scrollTo=Ao93xuXGJhLh)\n", + "\n", + ">>>[Under-fitting](#scrollTo=CcXjMKi0Znr6)\n", + "\n", + ">>>[Over-fitting](#scrollTo=uwwajy30U9fX)\n", + "\n", + ">>[2.3 Training Models Using Haiku and Optax - Beginner](#scrollTo=sAtms17jtCOU)\n", + "\n", + ">>>[Haiku](#scrollTo=exuVety_bFhQ)\n", + "\n", + ">>>[Optax](#scrollTo=_3h034w5bWn6)\n", + "\n", + ">>>[Full Training Loop Using Haiku and Optax ๐Ÿง™](#scrollTo=7IaqVuRPg3ER)\n", + "\n", + ">[Conclusion](#scrollTo=fV3YG7QOZD-B)\n", + "\n", + ">[Appendix:](#scrollTo=XrRoSqlxfi7f)\n", + "\n", + ">>[Derivation of partial derivatives for exercise 2.4.](#scrollTo=9OH9H7ndfuyQ)\n", + "\n", + ">[Feedback](#scrollTo=o1ndpYE50BpG)\n", + "\n", + "\n", + "**Before you start:**\n", + "\n", + "For this practical, you will need to use a GPU to speed up training. To do this, go to the \"Runtime\" menu in Colab, select \"Change runtime type\" and then in the popup menu, choose \"GPU\" in the \"Hardware accelerator\" box.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "6EqhIg1odqg0" + }, + "source": [ + "## Installation and Imports" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "4boGA9rYdt9l" + }, + "outputs": [], + "source": [ + "## Install and import anything required. Capture hides the output from the cell.\n", + "# @title Install and import required packages. 
(Run Cell)\n", + "\n", + "import subprocess\n", + "import os\n", + "\n", + "# Based on https://stackoverflow.com/questions/67504079/how-to-check-if-an-nvidia-gpu-is-available-on-my-system\n", + "try:\n", + " subprocess.check_output('nvidia-smi')\n", + " print(\"a GPU is connected.\")\n", + "except Exception: \n", + " # TPU or CPU\n", + " if \"COLAB_TPU_ADDR\" in os.environ and os.environ[\"COLAB_TPU_ADDR\"]:\n", + " print(\"A TPU is connected.\")\n", + " import jax.tools.colab_tpu\n", + " jax.tools.colab_tpu.setup_tpu()\n", + " else:\n", + " print(\"Only CPU accelerator is connected.\")\n", + " # x8 cpu devices - number of (emulated) host devices\n", + " os.environ[\"XLA_FLAGS\"] = \"--xla_force_host_platform_device_count=8\"\n", + "import jax\n", + "import jax.numpy as jnp\n", + "from jax import grad, jit, vmap, pmap\n", + "\n", + "import matplotlib.pyplot as plt\n", + "import numpy as np" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "YQe1CfDyrkdL" + }, + "outputs": [], + "source": [ + "# @title Helper Functions. (Run Cell)\n", + "import copy\n", + "from typing import Dict\n", + "\n", + "\n", + "def plot_performance(data: Dict, title: str):\n", + " runs = list(data.keys())\n", + " time = list(data.values())\n", + "\n", + " # creating the bar plot\n", + " plt.bar(runs, time, width=0.35)\n", + "\n", + " plt.xlabel(\"Implementation\")\n", + " plt.ylabel(\"Average time taken (in s)\")\n", + " plt.title(title)\n", + " plt.show()\n", + "\n", + " best_perf_key = min(data, key=data.get)\n", + " all_runs_key = copy.copy(runs)\n", + "\n", + " # all_runs_key_except_best\n", + " all_runs_key.remove(best_perf_key)\n", + "\n", + " for k in all_runs_key:\n", + " print(\n", + " f\"{best_perf_key} was {round((data[k]/data[best_perf_key]),2)} times faster than {k} !!!\"\n", + " )" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "yFzjRHUsUQqq" + }, + "outputs": [], + "source": [ + "# @title Check the device you are using (Run Cell)\n", + "print(f\"Num devices: {jax.device_count()}\")\n", + "print(f\" Devices: {jax.devices()}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "0RGo-mOedEV8" + }, + "source": [ + "Text Cell below creates a LaTeX Macro to be used in math equations. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "blMNBku0dB8h" + }, + "source": [ + "$$\n", + "\\newcommand{\\because}[1]{&& \\triangleright \\textrm{#1}}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Enx0WUr8tIPf" + }, + "source": [ + "# **Part 1 - Basics of JAX**\n", + "\n", + "**What is JAX?**\n", + "\n", + "[JAX](https://jax.readthedocs.io/en/latest/index.html) is a python package for writing composable numerical transformations. It leverages [Autograd](https://github.com/hips/autograd) and [XLA](https://www.tensorflow.org/xla) (Accelerated Linear Algebra), to achieve high-performance numerical computing, which is particularly relevant in machine learning.\n", + "\n", + "It provides functionality such as automatic differentiation (`grad`), parallelization (`pmap`), vectorization (`vmap`), just-in-time compilation (`jit`), and more. These transforms operate on [pure functions](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#pure-functions), so JAX encourages a **functional programming** paradigm. Furthermore, the use of XLA allows one to target different kinds of accelerators (CPU, GPU and TPU), without code changes. 
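As a taste of how these transforms compose before each is covered in detail below, here is a minimal sketch; the function `f` is an illustrative stand-in rather than anything defined in this notebook:

```python
import jax
import jax.numpy as jnp

def f(x):
    # A simple scalar-valued function of a vector.
    return jnp.sum(x ** 2)

# grad differentiates f, vmap maps that gradient over a batch of inputs,
# and jit compiles the whole pipeline for whichever backend is attached.
batched_grad_f = jax.jit(jax.vmap(jax.grad(f)))

batch = jnp.arange(6.0).reshape(3, 2)
print(batched_grad_f(batch))  # per-example gradients, i.e. 2 * batch
```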
\n", + "\n", + "JAX is different from frameworks such as PyTorch or Tensorflow (TF). It is more low-level and minimalistic. JAX simply offers a set of primitives (simple operations) like `jit` and `vmap`, and relies on other libraries for other things e.g. using the data loader from PyTorch or TF. Due to JAX's simplicity, it is commonly used with higher-level neural network libraries such as [Haiku](https://github.com/deepmind/dm-haiku) or [Flax](https://github.com/google/flax). (Imagine writing complicated architectures using a NumPy-like interface alone! ๐Ÿ˜ฎ ) " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "-ZUp8i37dFbU" + }, + "source": [ + "## **1.1 From NumPy โžก Jax** - `Beginner`\n", + "\n", + " " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "CbOEYsWQ6tHv" + }, + "source": [ + "### JAX and NumPy - Similarities ๐Ÿค\n", + "\n", + "The main similarity between JAX and NumPy is that they share a similar interface and often, JAX and NumPy arrays can be used interchangeably. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "McStJC-l3qsG" + }, + "source": [ + "#### Similiar Interface" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "KbYfoaujT2F7" + }, + "source": [ + "Let's plot the sine functions using NumPy." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "sgRLq58OTz1t" + }, + "outputs": [], + "source": [ + "# 100 linearly spaced numbers from -np.pi to np.pi\n", + "x = np.linspace(-np.pi, np.pi, 100)\n", + "\n", + "# the function, which is y = sin(x) here\n", + "y = np.sin(x)\n", + "\n", + "# plot the functions\n", + "plt.plot(x, y, \"b\", label=\"y=sin(x)\")\n", + "\n", + "plt.legend(loc=\"upper left\")\n", + "\n", + "# show the plot\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "XCEnlC-PU3ps" + }, + "source": [ + "Now using jax. We already imported `jax.numpy` as `jnp` in the first cell." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "kRQf2mNRTlt3" + }, + "outputs": [], + "source": [ + "# 100 linearly spaced numbers from -jnp.pi to jnp.pi\n", + "x = jnp.linspace(-jnp.pi, jnp.pi, 100)\n", + "\n", + "# the function, which is y = sin(x) here\n", + "y = jnp.sin(x)\n", + "\n", + "# plot the functions\n", + "plt.plot(x, y, \"b\", label=\"y=sin(x)\")\n", + "\n", + "plt.legend(loc=\"upper left\")\n", + "\n", + "# show the plot\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "wuNscwHeV_dn" + }, + "source": [ + "**Exercise 1.1 - Code Task:** Can you plot the cosine function using `jnp`?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "5svZFPUCQNsG" + }, + "outputs": [], + "source": [ + "# Plot Cosine using jnp. 
(UPDATE ME)\n", + "\n", + "# 100 linearly spaced numbers\n", + "# UPDATE ME\n", + "x = ...\n", + "\n", + "# UPDATE ME\n", + "y = ...\n", + "\n", + "\n", + "# plot the functions\n", + "plt.plot(x, y, \"b\", label=\"y=cos(x)\")\n", + "\n", + "plt.legend(loc=\"upper left\")\n", + "\n", + "# show the plot\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "m4AVrGzy6JWR" + }, + "outputs": [], + "source": [ + "# @title Answer to code task (Try not to peek until you've given it a good try!')\n", + "# 100 linearly spaced numbers\n", + "x = jnp.linspace(-jnp.pi, jnp.pi, 100)\n", + "\n", + "y = jnp.cos(x)\n", + "\n", + "# plot the functions\n", + "plt.plot(x, y, \"b\", label=\"y=cos(x)\")\n", + "\n", + "plt.legend(loc=\"upper left\")\n", + "\n", + "# show the plot\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "lg4__l4A7yqc" + }, + "source": [ + "### JAX and NumPy - Differences โŒ \n", + "\n", + "Although JAX and NumPy have some similarities, they do have some important differences:\n", + "- Jax arrays are **immutable** (they can't be modified after they are created).\n", + "- The way they handle **randomness** -- JAX handles randomness explicitly." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "dPbOnhE4ZSTi" + }, + "source": [ + "#### JAX arrays are immutable, while NumPy arrays are not.\n", + "\n", + "JAX and NumPy arrays are often interchangeable, **but** Jax arrays are **immutable** (they can't be modified after they are created). Allowing mutations makes transforms difficult and violates conditions for [pure functions](https://en.wikipedia.org/wiki/Pure_function).\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Vdfb1wtd-GkF" + }, + "source": [ + "Let's see this in practice by changing the number at the beginning of an array. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "7r-Los6YZR-f" + }, + "outputs": [], + "source": [ + "# NumPy: mutable arrays\n", + "x = np.arange(10)\n", + "x[0] = 10\n", + "print(x)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "8Y23OWjE_BDA" + }, + "source": [ + "Let's try this in JAX." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "OxjkKpqAZxWo" + }, + "outputs": [], + "source": [ + "# JAX: immutable arrays\n", + "# Should raise an error.\n", + "try:\n", + " x = jnp.arange(10)\n", + " x[0] = 10\n", + "except Exception as e:\n", + " print(\"Exception {}\".format(e))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "VoWT5RBUagW8" + }, + "source": [ + "So it fails! We can't mutate a JAX array once it has been created. To update JAX arrays, we need to use [helper functions](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html) that return an updated copy of the JAX array. \n", + "\n", + "Instead of doing this `x[idx] = y`, we need to do this `x = x.at[idx].set(y)`. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "qJYxkh4qagwO" + }, + "outputs": [], + "source": [ + "x = jnp.arange(10)\n", + "new_x = x.at[0].set(10)\n", + "print(f\" new_x: {new_x} original x: {x}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Ut0meCGB5qD0" + }, + "source": [ + "Note here that `new_x` is a copy and that the original `x` is unchanged. 
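Beyond `.set`, the `.at` property provides other out-of-place updates such as `.add` and `.max`; a minimal sketch:

```python
import jax.numpy as jnp

x = jnp.arange(5)

# Each call returns a new array; x itself is never modified.
print(x.at[1].add(10))  # [ 0 11  2  3  4]
print(x.at[2:].set(0))  # [0 1 0 0 0]
print(x.at[0].max(7))   # [7 1 2 3 4]
print(x)                # [0 1 2 3 4] -- unchanged
```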
" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "oAH4c_smdGQU" + }, + "source": [ + "#### Randomness in NumPy vs JAX \n", + "\n", + "JAX is more explicit in Pseudo Random Number Generation (PRNG) than NumPy and other libraries (such as TensorFlow or PyTorch). [PRNG](https://en.wikipedia.org/wiki/Pseudorandom_number_generator) is the process of algorithmically generating a sequence of numbers, which *approximate* the properties of a sequence of random numbers. \n", + "\n", + "Let's see the differences in how JAX and NumPy generate random numbers." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Q2m376Ethf8m" + }, + "source": [ + "##### In Numpy, PRNG is based on a global `state`.\n", + "\n", + "Let's set the initial seed." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "-0t3sjxzdgmP" + }, + "outputs": [], + "source": [ + "# Set random seed\n", + "np.random.seed(42)\n", + "prng_state = np.random.get_state()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "QKVz5atZMMOV" + }, + "outputs": [], + "source": [ + "# @title Helper function to compare prng keys (Run Cell)\n", + "def is_prng_state_the_same(prng_1, prng_2):\n", + " \"\"\"Helper function to compare two prng keys.\"\"\"\n", + " # concat all elements in prng tuple\n", + " list_prng_data_equal = [(a == b) for a, b in zip(prng_1, prng_2)]\n", + " # stack all elements together\n", + " list_prng_data_equal = np.hstack(list_prng_data_equal)\n", + " # check if all elements are the same\n", + " is_prng_equal = all(list_prng_data_equal)\n", + " return is_prng_equal" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "nloZ9abah3J3" + }, + "source": [ + "Let's take a few samples from a Gaussian (normal) Distribution and check if PRNG keys/global state change." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "aiUcfX7iSenY" + }, + "outputs": [], + "source": [ + "print(\n", + " f\"sample 1 = {np.random.normal()} Did prng state change: {not is_prng_state_the_same(prng_state,np.random.get_state())}\"\n", + ")\n", + "prng_state = np.random.get_state()\n", + "print(\n", + " f\"sample 2 = {np.random.normal()} Did prng state change: {not is_prng_state_the_same(prng_state,np.random.get_state())}\"\n", + ")\n", + "prng_state = np.random.get_state()\n", + "print(\n", + " f\"sample 3 = {np.random.normal()} Did prng state change: {not is_prng_state_the_same(prng_state,np.random.get_state())}\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "nuHkW6V4iLa9" + }, + "source": [ + "Numpy's global random state is updated every time a random number is generated, so *sample 1 != sample 2 != sample 3*. \n", + "\n", + "Having the state automatically updated, makes it difficult to handle randomness in a **reproducible** way across different threads, processes and devices. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "lGDU6ckKkzqL" + }, + "source": [ + "##### In JAX, PRNG is explicit.\n", + "\n", + "In JAX, for each random number generation, you need to explicitly pass in a random key/state." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "6oKdk5CSmD-f" + }, + "source": [ + "Passing the same state/key results in the same number being generated. This is generally undesirable." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Y-6B0hjtlTmd" + }, + "outputs": [], + "source": [ + "from jax import random\n", + "\n", + "key = random.PRNGKey(42)\n", + "print(f\"sample 1 = {random.normal(key)}\")\n", + "print(f\"sample 2 = {random.normal(key)}\")\n", + "print(f\"sample 3 = {random.normal(key)}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "l0KcwEbZqIaQ" + }, + "source": [ + "To generate different and independent samples, you need to manually **split** the keys. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "v-7BhY0MmEhI" + }, + "outputs": [], + "source": [ + "from jax import random\n", + "\n", + "key = random.PRNGKey(42)\n", + "print(f\"sample 1 = {random.normal(key)}\")\n", + "\n", + "# We split the key -> new key and subkey\n", + "new_key, subkey = random.split(key)\n", + "\n", + "# We use the subkey immediately and keep the new key for future splits.\n", + "# It doesn't really matter which key we keep and which one we use immediately.\n", + "print(f\"sample 2 = {random.normal(subkey)}\")\n", + "\n", + "# We split the new key -> new key2 and subkey\n", + "new_key2, subkey = random.split(new_key)\n", + "print(f\"sample 3 = {random.normal(subkey)}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "2VnTDptmuk-i" + }, + "source": [ + "By using JAX, we can more easily reproduce random number generation in parallel across threads, processes, or even devices by explicitly passing and keeping track of the prng key (without relying on a global state that automatically gets updated). For more details on PRNG in JAX, you can read more [here](https://jax.readthedocs.io/en/latest/jep/263-prng.html). " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "TSj972IWxTo2" + }, + "source": [ + "## **1.2 Acceleration in JAX** ๐Ÿš€ - `Beginner`, `Intermediate`, `Advanced`\n", + "\n", + "JAX leverages Autograd and XLA for accelerating numerical computation. The use of Autograd allows for automatic differentiation (`grad`), while XLA allows JAX to run on multiple accelerators/backends and run transforms like `jit` and `pmap`. JAX also allows you to use `vmap` for automatic vectorization. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "_bQ9QqT-yKbs" + }, + "source": [ + "### JAX is backend Agnostic - `Beginner`\n", + "\n", + "Using JAX, you can run the same code on different backends/AI accelerators (e.g. CPU/GPU/TPU), **with no changes in code** (no more `.to(device)` - from frameworks like PyTorch). This means we can easily run linear algebra operations directly on GPU/TPU." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "4PbcFsfAibBu" + }, + "source": [ + "**Multiplying Matrices**\n", + "\n", + "Dot products are a common operation in numerical computing and a central part of modern deep learning. They are defined over [vectors](https://en.wikipedia.org/wiki/Coordinate_vector), which can loosely be thought of as a list of multiple scalers (single values). \n", + "\n", + "Formally, given two vectors $\\boldsymbol{x}$,$\\boldsymbol{y}$ $\\in R^n$, their dot product is defined as:\n", + "\n", + "
$\\boldsymbol{x}^{\\top} \\boldsymbol{y}=\\sum_{i=1}^{n} x_{i} y_{i}$
" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "AY1RsVkXaokP" + }, + "source": [ + "Dot Product in NumPy (will run on cpu)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "yj59KkD_HDOs" + }, + "outputs": [], + "source": [ + "size = 1000\n", + "x = np.random.normal(size=(size, size))\n", + "y = np.random.normal(size=(size, size))\n", + "numpy_time = %timeit -o -n 10 a_np = np.dot(y,x.T)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "6c_kl-u0KPVY" + }, + "source": [ + "Dot Product using JAX (will run on current runtime - e.g. GPU)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "PHRcHK86KO3w" + }, + "outputs": [], + "source": [ + "size = 1000\n", + "key1, key2 = jax.random.split(jax.random.PRNGKey(42), num=2)\n", + "x = jax.random.normal(key1, shape=(size, size))\n", + "y = jax.random.normal(key2, shape=(size, size))\n", + "jax_time = %timeit -o -n 10 jnp.dot(y, x.T).block_until_ready()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "LMTSpEG3TNah" + }, + "source": [ + "\n", + "> When timing JAX functions, we use `.block_until_ready()` because JAX uses [asynchronous dispatch](https://jax.readthedocs.io/en/latest/async_dispatch.html#async-dispatch). This means JAX doesn't wait for the operation to complete before returning control to your code. To fairly compute the time taken for JAX operations, we therefore block until the operation is done.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "S3vwh6Q724gn" + }, + "source": [ + "How much faster was the dot product in JAX (Using GPU)?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "UkASX9p34A1D" + }, + "outputs": [], + "source": [ + "np_average_time = np.mean(numpy_time.all_runs)\n", + "jax_average_time = np.mean(jax_time.all_runs)\n", + "data = {\"numpy\": np_average_time, \"jax\": jax_average_time}\n", + "\n", + "plot_performance(data, title=\"Average time taken per framework to run dot product\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "X6Rv_OQgBOqr" + }, + "source": [ + "JAX not running much faster? -> Re-run the JAX cell. \n", + "> \"Keep in mind that the first time you run JAX code, it will be slower because it is being compiled. T*his is true even if you donโ€™t use jit in your own code, because JAXโ€™s builtin functions are also jit compiled*.\" - [JAX Docs](https://jax.readthedocs.io/en/latest/faq.html#benchmarking-jax-code).\n", + "\n", + "If you are running on an accelerator, you should see a considerable performance benefit of using JAX, without making any changes to your code! \n", + "\n", + "\n", + "\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "JM_08mXEBRIK" + }, + "source": [ + "### JAX Transformations - `Beginner`, `Intermediate`, `Advanced`\n", + "\n", + "JAX transforms (e.g. jit, grad, vmap, pmap) first convert python functions into an intermediate language called *jaxpr*. Transforms are then applied to this jaxpr representation.\n", + "\n", + "JAX generates jaxpr, in a process known as **tracing**. During tracing, function inputs are wrapped by a tracer object and then JAX records all operations (including regular python code) that occur during the function call. These recorded operations are used to reconstruct the function. \n", + "\n", + "Any python side-effects are not recorded during tracing. 
JAX transforms and compilations are designed to work only with **pure functions**. For more on tracing and jaxpr, you can read [here](https://jax.readthedocs.io/en/latest/jaxpr.html).\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "cOGuGWtLmP7n" + }, + "source": [ + "#### Basic JAX Transformations - `jit` and `grad` - `Beginner`\n", + "\n", + "In this section, we will explore two basic JAX transforms: \n", + "- jit (Just-in-time compilation) - compiles and caches JAX Python functions so that they can be run efficiently on XLA to *speed up function calls*.\n", + "- grad - *Automatically* compute *gradients* of functions." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "QsJE_U-ZzVol" + }, + "source": [ + "##### jit\n", + "\n", + "Jax dispatches operations to accelerators one at a time. If we have repeated operations, we can use `jit` to compile the function the first time it is called, then subsequent calls will be [cached](https://en.wikipedia.org/wiki/Cache_(computing) (save the compiled version so that it doesn't need to be recompiled everytime we call it). " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "uIYsqIp_-Dly" + }, + "source": [ + "Let's compile [ReLU (Rectified Linear Unit)](https://arxiv.org/abs/1803.08375), a popular activation function in deep learning. \n", + "\n", + "ReLU is defined as follows:\n", + "
$f(x)=\max(0,x)$&#13;
\n", + "\n", + "It can be visualized as follows:\n", + "\n", + "
\n", + "\n", + "
,\n", + "\n", + "where $x$ is the input to the function and $y$ is output of ReLU.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Vm-bN9sQETLV" + }, + "source": [ + "$$f(x)=\\max (0, x)=\\left\\{\\begin{array}{l}x_{i} \\text { if } x_{i}>0 \\\\ 0 \\text { if } x_{i}<=0\\end{array}\\right.$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "dFiuu3BFAKdY" + }, + "source": [ + "**Exercise 1.2 - Code Task:** Complete the ReLU implementation below using standard python." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "1_qMJJbs-Cbe" + }, + "outputs": [], + "source": [ + "# Implement ReLU.\n", + "def relu(x):\n", + " if x > 0:\n", + " return\n", + " # TODO Implement me!\n", + " else:\n", + " return\n", + " # TODO Implement me!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "zCobLakM1esy" + }, + "outputs": [], + "source": [ + "# @title Run to test your function.\n", + "\n", + "\n", + "def plot_relu(relu_function):\n", + " max_int = 5\n", + " # Generete 100 evenly spaced points from -max_int to max_int\n", + " x = np.linspace(-max_int, max_int, 1000)\n", + " y = np.array([relu_function(xi) for xi in x])\n", + " plt.plot(x, y, label=\"ReLU\")\n", + " plt.legend(loc=\"upper left\")\n", + " plt.xticks(np.arange(min(x), max(x) + 1, 1))\n", + " plt.show()\n", + "\n", + "\n", + "def check_relu_function(relu_function):\n", + " # Generete 100 evenly spaced points from -100 to -1\n", + " x = np.linspace(-100, -1, 100)\n", + " y = np.array([relu_function(xi) for xi in x])\n", + " assert (y == 0).all()\n", + "\n", + " # Check if x == 0\n", + " x = 0\n", + " y = relu_function(x)\n", + " assert y == 0\n", + "\n", + " # Generete 100 evenly spaced points from 0 to 100\n", + " x = np.linspace(0, 100, 100)\n", + " y = np.array([relu_function(xi) for xi in x])\n", + " assert np.allclose(x, y)\n", + "\n", + " print(\"Your ReLU function is correct!\")\n", + "\n", + "\n", + "check_relu_function(relu)\n", + "plot_relu(relu)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "Kken6_XvDdOK" + }, + "outputs": [], + "source": [ + "# @title Answer to code task (Try not to peek until you've given it a good try!')\n", + "def relu(x):\n", + " if x > 0:\n", + " return x\n", + " else:\n", + " return 0\n", + "\n", + "\n", + "check_relu_function(relu)\n", + "plot_relu(relu)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "2mgIAyE2Fx3O" + }, + "source": [ + "Let's try to `jit` this function to speed up compilation and then try to call it." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "4YDkiNlRF6jn" + }, + "outputs": [], + "source": [ + "relu_jit = jax.jit(relu)\n", + "\n", + "key = jax.random.PRNGKey(42)\n", + "# Gen 1000000 random numbers and pass them to relu\n", + "num_random_numbers = 1000000\n", + "x = jax.random.normal(key, (num_random_numbers,))\n", + "\n", + "# Should raise an error.\n", + "try:\n", + " relu_jit(x)\n", + "except Exception as e:\n", + " print(\"Exception {}\".format(e))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "y7q33C4pHOQW" + }, + "source": [ + "**Why does this fail?**\n", + "\n", + "\n", + "> As mentioned above, JAX transforms first converts python functions into an intermediate language called *jaxpr*. 
Jaxpr only captures what is executed on the parameters given to it during tracing, so this means during conditional calls, jaxpr only considers the branch taken.\n", + "> \n", + "> When jit-compiling a function, we want to compile and cache a version of the function that can handle multiple different argument types (so we don't have to recompile for each function evaluation). For example, when we compile a function on an array `jnp.array([1., 2., 3.], jnp.float32)`, we would likely also want to use the compiled function for `jnp.array([4., 5., 6.], jnp.float32)`. \n", + "> \n", + "> To achieve this, JAX traces your code based on abstract values. The default abstraction level is a ShapedArray - array that has a fixed size and dtype, for example, if we trace a function using `ShapedArray((3,), jnp.float32)`, it can be reused for any concrete array of size 3, and float32 dtype. \n", + "> \n", + "> This does come with some challenges. Tracing that relies on concrete values becomes tricky and sometimes results in `ConcretizationTypeError` as in the ReLU function above. Furthermore, when tracing a function with conditional statements (\"if ...\"), JAX doesn't know which branch to take when tracing and so tracing can't occur.\n", + "\n", + "**TLDR**: JAX tracing doesn't work well with conditional statements (\"if ...\"). \n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "uLswU8aMEQ9K" + }, + "source": [ + "To solve this, we have two options:\n", + "- Use static arguments to make sure JAX traces on a concrete value level - this is not ideal if you need to retrace a lot. Example - bottom of this [section](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#python-control-flow-jit).\n", + "- Use builtin JAX condition flow primitives such as [`lax.cond`](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.cond.html) or [`jnp.where`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.where.html). " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "SX8k4R7daBpP" + }, + "source": [ + "**Exercise 1.3 - Code Task** : Let's convert our ReLU function above to work with jit.\n", + "\n", + "**Useful methods:** [`jnp.where`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.where.html) (or [`jnp.maximum`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.maximum.html), if you prefer.) " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "p-4mXLwqaK-b" + }, + "outputs": [], + "source": [ + "# Implement a jittable ReLU\n", + "def relu(x):\n", + " # TODO Implement ME!\n", + " return ..." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "B5fq_QRoaaG5" + }, + "outputs": [], + "source": [ + "# @title Run to test your function.\n", + "check_relu_function(relu)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "XLtBaplGxlS3" + }, + "outputs": [], + "source": [ + "# @title Answer to code task (Try not to peek until you've given it a good try!')\n", + "def relu(x):\n", + " return jnp.where(x > 0, x, 0)\n", + " # Another option - return jnp.maximum(x,0)\n", + "\n", + "\n", + "check_relu_function(relu)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "KYogDOCLiLXN" + }, + "outputs": [], + "source": [ + "# @title Now let's see the performance benefit of using jit! 
(Run me)\n", + "\n", + "# jit our function\n", + "relu_jit = jax.jit(relu)\n", + "\n", + "# generate random input\n", + "key = jax.random.PRNGKey(42)\n", + "num_random_numbers = 1000000\n", + "x = jax.random.normal(key, (num_random_numbers,))\n", + "\n", + "# time normal jit function\n", + "jax_time = %timeit -o -n 10 relu(x).block_until_ready()\n", + "\n", + "# Warm up/Compile - first run for jitted function\n", + "relu_jit(x).block_until_ready()\n", + "\n", + "# time jitted function\n", + "jax_jit_time = %timeit -o -n 10 relu_jit(x).block_until_ready()\n", + "\n", + "# Let's plot the performance difference\n", + "jax_avg_time = np.mean(jax_time.all_runs)\n", + "jax_jit_avg_time = np.mean(jax_jit_time.all_runs)\n", + "data = {\"JAX (no jit)\": jax_avg_time, \"JAX (with jit)\": jax_jit_avg_time}\n", + "\n", + "plot_performance(data, title=\"Average time taken for ReLU function\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "dxq-z-xzs40s" + }, + "source": [ + "##### grad\n", + "\n", + "`grad` is used to automatically compute the gradient of a function in JAX. It can be applied to Python and NumPy functions, which means you can differentiate through loops, branches, recursion, and closures. \n", + "\n", + "`grad` takes in a function `f` and returns a function. If `f` is a mathematical function $f$, then `grad(f)` corresponds to $f'$ (Lagrange's notation), with `grad(f)(x)` corresponding to $f'(x)$.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "C49R8EOs-GHe" + }, + "source": [ + "Let's take a simple function $f(x)=6x^4-9x+4$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "lUMepl6J-dQP" + }, + "outputs": [], + "source": [ + "f = lambda x: 6 * x**4 - 9 * x + 4" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "9ayvrkpiBiu4" + }, + "source": [ + "We can compute the gradient of this function - $f'(x)$ and evaluate it at $x=3$." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "YNm9hS2S-vJk" + }, + "outputs": [], + "source": [ + "dfdx = grad(f)\n", + "dfdx_3 = dfdx(3.0)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "UcRUywsnF3LZ" + }, + "source": [ + "**Exercise 1.4 - Math Task**: Can you calculate $f'(2)$ by hand?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "PybYK6NEFWrD" + }, + "outputs": [], + "source": [ + "answer = 0 # @param {type:\"integer\"}\n", + "\n", + "dfdx_2 = dfdx(2.0)\n", + "\n", + "assert (\n", + " answer == dfdx_2\n", + "), \"Incorrect answer, hint https://en.wikipedia.org/wiki/Power_rule#Statement_of_the_power_rule\"\n", + "\n", + "print(\"Nice, you got the correct answer!\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "CAwlhxIlRPp9" + }, + "outputs": [], + "source": [ + "# @title Answer to math task (Try not to run until you've given it a good try!') \n", + "%%latex \n", + "\\begin{aligned}\n", + "f(x) & = 6x^4-9x+4 \\\\\n", + "f'(x) & = 24x^3 -9 && \\triangleright \\textrm{Power Rule.} \\\\ \n", + "f'(2) & = 24(2)^3 -9 = 183 && \\triangleright \\textrm{Substituting x=2} \\\\\n", + "\\end{aligned}" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "wcB5ZjojH67Q" + }, + "source": [ + "We can also chain `grad` to calculate higher order deratives. 
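As a quick added check (using the same `f` defined above), chaining `grad` twice gives the second derivative:

```python
# f(x) = 6x^4 - 9x + 4, so f''(x) = 72x^2.
d2dx = grad(grad(f))
print(d2dx(1.0))  # 72.0
```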
\n", + "\n", + "We can calculate $f'''(x)$ as follows:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "013SFq7BE54W" + }, + "outputs": [], + "source": [ + "d3dx = grad(grad(grad(f)))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "7_r9VQGoIsa6" + }, + "source": [ + "**Exercise 1.5 - Math Task**: How about $f'''(2)$ by hand?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "WZUArv4TInPg" + }, + "outputs": [], + "source": [ + "answer = 0 # @param {type:\"integer\"}\n", + "\n", + "d3dx_2 = d3dx(2.0)\n", + "\n", + "assert answer == d3dx_2, \"Incorrect answer, hint ...\"\n", + "\n", + "print(\"Nice, you got the correct answer!\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "TCC7SkH8MMVk" + }, + "outputs": [], + "source": [ + "# @title Answer to math task (Try not to run until you've given it a good try!') \n", + "%%latex \n", + "\n", + "\\begin{aligned}\n", + "f(x) & = 6x^4-9x+4 \\\\\n", + "f'(x) & = 24x^3 -9 && \\triangleright \\textrm{Power Rule.} \\\\\n", + "f''(x) & = 72x^2 && \\triangleright \\textrm{Power Rule.} \\\\\n", + "f'''(x) & = 144x && \\triangleright \\textrm{Power Rule.} \\\\\n", + "f'''(2) & = 144(2)=288 && \\triangleright \\textrm{Substituting x=2} \\\\ \n", + "\\end{aligned}" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "c3QgJNU9XYyz" + }, + "source": [ + "Another useful method is `value_and_grad`, where we can get the value ($f(x)$) and gradient ($f'(x)$). " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "x3zeSv6gXuyd" + }, + "outputs": [], + "source": [ + "from jax import value_and_grad\n", + "\n", + "f_x, dy_dx = value_and_grad(f)(2.0)\n", + "print(f\"f(x): {f_x} fโ€ฒ(x): {dy_dx} \")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "_vUr-B6gSxnu" + }, + "source": [ + "> For partial derivatives, you need to use the [`argnums`](https://jax.readthedocs.io/en/latest/_autosummary/jax.grad.html) param to specify which variables you want to differentiate with respect to. \n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "MktOLPnwvnH3" + }, + "source": [ + "**Exercise 1.6 - Group Task:** Chat with neighbour/think about how JAX's automatic differentiation compares to other libraries such as Pytorch or Tensorflow. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "rvXlE7z02M2D" + }, + "source": [ + "Another useful application related to `grad` is when you want your `grad` function to return auxiliary (extra) data, that you don't want differentiated. You can use the `has_aux` parameter to do this (example in \"Auxiliary data\" section in [here](https://github.com/google/jax/blob/main/docs/jax-101/01-jax-basics.ipynb))." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "fT56qxXzTVKZ" + }, + "source": [ + "#### Pure Functions ๐Ÿ’ก - `Beginner`\n", + "\n", + "So we have learned about `jit` and `grad`. Before we move on, let's make sure we understand [**pure functions**](https://en.wikipedia.org/wiki/Pure_function). \n", + "\n", + "JAX transformation and compilation are designed to work reliably on **pure functions**. These functions have the following properties:\n", + "1. All **input** data is passed through the **function's parameters**. \n", + "2. All **results** are output through the **function's return**. \n", + "3. 
The function always returns the same **result** if invoked with the **same inputs**. What if your function involves randomness? Pass in the random seed!\n", + "4. **No [side-effects](https://en.wikipedia.org/wiki/Side_effect_(computer_science))** - no mutation of non-local variables or input/output streams. \n", + "\n", + "Let's see what could happen if we don't stick to using pure functions." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Mad7l7s0CtT1" + }, + "source": [ + "##### Side Effects" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "xkQWTE2Xe955" + }, + "source": [ + "Let's call print within a function." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "S9aeUdUoBmCg" + }, + "outputs": [], + "source": [ + "def impure_print_side_effect(x):\n", + " print(\"Print me!\") # This is a side-effect\n", + " return x\n", + "\n", + "\n", + "# The side-effects appear during the first run\n", + "print(\"First call: \", jax.jit(impure_print_side_effect)(4.0))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "nu4rnyS7ox_L" + }, + "source": [ + "As expected, the print statement is called.\n", + "\n", + "Let's call this function again. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "-wnkIqAxfDeJ" + }, + "outputs": [], + "source": [ + "# Subsequent runs with parameters of same type and shape may not show the side-effect\n", + "# This is because JAX now invokes a cached compilation of the function\n", + "print(\"Second call: \", jax.jit(impure_print_side_effect)(5.0))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "64rNvVnwo-eB" + }, + "source": [ + "Ah, no print statement! Since JAX cached the compilation of the function, `print()` calls will only happen during tracing and not every time the function is called. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Mp_CkOL-o86t" + }, + "outputs": [], + "source": [ + "# JAX re-runs the Python function when the type or shape of the argument changes\n", + "print(\n", + " \"Third call, different type: \", jax.jit(impure_print_side_effect)(jnp.array([5.0]))\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "XFogrIf5fbLU" + }, + "source": [ + "In this case, we called the function with a different shaped object and so it triggered the re-tracing of the function and print was called again. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "pqV6_25GCxHL" + }, + "source": [ + "To print values in compiled functions, use [host callbacks](https://jax.readthedocs.io/en/latest/jax.experimental.host_callback.html?highlight=print#jax.experimental.host_callback.id_print)([example](https://github.com/google/jax/issues/196#issuecomment-1191155679)) or if your jax version>=0.3.16, you can use [`jax.debug.print`](https://jax.readthedocs.io/en/latest/debugging/print_breakpoint.html). \n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "EqL1-TGaC8Ir" + }, + "source": [ + "##### Globals" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "t8dzJog8tMe_" + }, + "source": [ + "Using global variables can also lead to some undesired consequences!" 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "vwAkKrDiCXO6" + }, + "outputs": [], + "source": [ + "g = 0.0\n", + "\n", + "\n", + "def impure_uses_globals(x):\n", + " return x + g\n", + "\n", + "\n", + "# JAX captures the value of the global during the first run\n", + "print(\"First call: \", jax.jit(impure_uses_globals)(4.0))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "pWNE8B5btcfc" + }, + "source": [ + "This prints 4, using the original value of `g`.\n", + "\n", + "Let's update `g` and call our function again." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "mLMpdQZwtUEL" + }, + "outputs": [], + "source": [ + "g = 10.0 # Update the global\n", + "\n", + "# Subsequent runs may silently use the cached value of the globals\n", + "print(\"Second call: \", jax.jit(impure_uses_globals)(4.0))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "o3-ygEx0tpBX" + }, + "source": [ + "Even though we updated our global variable, this still prints 4, using the original value of `g`. This is because the value of `g` was cached." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "LDecWNyktWDN" + }, + "outputs": [], + "source": [ + "# JAX re-runs the Python function when the type or shape of the argument changes\n", + "# This will end up reading the latest value of the global\n", + "print(\"Third call, different type: \", jax.jit(impure_uses_globals)(jnp.array([4.0])))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "3mIZaXOqt5ix" + }, + "source": [ + "Similar to the side-effects example, re-tracing gets triggered when the shape of our input has changed. In this case, our function now uses the updated value of `g`." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "aLis2BV04BQK" + }, + "source": [ + "Since the global variables are cached, it is still okay to use global **constants** inside jax functions." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "JAbqUwp0uPta" + }, + "source": [ + "#### JAX transforms <-> Pure Functions \n", + "In summary, JAX transforms should only be used with pure functions!" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "tvBzh8wiGuLf" + }, + "source": [ + "#### More Advanced Transforms - `vmap` and `pmap` - `Intermediate`, `Advanced`\n", + "\n", + "JAX also provides transforms that allow you automatically vectorize (`vmap`) and parallelize (`pmap`) your code. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "RCUB9YkCnCFb" + }, + "source": [ + "##### vmap - `Intermediate`\n", + "\n", + "vmap (Vectorizing map) automatically vectorizes your python functions. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "e858lqfYKd4d" + }, + "source": [ + "Let's define a simple function that calculates the min and max of an input." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "-6qalyXgDsKB" + }, + "outputs": [], + "source": [ + "def min_max(x):\n", + " return jnp.array([jnp.min(x), jnp.max(x)])" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "muSIsUkgKlxh" + }, + "source": [ + "We can apply this function to the vector - `[0, 1, 2, 3, 4]` and get the min and max values." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "F5wIeGieKsWG" + }, + "outputs": [], + "source": [ + "x = jnp.arange(5)\n", + "min_max(x)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "_PkC7NnPLNXq" + }, + "source": [ + "What about if we want to apply this to a batch/list of vectors (i.e. calculate the min and max independently across multiple batches)? " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "hRngFfwCMHLd" + }, + "source": [ + "Let's create our batch - 3 vectors of size 5." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "EKuh459OD6jx" + }, + "outputs": [], + "source": [ + "batch_size = 3\n", + "batched_x = np.arange(15).reshape((batch_size, -1))\n", + "print(batched_x)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "hApYpVEvNS1y" + }, + "source": [ + "**Exercise 1.7 - Question**: What do you think would be the result if we passed batch_x into `min_max`?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "gu6C3J0kMrtj" + }, + "outputs": [], + "source": [ + "batch_min_max_output = [[0,4],[5,9],[10,14]] # @param [\"[[0,4],[5,9],[10,14]]\", \"[[0,10],[1,11],[2,12],[3,13],[4,14]]\", \"[0,14]\"] {type:\"raw\"}\n", + "\n", + "assert (batch_min_max_output == np.array(min_max(batched_x))).all(), \"Incorrect answer.\"\n", + "\n", + "print(\"Nice, you got the correct answer!\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "6K0weiHOOb8L" + }, + "source": [ + "So the above is not what we want. The `min` and `max` is applied across the entire batch, when we want the min and max per vector/mini-batch. \n", + "\n", + "We can also manually batch this by `jnp.stack` and a for loop, as follows:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "q8RdAqr8N-Fd" + }, + "outputs": [], + "source": [ + "@jit\n", + "def manual_batch_min_max_loop(batched_x):\n", + " min_max_result_list = []\n", + " for x in batched_x:\n", + " min_max_result_list.append(min_max(x))\n", + " return jnp.stack(min_max_result_list)\n", + "\n", + "\n", + "print(manual_batch_min_max_loop(batched_x))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "jmu3VVtMR0GV" + }, + "source": [ + "Or, just manually updating the `axis` in `jnp.min` and `jnp.max`. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "lzxmORv-RcUg" + }, + "outputs": [], + "source": [ + "@jit\n", + "def manual_batch_min_max_axis(batched_x):\n", + " return jnp.array([jnp.min(batched_x, axis=1), jnp.max(batched_x, axis=1)]).T\n", + "\n", + "\n", + "print(manual_batch_min_max_axis(batched_x))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "CetKYASUSE4Q" + }, + "source": [ + "These approaches both work, but we need to change our function to work with batches. We can't just run the same code across a batch of data.\n", + "\n", + "There is where `vmap` becomes useful! Using `vmap` we can write a function once, as if it is working on a single element, and then use `vmap` to automatically vectorize it! 
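As a complementary aside, `vmap` also handles functions of several arguments via `in_axes`, which says which argument axes to map over and which arguments to broadcast; `scaled_min_max` below is a hypothetical variant, not the notebook's function:

```python
import jax
import jax.numpy as jnp

def scaled_min_max(x, scale):
    return scale * jnp.array([jnp.min(x), jnp.max(x)])

batched_x = jnp.arange(15.0).reshape(3, 5)
scales = jnp.array([1.0, 10.0, 100.0])

# Map over axis 0 of both arguments: one scale per mini-batch.
print(jax.vmap(scaled_min_max, in_axes=(0, 0))(batched_x, scales))

# Broadcast a single scale to every mini-batch with in_axes=None.
print(jax.vmap(scaled_min_max, in_axes=(0, None))(batched_x, 2.0))
```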
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "s2F8WUNQROkQ" + }, + "outputs": [], + "source": [ + "# define our vmap function using our original single vector function\n", + "@jit\n", + "def min_max_vmap(batched_x):\n", + " return vmap(min_max)(batched_x)\n", + "\n", + "\n", + "# Run it on a single vecor\n", + "## We add extra dimention in a single vector, shape changes from (5,) to (1,5), which makes the vmapping possible\n", + "x_with_leading_dim = jax.numpy.expand_dims(x, axis=0)\n", + "print(f\"Single vector: {min_max_vmap(x_with_leading_dim)}\")\n", + "\n", + "# Run it on batch of vectors\n", + "print(f\"Batch/list of vector:{min_max_vmap(batched_x)}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "-3bome92VRL6" + }, + "source": [ + "So this is really convenient, but what about performance? " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "O1Nb4uniUUor" + }, + "outputs": [], + "source": [ + "batched_x = np.arange(50000).reshape((500, 100))\n", + "\n", + "# Trace the functions with first call\n", + "manual_batch_min_max_loop(batched_x).block_until_ready()\n", + "manual_batch_min_max_axis(batched_x).block_until_ready()\n", + "min_max_vmap(batched_x).block_until_ready()\n", + "\n", + "min_max_forloop_time = %timeit -o -n 10 manual_batch_min_max_loop(batched_x).block_until_ready()\n", + "min_max_axis_time = %timeit -o -n 10 manual_batch_min_max_axis(batched_x).block_until_ready()\n", + "min_max_vmap_time = %timeit -o -n 10 min_max_vmap(batched_x).block_until_ready()\n", + "\n", + "print(\n", + " f\"Avg Times (lower is better) - Naive Implementation: {np.round(np.mean(min_max_forloop_time.all_runs),5)} Manually Vectorized: {np.round(np.mean(min_max_axis_time.all_runs),5)} Vmapped Function: {np.round(np.mean(min_max_vmap_time.all_runs),5)} \"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "mYL758zCYsrR" + }, + "source": [ + "So `vmap` should be similar in performance to manually vectorized code (if everything is implemented well), and much better than naively vectorized code (i.e. for loops). " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "vAO9dOdrtiqI" + }, + "source": [ + "##### pmap - `Advanced`\n", + "\n", + "๐Ÿ’ก**For this subsection, please ensure that colab is using a `TPU` runtime. If no `TPU` runtimes are available, select `Harware Accelerator` - `None` for a cpu runtime.** \n", + "\n", + "Another JAX transform is `pmap`. `pmap` transforms a function written for one device, to a function that can run in parallel, across many devices. \n", + "\n", + "**Difference between `vmap` and `pmap`**:\n", + "\n", + "So both `pmap` and `vmap` transform a function to work over an array, but they differ in implementation. `vmap` adds an extra batch dimension to all the operations in a function, while `pmap` replicates the function and executes each replica on its own XLA device in parallel." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "gUYA277soR-0" + }, + "outputs": [], + "source": [ + "# @title Check the device you are using (Run Cell)\n", + "print(f\"Num devices: {jax.device_count()}\")\n", + "print(f\" Devices: {jax.devices()}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "6qhlBnLs6AYL" + }, + "source": [ + "Let's try and `pmap` a batch of dot products.\n", + "\n", + "Here is an illustration of how we would typically do this sequentially: \n", + "\n", + "[Source](https://www.assemblyai.com/blog/why-you-should-or-shouldnt-be-using-jax-in-2022/)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "fz1i2AwA5_7J" + }, + "outputs": [], + "source": [ + "# @title Illustration of Sequential Dot Product (Run me)\n", + "from IPython.display import HTML\n", + "\n", + "HTML(\n", + " ''\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "MTmWNFZ08f8n" + }, + "source": [ + "Here is the code implementation of this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "GqTuMldJ9Uv5" + }, + "outputs": [], + "source": [ + "# Let's generate a batch of size 8, each with a matrix of size (500, 600)\n", + "\n", + "# Let create 8 keys, 1 for each batch\n", + "keys = jax.random.split(jax.random.PRNGKey(0), 8)\n", + "\n", + "# Let create our batches\n", + "mats = jnp.stack([jax.random.normal(key, (500, 600)) for key in keys])\n", + "\n", + "\n", + "def dot_product_sequential():\n", + " @jit\n", + " def avg_dot_prod(mats):\n", + " result = []\n", + " # Loop through batch and compute dp\n", + " for mat in mats:\n", + " # dot product between the a mat and mat.T (transposed version)\n", + " result.append(jnp.dot(mat, mat.T))\n", + " return jnp.stack(result)\n", + "\n", + " avg_dot_prod(mats).block_until_ready()\n", + "\n", + "\n", + "run_sequential = %timeit -o -n 5 dot_product_sequential()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "fBEtecJX-0AW" + }, + "source": [ + "Here is an illustration of how we would do this in parallel \n", + "\n", + "[Source](https://www.assemblyai.com/blog/why-you-should-or-shouldnt-be-using-jax-in-2022/)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "Uswxurmn-5oC" + }, + "outputs": [], + "source": [ + "# @title Illustration of Parallel Dot Product (Run me)\n", + "from IPython.display import HTML\n", + "\n", + "HTML(\n", + " ''\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "sGsq8iTA_N9U" + }, + "source": [ + "Here is code implementation of batched dot products:" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "0ygFWDfQIoeC" + }, + "source": [ + "First, we will create `8` random matrices (one for each available tpu devices - colab tpu's have 8 available [devices](https://cloud.google.com/tpu/docs/system-architecture-tpu-vm) or the 8 cpu cores as we configured)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "MZLMx06_K_qR" + }, + "outputs": [], + "source": [ + "# Let create 8 keys, 1 for each batch\n", + "keys = jax.random.split(jax.random.PRNGKey(0), 8)\n", + "\n", + "# Each replicated pmapped function get a different key\n", + "mats = pmap(lambda key: jax.random.normal(key, (500, 600)))(keys)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "6BkMsaOtLISj" + }, + "source": [ + "The leading dimension here needs to equal the dimension of available devices (since we are sending a batch to each device)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "gWrdv_2wLG4T" + }, + "outputs": [], + "source": [ + "print(mats.shape)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "HnqblcUsLaKZ" + }, + "source": [ + "Using `pmap` to generate the batches ensures these batches are of type `ShardedDeviceArray`. This is similar to an ndarray, except each batch/shared is stored in the memory of multiple devices, so they can be used in subsequent `pmap` operations without moving data around between devices (GPU/TPU) and hosts (cpu). " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "JAeaBCvcLQWg" + }, + "outputs": [], + "source": [ + "print(type(mats))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "PVz0gOWG9pkr" + }, + "outputs": [], + "source": [ + "def dot_product_parallel():\n", + "\n", + " # Run a local matmul on each device in parallel (no data transfer)\n", + " result = pmap(lambda x: jnp.dot(x, x.T))(\n", + " mats\n", + " ).block_until_ready() # result.shape is (8, 5000, 5000)\n", + "\n", + "\n", + "run_parallel = %timeit -o -n 5 dot_product_parallel()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "64gfyF3ENQzU" + }, + "source": [ + "It is simple as that. Our dot product now runs in parallel across available devices (cpu, gpus or tpus). As we have more cores/devices, this code will automatically scale! " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "5qcQXSbANP_M" + }, + "outputs": [], + "source": [ + "# @title Let's plot the performance difference (Run Cell)\n", + "\n", + "jax_parallel_time = np.mean(run_parallel.all_runs)\n", + "jax_seq_time = np.mean(run_sequential.all_runs)\n", + "\n", + "\n", + "data = {\"JAX (seq)\": jax_seq_time, \"JAX (parallel - pmap)\": jax_parallel_time}\n", + "\n", + "plot_performance(data, title=\"Average time taken for Seq vs Parallel Dot Product\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "-0j8iJRFUz6v" + }, + "source": [ + "For some problems, the speed can be directly proportional to the number of devices -- $Nx$ speed up for $N$ devices! \n", + "\n", + "We showed an example of using `pmap` for *pure* parallelism, where there is no communication between devices. JAX also has various operations for communication across distributed devices ( more on this [here](https://jax.readthedocs.io/en/latest/jax-101/06-parallelism.html#communication-between-devices).)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "aB0503xgmSFh" + }, + "source": [ + "# **Part 2 - From Linear to Non-Linear Regression**\n", + "\n", + "Now that we know some basics of JAX, we can build some simple models!\n", + "\n", + "We will start by learning the basics of Linear Regression and then move on to Polynomial Regression. 
Finally, we will show how we can use [Haiku](https://github.com/deepmind/dm-haiku) and [Optax](https://github.com/deepmind/optax) to make training our models simpler and more convenient. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "XrWSN-zaWAhJ" + }, + "source": [ + "## **2.1 Linear Regression** - ๐Ÿ“ˆ `Beginner`\n", + "\n", + "With a long history spanning from the 19th century [[Gauss, 1809](https://cir.nii.ac.jp/crid/1573950399668535168), [Legendre, 1805](https://play.google.com/store/books/details?id=7C9RAAAAYAAJ&rdid=book-7C9RAAAAYAAJ&rdot=1)] , linear regression is one of the simplest and most popular methods for solving regression problems (problems where we are predicting a continuous variable). \n", + "\n", + "Linear regression aims to find a function $f$ that maps our **inputs $x$**, where $x \\in R^D$ (*$x$ is a real number of dimension $D$*), to the corresponding **output/target - $y$**, where $y \\in R^1$ (output is a single real number). \n", + "\n", + "Put simply, we are trying to model the relationship between one or more independent variables (our inputs - $x$) and our dependent variable (our output - $y$). In Machine Learning, we model this relationship so that we can make predictions.\n", + "\n", + "For simplicity, we will focus on simple Linear Regression, where we have a single input $x$ ($x \\in R^1$)." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "AcyM6XRj1cDz" + }, + "source": [ + "### Regression Toy Example - Housing Prices" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "15_2U2klS1ER" + }, + "source": [ + "Let's say we have a dataset of housing sizes (in $m^2$) and their prices (in 100 000s of Tunisian dinar - TND). \n", + "\n", + "|Size of House in $m^2$ (input - $x$) | Price (100 000s of TND) (output - $y$) \n", + "--- | --- | \n", + "|210|4|\n", + "|160|3.3|\n", + "|240|3.7|\n", + "|140|2.3|\n", + "|300|5.4|" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "i34mTepJBpha" + }, + "source": [ + "Let's build this simple dataset, with 5 elements." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "5zfvznFJ1bi4" + }, + "outputs": [], + "source": [ + "x_data_list = [210, 160, 240, 140, 300]\n", + "y_data_list = [4, 3.3, 3.7, 2.3, 5.4]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "uLB0Z3uGHGnV" + }, + "outputs": [], + "source": [ + "# @title Let's plot our dataset. 
(Run Cell)\n", + "def plot_basic_data(parameters_list=None, title=\"Observed data\", axis_pad=1):\n", + " xlim = [min(x_data_list) - axis_pad, max(x_data_list) + axis_pad]\n", + " ylim = [min(y_data_list) - axis_pad, max(y_data_list) + axis_pad]\n", + " fig, ax = plt.subplots()\n", + "\n", + " if parameters_list is not None:\n", + " x_pred = np.linspace(xlim[0], xlim[1], 100)\n", + " for parameters in parameters_list:\n", + " y_pred = parameters[0] + parameters[1] * x_pred\n", + " ax.plot(x_pred, y_pred, \":\", color=[1, 0.7, 0.6])\n", + "\n", + " parameters = parameters_list[-1]\n", + " y_pred = parameters[0] + parameters[1] * x_pred\n", + " ax.plot(x_pred, y_pred, \"-\", color=[1, 0, 0], lw=2)\n", + "\n", + " ax.plot(x_data_list, y_data_list, \"ob\")\n", + " ax.set(xlabel=\"Input x\", ylabel=\"Output y\", title=title, xlim=xlim, ylim=ylim)\n", + " ax.grid()\n", + "\n", + "\n", + "plot_basic_data()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "vnoEkgimTQ6V" + }, + "source": [ + "### **Optimization by Trial-and-Error**\n", + "\n", + "Let's say we would like to predict these $y$ (outputs) values given the $x$ (inputs). \n", + "\n", + "We can start modeling this by using a simple linear function: \n", + "
\n", + "$f(x) = \\color{red}{w} x + \\color{red}{b}$\n", + "
\n", + "\n", + ", where $x$ is our inputs and $\\color{red}{b}$ and $\\color{red}{w}$ are our model parameters.\n", + "\n", + "Usually, we learn the model parameters, but let's try to find these parameters by hand!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "A_8hyJrhdy6v" + }, + "outputs": [], + "source": [ + "# RUN ME\n", + "parameters_list = [] # Used to track which parameters were tried." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "FLvxEOBtWrSF" + }, + "source": [ + "**Exercise 2.1** \n", + "1. Move the two sliders below to set $\\color{red}{b}$ and $\\color{red}{w}$. \n", + "2. Is your $f(x)$ close to the blue data points? Can you find a better fit?\n", + "3. Repeat 1-2 until you have found a good enough fit. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "iYl7LM7kWYNG" + }, + "outputs": [], + "source": [ + "# @title Choose model parameters. { run: \"auto\" }\n", + "b = 3 # @param {type:\"slider\", min:-5, max:5, step:1}\n", + "w = -0.03 # @param {type:\"slider\", min:-0.05, max:0.05, step:0.01}\n", + "print(\"Plotting line\", w, \"* x +\", b)\n", + "parameters = [b, w]\n", + "parameters_list.append(parameters)\n", + "plot_basic_data(\n", + " parameters_list, title=\"Observed data and my first predictions\", axis_pad=12\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "UCNWBHuBa9rj" + }, + "source": [ + "**Weights and Bias**\n", + "\n", + "What was the impact of changing $\\color{red}{b}$ and $\\color{red}{w}$?\n", + "\n", + "- $\\color{red}{w}$ is our weights. This represents the slope of our function.\n", + "- $\\color{red}{b}$ is our bias (also called the *intercept*). This is the value of our model when all features are zero ($x=0$). This shifts the line, without changing the slope." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "XfUfPrRGeG2B" + }, + "source": [ + "**You're a born optimizer!**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "ubqjOzjTXuRw" + }, + "outputs": [], + "source": [ + "# @title Let's plot the optimization trajectory you took. 
(Run Cell)\n", + "fig, ax = plt.subplots()\n", + "opt = {\n", + " \"head_width\": 0.01,\n", + " \"head_length\": 0.2,\n", + " \"length_includes_head\": True,\n", + " \"color\": \"r\",\n", + "}\n", + "if parameters_list is not None:\n", + " b_old = parameters_list[0][0]\n", + " w_old = parameters_list[0][1]\n", + " for i in range(1, len(parameters_list)):\n", + " b_next = parameters_list[i][0]\n", + " w_next = parameters_list[i][1]\n", + " ax.arrow(b_old, w_old, b_next - b_old, w_next - w_old, **opt)\n", + " b_old, w_old = b_next, w_next\n", + "\n", + " ax.scatter(b_old, w_old, s=200, marker=\"o\", color=\"y\")\n", + " bs = [parameters[0] for parameters in parameters_list]\n", + " ws = [parameters[1] for parameters in parameters_list]\n", + " ax.scatter(bs, ws, s=40, marker=\"o\", color=\"k\")\n", + "\n", + "ax.set(\n", + " xlabel=\"Bias b\",\n", + " ylabel=\"Weight w\",\n", + " title=\"My sequence of b's and w's\",\n", + " xlim=[-5, 5],\n", + " ylim=[-0.05, 0.05],\n", + ")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Sqp1VK0KkLmF" + }, + "source": [ + "**Exercise 2.2 - Group Task**:\n", + "\n", + "*How did your neighbour do?*\n", + "- Did they change $\\color{red}{b}$ and $\\color{red}{w}$ with big steps or small steps each time?\n", + "- Did they start with small steps, and then progressed to bigger steps? Or the other way round? What about you?\n", + "- Did the magnitude of your previous steps influence your next choice? Why? Or why not?\n", + "- Did you all converge to roughly the same endpoint for $\\color{red}{b}$ and $\\color{red}{w}$, or did your sequences end up in different places?" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "oLGAp30ZDnJ5" + }, + "source": [ + "### **Loss Function**\n", + "\n", + "You tweaked $\\color{red}{b}$ and $\\color{red}{w}$ to find a good fit by hand. This isn't optimal (*imagine doing this for 10s to 1000s of parameters*), so we would like to automate this learning process. \n", + "\n", + "Before we discuss how to fit the model, we need to determine a measure of fitness, also referred to as a **loss function**. This loss quantifies the difference between the predictions that our model made ($f(x)$) and the true values/targets ($y$).\n", + "\n", + "When you manually adjusted your weights $\\color{red}{b}$ and $\\color{red}{w}$, you probably looked at how close each $f(x)$ was to the $y$ that it tries to predict.\n", + "Maybe you glanced at the distance from the red line to each of the blue dots, and imagined the average of the distances (marked in purple) below. If the average was small, your fit was good!\n", + "\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "0i6mLJXV-lXQ" + }, + "source": [ + "\n", + "> Notation Reminder:\n", + "- $x$ - our inputs.\n", + "- $f(x)$ or $\\hat{y}$ - our model predictions.\n", + "- $y$ - the value we are trying to predict/our targets. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "SFGMikcOgqOb" + }, + "source": [ + "#### **Formalizing the Loss Function**\n", + "\n", + "**Indexing**\n", + "\n", + "To formalize this notion, from the image above, let $x_1 = 1$, $x_2 = 2$, $x_3 = 3$... and let $y_1 = 3$, $y_2 = 2$, $y_3 = 3$... The blue dots are therefore a sequence of input-output $(x, y)$ pairs.\n", + "Assuming that the order of the data points doesn't matter, and $i = 1, ..., N$ (where $N=5$ in our case) indexes the data, e.g. $x_1,y_1$ refer to the input and output of the first element in our dataset (e.g. 
$x_1,y_1$ is (1,3) in the image). \n", + "\n", + "**Error**\n", + "\n", + "The green lines above, also known as **error** or **cost**, tell us the distance between the prediction and target value for a specific example (i.e how well the prediction matches the real data). A long line means that we have a large error and our prediction for that example is not optimal, while a short line indicates our prediction is close to the true label. \n", + "\n", + "In the image, the error is simply the distance between the true label and our model's prediction ( $y$ - $f(x)$), but there can be various formulations of the error term. A popular function is the squared error. \n", + "\n", + "Squared error can be formulated as follows: \n", + "
\n", + "$\\mathrm{error}(\\color{red}{b}, \\color{red}{w} ; x_i, y_i) = (y_i - \\underbrace{(\\color{red}{w} x_i + \\color{red}{b})}_{f(x_i)})^2$ \n", + "
\n", + "\n", + ", where $\\color{red}{b}$ and $\\color{red}{w}$ are our parameters, $x_i,y_i$ is the specific input, output pair that we are calculating the error for. \n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "JgPhoXMIL2eE" + }, + "source": [ + "**Exercise 2.3 - Code Task:** Implement Squared Error, using the formulae above. \n", + "\n", + "**Useful methods:** [`jnp.dot`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.dot.html)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "nSt4XdrrpQ0r" + }, + "outputs": [], + "source": [ + "def squared_error(b, w, x, y):\n", + " # first calculate f(x_i), also sometimes referred to as yhat\n", + " yhat = ...\n", + " # then calculate the squared error\n", + " error = ...\n", + " return error" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "pi3ysDp3OLsx" + }, + "outputs": [], + "source": [ + "# @title Check if answer is correct (Run me)!\n", + "\n", + "\n", + "def check_squared_error(squared_error):\n", + " b = 3.77\n", + " w = 0.05\n", + "\n", + " correct_error = [105.47291, 71.740906, 145.68492, 71.740906, 178.75693]\n", + "\n", + " for i in range(len(x_data_list)):\n", + " x_i = x_data_list[i]\n", + " y_i = y_data_list[i]\n", + " error = squared_error(b, w, x_i, y_i)\n", + " assert jnp.isclose(\n", + " error, correct_error[i]\n", + " ), f\"Incorrect implementation. Value: {error} Expected Value: {correct_error[i]}. Parameters (b,w,x_i,y_i): {b,w,x_i,y_i} \"\n", + "\n", + " print(\"Implementation is correct!\")\n", + "\n", + "\n", + "check_squared_error(squared_error)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "qzqqVhRW3gY3" + }, + "outputs": [], + "source": [ + "# @title Answer to code task (Try not to peek until you've given it a good try!')\n", + "def squared_error(b, w, x, y):\n", + " yhat = jnp.dot(w, x) + b\n", + " error = jnp.square(yhat - y)\n", + " return error\n", + "\n", + "\n", + "check_squared_error(squared_error)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "gzbNQ_Lz5SGX" + }, + "source": [ + "**Loss Function - Mean Squared Error**\n", + "\n", + "Now we have a way to quantify the error of our model per **example**. However, what we really care about is the quality of our model across our **entire training dataset**. Like there are many types of error functions, there are also many ways to quantify our loss across the whole dataset.\n", + "\n", + "A common loss function is **mean squared error (MSE)**, where we simply average the error across the training set. \n", + "\n", + "**MSE** is formulated as follows:\n", + "
\n", + "$\\mathrm{loss}(\\color{red}{b}, \\color{red}{w}) = \\frac{1}{ \\color{blue}{2}N} \\sum_{i=1}^N \\Big(y_i - \\underbrace{(\\color{red}{w} x_i + \\color{red}{b})}_{f(x_i)} \\Big)^2$, \n", + "
\n", + "\n", + "where $N$ is our number of training examples and $\\color{blue}{\\frac{1}{2}}$ is a constant factor that makes taking the derivative more convenient (more on this later).\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "vSoFrFx48vHL" + }, + "source": [ + "**Plot our loss**\n", + "\n", + "Let's code our loss function. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "0W9M7QubOEMM" + }, + "outputs": [], + "source": [ + "# MSE\n", + "def loss(b, w):\n", + " # init loss of size of b\n", + " loss = 0 * b\n", + " for x, y in zip(x_data_list, y_data_list):\n", + " loss += squared_error(b, w, x, y)\n", + " N = len(x_data_list)\n", + " return loss / (2 * (N))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "JsG0vQamfdJQ" + }, + "source": [ + "Now that we have a loss function, we can plot the loss of our model, using the sequence of manually chosen values of $\\color{red}{b}$ and $\\color{red}{w}$ from above." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "mSIbd-xtfU2S" + }, + "outputs": [], + "source": [ + "# @title Plot our Loss (Run Me)\n", + "from matplotlib import cm\n", + "\n", + "bs, ws = np.linspace(-5, 5, num=25), np.linspace(-0.05, 0.05, num=25)\n", + "b_grid, w_grid = np.meshgrid(bs, ws)\n", + "loss_grid = loss(b_grid, w_grid)\n", + "\n", + "\n", + "def plot_loss(parameters_list, title, show_stops=False):\n", + " fig, ax = plt.subplots(1, 2, figsize=(18, 8), subplot_kw={\"projection\": \"3d\"})\n", + " ax[0].view_init(10, -30)\n", + " ax[1].view_init(30, -30)\n", + "\n", + " if parameters_list is not None:\n", + " b_old = parameters_list[0][0]\n", + " w_old = parameters_list[0][1]\n", + " loss_old = loss(b_old, w_old)\n", + " ls = [loss_old]\n", + "\n", + " for i in range(1, len(parameters_list)):\n", + " b_next = parameters_list[i][0]\n", + " w_next = parameters_list[i][1]\n", + " loss_next = loss(b_next, w_next)\n", + " ls.append(loss_next)\n", + "\n", + " ax[0].plot(\n", + " [b_old, b_next],\n", + " [w_old, w_next],\n", + " [loss_old, loss_next],\n", + " color=\"red\",\n", + " alpha=0.8,\n", + " lw=2,\n", + " )\n", + " ax[1].plot(\n", + " [b_old, b_next],\n", + " [w_old, w_next],\n", + " [loss_old, loss_next],\n", + " color=\"red\",\n", + " alpha=0.8,\n", + " lw=2,\n", + " )\n", + " b_old, w_old, loss_old = b_next, w_next, loss_next\n", + "\n", + " if show_stops:\n", + " ax[0].scatter(b_old, w_old, loss_old, s=100, marker=\"o\", color=\"y\")\n", + " ax[1].scatter(b_old, w_old, loss_old, s=100, marker=\"o\", color=\"y\")\n", + " bs = [parameters[0] for parameters in parameters_list]\n", + " ws = [parameters[1] for parameters in parameters_list]\n", + " ax[0].scatter(bs, ws, ls, s=40, marker=\"o\", color=\"k\")\n", + " ax[1].scatter(bs, ws, ls, s=40, marker=\"o\", color=\"k\")\n", + " else:\n", + " ax[0].scatter(b_old, w_old, loss_old, s=40, marker=\"o\", color=\"k\")\n", + " ax[1].scatter(b_old, w_old, loss_old, s=40, marker=\"o\", color=\"k\")\n", + "\n", + " ax[0].plot_surface(\n", + " b_grid,\n", + " w_grid,\n", + " loss_grid,\n", + " cmap=cm.coolwarm,\n", + " linewidth=0,\n", + " alpha=0.4,\n", + " antialiased=False,\n", + " )\n", + " ax[1].plot_surface(\n", + " b_grid,\n", + " w_grid,\n", + " loss_grid,\n", + " cmap=cm.coolwarm,\n", + " linewidth=0,\n", + " alpha=0.4,\n", + " antialiased=False,\n", + " )\n", + " ax[0].set(xlabel=\"Bias b\", ylabel=\"Weight w\", zlabel=\"Loss\", title=title)\n", + " ax[1].set(xlabel=\"Bias b\", 
ylabel=\"Weight w\", zlabel=\"Loss\", title=title)\n", + " plt.show()\n", + "\n", + "\n", + "plot_loss(\n", + " parameters_list,\n", + " \"An example loss function and my sequence of b's and w's\",\n", + " show_stops=True,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Z41ZIC5W-Dip" + }, + "source": [ + "Your sequence of choices for $\\color{red}{b}$ and $\\color{red}{w}$ are also plotted on the $(\\color{red}{b}, \\color{red}{w})$ axis.\n", + "Does your sequence progressively move toward a parameter setting for which the loss function is small?\n", + "We plotted two views of the loss function, so that it is easier to see the minimum *and* the function." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "fg5Hi4783Gus" + }, + "source": [ + "### **Gradient descent: No more tuning parameters by hand!**\n", + "\n", + "When you manually tweaked $\\color{red}{b}$ and $\\color{red}{w}$, you tried to adjust your model to find a better fit. If you were an experienced manual parameter adjuster, you might even have adjusted the $\\color{red}{b}$ and $\\color{red}{w}$ so that the fit gets *better* with each adjustment.\n", + "\n", + "Gradient descent is a method that tries to minimize the loss function by iteratively updating our weights $\\color{red}{b}$ and $\\color{red}{w}$. How do we know how to update our weights? That is where **gradients** come in! The gradients of the weights tell us how to update their values in order to minimize our loss. \n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "8j7PKzZJumYX" + }, + "source": [ + "##### **Gradients** \n", + "Using our **loss**, we would like to know how to adjust $\\color{red}{b}$ **and** $\\color{red}{w}$ in order to minimize our loss. We can use partial derivatives and the chain rule to figure out how to update our parameters.\n", + "\n", + "> **Partial derivatives** are used when we have a function of several variables and we want to know how a function changes as a result of a specific variable. To calculate this, we take the derivative of the loss, with respect to one of those variables, with the others variables held constant. If we know this for all the variables in our loss function, we can update our parameters to decrease our loss. \n", + ">\n", + "> For example, for a function $f(x,y)$, $\\frac{\\partial{f}}{\\partial{x}}$ (*read partial derivative of $f$ with respect to $x$*), tells us how $f$ changes with respect to changes in $x$ and $\\frac{\\partial{f}}{\\partial{y}}$, tells us how $f$ changes with respect to changes in $y$. \n", + "\n", + "\n", + "> The **chain rule** tells us how to differentiate composite functions (functions of a functions/function within a function). The rule is as follows: $$\\frac{d}{d x}[f(g(x))]=f^{\\prime}(g(x)) g^{\\prime}(x)$$\n", + "\n", + "\n", + "You can read more here - [partial derivatives](https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/partial-derivative-and-gradient-articles/a/introduction-to-partial-derivatives), the [chain rule](https://www.khanacademy.org/math/ap-calculus-ab/ab-differentiation-2-new/ab-3-1a/a/chain-rule-review) and [practical on optimization](https://github.com/deep-learning-indaba/indaba-pracs-2019/blob/master/1b_build_tensorflow.ipynb).\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "_1EZcdHH2cE2" + }, + "source": [ + "**Exercise 2.4 - (Optional) Math Task:**\n", + "\n", + "Using our loss,\n", + "\n", + "
\n", + "$\\mathrm{loss}(\\color{red}{b}, \\color{red}{w}) = \\frac{1}{ \\color{blue}{2}N} \\sum_{i=1}^N \\Big(y_i - \\underbrace{(\\color{red}{w} x_i + \\color{red}{b})}_{f(x_i)} \\Big)^2$, \n", + "
\n", + "\n", + "Can you derive \n", + "$\\frac{\\partial \\mathcal{L}}{\\partial w}$ and $\\frac{\\partial \\mathcal{L}}{\\partial b}$ by hand? *For notation simplicity, we will refer to the loss $\\mathrm{loss}(\\color{red}{b}, \\color{red}{w})$ as $\\mathcal{L}$.*\n", + "\n", + "**Useful methods:** [Partial derivatives](https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/partial-derivative-and-gradient-articles/a/introduction-to-partial-derivatives), [Sum Rule](https://www.khanacademy.org/math/old-ap-calculus-ab/ab-derivative-rules/ab-basic-diff-rules/a/basic-differentiation-review) and the [chain rule](https://www.khanacademy.org/math/ap-calculus-ab/ab-differentiation-2-new/ab-3-1a/a/chain-rule-review). \n", + "\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ktpXf4w4g3Ag" + }, + "source": [ + "**Answer to math task** - Once you have given it a try, you can see the full derivation [here](#scrollTo=9OH9H7ndfuyQ)." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ODU1rQAemouO" + }, + "source": [ + "The two gradients we need are as follows:\n", + "\\begin{aligned}\n", + "&\\frac{\\partial \\mathcal{L}}{\\partial w}=\\frac{1}{N} \\sum_{i=1}^{N}\\left(f(x_i)-y_i\\right) x_i \\\\\n", + "&\\frac{\\partial \\mathcal{L}}{\\partial b}=\\frac{1}{N} \\sum_{i=1}^{N} f(x_i)-y_i\n", + "\\end{aligned}" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "BVbRhLNY0TvA" + }, + "source": [ + "In the code snippet below, we compute the two gradients using a for-loop over examples. This is just to illustrate how the gradient is computed. Very soon, we'll throw away the for-loop over data points and do it \"all at once\" in vectorized operations!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "dAeEMynv3GaI" + }, + "outputs": [], + "source": [ + "def manual_grad(b, w):\n", + " grad_b = 0\n", + " grad_w = 0\n", + " for x, y in zip(x_data_list, y_data_list):\n", + " f = w * x + b\n", + " grad_b += f - y\n", + " grad_w += (f - y) * x\n", + " grad_b /= len(x_data_list)\n", + " grad_w /= len(x_data_list)\n", + " return grad_b, grad_w" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "RMt9Qlox28Oa" + }, + "source": [ + "##### **Gradient Descent** \n", + "\n", + "Not that we have the gradients, we can use gradient descent. The general idea is to start with an initial value/guess for the model weights and then repeatedly use the gradients to tweak the parameters $\\color{red}{b}$ and $\\color{red}{w}$ in the right direction. \n", + "\n", + "These updates can be formulated as follows:\n", + "\n", + "$$\\color{red}{b} \\leftarrow \\color{red}{b} - \\color{blue}{\\eta} \\frac{\\partial \\mathcal{L}}{\\partial \\color{red}{b}} $$ \n", + "\n", + "$$\\color{red}{w} \\leftarrow \\color{red}{w} - \\color{blue}{\\eta} \\frac{\\partial \\mathcal{L}}{\\partial \\color{red}{w}} $$ \n", + "\n", + ", where $\\color{blue}{\\eta}$ is the **learning rate** and just tells us how much we are going to scale the gradient before we use it to update our parameters:\n", + "are we going to try to walk downhill with big steps or small steps?" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "YsL-Goz8hTOb" + }, + "source": [ + "**Exercise 2.5**\n", + "1. Run the code snippet below, and note the $(\\color{red}{b}, \\color{red}{w})$ trajectory as we use the gradient to (try to) get to the minimum.\n", + "2. 
Adjust the starting values for $\\color{red}{b}$ or $\\color{red}{w}$ or the value of $\\color{blue}{\\eta}$ and see how the resulting trajectory to the minimum changes.\n", + "3. Can you find a setting for $\\color{blue}{\\eta}$ where things start spiraling out of control and the loss gets bigger and bigger (and not smaller)?\n", + "4. Can you find a setting for $\\color{blue}{\\eta}$ so that we're still far away from the minimum after `200` parameter update steps?\n", + "5. Play around with the `max_grad` variable. Do we always need this? What problem does this solve? (Hint: Trying printing the grads values with `max_grad = None`).\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "6AvZzHQx1AKM" + }, + "outputs": [], + "source": [ + "b = 0 # Change me! Try 2, 4\n", + "w = -0.05 # Change me! Try -1, 2\n", + "learning_rate = 0.01 # Change me! Try 0.1, 0.5, ...\n", + "max_grad = 1 # Change me! Try None, 10\n", + "\n", + "parameters_step_list = []\n", + "\n", + "for _ in range(200):\n", + " parameters_step_list.append([b, w])\n", + " grad_b, grad_w = manual_grad(b, w)\n", + " # Naive gradient value clipping - different from standard gradient clipping - which clips the gradient norm.\n", + " if max_grad:\n", + " grad_b = jnp.clip(grad_b, a_min=-max_grad, a_max=max_grad)\n", + " grad_w = jnp.clip(grad_w, a_min=-max_grad, a_max=max_grad)\n", + " b = b - learning_rate * grad_b\n", + " w = w - learning_rate * grad_w\n", + "\n", + "plot_loss(\n", + " parameters_step_list, \"A loss function, and minimizing it with gradient descent\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "3shLExakrzIW" + }, + "source": [ + "##### **Autodiff using JAX: No more manual gradients!**\n", + "\n", + "In the above example, we calculated the gradients by hand (`manual_grad`). Thanks to automatic differentiation, we don't have to do this! While you can probably derive and code the gradients of the loss function for our linear model without making a mistake somewhere, getting the gradients right for more complex models can be much more work. Much, much more work! \n", + "\n", + "We use JAX to do the automatic differentiation, using the `grad` function as follows:\n", + "```\n", + "auto_grad = jax.grad(loss_function, argnums=(0, 1))\n", + "```\n", + "\n", + "and call it in the same way as we called `manual_grad`. `argnums` tells JAX we want the partial derivative of our function with respect to the first 2 parameters." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "3WiF4oYi1xGK" + }, + "outputs": [], + "source": [ + "x = np.array(x_data_list)\n", + "y = np.array(y_data_list)\n", + "\n", + "\n", + "def loss_function(b, w):\n", + " f = w * x + b\n", + " errors = jnp.square(y - f)\n", + " # Instead of summing over individual data points in a for-loop, and then\n", + " # dividing to get the average, we do it in one go. No more for-loops!\n", + " return 1 / 2 * jnp.mean(errors)\n", + "\n", + "\n", + "# This is it! One line of code.\n", + "auto_grad = jax.grad(loss_function, argnums=(0, 1))\n", + "\n", + "# Let's see if it works. 
Does auto_grad match our manual version?\n", + "b, w = 2.5, 3.5\n", + "\n", + "grad_b_autograd, grad_w_autograd = auto_grad(b, w)\n", + "print(\"Autograd grad_b:\", grad_b_autograd, \" grad_w\", grad_w_autograd)\n", + "\n", + "grad_b_manual, grad_w_manual = manual_grad(b, w)\n", + "print(\"Manual gradients grad_b:\", grad_b_manual, \" grad_w\", grad_w_manual)\n", + "\n", + "# We use isclose, since the rounding is slightly different.\n", + "assert jnp.isclose(grad_b_autograd, grad_b_manual) and jnp.isclose(\n", + " grad_w_autograd, grad_w_manual\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "okaeUVNf347w" + }, + "source": [ + "Nice! So we can use automatic differentiation and we don't have to manually calculate gradients. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "uW5rnjwoVv0m" + }, + "source": [ + "> **Gradient Descent vs Analytical Solution**\n", + ">\n", + "> So we used gradient descent to learn the weights for our linear model, but other options exist! For linear regression, there exists an [Analytical Solution](https://staff.fnwi.uva.nl/r.vandenboomgaard/MachineLearning/LectureNotes/Regression/LinearRegression/analytical_solution.html). This means we can calculate our weights directly in one step, without having to iterate using numerical methods like gradient descent.\n", + ">\n", + ">*Why use gradient descent then?*\n", + "- `More General` - Gradient Descent is a more general algorithm, that can be applied to problems where analytical solutions aren't feasible to calculate or don't exit e.g. neural networks. \n", + "- `Computational Complexity` - Even when a closed form solution is available, in some cases it may be faster to find the solution using gradient descent. Read more on this [here](https://stats.stackexchange.com/questions/278755/why-use-gradient-descent-for-linear-regression-when-a-closed-form-math-solution).\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "rK3RJPvAf4zm" + }, + "source": [ + "### **Assumptions**\n", + "\n", + "All models have assumptions. One assumption that we made is that our model is a *linear* model, i.e. that our best guess is for $y$ is with $f(x) = \\color{red}{w} x + \\color{red}{b}$. Is this assumption always valid for all kinds of data and datasets?\n", + "\n", + "> More assumptions for [simple linear regression](https://online.stat.psu.edu/stat500/lesson/9/9.2/9.2.3#paragraph--3265)." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Ao93xuXGJhLh" + }, + "source": [ + "## **2.2 From Linear to Polynomial Regression** - `Intermediate`\n", + "\n", + "So far we've looked at data that could be fitted fairly accurately with a single straight line. Despite its simplicity, linear regression tends to be very useful in practice, especially as a starting point in data analysis! However, there are cases where a linear fit is unsatisfying. \n", + "\n", + "Suppose our dataset looked like the following:\n", + "\n", + "\n", + "\n", + "How would we fit a model to this data? One possible option is to increase the complexity of our linear model by attempting to fit a higher-order polynomial, for example, a 4th-degree [polynomial](https://en.wikipedia.org/wiki/Polynomial):\n", + "$\\hat{y} = \\color{red}{w_4}x^4 + \\color{red}{w_3}x^3 + \\color{red}{w_2}x^2 + \\color{red}{w_1}x + \\color{red}{w_0}$. \n", + "\n", + "Do we have to derive a whole new algorithm? Luckily, not! 
We can still solve for the least squares parameters $\\color{red}{w_4}, \\color{red}{w_3}, \\color{red}{w_2}, \\color{red}{w_1}, \\color{red}{w_0}$ using the same techniques we used for fitting a line. \n", + "\n", + "Given the dataset $\\{(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)\\}$, we construct a *feature* matrix $\\mathbf{\\Phi}$ by expending original features, being careful to include terms corresponding to each power of $x$, as follows:\n", + "\n", + "$\\mathbf{\\Phi} =\n", + "\\begin{pmatrix}\n", + "x_1^4 & x_1^3 & x_1^2 & x_1 & 1 \\\\\n", + "x_2^4 & x_2^3 & x_2^2 & x_2 & 1 \\\\\n", + "\\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n", + "x_n^4 & x_n^3 & x_n^2 & x_n & 1\n", + "\\end{pmatrix}\n", + "$\n", + "\n", + "And just like before, our $\\mathbf{y}$ vector is $(y_1, y_2, ..., y_n)^\\mathsf{T}$\n", + "\n", + "Next, we fit a 4th-degree polynomial to our data and find that the fit is visually a lot better and captures the wave-like pattern of the data! \n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "XoSIWpUvKtlE" + }, + "outputs": [], + "source": [ + "# @title Polynomial Helper Functions (Run Me)\n", + "def generate_wave_like_dataset(min_x=-1, max_x=1, n=100):\n", + " xs = np.linspace(min_x, max_x, n)\n", + " ys = np.sin(5 * xs) + np.random.normal(size=len(xs), scale=0.1)\n", + " return xs, ys\n", + "\n", + "\n", + "def regression_analytical_solution(X, y):\n", + " return ((np.linalg.inv(X.T.dot(X))).dot(X.T)).dot(y)\n", + "\n", + "\n", + "def gradient_descent(X, y, learning_rate=0.01, num_steps=1000, debug=False):\n", + " report_every = num_steps // 10\n", + "\n", + " def loss(current_w, X, y):\n", + " y_hat = jnp.dot(X, current_w)\n", + " loss = jnp.mean((y_hat - y) ** 2)\n", + " return loss, y_hat\n", + "\n", + " loss_and_grad = jax.value_and_grad(loss, has_aux=True)\n", + " # Initialize the parameters\n", + " key = jax.random.PRNGKey(42)\n", + " w = jax.random.normal(key=key, shape=(X.shape[1],))\n", + "\n", + " # Run a a few steps of gradient descent\n", + " for i in range(num_steps):\n", + " (loss, y_hat), grad = loss_and_grad(w, X, ys)\n", + "\n", + " if i % report_every == 0:\n", + " if debug:\n", + " print(f\"Step {i}: w: {w}, Loss: {loss}, Grad: {grad}\")\n", + " else:\n", + " print(f\"Step {i}: Loss: {loss}\")\n", + "\n", + " w = w - learning_rate * grad\n", + "\n", + " return w\n", + "\n", + "\n", + "def plot_data(y_hat, xs, ys, title):\n", + " plt.figure()\n", + " plt.scatter(xs, ys, label=\"Data\")\n", + " plt.plot(xs, y_hat, \"r\", label=title)\n", + "\n", + " plt.title(title)\n", + " plt.xlabel(\"Input x\")\n", + " plt.ylabel(\"Output y\")\n", + " plt.legend();" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "CcXjMKi0Znr6" + }, + "source": [ + "### **Under-fitting**\n", + "Let's see how our linear model does on our new dataset." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "QmAWgBEIZh0X" + }, + "outputs": [], + "source": [ + "xs, ys = generate_wave_like_dataset(min_x=-1, max_x=1, n=25)\n", + "X = np.vstack([xs, np.ones(len(xs))]).T\n", + "w = regression_analytical_solution(X, ys)\n", + "y_hat = X.dot(w)\n", + "\n", + "plot_data(y_hat, xs, ys, \"Linear regression (analytic minimum)\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "pzlcvE8pZrYj" + }, + "source": [ + "Our linear model has missed the majority of the points in our dataset. 
This is also known as **under-fitting**, which is when our model is too simple to capture the relationship between the inputs and outputs." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "uwwajy30U9fX" + }, + "source": [ + "### **Over-fitting**\n", + "\n", + "Since our linear model was too simple, we can try a more complicated model.\n", + "\n", + "**Exercise 2.5 - Code Task**: Spend a couple of minutes selecting different parameters (by moving the sliders), to see the best loss you can get using polynomial regression. \n", + "\n", + "1. `degree` - Degree $n$ of a polynomial in this form - $\\hat{y} = \\color{red}{w_n}x^n +\\color{red}{w_{n-1}}x^{n-1}+ ... + \\color{red}{w_2}x^2 + \\color{red}{w_1}x + \\color{red}{w_0}$. \n", + "2. `num_steps` - The number of steps to running gradient descent for. \n", + "3. `learning_rate` - The learning rate used when updating the weights in gradient descent. \n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "eGrB9V66-P9L" + }, + "outputs": [], + "source": [ + "# @title Choose parameters. { run: \"auto\" }\n", + "degree = 3 # @param {type:\"slider\", min:1, max:10, step:1}\n", + "num_steps = 1500 # @param {type:\"slider\", min:1000, max:5000, step:500}\n", + "learning_rate = 0.1 # @param [\"0.2\",\"0.1\", \"0.01\"] {type:\"raw\"}\n", + "\n", + "\n", + "# def create_data_matrix(xs,degree=4):\n", + "# return np.vstack([[np.power(xs,pow) for pow in np.arange(degree)],np.ones(len(xs))]).T\n", + "\n", + "\n", + "def create_data_matrix(xs, degree=4):\n", + " pows = [np.power(xs, pow) for pow in np.arange(1, degree + 1)]\n", + " pows.reverse()\n", + " return np.vstack([pows, np.ones(len(xs))]).T\n", + "\n", + "\n", + "phi = create_data_matrix(xs, degree=degree)\n", + "\n", + "\n", + "w = gradient_descent(phi, ys, learning_rate=learning_rate, num_steps=num_steps)\n", + "y_hat = phi.dot(w)\n", + "\n", + "plot_data(y_hat, xs, ys, \"Polynomial regression (gradient descent steps)\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "tGcJv82aFiLc" + }, + "source": [ + "Let's see how a 10-th degree polynomial fits our data. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "EszayH6Q-z6_" + }, + "outputs": [], + "source": [ + "degree = 10\n", + "num_steps = 5000\n", + "learning_rate = 0.2\n", + "\n", + "\n", + "phi = create_data_matrix(xs, degree=degree)\n", + "w = gradient_descent(phi, ys, learning_rate=learning_rate, num_steps=num_steps)\n", + "y_hat = phi.dot(w)\n", + "\n", + "plot_data(y_hat, xs, ys, \"Polynomial regression (gradient descent steps)\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "o8SPF0UILmXW" + }, + "source": [ + "**What happens if we extend our predictions out a bit?**\n", + "\n", + "Our model fits the majority of the data! This sounds great, but let's see how our model handles new data sampled from the same **data generation process**! \n", + "\n", + "In the plot below we fill in some extra data points from the true function (in orange) for comparison, but bear in mind that these were not used to fit the regression model. We are **extrapolating** the model into a previously unseen region!" 
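Alongside the visual check in the cell below, a quick numerical comparison makes the same point. This is only an optional, illustrative sketch that reuses the helper functions already defined above (`generate_wave_like_dataset`, `create_data_matrix`, `regression_analytical_solution`); the variable names here are ours, not part of the original practical:

```python
# Fit the degree-10 polynomial on the original data (analytic least squares),
# then compare the mean squared error on the fitted region with the error on
# freshly sampled points from the same data-generating process.
degree = 10
phi_fit = create_data_matrix(xs, degree=degree)
w_fit = regression_analytical_solution(phi_fit, ys)

unseen_xs, unseen_ys = generate_wave_like_dataset(min_x=-1.3, max_x=-1.0, n=20)
phi_unseen = create_data_matrix(unseen_xs, degree=degree)

print("MSE on the fitted region:", np.mean((phi_fit.dot(w_fit) - ys) ** 2))
print("MSE on unseen points:    ", np.mean((phi_unseen.dot(w_fit) - unseen_ys) ** 2))
```

A large gap between these two numbers is the numerical signature of the over-fitting discussed below.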
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Y2d5QywylwTK" + }, + "outputs": [], + "source": [ + "# Recover the analytic solution.\n", + "degree = 10\n", + "phi = create_data_matrix(xs, degree=degree)\n", + "w = regression_analytical_solution(phi, ys)\n", + "\n", + "# Extend the x's and y's.\n", + "more_xs, more_ys = generate_wave_like_dataset(min_x=-1.3, max_x=-1, n=20)\n", + "all_xs = np.concatenate([more_xs, xs])\n", + "all_ys = np.concatenate([more_ys, ys])\n", + "\n", + "# Get the design matrix for the extended data, so that we could make predictions\n", + "# for it.\n", + "phi = create_data_matrix(all_xs, degree=degree)\n", + "\n", + "# Note that we don't recompute w, we use the previously computed values that\n", + "# only saw x values in the range [0, 10]\n", + "y_hat = phi.dot(w)\n", + "\n", + "plt.scatter(xs, ys, label=\"Data\")\n", + "plt.scatter(more_xs, more_ys, label=\"Unseen Data\")\n", + "plt.plot(all_xs, y_hat, \"r\", label=\"Polynomial Regression\")\n", + "\n", + "plt.title(\"A wave-like dataset with the best-fit line\")\n", + "plt.xlabel(\"Input x\")\n", + "plt.ylabel(\"Output y\")\n", + "plt.legend()\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "V3ld4cRlPVGy" + }, + "source": [ + "We see that while the fit looks good in the blue region that the model was fitted on, the fit seems to diverge significantly in the orange region.\n", + "The model is able to **interpolate** well (fill in gaps in the region it was fitted), but it **extrapolates** (outside the fitted region) poorly.\n", + "This is a common concern with models in general, unless you can be sure that you have the correct *inductive biases* (assumptions about the data generating process) built into the model, you should be cautious about extrapolating from it.\n", + "\n", + "The fact that our model has very low training loss and high test loss (unseen data) is a sign of over-fitting. Over-fitting is when our models fits our training data, but fails to generalise to previously unseen data from the same data generating process. This is usually the result of the model having sufficient degrees of freedom to fit the noise in the training data. \n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "2feKuHJplo0U" + }, + "source": [ + "**Exercise 2.6 - Group Task** \n", + "\n", + "**What shall we do? Pause here!**\n", + "\n", + "Before progressing with this practical, take a moment to think about the problem. In machine learning, there are many practical approaches to getting a model that generalizes well. As you can guess, much theory is devoted to the problem too!\n", + "\n", + "With what you've seen so far, try to explain to your neighbour\n", + "\n", + "1. every factor that you can think of, that could cause a model to generalize poorly;\n", + "2. some ideas that you could think of to improve the model's fit to (unseen) data;\n", + "3. any underlying assumptions that you are making about unseen data.\n", + "\n", + "Don't proceed until you've had a solid discussion on the topic. If someone is tutoring this practical, they might contribute to the discussion!" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "sAtms17jtCOU" + }, + "source": [ + "## **2.3 Training Models Using Haiku and Optax** - `Beginner`\n", + "\n", + "For our Linear and Polynomial examples, we only used core JAX to keep track of and optimize our weights. 
This can be tedious, especially when dealing with larger models and when using more complicated optimization methods. \n", + "\n", + "Luckily, JAX has higher-level neural network libraries such as [Haiku](https://github.com/deepmind/dm-haiku) or [Flax](https://github.com/google/flax), which make building models more convenient, and libraries like [Optax](https://github.com/deepmind/optax), that make gradient processing and optimization more convenient. \n", + "\n", + "In this section, we will briefly go through how to use Haiku and Optax. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "0ySycQo7txoF" + }, + "outputs": [], + "source": [ + "%%capture\n", + "# @title Install Haiku and Optax. (Run Cell)\n", + "!pip install -U dm-haiku\n", + "!pip install -U optax\n", + "# For plotting.\n", + "!pip install livelossplot" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "exuVety_bFhQ" + }, + "source": [ + "### Haiku\n", + "\n", + "[Haiku](https://github.com/deepmind/dm-haiku) is JAX neural network library intended to be familiar to people used to object-oriented programming models (like PyTorch or Tensorflow), by making managing state simpler. \n", + "\n", + "Haiku modules are similar to standard python objects (they have references to their own parameters and functions). However, since JAX operates on *pure functions*, Haiku modules **cannot be directly instantiated**, but rather they need to be **wrapped into pure function transformations.**" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "9wvTzTi-YJTp" + }, + "source": [ + "Let's create a simple linear module." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "H_-3r49B-Orc" + }, + "outputs": [], + "source": [ + "import haiku as hk\n", + "\n", + "\n", + "class MyLinearModel(hk.Module):\n", + " def __init__(self, output_size, name=None):\n", + " super().__init__(name=name)\n", + " self.output_size = output_size\n", + "\n", + " def __call__(self, x):\n", + " j, k = x.shape[-1], self.output_size\n", + " w_init = hk.initializers.TruncatedNormal(1.0 / np.sqrt(j))\n", + " w = hk.get_parameter(\"w\", shape=[j, k], dtype=x.dtype, init=w_init)\n", + " b = hk.get_parameter(\"b\", shape=[k], dtype=x.dtype, init=jnp.ones)\n", + " return jnp.dot(x, w) + b" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "3WYb35ffYOSt" + }, + "source": [ + "And attempt to directly **instantiate** it." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "LuZy7pj9-b2m" + }, + "outputs": [], + "source": [ + "# Should raise an error.\n", + "try:\n", + " MyLinearModel(output_size=1)\n", + "except Exception as e:\n", + " print(\"Exception {}\".format(e))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "-XGOeJCH-10P" + }, + "source": [ + "This fails since we are trying to **directly** instantiate `MyLinearModel`. 
Instead what we should do is wrap our model in a pure functional transform as follows: " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "d1yI7j2h_Esd" + }, + "outputs": [], + "source": [ + "def model_fn(x):\n", + " module = MyLinearModel(output_size=1)\n", + " return module(x)\n", + "\n", + "\n", + "model = hk.without_apply_rng(hk.transform(model_fn))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "EZ24tXUiaHQa" + }, + "source": [ + "> We use `hk.without_apply_rng` since our model's *inference* (not initialization) is deterministic and hence has no use for a random key when calling `.apply`. \n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "3aWAc_f0BVFU" + }, + "outputs": [], + "source": [ + "model" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Lao8wS3tBjc3" + }, + "source": [ + "Our wrapper object has two methods: \n", + "- `init` - initialize the variables in the model and return these params. \n", + "- `apply` - run a forward pass through our data. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "gTJcV6hjFh6u" + }, + "source": [ + "If we want to get the initial state of our module, we need to call `init` with an example input." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "nt0srU3rQlhL" + }, + "outputs": [], + "source": [ + "# input dimention we are considering\n", + "input_dim = 3\n", + "\n", + "example_x = jnp.arange(input_dim, dtype=jnp.float32)\n", + "rng_key = jax.random.PRNGKey(42)\n", + "\n", + "params = model.init(rng=rng_key, x=example_x)\n", + "print(params)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "VCoYMnZkGKOb" + }, + "source": [ + "We can now call the `apply` method as follows. Note we pass in the `params` variable that holds the current model weights. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "XA8n5cEMGVWC" + }, + "outputs": [], + "source": [ + "new_x = jnp.arange(input_dim, dtype=jnp.float32)\n", + "# example forward pass through our model\n", + "prediction = model.apply(params, new_x)\n", + "print(prediction)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "mmk2jcIHbRlS" + }, + "source": [ + "So that is it! Those are basics of using Haiku modules!" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "_3h034w5bWn6" + }, + "source": [ + "### Optax\n", + "\n", + "[Optax](https://github.com/deepmind/optax) is an optimization and gradient processing library in JAX. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "CuWggGFEcdoy" + }, + "source": [ + "In our linear regression section, we manually updated the params of our model (e.g. `w = w - learning_rate * grad_w`). \n", + "\n", + "This wasn't too difficult in our simple case, but for more challenging optimizations, especially when chaining optimizations (e.g. clipping gradient norm and then applying an optimizer update), it becomes trickier to effectively and accurately implement these parameter updates. Luckily, Optax comes to the rescue here! " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "hvecjyZGelIV" + }, + "source": [ + "Here is a simple example of how you create and initialize an optimizer." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "zhqkLvtRe6zf" + }, + "outputs": [], + "source": [ + "import optax\n", + "\n", + "# create optim\n", + "learning_rate = 0.1\n", + "optimizer = optax.adam(learning_rate)\n", + "\n", + "# init optim\n", + "input_dim = 3\n", + "# init weights to pass to our optim\n", + "params = {\"w\": jnp.ones((input_dim,))}\n", + "\n", + "# Obtain the `opt_state` that contains statistics for the optimizer.\n", + "opt_state = optimizer.init(params)\n", + "print(opt_state)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Io4mLeIifxkX" + }, + "source": [ + "Once we have calculated the gradients, we pass them (`grads`) and the `opt_state` to our optimizer to get `updates` that should be applied to the current parameters and `new_opt_state`, which keeps track of the current state of the optimizer. \n", + "\n", + "```\n", + "updates, new_opt_state = optimizer.update(grads, opt_state)\n", + "params = optax.apply_updates(params, updates)\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "4p1l2rUWpRZ7" + }, + "source": [ + "And that is the basics of Optax. " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "7IaqVuRPg3ER" + }, + "source": [ + "### Full Training Loop Using Haiku and Optax ๐Ÿง™\n", + "\n", + "Here we show a full training loop, using Haiku and Optax. For convenience, we introduce structures like `TrainingState` and functions like `init`,`update` and `loss_fn`. Please read through to get comfortable with how you can effectively train JAX models." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "pqZZlOfNuMEn" + }, + "source": [ + "Here we define some helper functions. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "LY0t6C4OKzSK" + }, + "outputs": [], + "source": [ + "from typing import Any, MutableMapping, NamedTuple, Tuple\n", + "import time\n", + "from sklearn import datasets\n", + "from sklearn.model_selection import train_test_split\n", + "import haiku as hk\n", + "import optax\n", + "import tensorflow as tf\n", + "import tensorflow_datasets as tfds\n", + "from livelossplot import PlotLosses\n", + "\n", + "# Convenient container for keeping track of training state.\n", + "class TrainingState(NamedTuple):\n", + " \"\"\"Container for the training state.\"\"\"\n", + "\n", + " params: hk.Params\n", + " opt_state: optax.OptState\n", + " step: jnp.DeviceArray\n", + "\n", + "\n", + "# function for our model (same as above)\n", + "def model_fn(x):\n", + " module = MyLinearModel(output_size=1)\n", + " return module(x).ravel()\n", + "\n", + "\n", + "# Load a simple dataset - diabetes (https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html)\n", + "# and convert to an iterator. 
Although it would be faster to use pure jnp arrays in this example,\n", + "# in practice for large datasets we use iterators.\n", + "# Read here https://www.tensorflow.org/guide/data_performance for best practices.\n", + "def load_dataset(seed, input_dim=3, train_batch_size=32, shuffle_train_data=True):\n", + " # Load the diabetes dataset\n", + " diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True)\n", + "\n", + " # Use only the first input_dim (3) features\n", + " diabetes_X = diabetes_X[:, :input_dim]\n", + "\n", + " X_train, X_test, y_train, y_test = train_test_split(\n", + " diabetes_X, diabetes_y, test_size=0.2, train_size=0.8, random_state=seed\n", + " )\n", + "\n", + " train_dataset = (\n", + " tf.data.Dataset.from_tensor_slices((X_train, y_train)).cache().repeat()\n", + " )\n", + " test_dataset = tf.data.Dataset.from_tensor_slices((X_test, y_test)).cache().repeat()\n", + "\n", + " if shuffle_train_data:\n", + " train_dataset = train_dataset.shuffle(10 * train_batch_size, seed=seed)\n", + "\n", + " train_dataset = train_dataset.batch(train_batch_size)\n", + " # Using full test dataset\n", + " test_dataset = test_dataset.batch(len(X_test))\n", + "\n", + " train_dataset = iter(tfds.as_numpy(train_dataset))\n", + " test_dataset = iter(tfds.as_numpy(test_dataset))\n", + " return train_dataset, test_dataset" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "-rGuA_Y4DHXA" + }, + "source": [ + "Full training and evaluation loop." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "EsD62L4cUM9r" + }, + "outputs": [], + "source": [ + "# First we retrieve our model\n", + "model = hk.without_apply_rng(hk.transform(model_fn))\n", + "\n", + "# Then we create the optimiser - chain clipping by gradient norm and using Adam\n", + "learning_rate = 0.01\n", + "optimizer = optax.chain(\n", + " optax.clip_by_global_norm(0.5),\n", + " optax.adam(learning_rate=learning_rate),\n", + ")\n", + "\n", + "# define our loss function\n", + "def loss_fn(params, x, y_true):\n", + " y_pred = model.apply(params, x)\n", + " loss = (y_pred - y_true) ** 2\n", + " return jnp.mean(loss)\n", + "\n", + "\n", + "# Function to initialize our model and optimizer.\n", + "@jax.jit\n", + "def init(rng: jnp.ndarray, data) -> TrainingState:\n", + " \"\"\"\n", + " rng: jax prng seed.\n", + " data: Sample of the dataset used to get correct shape.\n", + " \"\"\"\n", + "\n", + " rng, init_rng = jax.random.split(rng)\n", + " initial_params = model.init(init_rng, data)\n", + " initial_opt_state = optimizer.init(initial_params)\n", + " return TrainingState(\n", + " params=initial_params,\n", + " opt_state=initial_opt_state,\n", + " step=np.array(0),\n", + " )\n", + "\n", + "\n", + "# Function to update our params and keep track of metrics\n", + "@jax.jit\n", + "def update(state: TrainingState, data):\n", + " X, y = data\n", + " loss_value, grads = jax.value_and_grad(loss_fn)(state.params, X, y)\n", + " updates, new_opt_state = optimizer.update(grads, state.opt_state)\n", + " new_params = optax.apply_updates(state.params, updates)\n", + "\n", + " new_state = TrainingState(\n", + " params=new_params,\n", + " opt_state=new_opt_state,\n", + " step=state.step + 1,\n", + " )\n", + " metrics = {\"train_loss\": loss_value, \"step\": state.step}\n", + " return new_state, metrics\n", + "\n", + "\n", + "# Function to evaluate our models\n", + "@jax.jit\n", + "def evaluate(params: hk.Params, test_dataset) -> jnp.ndarray:\n", + " # Here we simply use our loss function (MSE) to evaluate our models,\n", + " # but we can use different functions for loss and evaluation,\n", + " # e.g. in classification we train with a cross-entropy loss\n", + " # but evaluate with accuracy.\n", + " x_test, y_test_true = test_dataset\n", + " return loss_fn(params, x_test, y_test_true)\n", + "\n", + "\n", + "# We get our dataset\n", + "seed = 42\n", + "train_dataset, test_dataset = load_dataset(seed=seed, input_dim=10)\n", + "\n", + "# Initialise model params and optimiser.\n", + "rng = jax.random.PRNGKey(seed)\n", + "# We pass an example of the input to get the correct shapes\n", + "state = init(rng, next(train_dataset)[0])\n", + "\n", + "# Time our training\n", + "prev_time = time.time()\n", + "max_steps = 10**5\n", + "eval_every = 5000\n", + "metrics = {}\n", + "plotlosses = PlotLosses()\n", + "\n", + "# Training & evaluation loop.\n", + "for step in range(max_steps):\n", + " state, metrics = update(state, data=next(train_dataset))\n", + "\n", + " # Periodically evaluate on test set.\n", + " if step % eval_every == 0:\n", + " steps_per_sec = eval_every / (time.time() - prev_time)\n", + " prev_time = time.time()\n", + " test_loss = evaluate(state.params, next(test_dataset))\n", + " metrics.update({\"steps_per_sec\": steps_per_sec})\n", + " metrics.update({\"test_loss\": test_loss})\n", + " plotlosses.update(\n", + " {\n", + " \"train_loss\": jnp.mean(metrics[\"train_loss\"]),\n", + " }\n", + " )\n", + " plotlosses.update(\n", + " {\n", + " \"test_loss\": test_loss,\n", + " }\n", + " )\n", + " plotlosses.send()" + ] + },
+ { + "cell_type": "markdown", + "metadata": { + "id": "03woGcY0pxPb" + }, + "source": [ + "Please try to get comfortable with the above code since we will be using Haiku and Optax in other practicals. If you need assistance, please call a tutor!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "uU3aRT3p-QVY" + }, + "outputs": [], + "source": [ + "# @title Let's plot our predictions vs targets.\n", + "\n", + "X_test, y_test = next(test_dataset)\n", + "pred = model.apply(state.params, X_test)\n", + "\n", + "plt.figure(figsize=(7, 7))\n", + "plt.scatter(y_test, pred, c=\"crimson\")\n", + "\n", + "p1 = max(max(pred), max(y_test))\n", + "p2 = min(min(pred), min(y_test))\n", + "plt.plot([p1, p2], [p1, p2], \"b-\")\n", + "plt.xlabel(\"Actual Values\", fontsize=15)\n", + "plt.ylabel(\"Predictions\", fontsize=15)\n", + "plt.axis(\"equal\")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "qgFA9zHOBiLh" + }, + "source": [ + "So there is some correlation between our predictions and our actual targets. This shows that we are learning a useful model for our data." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "BMTbY9uv-lIk" + }, + "source": [ + "You have officially trained a model end-to-end using the latest JAX techniques! ๐Ÿ”ฅ\n", + "\n", + "Although we have only done simple Linear Regression in this tutorial, you have learned optimization techniques like gradient descent, which can be applied to a wide variety of models! " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "fV3YG7QOZD-B" + }, + "source": [ + "# **Conclusion**\n", + "**Summary:**\n", + "- JAX combines Autograd and XLA to perform **accelerated** ๐Ÿš€ numerical computations. These computations are achieved using transforms such as `jit`, `grad`, `vmap` and `pmap`.\n", + "- JAX's `grad` function automatically calculates the gradients of your functions for you (a short sanity check of this appears at the end of this conclusion)! 
\n", + "- Gradient descent is an effective algorithm to learn linear models, but also more complicated models, where analytical solutions don't exist. \n", + "- We need to be careful not to over-fit or under-fit on our datasets. \n", + "- Haiku and Optax make training JAX models more convenient. \n", + "\n", + "\n", + "**Next Steps:** \n", + "\n", + "- If you are interested in going deeper into Linear Regression, we have a Bayesian Linear Regression section in the [Bayesian Deep Learning Prac](https://github.com/deep-learning-indaba/indaba-pracs-2022/blob/main/practicals/Bayesian_Deep_Learning_Prac.ipynb).\n", + "- You can also adapt the model and dataset from the \"*Full Training Loop Using Haiku and Optax*\" section to train your custom models on custom datasets. \n", + "\n", + "\n", + "**References:** \n", + "\n", + "Part 1 \n", + "1. Various JAX [docs](https://jax.readthedocs.io/en/latest/) - specifically [quickstart](https://jax.readthedocs.io/en/latest/notebooks/quickstart.html), [common gotchas](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html), [jitting](\n", + "https://jax.readthedocs.io/en/latest/jax-101/02-jitting.html#), [random numbers](https://jax.readthedocs.io/en/latest/jax-101/05-random-numbers.html) and [pmap](https://jax.readthedocs.io/en/latest/jax-101/06-parallelism.html?highlight=pmap#). \n", + "2. http://matpalm.com/blog/ymxb_pod_slice/\n", + "3. https://roberttlange.github.io/posts/2020/03/blog-post-10/\n", + "4. [Machine Learning with JAX - From Zero to Hero | Tutorial #1](https://www.youtube.com/watch?v=SstuvS-tVc0). \n", + "\n", + "Part 2 \n", + "1. Parts of this section are adapted from [Deepmind's Regression Tutorial](https://github.com/deepmind/educational/blob/master/colabs/summer_schools/intro_to_regression.ipynb). \n", + "2. https://d2l.ai/chapter_linear-networks/linear-regression.html\n", + "3. https://www.cs.toronto.edu/~rgrosse/courses/csc411_f18/slides/lec06-slides.pdf\n", + "4. [Linear Regression Chapter - Mathematics for Machine Learning Book](https://mml-book.github.io/). \n", + "\n", + "\n", + "For other practicals from the Deep Learning Indaba, please visit [here](https://github.com/deep-learning-indaba/indaba-pracs-2022)." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "XrRoSqlxfi7f" + }, + "source": [ + "# **Appendix:** \n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "9OH9H7ndfuyQ" + }, + "source": [ + "## Derivation of partial derivatives for exercise 2.4.\n", + "\n", + "Derive $\\frac{\\partial \\mathcal{L}}{\\partial w}$:\n", + "\\begin{aligned}\n", + "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{ \\partial}{\\partial w} (\\frac{1}{2N} \\sum_{i=1}^N (y_i - (w x_i + b))^2) \\because{Definition of $\\mathcal{L}$} \\\\\n", + " \\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{2N} \\frac{ \\partial }{\\partial w} ( \\sum_{i=1}^N (y_i - (w x_i + b))^2) \\because{Constant multiple rule} \\\\\n", + "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{2N} \\sum_{i=1}^N \\frac{ \\partial }{\\partial w} (y_i - (w x_i + b))^2 \\because{Sum Rule - derivative of sum is sum of derivatives.} \\\\ \n", + "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{2N} \\sum_{i=1}^N 2 (y_i - (w x_i + b)) \\frac{ \\partial }{\\partial w}(y_i -(w x_i + b)) \\because{Power Rule + Chain Rule.} \\\\ \n", + "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{2N} \\sum_{i=1}^N 2 (y_i - (w x_i + b)) (-x_i) \\because{Compute derivative.} \\\\ \n", + "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1(2)}{2N} \\sum_{i=1}^N (y_i - (w x_i + b)) (-x_i) \\because{Factor constant out of summation.} \\\\ \n", + "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{N} \\sum_{i=1}^N -y_ix_i + (w x_i + b)x_i \\because{Multiply brackets and simplify.} \\\\ \n", + "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{N} \\sum_{i=1}^N (-y_i + (w x_i + b))x_i \\because{Rewrite.} \\\\ \n", + "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{N} \\sum_{i=1}^N ((w x_i + b) -y_i )x_i \\because{Rewrite.} \\\\ \n", + "\\frac{\\partial \\mathcal{L}}{\\partial w} & = \\frac{1}{N} \\sum_{i=1}^N (f(x_i) -y_i )x_i \\because{Substitute in $f(x_i)$.} \\\\ \n", + "\\end{aligned}\n", + "\n", + "Derive $\\frac{\\partial \\mathcal{L}}{\\partial b}$:\n", + "\\begin{aligned}\n", + "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{ \\partial}{\\partial b} (\\frac{1}{2N} \\sum_{i=1}^N (y_i - (w x_i + b))^2) \\because{Definition of $\\mathcal{L}$} \\\\\n", + "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1}{2N} \\frac{ \\partial }{\\partial b} ( \\sum_{i=1}^N (y_i - (w x_i + b))^2) \\because{Constant multiple rule} \\\\\n", + "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1}{2N} \\sum_{i=1}^N \\frac{ \\partial }{\\partial b} (y_i - (w x_i + b))^2 \\because{Sum Rule - derivative of sum is sum of derivatives.} \\\\ \n", + "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1}{2N} \\sum_{i=1}^N 2 (y_i - (w x_i + b)) \\frac{ \\partial }{\\partial b}(y_i -(w x_i + b)) \\because{Power Rule + Chain Rule.} \\\\ \n", + "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1}{2N} \\sum_{i=1}^N 2 (y_i - (w x_i + b)) (-1) \\because{Compute derivative.} \\\\ \n", + "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1(2)}{2N} \\sum_{i=1}^N (y_i - (w x_i + b)) (-1) \\because{Factor constant out of summation.} \\\\ \n", + "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1}{N} \\sum_{i=1}^N (-y_i + (w x_i + b)) \\because{Multiply brackets and simplify.} \\\\ \n", + "\\frac{\\partial \\mathcal{L}}{\\partial b} & = \\frac{1}{N} \\sum_{i=1}^N ((w x_i + b) -y_i ) \\because{Rewrite.} \\\\ \n", + "\\frac{\\partial \\mathcal{L}}{\\partial b} & = 
\\frac{1}{N} \\sum_{i=1}^N (f(x_i) -y_i ) \\because{Substitute in $f(x_i)$.} \\\\ \n", + "\\end{aligned}" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "o1ndpYE50BpG" + }, + "source": [ + "# **Feedback**\n", + "\n", + "Please provide feedback that we can use to improve our practicals in the future." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "OIZvkhfRz9Jz" + }, + "outputs": [], + "source": [ + "# @title Generate Feedback Form. (Run Cell)\n", + "from IPython.display import HTML\n", + "\n", + "HTML(\n", + " \"\"\"\n", + "\n", + "\"\"\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "oglV4kHMWnIN" + }, + "source": [ + "" + ] + } + ], + "metadata": { + "accelerator": "GPU", + "colab": { + "collapsed_sections": [ + "XrRoSqlxfi7f" + ], + "name": "Introduction_to_ML_using_JAX.ipynb", + "provenance": [] + }, + "gpuClass": "standard", + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.5" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +}