Code#

Our convention is to follow PEP8 unless there is a good reason to do otherwise.

One good reason is to get closer to mathematical notation in a given lecture.

Hence it’s fine to use capitals for matrices, etc.

Operators are typically surrounded by spaces, as in a * b and a + b, but we write a**b for \(a^b\).

Variable Naming Conventions#

Unicode Variable Names#

Prefer Unicode symbols for Greek letters commonly used in economics:

  • Use α instead of alpha

  • Use β instead of beta

  • Use γ instead of gamma

  • Use δ instead of delta

  • Use ε instead of epsilon

  • Use σ instead of sigma

  • Use θ instead of theta

  • Use ρ instead of rho

This makes the code more readable and closer to mathematical notation used in economic literature.

Example:

# ✅ Preferred: Unicode variables
def utility_function(c, α=0.5, β=0.95):
    """CRRA utility function with discount factor."""
    return (c**(1-α) - 1) / (1-α) * β

# ❌ Avoid: Spelled-out Greek letters  
def utility_function(c, alpha=0.5, beta=0.95):
    """CRRA utility function with discount factor."""
    return (c**(1-alpha) - 1) / (1-alpha) * beta

Guiding principle

QuantEcon lectures should run in a base installation of Anaconda Python.

Any packages not included in Anaconda need to be installed at the top of the lecture.

An example:

In addition to what’s in Anaconda, this lecture will need the following libraries:
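A sketch of such an install cell, using a MyST code-cell directive with the hide-output tag (the exact directive options here are an assumption based on the surrounding description):

```{code-cell} ipython3
:tags: [hide-output]

!pip install quantecon yfinance
```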

In the example above we install the quantecon and yfinance packages.

We use tags: [hide-output] as the output is not central to the lecture.

There are a couple of exceptions to this guideline.

  • when the software involves specific configuration for the hardware (e.g. GPU computing), or

  • if additional software needs to be installed on your system via apt or some other binary source.

JAX#

When using JAX, you should not install jax at the top of your lecture.

Doing so may install jax[cpu], which will run but is not the optimal configuration for executing the lecture.

The following admonition can be used.

```{admonition} GPU
:class: warning

This lecture is accelerated via [hardware](status:machine-details) that has access to a GPU and JAX for GPU programming.

Free GPUs are available on Google Colab. To use this option, please click on the play icon top right, select Colab, and set the runtime environment to include a GPU.

Alternatively, if you have your own GPU, you can follow the [instructions](https://github.com/google/jax) for installing JAX with GPU support. If you would like to install JAX running on the `cpu` only you can use `pip install jax[cpu]`
```

which will render as

GPU

This lecture is accelerated via hardware that has access to a GPU and JAX for GPU programming.

Free GPUs are available on Google Colab. To use this option, please click on the play icon top right, select Colab, and set the runtime environment to include a GPU.

Alternatively, if you have your own GPU, you can follow the instructions for installing JAX with GPU support. If you would like to install jax running on the cpu only you can use pip install jax[cpu]

The jax[gpu] package needs to be properly installed via Docker or GitHub Actions.

Please consult with Matt McKay should you need to update these settings.

JAX Sequence Generation Patterns#

When generating sequences iteratively in JAX, use functional patterns that align with JAX’s design philosophy. This section provides guidelines for common patterns when generating time series or iterative sequences.

Core Pattern: generate_path Function#

Use a standardized generate_path function for iterative sequence generation:

from functools import partial

import jax
import jax.numpy as jnp

@partial(jax.jit, static_argnames=['f', 'num_steps'])
def generate_path(f, initial_state, num_steps, **kwargs):
    """
    Generate a time series by repeatedly applying an update rule.

    Given a map f, initial state x_0, and model parameters θ, this
    function computes and returns the sequence {x_t}_{t=0}^{T-1} when

        x_{t+1} = f(x_t, t, θ) 

    Args:
        f: Update function mapping (x_t, t, θ) -> x_{t+1}
        initial_state: Initial state x_0
        num_steps: Number of time steps T to simulate
        **kwargs: Optional extra arguments passed to f

    Returns:
        Array of shape (dim(x), T) containing the time series path
        [x_0, x_1, x_2, ..., x_{T-1}]
    """
    def update_wrapper(state, t):
        """Wrapper function that adapts f for use with JAX scan."""
        next_state = f(state, t, **kwargs)
        return next_state, state

    _, path = jax.lax.scan(update_wrapper,
                    initial_state, jnp.arange(num_steps))
    return path.T
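For intuition, here is a plain-NumPy sketch of the same recursion (a reference for checking generate_path, not a pattern to use in JAX lectures; ar1_update is a made-up example rule):

```python
import numpy as np

def generate_path_np(f, initial_state, num_steps, **kwargs):
    """NumPy reference for the scan above: returns [x_0, x_1, ..., x_{T-1}]."""
    path = []
    state = initial_state
    for t in range(num_steps):
        path.append(state)              # record x_t before updating
        state = f(state, t, **kwargs)   # x_{t+1} = f(x_t, t, θ)
    return np.array(path).T

# Example update rule: deterministic AR(1), x_{t+1} = ρ x_t + b
def ar1_update(x, t, ρ=0.9, b=1.0):
    return ρ * x + b

path = generate_path_np(ar1_update, 0.0, 3, ρ=0.5, b=1.0)
print(path)  # x_0=0.0, x_1=1.0, x_2=1.5
```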

Function Naming Conventions for Updates#

Use consistent naming for update functions:

  • Use descriptive names that indicate what is being updated: stock_update, rate_update, value_update

  • Follow the pattern: [quantity]_update where quantity describes the state being updated

  • Include the time step parameter even if unused for consistency: f(state, t, model)

Example:

# compute_matrices is assumed to be defined earlier in the lecture
@jax.jit
def stock_update(current_stocks, time_step, model):
    """Apply transition matrix to get next period's stocks."""
    A, A_hat, g = compute_matrices(model)
    next_stocks = A @ current_stocks
    return next_stocks

@jax.jit
def rate_update(current_rates, time_step, model):
    """Apply normalized transition matrix for next period's rates."""
    A, A_hat, g = compute_matrices(model)
    next_rates = A_hat @ current_rates
    return next_rates

Replace Imperative Loops with Functional Patterns#

❌ Avoid: Imperative loop patterns

# Don't use explicit loops for sequence generation
result = []
state = initial_state
for t in range(num_steps):
    state = update_function(state, data[t])
    result.append(state)
X_path = np.array(result)

✅ Prefer: Functional sequence generation

# Use generate_path with update functions
X_path = generate_path(stock_update, initial_state, num_steps, model=model)

See also

For more comprehensive JAX conversion patterns, see JAX Conversion Guidelines.

NumPy random number generation#

Use np.random.default_rng() (the Generator API) instead of the legacy np.random.* functions.

The functions np.random.seed, np.random.randn, np.random.rand, etc. are documented by NumPy as legacy, and NumPy recommends the Generator API for new code. The legacy API mutates a single hidden global RandomState, which makes reproducibility fragile because state is shared across all code that touches np.random. The Generator API gives an explicit, local generator (backed by PCG64 by default) that we pass around, the same direction JAX has taken with explicit PRNG state.

Basic pattern#

import numpy as np

rng = np.random.default_rng(1234)   # create once, near the top of the cell
x = rng.standard_normal(100)        # draw from this rng only
u = rng.uniform(0, 1, size=(3, 3))

Create rng once in the natural flow of a lecture and reuse it across subsequent cells. In self-contained exercise solutions, define a local rng at the top of each solution block.
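With a local generator, reproducibility follows from the seed alone (a minimal illustration, not taken from any lecture):

```python
import numpy as np

# Two generators seeded identically produce identical, independent streams
rng1 = np.random.default_rng(42)
rng2 = np.random.default_rng(42)

draws1 = rng1.standard_normal(5)
draws2 = rng2.standard_normal(5)
print(np.allclose(draws1, draws2))  # True
```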

Migration cheat sheet#

| Legacy (avoid) | Generator (preferred) |
|----------------|-----------------------|
| `np.random.seed(s)` | `rng = np.random.default_rng(s)` |
| `np.random.randn(n)` | `rng.standard_normal(n)` |
| `np.random.randn(m, n)` | `rng.standard_normal((m, n))` |
| `np.random.rand(n)` | `rng.random(n)` |
| `np.random.rand(m, n)` | `rng.random((m, n))` |
| `np.random.randint(a, b, size=n)` | `rng.integers(a, b, size=n)` |
| `np.random.uniform(a, b, size=n)` | `rng.uniform(a, b, size=n)` |
| `np.random.normal(μ, σ, size=n)` | `rng.normal(μ, σ, size=n)` |
| `np.random.beta(a, b, size=n)` | `rng.beta(a, b, size=n)` |
| `np.random.binomial(n, p, size=s)` | `rng.binomial(n, p, size=s)` |
| `np.random.choice(a, size=n)` | `rng.choice(a, size=n)` |
| `np.random.shuffle(x)` | `rng.shuffle(x)` |
| `from numpy.random import randn` | use `rng.standard_normal()`; don't import legacy functions |

Note

rng.standard_normal takes a shape tuple (e.g. (m, n)), unlike np.random.randn(m, n) which takes positional integers.
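A quick illustration of the difference:

```python
import numpy as np

rng = np.random.default_rng(0)

a = rng.standard_normal((2, 3))   # shape passed as a single tuple
# legacy equivalent was np.random.randn(2, 3), with positional integers
print(a.shape)  # (2, 3)
```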

Numba (@njit / @jit)#

Numba supports Generator objects (since v0.56). Create the generator in Python-land and pass it in as an argument:

import numpy as np
from numba import njit

# Before (legacy global state)
np.random.seed(1234)

@njit
def simulate(n):
    return np.random.randn(n)

result = simulate(100)

# After (explicit Generator)
rng = np.random.default_rng(1234)

@njit
def simulate(rng, n):
    return rng.standard_normal(n)

result = simulate(rng, 100)

@jitclass#

Generator objects cannot be stored as @jitclass attributes. Pass rng to methods that need it:

# firm_data is the jitclass spec, assumed to be defined earlier
@jitclass(firm_data)
class Firm:
    def __init__(self, s=10, S=100, μ=1.0, σ=0.5):
        self.s, self.S, self.μ, self.σ = s, S, μ, σ
        
    def update(self, x, rng):
        Z = rng.standard_normal()
        D = np.exp(self.μ + self.σ * Z)
        ...
      
rng = np.random.default_rng(1234)
firm = Firm()
firm.update(x, rng)

Sampling inside @jit(parallel=True) loops#

Warning

Numba’s Generator support is not thread-safe. Do not call rng.standard_normal() (or any other rng.* method) from inside a prange loop in a @jit(parallel=True) function — concurrent threads sharing one generator will corrupt its internal state and silently produce wrong results.

Default pattern: draw the shocks outside the loop. In almost every QuantEcon lecture the draws inside a simulation are i.i.d. shocks that don’t depend on loop order, so you can generate them once in Python-land with rng and let prange simply index into the array:

import numpy as np
from numba import njit, prange

rng = np.random.default_rng(1234)

@njit(parallel=True)
def simulate(shocks, out):
    n = shocks.shape[0]
    for i in prange(n):
        out[i] = f(shocks[i])         # ✅ no RNG call inside prange (f is a jitted function defined elsewhere)

shocks = rng.standard_normal(1_000_000)   # drawn once, in Python-land
out = np.empty_like(shocks)
simulate(shocks, out)

This is the preferred approach: it is thread-safe, reproducible from a single rng, and usually faster (one vectorised draw beats a million per-iteration RNG calls). When migrating a lecture and you find np.random.* calls inside a prange loop, the right move is to lift them out and remove the legacy call entirely — not to leave it in place.

Narrow exception. A few algorithms genuinely require sampling inside the loop — for example, MCMC with rejection sampling (the number of draws per iteration depends on acceptances), or memory-constrained settings where pre-allocating the full shock array is infeasible. Only in these cases, use the legacy np.random.* API inside the prange loop and add a comment explaining why the draws cannot be lifted out. Treat this as the exception, not a general-purpose fallback.

Updating an existing lecture#

When migrating a lecture, after rebuilding compare the rendered preview against the live site. Figures will differ in exact values (PCG64 vs MT19937) but the shape, scale and qualitative behaviour should match — and the surrounding narrative should still hold. Flag in the PR if it doesn’t.

See also

See QuantEcon/meta#299 for the migration tracker, and lecture-python-programming#538 / #541 for worked examples. NumPy background: Random sampling docs, migration guide, NEP 19.

Binary packages with Python frontends#

The graphviz package is a Python interface to a local installation of graphviz and is useful for rendering DOT source code.

If you need to use graphviz you should:

  1. Install it with pip install graphviz at the top of your lecture

  2. Check that graphviz is being installed in .github/workflows/ci.yml for preview builds

  3. Add the note admonition below to your lecture.

```{admonition} graphviz
:class: note

If you are running this lecture locally it requires graphviz to be installed on your computer. Installation instructions for graphviz can be found here
```

which will render as

graphviz

If you are running this lecture locally it requires graphviz to be installed on your computer. Installation instructions for graphviz can be found here

Performance Timing Patterns#

Timer Context Manager#

Use the modern qe.Timer() context manager instead of manual timing patterns.

The QuantEcon library provides a Timer context manager that replaces older timing approaches like tic/tac/toc functions with cleaner, more reliable syntax.

❌ Avoid: Manual timing patterns

import time

# Old pattern - verbose and error-prone
start_time = time.time()
result = expensive_computation()
end_time = time.time()
print(f"Elapsed time: {(end_time - start_time) * 1000:.6f} ms")

✅ Preferred: Timer context manager

import quantecon as qe

# Modern pattern - clean and reliable
with qe.Timer():
    result = expensive_computation()
# Output: 0.05 seconds elapsed

Timer Features and Usage Patterns#

The Timer context manager supports various usage patterns:

Basic Timing#

import quantecon as qe

with qe.Timer():
    result = expensive_computation()
# Output: 0.05 seconds elapsed

Custom Messages and Units#

# Custom message with milliseconds
with qe.Timer("Computing eigenvalues", unit="milliseconds"):
    eigenvals = compute_eigenvalues(matrix)
# Output: Computing eigenvalues: 50.25 ms elapsed

# Microseconds for very fast operations
with qe.Timer("Quick calculation", unit="microseconds"):
    result = simple_operation()
# Output: Quick calculation: 125.4 μs elapsed

Silent Mode for Method Comparison#

# Store timing without printing for performance comparisons
timer = qe.Timer(silent=True)
with timer:
    result = expensive_computation()
elapsed_time = timer.elapsed  # Access stored time

# Compare multiple methods
methods = [method_a, method_b, method_c]
timers = []

for method in methods:
    timer = qe.Timer(f"{method.__name__}", silent=True)
    with timer:
        method(data)
    timers.append((method.__name__, timer.elapsed))

# Find fastest method
fastest = min(timers, key=lambda x: x[1])
print(f"Fastest method: {fastest[0]} ({fastest[1]:.6f}s)")

Precision Control#

# Control decimal places in output
with qe.Timer("High precision timing", precision=8):
    result = computation()
# Output: High precision timing: 0.12345678 seconds elapsed

Migration from Legacy Patterns#

Replace tic/tac/toc patterns:

# Old approach
from quantecon.util.timing import tic, tac, toc

tic()
result = computation()
toc()

# New approach
with qe.Timer():
    result = computation()

Note

The tic/tac/toc functions remain available for backward compatibility, but new code should use the Timer context manager for better readability and reliability.

Benchmarking with timeit#

Use qe.timeit() for statistical performance analysis across multiple runs.

The QuantEcon library provides a timeit function that performs multiple runs of code and computes statistical measures, making it ideal for rigorous performance benchmarking.

Key Features#

  • Multiple runs: Automatically executes code multiple times to reduce noise

  • Statistical analysis: Computes mean, standard deviation, min, and max execution times

  • Flexible input: Accepts functions, callables, or lambda expressions

  • Customizable: Control number of runs and other parameters

Basic Usage#

import quantecon as qe
import numpy as np

rng = np.random.default_rng(1234)

# Define a function to benchmark
def matrix_multiplication():
    A = rng.random((100, 100))
    B = rng.random((100, 100))
    return A @ B

# Benchmark with multiple runs
result = qe.timeit(matrix_multiplication, number=100)
print(result)
# Output: Statistical summary with mean, std, min, max times

Using Lambda Functions#

Tip

Use lambda: to pass functions with arguments or create inline benchmarks:

# Benchmark function with arguments
rng = np.random.default_rng(1234)
data = rng.random((1000, 1000))
result = qe.timeit(lambda: np.linalg.eig(data), number=50)

# Benchmark inline code
result = qe.timeit(lambda: sorted(rng.random(1000)), number=100)

This pattern is particularly useful when you need to pass specific arguments or test variations of the same algorithm.


Comparison with Jupyter `%timeit`#

While Jupyter's `%timeit` magic command is convenient for interactive exploration, `qe.timeit()` offers several advantages for programmatic benchmarking:

| Feature | `%timeit` | `qe.timeit()` |
|---------|-----------|---------------|
| **Environment** | Jupyter only | Any Python environment |
| **Return value** | Prints only | Returns statistical object |
| **Integration** | Interactive | Programmatic workflows |
| **Automation** | Manual | Scriptable comparisons |
| **Statistics** | Basic timing | Comprehensive statistics |
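Where quantecon is unavailable, a rough stdlib analogue of this kind of summary can be sketched with timeit.repeat (a sketch only; the dictionary keys below are made up and are not the qe.timeit API):

```python
import statistics
import timeit

def bench(fn, number=100, repeat=5):
    """Time fn over several batches and summarise, loosely mirroring a statistical timing report."""
    times = timeit.repeat(fn, number=number, repeat=repeat)
    return {
        "mean": statistics.mean(times),
        "std": statistics.stdev(times),
        "min": min(times),
        "max": max(times),
    }

stats = bench(lambda: sum(range(1000)))
print(sorted(stats))  # the four summary keys
```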