Monte Carlo Example — Quasi-Monte Carlo Integration¶
This example demonstrates integrationdifferentiation.integrate_monte_carlo()
(IMSL QMC — Halton quasi-random sampling):
Estimate \(\pi\) by integrating the unit-circle indicator \(f(x,y) = \mathbf{1}[x^2+y^2 \le 1]\) over \([-1,1]^2\) (exact: \(\pi \approx 3.14159\))
Integrate \(g(x) = e^{-x^2}\) over \([0,3]\) (exact: \(\mathrm{erf}(3)\,\sqrt{\pi}/2 \approx 0.88621\))
Show convergence of both estimates as \(N\) grows from 100 to 100 000
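The core idea can be sketched in a few lines before the full script: draw low-discrepancy Halton points in the domain, average the integrand, and scale by the domain volume. This minimal sketch uses `scipy.stats.qmc.Halton` as a stand-in sampler; the internals of `integrate_monte_carlo` are not shown on this page, so treat it as an independent illustration rather than the library's implementation:

```python
import numpy as np
from scipy.stats import qmc

# Halton points in the unit square, mapped to [-1, 1]^2
pts = qmc.Halton(d=2, scramble=False).random(100_000)
xy = 2.0 * pts - 1.0

# Mean of the circle indicator times the domain volume (2 x 2 = 4)
inside = xy[:, 0] ** 2 + xy[:, 1] ** 2 <= 1.0
pi_est = 4.0 * inside.mean()
print(f"pi ~= {pi_est:.5f}")
```

The same recipe generalizes to any box-shaped domain: sample quasi-randomly, average `f`, multiply by the box volume.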
Example Code¶
"""IMSL QMC example: quasi-Monte Carlo integration.
Demonstrates :func:`integrationdifferentiation.integrate_monte_carlo`
(IMSL QMC / Halton quasi-random sampling):
- Estimate π by integrating the indicator ``f(x,y) = 1 if x²+y²≤1`` over
``[-1, 1]²``. Exact answer: π ≈ 3.14159.
- Integrate ``g(x) = exp(-x²)`` over ``[0, 3]`` via Monte Carlo.
Exact answer: ``erf(3) * √π / 2 ≈ 0.88623``.
- Show convergence as ``N`` increases from 100 to 100 000.
Outputs:
- Convergence table printed to stdout
- SVG plot saved to ``test_output/example_imsl_monte_carlo.svg``
"""
from __future__ import annotations

from pathlib import Path
from typing import Dict, List

import matplotlib.pyplot as plt
import numpy as np

from integrationdifferentiation import integrate_monte_carlo
def run_demo_imsl_monte_carlo() -> Dict[str, object]:
    """Run quasi-Monte Carlo convergence study.

    Returns:
        Dict[str, object]: Keys ``pi_estimates`` (list), ``gauss_estimates`` (list),
        ``n_vals`` (list), ``plot_path`` (str).
    """
    from scipy.special import erf

    exact_pi = np.pi
    exact_gauss = float(erf(3.0) * np.sqrt(np.pi) / 2.0)

    n_vals: List[int] = [100, 300, 1000, 3000, 10000, 30000, 100000]
    pi_estimates: List[float] = []
    gauss_estimates: List[float] = []
    pi_errors: List[float] = []
    gauss_errors: List[float] = []

    # --- circle indicator: ∫_{-1}^{1} ∫_{-1}^{1} I(x²+y²≤1) dx dy = area of unit circle = π
    def circle_indicator(xy: np.ndarray) -> float:
        return 1.0 if float(xy[0]) ** 2 + float(xy[1]) ** 2 <= 1.0 else 0.0

    # --- Gaussian integrand: x can be a float (1D MC fallback) or an array
    def gauss_func(x) -> float:
        val = float(x[0]) if hasattr(x, "__len__") else float(x)
        return float(np.exp(-val ** 2))
print("\nIMSL QMC Example: Monte Carlo Integration")
print("=" * 70)
print(f"{'N':>8} {'π estimate':>12} {'|Δπ|':>10} {'Gauss est.':>12} {'|ΔG|':>10}")
print("-" * 70)
for n in n_vals:
# QMC integrates the indicator over [-1,1]²: result = area_of_circle = π
r_pi = integrate_monte_carlo(circle_indicator, [(-1.0, 1.0), (-1.0, 1.0)], n_points=n, seed=42)
pi_est = r_pi.value
pi_err = abs(pi_est - exact_pi)
pi_estimates.append(pi_est)
pi_errors.append(pi_err)
# Gaussian integral estimate
r_g = integrate_monte_carlo(gauss_func, [(0.0, 3.0)], n_points=n, seed=7)
g_est = r_g.value
g_err = abs(g_est - exact_gauss)
gauss_estimates.append(g_est)
gauss_errors.append(g_err)
print(f"{n:>8} {pi_est:>12.7f} {pi_err:>10.3e} {g_est:>12.7f} {g_err:>10.3e}")
print("=" * 70)
print(f"{'Exact':>8} {exact_pi:>12.7f} {'—':>10} {exact_gauss:>12.7f} {'—':>10}")
    # ----------------------------------------------------------------- plots
    output_dir = Path("test_output")
    output_dir.mkdir(parents=True, exist_ok=True)
    plot_path = output_dir / "example_imsl_monte_carlo.svg"

    fig, axes = plt.subplots(1, 3, figsize=(16, 5))
    n_arr = np.array(n_vals, dtype=float)

    # Panel 1: π estimate convergence
    ax = axes[0]
    ax.semilogx(n_arr, pi_estimates, "o-", color="#1d4ed8", lw=2.0, ms=6, label="QMC estimate of π")
    ax.axhline(exact_pi, color="#dc2626", lw=1.5, ls="--", label=f"π = {exact_pi:.5f}")
    ax.set_xlabel("N (sample points)")
    ax.set_ylabel("Estimate of π")
    ax.set_title("Monte Carlo estimation of π")
    ax.legend(fontsize=9)
    ax.grid(True, alpha=0.3)

    # Panel 2: Gaussian convergence
    ax = axes[1]
    ax.semilogx(n_arr, gauss_estimates, "s-", color="#059669", lw=2.0, ms=6, label=r"QMC: $\int_0^3 e^{-x^2}dx$")
    ax.axhline(exact_gauss, color="#dc2626", lw=1.5, ls="--", label=f"Exact = {exact_gauss:.5f}")
    ax.set_xlabel("N (sample points)")
    ax.set_ylabel("Estimate")
    ax.set_title(r"Monte Carlo: $\int_0^3 e^{-x^2}\,dx$")
    ax.legend(fontsize=9)
    ax.grid(True, alpha=0.3)

    # Panel 3: absolute error convergence (log-log)
    ax = axes[2]
    ax.loglog(n_arr, pi_errors, "o-", color="#1d4ed8", lw=2.0, ms=6, label="|error π|")
    ax.loglog(n_arr, gauss_errors, "s-", color="#059669", lw=2.0, ms=6, label="|error Gauss|")
    # Theoretical N^{-1} reference line
    ref = pi_errors[0] * (n_arr[0] / n_arr)
    ax.loglog(n_arr, ref, "k--", lw=1.0, alpha=0.5, label=r"$\propto N^{-1}$")
    ax.set_xlabel("N (sample points)")
    ax.set_ylabel("Absolute error")
    ax.set_title("Error convergence (log-log)")
    ax.legend(fontsize=9)
    ax.grid(True, alpha=0.3, which="both")

    fig.suptitle("IMSL QMC: Quasi-Monte Carlo Integration", fontsize=13, fontweight="bold")
    fig.tight_layout()
    fig.savefig(plot_path, format="svg")
    plt.close(fig)

    return {
        "n_vals": n_vals,
        "pi_estimates": pi_estimates,
        "gauss_estimates": gauss_estimates,
        "plot_path": str(plot_path),
    }


if __name__ == "__main__":
    run_demo_imsl_monte_carlo()
Plot Output¶
Left: estimate of \(\pi\) vs. sample size \(N\) (semi-log scale). Centre: estimate of \(\int_0^3 e^{-x^2}\,dx\) vs. \(N\). Right: log–log absolute error for both estimates with an \(N^{-1}\) reference line.¶
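The near-\(N^{-1}\) slope in the right panel is the hallmark of quasi-random sampling: plain pseudo-random Monte Carlo error decays only like \(N^{-1/2}\), while low-discrepancy sequences such as Halton achieve roughly \(N^{-1}\) (up to logarithmic factors) for smooth integrands. A minimal side-by-side sketch of the one-dimensional Gaussian integral, again using `scipy.stats.qmc.Halton` as an assumed stand-in for the library's sampler and the reference value 0.8862073 from the table above:

```python
import numpy as np
from scipy.stats import qmc

n = 50_000
exact = 0.8862073  # erf(3) * sqrt(pi) / 2

# Pseudo-random sample on [0, 3]
x_mc = np.random.default_rng(0).random(n) * 3.0
# Quasi-random (Halton) sample on [0, 3]
x_qmc = qmc.Halton(d=1, scramble=False).random(n)[:, 0] * 3.0

# Estimate = domain length * mean of exp(-x^2) over the sample
err_mc = abs(3.0 * np.exp(-x_mc ** 2).mean() - exact)
err_qmc = abs(3.0 * np.exp(-x_qmc ** 2).mean() - exact)
print(err_mc, err_qmc)  # the QMC error is typically far smaller at equal N
```

At equal sample count the quasi-random estimate is usually orders of magnitude closer to the exact value, which is exactly the gap the log-log panel visualizes.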
Console Output¶
IMSL QMC Example: Monte Carlo Integration
======================================================================
       N   π estimate       |Δπ|   Gauss est.       |ΔG|
----------------------------------------------------------------------
     100    3.1300000  1.159e-02    0.8850498  1.158e-03
     300    3.1300000  1.159e-02    0.8851801  1.027e-03
    1000    3.1380000  3.593e-03    0.8857713  4.361e-04
    3000    3.1398333  1.759e-03    0.8859282  2.791e-04
   10000    3.1399500  1.643e-03    0.8861842  2.314e-05
   30000    3.1416667  7.401e-05    0.8861841  2.322e-05
  100000    3.1419250  3.323e-04    0.8862024  4.968e-06
======================================================================
   Exact    3.1415927          —    0.8862073          —