Understanding where a stock price might go — and with what probability — is one of the most practical problems in quantitative finance. Rather than guessing direction, professional traders quantify uncertainty. They build probability distributions around future prices using volatility as the core input. This article walks through exactly that process: constructing a Monte Carlo simulation engine that generates 10,000 forward price paths and wraps them in statistically grounded confidence intervals.
We will implement two parallel models — one driven by historical volatility estimated from realized returns, and one using Black-Scholes implied volatility — and compare their output side by side. The final result is a confidence cone chart showing 68% and 95% probability intervals, a terminal price distribution histogram, and a probability calculator that answers questions like "What is the chance this stock closes above $X in 30 days?" All code is written in Python using yfinance, numpy, pandas, and matplotlib, and works on any ticker with a single parameter change.
This article covers:
- **Section 1 — Geometric Brownian Motion and Volatility Multipliers:** Explains the mathematical model underlying stock price simulation, how volatility scales with time, and what confidence intervals actually mean in this context.
- **Section 2 — Python Implementation:** Full code walkthrough including data download, historical volatility estimation, Monte Carlo path generation, confidence cone construction, and visualization.
- **Section 3 — Reading the Results:** How to interpret the cone chart, what the terminal distribution reveals, and realistic expectations for the model's predictive accuracy.
- **Section 4 — Use Cases:** Practical applications across options pricing intuition, risk management, earnings event analysis, and portfolio stress testing.
- **Section 5 — Limitations and Edge Cases:** Honest assessment of where Geometric Brownian Motion breaks down and which assumptions are most likely to fail in live markets.
1. Geometric Brownian Motion and Volatility Multipliers
Stock prices do not move in straight lines. They follow a random process where each day's return is drawn from a distribution centered near zero, with spread determined by volatility. The standard mathematical model for this is Geometric Brownian Motion (GBM), which assumes that log returns are normally distributed and independent across time. While this assumption has well-known flaws, GBM remains the foundational model in derivatives pricing and risk estimation — and for good reason. It is analytically tractable, calibratable from market data, and produces intuitive, interpretable outputs.
The core equation for a single price step under GBM is: S(t+dt) = S(t) * exp((μ - σ²/2)*dt + σ*√dt*Z), where μ is the drift (annualized expected return), σ is annualized volatility, and Z is a standard normal random draw. The term σ*√dt is the volatility multiplier — it scales volatility from annual to the time step being simulated. This square-root-of-time scaling is central to all of options theory. A stock with 20% annual volatility has roughly 20% / √252 ≈ 1.26% daily volatility.
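To make the scaling concrete, here is a minimal standalone sketch (illustrative numbers, not part of the main script) of the annual-to-daily volatility conversion and a single GBM step:

```python
import numpy as np

rng = np.random.default_rng(0)

annual_vol = 0.20                        # 20% annualized volatility
daily_vol = annual_vol / np.sqrt(252)    # square-root-of-time scaling

# One GBM step: S(t+dt) = S(t) * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z)
S_t, mu, sigma, dt = 100.0, 0.0, annual_vol, 1 / 252
Z = rng.standard_normal()
S_next = S_t * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z)

print(f"daily vol ≈ {daily_vol:.4%}")    # ≈ 1.2599%
```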
Confidence intervals emerge naturally from running this simulation thousands of times. After 10,000 simulated paths, each ending at a different terminal price, you get an empirical distribution. The 16th and 84th percentiles of that distribution form the 68% confidence band — analogous to one standard deviation in a normal distribution. The 2.5th and 97.5th percentiles form the 95% band. These bands widen over time like a cone, reflecting the compounding uncertainty that grows as the forecast horizon extends.
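As a sanity check, the empirical percentile bands from a large simulation should line up with the closed-form lognormal quantiles S0 * exp((μ − σ²/2)t ± z·σ·√t). A quick sketch with illustrative parameters (zero drift, 30% vol, 30-day horizon):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
S0, sigma, T = 100.0, 0.30, 30 / 252   # illustrative parameters
n = 100_000

# Simulated terminal prices under zero-drift GBM
Z = rng.standard_normal(n)
S_T = S0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * Z)

# Empirical 95% band vs analytic lognormal quantiles
emp_lo, emp_hi = np.percentile(S_T, [2.5, 97.5])
z = norm.ppf(0.975)
ana_lo = S0 * np.exp(-0.5 * sigma**2 * T - z * sigma * np.sqrt(T))
ana_hi = S0 * np.exp(-0.5 * sigma**2 * T + z * sigma * np.sqrt(T))
print(f"empirical: [{emp_lo:.2f}, {emp_hi:.2f}]  analytic: [{ana_lo:.2f}, {ana_hi:.2f}]")
```

With 100,000 draws the two bands typically agree to well under a dollar, which is why 10,000 paths is enough for stable cone edges.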
The key insight is that volatility is the only input that materially changes the shape of this cone. Higher volatility produces a wider cone with fatter tails. By running two versions — one using historical volatility and one using implied volatility backed out from options prices — you can directly see how the market's forward-looking risk estimate differs from realized past risk. When implied volatility is higher than historical, the market is pricing in more uncertainty than history alone would suggest.
2. Python Implementation
2.1 Setup and Parameters
The simulation is fully configurable through a small set of parameters at the top of the script. Change TICKER to any valid Yahoo Finance symbol. HIST_WINDOW controls how many trading days of history are used to estimate realized volatility. N_PATHS sets the number of Monte Carlo simulations — 10,000 is sufficient for stable percentile estimates. HORIZON is the forward simulation period in trading days.
import numpy as np
import pandas as pd
import yfinance as yf
import matplotlib.pyplot as plt
from scipy.stats import norm
import warnings
warnings.filterwarnings("ignore")
# ── Configuration ──────────────────────────────────────────────
TICKER = "ASML" # Change to any Yahoo Finance ticker
HIST_WINDOW = 252 # Trading days for volatility estimation
N_PATHS = 10_000 # Monte Carlo simulation paths
HORIZON = 30 # Forward simulation horizon (trading days)
DRIFT = 0.0 # Annualized drift; 0.0 = risk-neutral
TRADING_DAYS = 252 # Annual scaling constant
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
2.2 Data Download and Volatility Estimation
We pull adjusted closing prices from Yahoo Finance, compute daily log returns, and derive annualized historical volatility. A 30-day rolling volatility series is also computed for a secondary chart. For implied volatility, we use the nearest at-the-money call option from the first available expiry as a proxy.
# ── Download price data ────────────────────────────────────────
raw = yf.download(TICKER, period="2y", auto_adjust=True, progress=False)
prices = raw["Close"].squeeze().dropna()  # squeeze: newer yfinance versions may return a one-column DataFrame
log_returns = np.log(prices / prices.shift(1)).dropna()
# Historical volatility (annualized)
hist_vol = log_returns.tail(HIST_WINDOW).std() * np.sqrt(TRADING_DAYS)
hist_vol = float(hist_vol)
# Rolling 30-day annualized volatility
rolling_vol = log_returns.rolling(30).std() * np.sqrt(TRADING_DAYS)
print(f"Ticker : {TICKER}")
print(f"Last Close : ${float(prices.iloc[-1]):.2f}")
print(f"Historical Vol : {hist_vol:.2%}")
# ── Implied volatility proxy from options chain ────────────────
try:
    tk = yf.Ticker(TICKER)
    expiry = tk.options[0]  # nearest expiry
    chain = tk.option_chain(expiry)
    calls = chain.calls
    spot = float(prices.iloc[-1])
    atm_call = calls.iloc[(calls["strike"] - spot).abs().argsort()[:1]]
    impl_vol = float(atm_call["impliedVolatility"].values[0])
except Exception:
    impl_vol = hist_vol * 1.15  # fallback: 15% premium over historical vol
    print("Options data unavailable — using fallback implied vol.")
print(f"Implied Vol : {impl_vol:.2%}")
2.3 Monte Carlo Path Generation
We simulate N_PATHS independent price paths over the HORIZON period using the GBM discretization. Each path starts at the last observed closing price. We run this twice — once with historical volatility and once with implied volatility — producing two matrices of shape (HORIZON + 1, N_PATHS), where the extra row holds the starting price.
def simulate_gbm(S0: float, mu: float, sigma: float,
                 horizon: int, n_paths: int) -> np.ndarray:
    """
    Simulate price paths under Geometric Brownian Motion.
    Returns an array of shape (horizon + 1, n_paths) where
    row 0 is the starting price S0.
    """
    dt = 1 / TRADING_DAYS
    Z = np.random.standard_normal((horizon, n_paths))
    step = np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z)
    # Row 0 must be ones (not S0): the cumulative product is scaled by S0
    # below, and stacking S0 here would start every path at S0 squared.
    factors = np.vstack([np.ones(n_paths), step])
    return S0 * np.cumprod(factors, axis=0)
S0 = float(prices.iloc[-1])
paths_hv = simulate_gbm(S0, DRIFT, hist_vol, HORIZON, N_PATHS)
paths_iv = simulate_gbm(S0, DRIFT, impl_vol, HORIZON, N_PATHS)
# Terminal price distributions
terminal_hv = paths_hv[-1]
terminal_iv = paths_iv[-1]
# Percentile bands over time
t = np.arange(HORIZON + 1)
def get_bands(paths):
    return {
        "p025": np.percentile(paths, 2.5, axis=1),
        "p160": np.percentile(paths, 16.0, axis=1),
        "p500": np.percentile(paths, 50.0, axis=1),
        "p840": np.percentile(paths, 84.0, axis=1),
        "p975": np.percentile(paths, 97.5, axis=1),
        "mean": paths.mean(axis=1),
    }
bands_hv = get_bands(paths_hv)
bands_iv = get_bands(paths_iv)
# Quick probability calculator
threshold = S0 * 1.05 # example: 5% above current price
p_above_hv = (terminal_hv > threshold).mean()
p_above_iv = (terminal_iv > threshold).mean()
print(f"\nP(price > {threshold:.2f}) — HV model : {p_above_hv:.2%}")
print(f"P(price > {threshold:.2f}) — IV model : {p_above_iv:.2%}")
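The Monte Carlo probabilities can be cross-checked against the closed-form GBM result, P(S_T > K) = N((ln(S0/K) + (μ − σ²/2)T) / (σ√T)). A standalone sketch with illustrative numbers (not wired into the script above):

```python
import numpy as np
from scipy.stats import norm

def prob_above(S0, K, mu, sigma, T):
    """Analytic P(S_T > K) under GBM with drift mu and volatility sigma."""
    d = (np.log(S0 / K) + (mu - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.cdf(d)

# Illustrative numbers: 30% vol, zero drift, 30-day horizon, 5% OTM threshold
S0, K, sigma, T = 100.0, 105.0, 0.30, 30 / 252
p_analytic = prob_above(S0, K, 0.0, sigma, T)

# A Monte Carlo estimate with the same parameters should agree closely
rng = np.random.default_rng(2)
Z = rng.standard_normal(200_000)
S_T = S0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * Z)
p_mc = (S_T > K).mean()
print(f"analytic: {p_analytic:.4f}   monte carlo: {p_mc:.4f}")
```

If the simulated probability drifts away from the analytic one, something is wrong in the path generator, so this makes a useful regression test.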
2.4 Visualization
The first chart overlays the two confidence cones on the same axis. The shaded regions represent 68% and 95% intervals for each model. The second chart compares terminal price distributions as overlapping histograms. Use the cone width and histogram overlap to assess how differently the two volatility inputs interpret forward risk.
plt.style.use("dark_background")
fig, axes = plt.subplots(1, 2, figsize=(16, 6))
fig.suptitle(f"{TICKER} — Monte Carlo Confidence Cone | {HORIZON}-Day Horizon",
             fontsize=14, fontweight="bold", color="white", y=1.01)
# ── Panel 1: Confidence Cone ───────────────────────────────────
ax = axes[0]
for bands, color, label in [
    (bands_hv, "#00BFFF", "Historical Vol"),
    (bands_iv, "#FF6B6B", "Implied Vol"),
]:
    ax.fill_between(t, bands["p025"], bands["p975"],
                    alpha=0.15, color=color, label=f"{label} 95% CI")
    ax.fill_between(t, bands["p160"], bands["p840"],
                    alpha=0.30, color=color, label=f"{label} 68% CI")
    ax.plot(t, bands["p500"], color=color, linewidth=1.5,
            linestyle="--", label=f"{label} Median")
    ax.plot(t, bands["mean"], color=color, linewidth=1.0,
            linestyle=":", alpha=0.7)
ax.axhline(S0, color="white", linewidth=0.8, linestyle="-", alpha=0.5,
           label=f"Current Price ${S0:.2f}")
ax.set_xlabel("Trading Days Forward", color="white")
ax.set_ylabel("Simulated Price ($)", color="white")
ax.set_title("Probability Cone — HV vs IV Model", color="white")
ax.tick_params(colors="white")
ax.legend(fontsize=7, loc="upper left")
ax.grid(alpha=0.15)
# ── Panel 2: Terminal Price Distribution ──────────────────────
ax2 = axes[1]
bins = np.linspace(
    min(terminal_hv.min(), terminal_iv.min()),
    max(terminal_hv.max(), terminal_iv.max()), 80)
ax2.hist(terminal_hv, bins=bins, alpha=0.5,
         color="#00BFFF", label=f"HV σ={hist_vol:.1%}")
ax2.hist(terminal_iv, bins=bins, alpha=0.5,
         color="#FF6B6B", label=f"IV σ={impl_vol:.1%}")
ax2.axvline(S0, color="white", linewidth=1.2,
            linestyle="--", label=f"Current ${S0:.2f}")
ax2.axvline(threshold, color="yellow", linewidth=1.0,
            linestyle=":", label=f"Threshold ${threshold:.2f}")
ax2.set_xlabel("Terminal Price ($)", color="white")
ax2.set_ylabel("Frequency (10,000 paths)", color="white")
ax2.set_title("Terminal Price Distribution", color="white")
ax2.tick_params(colors="white")
ax2.legend(fontsize=8)
ax2.grid(alpha=0.15)
plt.tight_layout()
plt.savefig(f"{TICKER}_confidence_cone.png", dpi=150, bbox_inches="tight")
plt.show()
Figure 1. Side-by-side confidence cone and terminal distribution for ASML — blue bands represent historical volatility paths, red bands represent implied volatility paths, with shaded regions showing 68% and 95% probability intervals over a 30-day horizon.
3. Reading the Results
The most immediate observation when running this on a typical large-cap equity is the divergence between the two cones. Implied volatility almost always produces a wider cone than historical volatility, particularly in the weeks around earnings announcements or macro events. This premium — the difference between implied and realized vol — is known as the volatility risk premium, and it is one of the most persistent anomalies in options markets. When the IV cone is significantly wider than the HV cone, the market is pricing in tail risk that history alone does not capture.
The terminal distribution histogram makes this quantitative. For a stock like ASML with annual historical volatility near 30% and implied volatility near 35-40%, the 95% confidence interval at 30 days spans roughly ±20% of the current price under the HV model and roughly ±24-27% under the IV model (±1.96σ√T in log terms, so the price band is slightly asymmetric). The probability calculator output directly answers position-sizing questions: if implied vol puts only a 28% chance of the stock being above your target price in 30 days while historical vol suggests 34%, that gap matters when pricing a covered call or evaluating the breakeven on a long call.
The median path in both models is approximately flat when drift is set to zero (risk-neutral). The mean path drifts slightly upward due to Jensen's inequality applied to the lognormal distribution — the arithmetic mean of lognormal terminal prices exceeds the median. This is not an error; it is a mathematical property of the GBM model that matters when comparing simulated expected values to market prices.
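This mean-median gap is easy to verify numerically: with zero drift, the analytic mean of the lognormal terminal price is exactly S0, while the median is S0 * exp(−σ²T/2). A quick standalone check with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
S0, sigma, T = 100.0, 0.30, 30 / 252   # illustrative: 30% vol, 30-day horizon
S_T = S0 * np.exp(-0.5 * sigma**2 * T
                  + sigma * np.sqrt(T) * rng.standard_normal(500_000))

# With mu = 0: analytic mean = S0, analytic median = S0 * exp(-sigma^2 * T / 2)
print(f"mean   : {S_T.mean():.2f}")      # ≈ 100.00
print(f"median : {np.median(S_T):.2f}")  # ≈ 99.47
```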
4. Use Cases
- **Options strategy planning:** Use the probability calculator output to evaluate whether a sold put's premium compensates for the actual probability of assignment, or whether a call spread's short strike sits outside the 68% band.
- **Earnings event analysis:** Recalibrate the model using short-dated implied volatility from the options chain immediately before an earnings release. The cone width visualizes the market's expected move, which you can compare against historical post-earnings realized moves.
- **Risk management and stop-loss calibration:** Plot your intended stop-loss level against the 68% band. If the stop sits inside the one-standard-deviation cone, it is likely to be triggered by noise rather than a genuine directional move.
- **Portfolio stress testing:** Run the simulation across multiple correlated positions by injecting correlated random draws (via Cholesky decomposition of the return covariance matrix) to produce joint terminal distributions for a multi-asset book.
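The correlated-draws idea can be sketched as follows (a hypothetical two-asset example; the correlation and vols are illustrative, not estimated from data):

```python
import numpy as np

rng = np.random.default_rng(4)
n_steps, n_paths, dt = 30, 10_000, 1 / 252
S0 = np.array([100.0, 50.0])       # two hypothetical assets
sigma = np.array([0.30, 0.45])     # illustrative annualized vols
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])      # assumed return correlation
L = np.linalg.cholesky(corr)

# Independent draws -> correlated draws via the Cholesky factor
Z = rng.standard_normal((n_steps, n_paths, 2))
Z_corr = Z @ L.T

# Zero-drift GBM steps for both assets, compounded along the time axis
steps = np.exp((-0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z_corr)
paths = S0 * np.cumprod(steps, axis=0)   # shape (n_steps, n_paths, 2)

sample_corr = np.corrcoef(Z_corr[..., 0].ravel(), Z_corr[..., 1].ravel())[0, 1]
print(f"realized draw correlation ≈ {sample_corr:.2f}")  # ≈ 0.60
```

The same `Z @ L.T` trick extends to any number of assets; only the correlation matrix and the vol vector grow.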
5. Limitations and Edge Cases
GBM assumes constant volatility. In reality, volatility clusters — calm periods are followed by calm periods and turbulent periods by more turbulence. The GARCH family of models addresses this directly. GBM-based cones will underestimate near-term uncertainty after a volatility spike and overestimate it after a prolonged calm period.
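A minimal illustration of the clustering point: a hand-rolled GARCH(1,1) variance recursion, σ²(t) = ω + α·r²(t−1) + β·σ²(t−1), with assumed (not fitted) parameters, shows how a single shock lifts the conditional volatility forecast for days afterward:

```python
import numpy as np

# Assumed GARCH(1,1) parameters (illustrative, not fitted to data)
omega, alpha, beta = 1e-6, 0.08, 0.90
long_run_var = omega / (1 - alpha - beta)   # unconditional daily variance
base_r = np.sqrt(long_run_var)              # "typical" daily return magnitude

# Five typical days, one 5% shock day, then ten typical days
returns = [base_r] * 5 + [0.05] + [base_r] * 10

var, history = long_run_var, []
for r in returns:
    var = omega + alpha * r**2 + beta * var  # GARCH(1,1) recursion
    history.append(np.sqrt(var * 252))       # annualized conditional vol

print(f"vol before shock : {history[4]:.1%}")    # ≈ 11.2%
print(f"vol day of shock : {history[5]:.1%}")    # ≈ 24.9%
print(f"vol 10 days later: {history[-1]:.1%}")   # ≈ 17.3%
```

A constant-vol GBM cone calibrated on long-run history would sit at the 11.2% level throughout, underestimating near-term risk for weeks after the shock.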
Log-returns are not truly normal. Empirical return distributions exhibit fat tails (excess kurtosis) and mild negative skewness. This means the actual probability of extreme moves is higher than GBM predicts. The 99th percentile of the simulated distribution will consistently understate real tail risk for most equities.
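To see how much the normality assumption can understate tails, compare the probability of a 4-sigma daily move under a normal distribution versus a Student-t with 4 degrees of freedom (a common rough fit for daily equity returns; the choice of 4 dof here is illustrative):

```python
from scipy.stats import norm, t

# Probability of a move larger than 4 "daily sigmas" under each model
p_normal = 2 * norm.sf(4.0)

# Student-t with 4 dof rescaled to unit variance (std of t(4) is sqrt(2)),
# so a 4-sigma move corresponds to 4 * sqrt(2) on the raw t scale
df = 4
p_fat = 2 * t.sf(4.0 * (df / (df - 2)) ** 0.5, df)

print(f"normal : {p_normal:.2e}")
print(f"t(4)   : {p_fat:.2e}")
print(f"the t model makes a 4-sigma day ~{p_fat / p_normal:.0f}x more likely")
```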
Implied volatility from a single strike is a rough proxy. A proper implementation should use the full implied volatility surface to extract a model-free variance swap rate (the VIX methodology) rather than the ATM call IV from the nearest expiry. Single-strike IV can be distorted by microstructure effects and low open interest.
Zero drift is a simplifying assumption. Setting DRIFT = 0 is appropriate for risk-neutral option pricing but not for directional equity forecasting. Incorporating an estimated equity risk premium (even a simple 6-8% annualized drift) meaningfully shifts the cone upward over longer horizons.
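The drift effect is easy to quantify: under GBM the median terminal price is S0 · exp((μ − σ²/2)T), so even a modest equity risk premium shifts the cone's center visibly at longer horizons. A quick check with assumed numbers (30% vol, 7% drift):

```python
import numpy as np

S0, sigma = 100.0, 0.30
for T_days in (30, 252):
    T = T_days / 252
    med_rn = S0 * np.exp(-0.5 * sigma**2 * T)             # risk-neutral, mu = 0
    med_drift = S0 * np.exp((0.07 - 0.5 * sigma**2) * T)  # assumed 7% drift
    print(f"{T_days:>3}d  median mu=0: {med_rn:6.2f}   median mu=7%: {med_drift:6.2f}")
```

At 30 days the two medians differ by well under a dollar; at one year the gap grows to roughly seven dollars on a $100 stock, which is why the zero-drift simplification matters mainly for longer horizons.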
Survivorship and data availability bias. yfinance returns clean historical data without adjustments for delistings or extended trading halts. Volatility estimated from this data will be downward-biased relative to a universe that includes distressed or delisted names.
Concluding Thoughts
Monte Carlo simulation with volatility multipliers transforms an abstract mathematical model into a concrete, visual decision-making tool. By running 10,000 paths under both historical and implied volatility, you get two complementary views of forward uncertainty — one anchored in what the stock has done, one anchored in what the market expects it to do. The gap between them is itself informative.
The immediate next experiment is replacing the constant volatility assumption with a GARCH(1,1) model for the historical volatility input. This allows the simulation to condition on the current volatility regime rather than averaging across all past market conditions. A second worthwhile extension is adding correlated assets to simulate joint portfolio outcomes rather than single-stock paths.
If you found this walkthrough useful, the same analytical framework underlies options Greeks estimation, VaR calculation, and dynamic hedging simulation — all of which follow naturally from the Monte Carlo engine built here. More strategies built on this foundation are covered in the AlgoEdge Insights newsletter, where each issue ships with a complete, runnable Python notebook.