Extract and visualize logistic-filtered support and resistance based on pivots, momentum, and range expansion in Python.
The method presented here isolates only the most statistically significant price levels, using a blend of pivot detection, momentum, and volatility.
Instead of plotting every high or low, we filter for zones where price action aligns with strong RSI and range expansion.
We then score the zones with a logistic function, giving a set of key price levels that have shown historical interest.
Each parameter in the process is adjustable, so you can fine-tune the sensitivity for any asset or timeframe as required.
The complete Python notebook for the analysis is provided below.
1. Logistic-Filtered Price Levels
Suppose you’re trading a stock in a volatile week. The price surges, then stalls, prints a sharp reversal, and the chart fills with local highs and lows.
Most trading tools would mark every single zigzag as significant, but this would leave you with a crowded and indecipherable chart.
What you actually want is a way to cut through this noise and focus only on the levels that saw real interest.
Our method extracts only the most relevant support and resistance levels by combining three key signals:
- local pivots,
- momentum shifts,
- and range expansion.
Step 1: Identify Local Pivots
The process begins by scanning the price series for local maxima (highs) and minima (lows) within a moving window.
For each bar, the algorithm checks whether it represents the highest high or lowest low over the lookback period.
The window length is controlled by a parameter (set here to 14 bars) and can be adjusted as needed.
Mathematically, with n equal to half the window size:

- A bar at index i is a pivot high if:

High(i) = max(High(i−n), …, High(i+n))

- It is a pivot low if:

Low(i) = min(Low(i−n), …, Low(i+n))
Pivot detection reduces the chart to only those price points where significant reversals or reactions occurred. These are the candidate levels.
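The centered-window pivot check described above can be sketched in a few lines of pandas. The function name and the use of `rolling(center=True)` are implementation choices of this sketch, not necessarily the original notebook's code:

```python
import pandas as pd

def find_pivots(high: pd.Series, low: pd.Series, window: int = 14):
    """Flag bars that are the extreme of a centered lookback window.

    A bar i is a pivot high if High(i) is the maximum over n = window // 2
    bars on each side, and a pivot low if Low(i) is the minimum.
    Edge bars lack a full window, compare against NaN, and are never flagged.
    """
    n = window // 2
    piv_high = high == high.rolling(2 * n + 1, center=True).max()
    piv_low = low == low.rolling(2 * n + 1, center=True).min()
    return piv_high, piv_low
```

Note that exact ties within a window would flag every tied bar; for a quick sketch that is usually acceptable, but production code may want to break ties by recency.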
Step 2: Contextualize Each Level with Momentum and Range
Not all pivots are important. A pivot formed during a period of weak momentum or low volatility is less likely to matter in the future.
To address this, the method attaches two context signals to every candidate:
- Momentum: Measured using the RSI (Relative Strength Index), which quantifies the speed and change of price movements over the same window.
- Range Expansion: Captured via the ATR (Average True Range), which measures market volatility.

Each pivot is then labeled according to (i) whether it occurred during a period of above-average momentum (RSI > 50) and (ii) whether its candle body exceeded the ATR (a proxy for high-conviction moves).
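A minimal sketch of attaching these two context signals to every bar might look like the following. The column names (`RSI`, `ATR`, `strong_momentum`, `range_expansion`) and the simple rolling-mean variants of RSI and ATR are assumptions of this sketch; Wilder's original smoothing uses an exponential average instead:

```python
import pandas as pd

def context_signals(df: pd.DataFrame, window: int = 14) -> pd.DataFrame:
    """Add RSI, ATR, and the two binary context labels to an OHLC frame."""
    out = df.copy()
    # RSI: average gain vs. average loss over the window (simple-mean variant).
    delta = out["Close"].diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    out["RSI"] = 100 - 100 / (1 + gain / loss)
    # ATR: rolling mean of the true range.
    prev_close = out["Close"].shift()
    true_range = pd.concat([
        out["High"] - out["Low"],
        (out["High"] - prev_close).abs(),
        (out["Low"] - prev_close).abs(),
    ], axis=1).max(axis=1)
    out["ATR"] = true_range.rolling(window).mean()
    # Binary labels fed into the logistic score in the next step.
    out["strong_momentum"] = out["RSI"] > 50
    out["range_expansion"] = (out["Close"] - out["Open"]).abs() > out["ATR"]
    return out
```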
Step 3: Score the Significance with a Logistic Function
Now, the method translates these signals into a single probability score using the logistic (sigmoid) function:

σ(z) = 1 / (1 + e^(−z))

Here, z is the sum of the binary signals: each signal contributes +1 if its threshold is exceeded and −1 otherwise.
This composite score reflects how strongly price, momentum, and volatility align at each level.
The logistic function compresses the result to a value between 0 and 1, which can be interpreted as the “probability” that the level is significant.
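The scoring step reduces to a few lines. In this sketch the three inputs are booleans (pivot confirmed, RSI > 50, body > ATR); the ±1 encoding and the sigmoid follow the formula above:

```python
import math

def logistic_score(pivot_ok: bool, momentum_ok: bool, range_ok: bool) -> float:
    """Map three binary signals to a 0-1 confidence via the sigmoid.

    Each signal contributes +1 if its threshold is exceeded and -1
    otherwise; z is their sum and sigma(z) = 1 / (1 + e^(-z)).
    """
    z = sum(1 if ok else -1 for ok in (pivot_ok, momentum_ok, range_ok))
    return 1.0 / (1.0 + math.exp(-z))
```

With all three signals aligned, z = 3 and the score is about 0.95; with two of three, z = 1 and the score is about 0.73, which explains why a threshold of 0.7 keeps levels with at least two confirming signals.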
Step 4: Filter for Statistical Relevance
Only pivots with a logistic score above a certain threshold (e.g. 0.7) are retained.
This acts as a confidence filter, excluding levels where the signals do not align.
Step 5: Practical Filtering
To avoid clutter, further rules remove levels that:
- Are too far from the current price (using a multiple of ATR),
- Or are too close to an already accepted level (using a min. price gap).
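Both pruning rules can be sketched as a single pass over the candidates. Here higher-scoring levels claim their price neighborhood first; the parameter names mirror the user settings introduced in the next section, but the greedy ordering is an assumption of this sketch:

```python
def filter_levels(candidates, last_price, atr,
                  max_dist_atr=10.0, min_gap=5.0):
    """Prune (price, score) candidates per the distance and spacing rules.

    Drops levels farther than max_dist_atr * atr from the last price,
    and levels within min_gap of an already accepted (stronger) level.
    """
    kept = []
    for price, score in sorted(candidates, key=lambda c: c[1], reverse=True):
        if abs(price - last_price) > max_dist_atr * atr:
            continue  # too far from the current price
        if any(abs(price - k) < min_gap for k in kept):
            continue  # too close to an already accepted level
        kept.append(price)
    return sorted(kept)
```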
2. Key Price Levels in Python
2.1. User Parameters
This block sets up the environment and parameters:
- INTERVAL: Smaller intervals (e.g. “1h”) produce more bars and finer swings (but more noise); larger intervals (e.g. “1wk”) smooth the data and show major trends.
- PIVOT_LEN: A larger window (20–30) catches only major peaks/troughs; a smaller one (5–10) spots subtle turns but can clutter your chart.
- PROB_THRESHOLD: Raising it above 0.8 shows only the most reliable levels; lowering it toward 0.5 adds more candidates with weaker backing.
- MAX_DISTANCE_ATR: A higher multiple (15+) includes distant levels; a lower multiple (3–5) restricts focus to areas near the current price.
- MIN_LEVEL_DIFF: Increasing this gap (e.g. 10) forces widely spaced levels; decreasing it (1–2) allows tighter clusters of nearby levels.
- LOOKBACK_BARS: Set to an integer (like 100) to limit analysis to recent bars for speed; None processes the full date range.
- LINE_CLR, PRICE_CLR, BG_CLR: Tweak these hex codes to match your UI or improve contrast in your plot.
import yfinance as yf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import dates as mdates
from datetime import datetime, timedelta
# ───────── USER SETTINGS
TICKER = "NVDA"
START_DATE = "2023-01-01"
END_DATE = (datetime.today() + timedelta(days=1)).strftime('%Y-%m-%d') # today plus one day
INTERVAL = "1d" # 1d, 1wk, 1mo …
PIVOT_LEN = 14 # Lookback window for pivot detection. Increase: fewer, more significant levels. Decrease: more, but noisier levels.
PROB_THRESHOLD = 0.70 # Minimum logistic probability for a level to show. Increase: fewer, higher-confidence levels. Decrease: more, but weaker levels.
MAX_DISTANCE_ATR = 10 # Max distance (in ATR multiples) from last price to include a level. Increase: more distant levels. Decrease: focus near current price.
MIN_LEVEL_DIFF = 5.0 # Minimum allowed gap between levels (in price units). Increase: fewer, more widely spaced levels. Decrease: more clustered levels.
LOOKBACK_BARS = None # Set to int (e.g. 100) to plot only recent bars. Use None to show all data.
LINE_CLR = "#f23645" # color for levels
PRICE_CLR = "#d0d0d0" # price line color
BG_CLR = "#0d1117" # background color
VOLUME_SCALE = 0.5 # 0.5 cuts bar heights in half
2.2. Fetching Data Function
The fetch function pulls OHLCV data from Yahoo Finance for the chosen ticker and date range at the specified interval, e.g. daily, weekly, or hourly.
Note that Yahoo limits history depending on granularity: for example, hourly data only reaches back about two years.
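A minimal version of such a fetch function is sketched below. The function name, the deferred `yfinance` import, and the column-flattening step are choices of this sketch; the original notebook's version may differ:

```python
import pandas as pd

def fetch_data(ticker: str, start: str, end: str,
               interval: str = "1d") -> pd.DataFrame:
    """Download OHLCV bars from Yahoo Finance via yfinance."""
    import yfinance as yf  # deferred so the sketch parses without the package
    df = yf.download(ticker, start=start, end=end,
                     interval=interval, auto_adjust=True, progress=False)
    if df.empty:
        raise ValueError(f"no data returned for {ticker} at interval {interval}")
    # Recent yfinance versions can return MultiIndex columns even for a
    # single ticker; flatten them to plain Open/High/Low/Close/Volume.
    if isinstance(df.columns, pd.MultiIndex):
        df.columns = df.columns.get_level_values(0)
    return df
```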