Development of an AI-Based Optimal Position Sizing System

Position sizing determines how much capital to allocate to each trade. It matters no less than signal quality: even a good strategy collapses under overly aggressive sizing, and leaves returns on the table when sizing is too conservative. An AI system adapts position size to current market conditions and signal confidence.

Classical Sizing Methods

Fixed Fractional:

Position_Size = Account_Size × f

The simplest approach: f = 1-2% of risk per trade is the retail standard. It does not adapt to signal quality.
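As a minimal sketch, fixed fractional sizing is usually combined with a stop level to turn the risk budget into a share count (the function name and the stop parameters below are illustrative):

```python
def fixed_fractional_size(account_size, risk_fraction, entry_price, stop_price):
    """Shares sized so that a stop-out loses exactly risk_fraction of the account."""
    dollar_risk = account_size * risk_fraction       # the fixed f, e.g. 0.01
    risk_per_share = abs(entry_price - stop_price)   # loss per share if the stop fires
    return dollar_risk / risk_per_share

# $100,000 account, 1% risk, entry at 50.00, stop at 48.00 -> $1,000 / $2 = 500 shares
shares = fixed_fractional_size(100_000, 0.01, 50.0, 48.0)
```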

Kelly Criterion:

def kelly_fraction(win_rate, win_loss_ratio):
    """
    f* = W - (1-W)/R
    W: probability of win, R: average_win / average_loss
    """
    return win_rate - (1 - win_rate) / win_loss_ratio

kelly_f = kelly_fraction(win_rate=0.55, win_loss_ratio=2.0)
# kelly_f = 0.55 - 0.45/2.0 = 0.325 (32.5% of capital!)
# This is full Kelly: too aggressive in practice
half_kelly = kelly_f * 0.5  # practitioners typically bet 25-50% of full Kelly

The Kelly problem: it requires accurate estimates of the win rate and R. Estimation error → overbetting → accelerated ruin.
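To make the estimation-error point concrete, here is a quick sensitivity check (the helper repeats the definition above so the snippet runs standalone):

```python
def kelly_fraction(win_rate, win_loss_ratio):
    # f* = W - (1 - W) / R
    return win_rate - (1 - win_rate) / win_loss_ratio

true_f = kelly_fraction(0.55, 2.0)   # 0.325 with the true edge
over_f = kelly_fraction(0.60, 2.0)   # 0.40: a 5-point win-rate error inflates the bet by ~23%
```

A bet sized on the overestimated edge is persistently too large, which is exactly the path to accelerated ruin described above.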

ML for Dynamic Position Sizing

Confidence-Adjusted Sizing: position size is made proportional to the ML signal's confidence:

def ml_position_size(model_proba, base_size, min_prob=0.55, max_prob=0.80):
    """
    At probability 0.55 → base_size × 0.5 (minimum)
    At probability 0.80 → base_size × 2.0 (maximum)
    """
    if model_proba < min_prob:
        return 0  # no trade
    scale = (model_proba - min_prob) / (max_prob - min_prob)
    return base_size * (0.5 + 1.5 * min(scale, 1.0))

Volatility-Adjusted Sizing: normalize the position size to the instrument's volatility:

def vol_normalized_size(target_risk_pct, price, volatility_daily, account_size):
    """
    Position size: such that 1σ daily move = target_risk_pct of capital
    """
    dollar_risk = account_size * target_risk_pct
    position_value = dollar_risk / volatility_daily
    n_shares = position_value / price
    return n_shares

ATR (Average True Range) serves as a volatility proxy: the higher the ATR, the smaller the lot.
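A sketch of ATR-based sizing in the same spirit as vol_normalized_size above (the function name and the 2-ATR stop distance are assumptions, not a fixed rule):

```python
def atr_position_size(account_size, risk_fraction, atr, atr_multiple=2.0):
    """Size so that an adverse move of atr_multiple * ATR costs risk_fraction of the account."""
    dollar_risk = account_size * risk_fraction
    stop_distance = atr_multiple * atr   # stop placed atr_multiple ATRs away from entry
    return dollar_risk / stop_distance

# Doubling the ATR halves the lot: on $100k at 1% risk, ATR 1.5 -> ~333 shares, ATR 3.0 -> ~167
```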

RL for Adaptive Sizing

The RL agent learns optimal sizing for the current context:

Agent State:

  • ML-signal confidence (probability score)
  • Current volatility (ATR / 10-day realized vol)
  • Drawdown from peak (take less risk when already in a drawdown)
  • Macro regime (expansion vs. contraction)
  • Portfolio correlation: if the position is highly correlated with open ones, reduce its size

Actions: discrete space [0%, 0.5%, 1%, 1.5%, 2%, 3%] risk per trade.

Reward:

Reward = PnL / max_drawdown_penalty

The agent learns not just to maximize return but also to limit drawdown.
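The pieces above can be wired together roughly as follows; the state layout, the drawdown floor, and all names are illustrative assumptions, not the production agent:

```python
# Discrete action space from the text: risk per trade
ACTIONS = [0.0, 0.005, 0.01, 0.015, 0.02, 0.03]

def make_state(signal_proba, realized_vol, drawdown, macro_regime, avg_correlation):
    """Feature vector matching the five state components listed above."""
    return [signal_proba, realized_vol, drawdown, macro_regime, avg_correlation]

def reward(pnl, max_drawdown, dd_floor=0.01):
    """PnL divided by the max drawdown penalty, floored so small drawdowns don't explode it."""
    return pnl / max(max_drawdown, dd_floor)
```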

Risk Parity Sizing for Portfolio

When managing multiple positions simultaneously:

import numpy as np

def risk_parity_position_sizes(signals, volatilities, correlations, target_portfolio_vol):
    """
    Position sizes such that each position contributes an equal share of portfolio risk
    """
    n = len(signals)
    w = np.ones(n) / n  # start from an equal-weight allocation
    # The covariance matrix does not change, so build it once outside the loop
    cov = np.diag(volatilities) @ correlations @ np.diag(volatilities)

    for _ in range(100):  # fixed-point iteration toward equal risk contributions
        portfolio_vol = np.sqrt(w @ cov @ w)
        marginal_risk = cov @ w / portfolio_vol
        risk_contributions = w * marginal_risk

        # Shrink weights with outsized risk contributions, grow the rest
        w = w / risk_contributions
        w = w / w.sum()  # renormalize to sum to 1

    # Scale to the annualized target volatility (assumes daily volatilities)
    portfolio_vol = np.sqrt(w @ cov @ w) * np.sqrt(252)
    w = w * (target_portfolio_vol / portfolio_vol)
    return w

Drawdown-Adjusted Sizing (Anti-Martingale)

As equity grows, absolute position size grows with it (compound growth); in a drawdown, risk is cut back:

def drawdown_adjusted_size(base_risk, current_equity, peak_equity, max_drawdown=0.20):
    """
    At drawdown > max_drawdown: stop trading entirely (circuit breaker)
    At drawdown between 0 and max_drawdown: linear risk reduction
    """
    drawdown = (peak_equity - current_equity) / peak_equity
    if drawdown > max_drawdown:
        return 0  # circuit breaker
    reduction_factor = 1 - (drawdown / max_drawdown)
    return base_risk * reduction_factor

Simulation and Sizing Strategy Assessment

# Compare sizing approaches (the sizing helpers here are illustrative pseudocode)
strategies = {
    'fixed_1pct': lambda signal, vol, eq: fixed_fractional(1.0),
    'fixed_kelly': lambda signal, vol, eq: half_kelly_sizing(signal),
    'vol_normalized': lambda signal, vol, eq: vol_normalized(vol, target_risk=1.0),
    'ml_adaptive': lambda signal, vol, eq: ml_size(signal.probability, vol, eq.drawdown),
}

for name, size_fn in strategies.items():
    equity_curve = simulate_strategy(trades, size_fn)
    print(f"{name}: Sharpe={sharpe(equity_curve):.2f}, MaxDD={max_drawdown(equity_curve):.1%}")

A Monte Carlo simulation of 10,000 paths per sizing method compares the distribution of outcomes rather than a single equity curve.
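One way to run such a simulation is to bootstrap the order of historical per-trade returns; a minimal stdlib-only sketch (function and variable names are illustrative):

```python
import random

def monte_carlo_paths(trade_returns, n_paths=10_000, seed=0):
    """Resample trade order with replacement; return (final_equity, max_drawdown) per path."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_paths):
        equity, peak, max_dd = 1.0, 1.0, 0.0
        for r in rng.choices(trade_returns, k=len(trade_returns)):
            equity *= 1 + r
            peak = max(peak, equity)
            max_dd = max(max_dd, (peak - equity) / peak)
        results.append((equity, max_dd))
    return results
```

Comparing the 5th-percentile final equity and the tail of the drawdown distribution across sizing methods is usually more informative than comparing means.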

Typical Result: ML adaptive sizing improves Sharpe by 15-30% and reduces max drawdown by 20-40% vs. fixed fractional with same entry signals.

Timeline: volatility-adjusted sizing with drawdown adjustment takes 2-3 weeks; RL-based adaptive sizing with risk parity and full simulation takes 6-8 weeks.