Development of AI-based Portfolio Optimization System
Portfolio optimization is the problem of finding the optimal allocation of capital across assets. Classical Markowitz mean-variance optimization (MVO) suffers from estimation error: the resulting portfolios are hypersensitive to the input parameters. AI approaches provide more robust solutions through Bayesian estimation, machine learning, and reinforcement learning.
Problems with Classical Markowitz
Estimation Error: Expected returns are estimated very poorly from history. The standard error of the mean estimate is σ/√T. For a 10-year daily history (T ≈ 2520 observations) with annual σ ≈ 20%: daily σ ≈ 1.26%, so SE ≈ 1.26%/√2520 ≈ 0.025% daily, which annualizes to σ/√10 ≈ 6.3% per year, the same order of magnitude as the equity risk premium itself. With such noise, optimization chases noise.
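The scale of this noise is easy to check by simulation. A minimal sketch, assuming i.i.d. normal daily returns with an illustrative 7% true annual mean:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2520                       # ~10 years of daily observations
true_mu_annual = 0.07          # assumed true annual mean return (illustrative)
sigma_annual = 0.20            # assumed annual volatility
mu_d = true_mu_annual / 252
sigma_d = sigma_annual / np.sqrt(252)

# Estimate the annualized mean from many independent simulated 10-year histories
estimates = [252 * rng.normal(mu_d, sigma_d, T).mean() for _ in range(2000)]
se_annual = np.std(estimates)

print(f"empirical SE of annualized mean:  {se_annual:.3f}")
print(f"theoretical sigma/sqrt(10 years): {sigma_annual / np.sqrt(10):.3f}")
```

Both numbers come out near 6.3% per year: the uncertainty in the mean is comparable to the mean itself.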
Concentrated Portfolios: MVO is prone to corner solutions: it puts everything into a few assets with the best historical Sharpe ratios. This is overfitting to historical data.
Ill-Conditioned Covariances: with a large number of assets relative to the number of observations, the sample covariance matrix becomes ill-conditioned, and its inversion is numerically unstable.
AI Improvements in Portfolio Optimization
1. Bayesian Expected Returns (Black-Litterman):
from pypfopt import BlackLittermanModel, risk_models, expected_returns
# Market equilibrium returns (CAPM prior)
market_prices = ...
cov_matrix = risk_models.sample_cov(prices)
mu = expected_returns.capm_return(prices, market_prices=market_prices)
# Absolute views: AAPL +5%, MSFT +2% (i.e. AAPL outperforms MSFT by 3%)
viewdict = {'AAPL': 0.05, 'MSFT': 0.02}
bl = BlackLittermanModel(cov_matrix, pi=mu, absolute_views=viewdict)
bl_returns = bl.bl_returns()
Black-Litterman combines prior (market equilibrium) with investor views, giving more stable expectations.
2. ML Expected Returns: XGBoost/LSTM predict forward returns over the optimization horizon (1 month, one quarter). The model uses momentum, value, and quality factors; the predicted returns replace historical means as μ.
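A minimal sketch of the idea on synthetic data, using sklearn's gradient boosting in place of XGBoost; the factor names, premia, and data here are all assumed for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500

# Hypothetical cross-sectional factor exposures per asset-month
X = rng.normal(size=(n, 3))             # columns: momentum, value, quality
beta = np.array([0.02, 0.01, 0.015])    # assumed factor premia
y = X @ beta + rng.normal(0, 0.05, n)   # noisy 1-month-forward returns

model = GradientBoostingRegressor(n_estimators=200, max_depth=2, random_state=0)
model.fit(X[:400], y[:400])

# Predicted forward returns replace historical means as mu in the optimizer
mu_pred = model.predict(X[400:])
```

In production the features would be real factor exposures and the split would be strictly chronological to avoid look-ahead.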
3. Shrinkage Covariance:
from sklearn.covariance import LedoitWolf
lw = LedoitWolf()
cov_matrix = lw.fit(returns).covariance_
Ledoit-Wolf shrinkage yields a better-conditioned covariance estimate when the number of assets is large relative to the number of observations.
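The conditioning improvement is easy to demonstrate on synthetic data where the asset count is close to the sample size (dimensions chosen for illustration):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
T, p = 120, 80                      # 120 observations, 80 assets: p close to T
returns = rng.normal(0, 0.01, size=(T, p))

sample_cov = np.cov(returns, rowvar=False)
lw_cov = LedoitWolf().fit(returns).covariance_

# Shrinkage pulls extreme eigenvalues toward the average,
# sharply reducing the condition number of the matrix
print("sample cond:", np.linalg.cond(sample_cov))
print("shrunk cond:", np.linalg.cond(lw_cov))
```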
Alternative Objective Functions
Instead of max Sharpe (mean-variance):
Minimum Variance:
from pypfopt import EfficientFrontier
ef = EfficientFrontier(None, cov_matrix) # None = ignore returns
ef.min_volatility()
weights = ef.clean_weights()
Doesn't use expected returns → doesn't suffer from their estimation error, and often outperforms mean-variance out-of-sample.
Risk Parity / Equal Risk Contribution: Each asset contributes equal amount to total portfolio risk:
# Via the specialized library riskfolio-lib
import riskfolio as rp
port = rp.Portfolio(returns=returns_df)
port.assets_stats(method_mu='hist', method_cov='hist')  # estimate mu and cov first
w = port.rp_optimization(model='Classic', rm='MV')      # equal risk contribution
Risk parity is popular in hedge funds (Bridgewater's All Weather is the classic example).
Maximum Diversification: Maximize the ratio of the weighted average asset volatility to portfolio volatility. Theoretically maximizes the diversification benefit.
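The diversification ratio DR(w) = (w·σ) / √(wᵀΣw) can be maximized with a generic solver. A sketch on a toy covariance matrix (illustrative numbers):

```python
import numpy as np
from scipy.optimize import minimize

cov = np.array([[0.040, 0.010, 0.002],
                [0.010, 0.090, 0.003],
                [0.002, 0.003, 0.160]])   # toy covariance matrix
vols = np.sqrt(np.diag(cov))              # individual asset volatilities
n = len(vols)

def neg_div_ratio(w):
    # Negative diversification ratio: weighted avg vol / portfolio vol
    return -(w @ vols) / np.sqrt(w @ cov @ w)

res = minimize(neg_div_ratio, np.ones(n) / n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
               bounds=[(0, 1)] * n)
w_md = res.x
```

For an equal-weight portfolio DR is modest; the optimized weights push it higher by tilting toward weakly correlated assets.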
RL for Dynamic Allocation
An RL agent manages the portfolio as a sequential decision process:
- State: returns, volatility, macro factors, portfolio weights
- Action: delta weights (how to change allocation)
- Reward: risk-adjusted return (Sharpe increment)
Frameworks:
# FinRL: specialized framework for RL in trading
# (module paths vary between FinRL versions; check your installed version)
from finrl.meta.env_portfolio_optimization import StockPortfolioEnv
from stable_baselines3 import PPO
env = StockPortfolioEnv(df=data, stock_dim=30, ...)
model = PPO("MlpPolicy", env)
model.learn(total_timesteps=100000)
An RL agent naturally accounts for transaction costs during rebalancing, which single-period classical optimization ignores.
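A minimal framework-free sketch of how transaction costs enter the reward (a hypothetical toy environment, not FinRL's API):

```python
import numpy as np

class PortfolioEnv:
    """Toy portfolio environment: action = target weights, reward = net return."""

    def __init__(self, returns, tc=0.001):
        self.returns = returns          # (T, n_assets) array of period returns
        self.tc = tc                    # proportional transaction cost per unit of turnover
        self.t = 0
        self.weights = np.ones(returns.shape[1]) / returns.shape[1]

    def step(self, target_weights):
        turnover = np.abs(target_weights - self.weights).sum()
        gross = target_weights @ self.returns[self.t]
        reward = gross - self.tc * turnover   # costs reduce the reward directly
        self.weights, self.t = target_weights, self.t + 1
        return reward, self.t >= len(self.returns)

env = PortfolioEnv(np.array([[0.01, 0.02], [0.00, 0.00]]), tc=0.001)
reward, done = env.step(np.array([1.0, 0.0]))   # full rebalance into asset 0
```

Because turnover is penalized inside the reward, the agent learns to rebalance only when the expected gain exceeds the cost.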
Constraints and Practical Limitations
Real Constraints:
ef = EfficientFrontier(mu, cov_matrix, weight_bounds=(0, 0.15))  # no shorting, max 15% per asset
# Sector constraints: cap the tech sector at 30% via the built-in helper
# sector_mapper: dict {ticker: sector}
ef.add_sector_constraints(sector_mapper, sector_lower={}, sector_upper={'tech': 0.30})
# ESG: exclude companies with ESG score < threshold
excluded_idx = [ef.tickers.index(t) for t in esg_screener(universe)]
ef.add_constraint(lambda w: w[excluded_idx] == 0)
Transaction-Cost-Aware Optimization:
# Account for trade costs when rebalancing
tc = 0.001  # 10 bps per unit of turnover
new_weights, metrics = optimize_with_tc(
    current_weights, target_weights, returns, cov, tc
)
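The optimize_with_tc helper above is a placeholder. One way it could be sketched (hypothetical name optimize_with_tc_sketch): maximize expected utility minus an L1 turnover penalty. The L1 term is non-smooth, so a convex solver such as cvxpy would be more robust in production; SLSQP is used here for a self-contained illustration:

```python
import numpy as np
from scipy.optimize import minimize

def optimize_with_tc_sketch(w_prev, mu, cov, tc=0.001, risk_aversion=1.0):
    """Maximize mu'w - lambda * w'Sigma w - tc * ||w - w_prev||_1, long-only."""
    n = len(w_prev)

    def neg_utility(w):
        return (-(w @ mu)
                + risk_aversion * (w @ cov @ w)
                + tc * np.abs(w - w_prev).sum())

    res = minimize(neg_utility, w_prev,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
                   bounds=[(0, 1)] * n)
    return res.x

w_prev = np.array([0.5, 0.5])
mu = np.array([0.10, 0.05])
cov = np.diag([0.04, 0.04])
w_lo = optimize_with_tc_sketch(w_prev, mu, cov, tc=0.0)   # free trading
w_hi = optimize_with_tc_sketch(w_prev, mu, cov, tc=0.05)  # expensive trading
```

With high costs the optimizer stays close to the current weights; with zero costs it moves all the way to the unconstrained optimum.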
Backtesting Portfolio Strategy
Expanding Window Simulation:
for rebalance_date in rebalance_dates:
    # Train only on data strictly before rebalance_date (no look-ahead)
    train_returns = returns[returns.index < rebalance_date]
    # Optimize on the training window
    weights = optimize_portfolio(train_returns)
    # Hold the weights until the next rebalancing
    portfolio_returns.append(
        apply_weights(returns[next_period], weights)
    )
Metrics: Sharpe, Calmar, Max Drawdown, Turnover (% of portfolio traded on rebalancing), Transaction Cost Drag.
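These metrics can be computed directly from the backtest output. A sketch (annualization assumes daily returns and a zero risk-free rate):

```python
import numpy as np

def backtest_metrics(returns, weights_history):
    """returns: (T,) realized portfolio returns;
    weights_history: list of weight arrays, one per rebalance."""
    ann_ret = returns.mean() * 252
    ann_vol = returns.std() * np.sqrt(252)
    sharpe = ann_ret / ann_vol
    equity = np.cumprod(1 + returns)
    max_dd = 1 - (equity / np.maximum.accumulate(equity)).min()
    calmar = ann_ret / max_dd if max_dd > 0 else np.nan
    # Turnover: average fraction of the portfolio traded per rebalance
    turnover = np.mean([np.abs(w1 - w0).sum()
                        for w0, w1 in zip(weights_history, weights_history[1:])])
    return {"sharpe": sharpe, "calmar": calmar,
            "max_dd": max_dd, "turnover": turnover}

m = backtest_metrics(np.array([0.01, -0.02, 0.01, 0.005]),
                     [np.array([0.5, 0.5]), np.array([0.6, 0.4])])
```

Transaction cost drag then follows as turnover × cost per trade × rebalances per year.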
Timeline: Markowitz + Black-Litterman with monthly rebalancing: 4-6 weeks. RL agent + risk parity + TC-aware optimization with backtesting: 3-4 months.