Predict markets. Simulate crowds. Find your edge.
12 AI agents with distinct personas debate in real time, powered by 26 mathematical methods
including Dempster-Shafer evidence theory, MCMC sampling, Shapley attribution, and copula modeling, and informed by 23 live data feeds.
Ask any yes/no question. Get a calibrated probability with confidence intervals, Bayesian posterior, extremized estimate, and edge vs market odds.
$ python main.py forecast "Will BTC hit $100k?" --odds 0.42

Weighted: 48.7% | Bayesian: 46.2%
Extremized: 51.3% | LogOP: 47.8%
95% CI: [38.1%, 57.3%]
MC median: 47.5% | Entropy: 0.998 bits
Market: 42% → Edge: +6.7%
Feed it any event. Watch 12 personas react in real time: sentiment shifts, price impact, crowd narrative, second-order effects.
$ python main.py scenario "Fed cuts rates 50bps"
BULLISH | Sentiment: +0.52 | Impact: +8.3%
"Risk assets rally as liquidity expectations
shift. BTC leads, alts follow with lag."
→ Funding rates spike within 30 min
→ Shorts liquidated across major pairs
Seven aggregation methods. Game theory. Information theory. Peer-reviewed science.
Each agent's estimate is treated as evidence. Beliefs update via Bayes' theorem with KL-divergence weighting. More surprising estimates carry more information.
P(H|E) = P(E|H)·P(H) / P(E)
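A minimal sketch of the idea in Python. The exact weighting scheme is an assumption for illustration: each estimate's KL divergence from the prior measures how surprising it is, and more surprising estimates get more pull when pooling in log-odds space.

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def logit(p):
    return math.log(p / (1 - p))

def kl_weighted_update(prior, estimates):
    """Weight each agent's estimate by its surprise (KL divergence
    from the prior), then pool the estimates in log-odds space."""
    weights = [kl_bernoulli(p, prior) + 1e-6 for p in estimates]
    total = sum(weights)
    pooled = sum(w / total * logit(p) for w, p in zip(weights, estimates))
    return 1 / (1 + math.exp(-pooled))

posterior = kl_weighted_update(0.5, [0.40, 0.55, 0.70])
```

With these inputs the 0.70 estimate is the most surprising relative to the 50% prior, so it dominates the pooled belief.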
Runs 5,000 simulations, treating each agent's estimate as a beta distribution parameterized by probability and confidence. Produces percentiles, skew, and threshold probabilities.
P(>50%) = 0.63 | Skew: -0.12
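A sketch of the simulation loop. The mapping from confidence to beta pseudo-counts is an assumption, not the project's exact parameterization:

```python
import random
import statistics

def agent_beta_params(p, confidence):
    """Map (probability, confidence) to Beta(alpha, beta): higher
    confidence means more pseudo-observations and a tighter
    distribution. The confidence-to-pseudo-count mapping is assumed."""
    n = 2 + confidence * 48
    return p * n, (1 - p) * n

def monte_carlo(agents, runs=5000, seed=0):
    rng = random.Random(seed)
    draws = []
    for _ in range(runs):
        # One consensus draw = the mean of one sample per agent.
        sample = [rng.betavariate(*agent_beta_params(p, c)) for p, c in agents]
        draws.append(sum(sample) / len(sample))
    draws.sort()
    return {
        "median": statistics.median(draws),
        "p5": draws[int(0.05 * runs)],
        "p95": draws[int(0.95 * runs)],
        "P(>50%)": sum(d > 0.5 for d in draws) / runs,
    }

mc = monte_carlo([(0.45, 0.8), (0.55, 0.6), (0.40, 0.9)])
```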
Based on Satopää and Tetlock's IARPA research. Transforms estimates to log-odds space and applies an extremizing factor d to correct the systematic under-confidence of crowd averages.
logit(p_ext) = d · mean(logit(p_i))
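The formula above in runnable form. The value d = 1.73 is a commonly cited choice from the literature; the project's factor may differ:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def extremize(probs, d=1.73):
    """Average in log-odds space, then push the result away from 50%
    by the extremizing factor d (d > 1 sharpens the consensus)."""
    pooled = d * sum(logit(p) for p in probs) / len(probs)
    return 1 / (1 + math.exp(-pooled))

p_ext = extremize([0.60, 0.65, 0.70])
```

Note that extremizing moves the pooled estimate further from 50% than a plain log-odds average (d = 1) would.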
Prelec et al. (2017, Nature). The correct answer is often more popular than people predict. Exploits the private information agents leak through meta-predictions.
SP = actual_mean - predicted_mean
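The signal itself is a one-line difference; a positive value means "yes" is more popular among the agents than they predicted it would be:

```python
def surprisingly_popular(votes, meta_predictions):
    """votes: each agent's own yes-probability. meta_predictions: each
    agent's guess at the crowd's average. A positive result means 'yes'
    is surprisingly popular."""
    actual_mean = sum(votes) / len(votes)
    predicted_mean = sum(meta_predictions) / len(meta_predictions)
    return actual_mean - predicted_mean

sp = surprisingly_popular([0.70, 0.60, 0.65], [0.50, 0.55, 0.45])
```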
Multiplicative combination in log space. Satisfies external Bayesianity — if agents are independent Bayesians with shared likelihood, recovers the correct posterior.
p_log = Π(p_i^w_i) / Z
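A direct implementation of the formula: a weighted geometric mean of each p_i and its complement, renormalized so the result is a probability (equal weights assumed here):

```python
import math

def log_opinion_pool(probs, weights=None):
    """Weighted geometric mean of each p_i and (1 - p_i), renormalized
    so the pooled value is a valid probability."""
    n = len(probs)
    w = weights or [1 / n] * n
    num = math.prod(p ** wi for p, wi in zip(probs, w))
    den = math.prod((1 - p) ** wi for p, wi in zip(probs, w))
    return num / (num + den)

p_log = log_opinion_pool([0.60, 0.70, 0.80])
```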
Performance-based weighting from expert elicitation theory. Weights by calibration AND informativeness. Unqualified agents are pruned from the pool.
w_i = calibration_i × info_i
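A sketch of the weighting-and-pruning step. The scores and cutoff are illustrative values, not the project's calibration pipeline:

```python
def classical_model_weights(agents, cutoff=0.05):
    """agents: list of (calibration, informativeness) score pairs.
    Agents below the calibration cutoff are pruned (weight 0); the
    rest are weighted by calibration x informativeness, normalized
    to sum to 1. Scores and cutoff here are illustrative."""
    raw = [cal * info if cal >= cutoff else 0.0 for cal, info in agents]
    total = sum(raw)
    return [r / total for r in raw]

weights = classical_model_weights([(0.8, 1.2), (0.6, 0.9), (0.02, 2.0)])
```

The third agent's calibration falls below the cutoff, so it is pruned from the pool entirely.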
Resamples agent estimates 1,000 times to produce a 95% confidence interval, quantifying the uncertainty in the swarm's consensus.
95% CI: [38.1%, 57.3%]
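A percentile bootstrap over the agent estimates, sketched with the standard library:

```python
import random
import statistics

def bootstrap_ci(estimates, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample the estimates with replacement
    and take the alpha/2 and 1 - alpha/2 quantiles of the resampled
    means."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(estimates, k=len(estimates)))
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

lo, hi = bootstrap_ci([0.40, 0.45, 0.52, 0.48, 0.60])
```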
HHI-based clustering analysis detects when agents converge suspiciously. Flags contrarian signals when the herd is wrong.
Herding: 0.72 → contrarian signal
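One simple way to compute an HHI over agent estimates (binning is an assumed detail): the index is 1.0 when every agent lands in the same bin, and 1/bins under uniform spread:

```python
def herding_index(probs, bins=10):
    """Herfindahl-Hirschman index over binned probability estimates.
    1.0 = every agent in one bin (maximal herding); 1/bins = uniform
    spread. Binning is one simple way to apply HHI here."""
    counts = [0] * bins
    for p in probs:
        counts[min(int(p * bins), bins - 1)] += 1
    n = len(probs)
    return sum((c / n) ** 2 for c in counts)

herded = herding_index([0.51, 0.52, 0.53, 0.54, 0.55])
spread = herding_index([0.05, 0.25, 0.45, 0.65, 0.85])
```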
Tracks how beliefs shift between debate rounds. Detects when agents flip sides, who moved most, and whether convergence was genuine.
Convergence: 78% | 2 agents flipped
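The flip and movement counts can be sketched by comparing per-agent probabilities across two rounds (a minimal version of the tracking described above):

```python
def debate_shift(round_a, round_b, threshold=0.5):
    """Compare two debate rounds of per-agent probabilities: count
    side flips across the 50% line and sum total belief movement."""
    flips = sum((a > threshold) != (b > threshold)
                for a, b in zip(round_a, round_b))
    movement = sum(abs(b - a) for a, b in zip(round_a, round_b))
    return flips, movement

flips, movement = debate_shift([0.40, 0.60, 0.70], [0.55, 0.58, 0.72])
```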
Checks if the consensus is stable — would any agent benefit from deviating? Unstable equilibria signal low-confidence forecasts.
Stability: 0.92 → consensus holds
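One way to make the deviation check concrete (an illustrative payoff model, not necessarily the project's): under a Brier-score payoff, an agent's expected gain from deviating from consensus c back to its own belief b works out to exactly (b - c)², so stability is the fraction of agents for whom that gain is negligible.

```python
def expected_brier(belief, report):
    """Expected Brier score of reporting `report` when you believe the
    event occurs with probability `belief` (lower is better)."""
    return belief * (1 - report) ** 2 + (1 - belief) * report ** 2

def consensus_stability(beliefs, consensus, eps=0.01):
    """Fraction of agents whose expected-score gain from deviating to
    their own estimate is below eps. The gain simplifies to
    (belief - consensus)**2, so this measures how tightly beliefs
    cluster around the consensus."""
    stable = sum(
        expected_brier(b, consensus) - expected_brier(b, b) < eps
        for b in beliefs
    )
    return stable / len(beliefs)

stability = consensus_stability([0.48, 0.52, 0.70], 0.50)
```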
Pairwise agreement matrix between all agents. Identifies most aligned and most divergent pairs to surface hidden consensus patterns.
Most divergent: Skeptic | Native
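Extracting the extreme pairs from point estimates is straightforward; a full agreement matrix can compare richer signals, but absolute difference illustrates the idea:

```python
from itertools import combinations

def agreement_extremes(estimates):
    """estimates: {agent_name: probability}. Return the most aligned
    and most divergent pairs by absolute difference in estimates."""
    diffs = {(a, b): abs(estimates[a] - estimates[b])
             for a, b in combinations(estimates, 2)}
    return min(diffs, key=diffs.get), max(diffs, key=diffs.get)

aligned, divergent = agreement_extremes(
    {"Quant": 0.50, "Skeptic": 0.30, "Native": 0.72}
)
```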
Every forecast is tracked. When markets resolve, per-agent Brier scores update. Better-calibrated agents automatically gain more weight over time.
Brier: 0.12 → weight: 1.38x
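A sketch of performance-based reweighting. The exact Brier-to-weight mapping is an assumption for illustration; the principle is that lower Brier scores earn weights above 1:

```python
def performance_weights(brier_scores, floor=1e-3):
    """Inverse-Brier weighting, normalized so the average weight is
    1.0. The floor avoids division by zero for a perfect score."""
    inv = [1 / max(s, floor) for s in brier_scores]
    mean_inv = sum(inv) / len(inv)
    return [v / mean_inv for v in inv]

weights = performance_weights([0.12, 0.20, 0.30])
```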
Macro analyst, quant trader, crypto native, skeptic, options trader — each with documented biases that create genuine disagreement.
Agents see each other's reasoning and update beliefs. Weak arguments collapse. Strong ones survive. Information cascades are tracked.
Funding rates, options flow, DeFi TVL, on-chain metrics, social sentiment, prediction market odds — fetched in parallel.
Brier scores per agent, updated on resolution. Better agents gain weight over time. The swarm improves automatically.
Claude, GPT-4o, Llama, Mistral, Ollama — swap with one env var. Run fully local and private.
FastAPI server with interactive docs. Typer CLI for terminal. Docker for deployment. Integrate into any pipeline.
Every agent brings a different lens. That's the point.
23 free APIs, fetched in parallel, no API keys required.
# Install
$ git clone https://github.com/defidaddydavid/polyswarm.git
$ cd polyswarm && pip install -r requirements.txt
$ cp .env.example .env  # add your API key

# Forecast
$ python main.py forecast "Will BTC hit $100k?" --odds 0.45

# Simulate
$ python main.py scenario "SEC bans crypto staking"

# API server
$ python main.py serve  # → localhost:8000/docs
Built on peer-reviewed forecasting science.
Satopää et al. (2014) — Extremized aggregation corrects the systematic under-confidence found in averaged probability forecasts.
Prelec et al. (2017) — The "Surprisingly Popular" algorithm exploits meta-cognitive information to recover the truth even when majorities are wrong.
Cooke's Classical Model for expert elicitation — weight forecasters by empirical calibration and informativeness, not just confidence.
Logarithmic opinion pools satisfy external Bayesianity — theoretically optimal when agents share likelihoods.
MIT Licensed. Open source. Run locally or in the cloud.