Evaluation & Deliverables

The 30-Day Evaluation Structure

A BidOptic Evaluation Agreement is structured as a single 30-day engagement. The deliverable is a signed findings report presenting three primary metrics against your calibrated baseline. These metrics form the basis of the commercial conversation at day 30.


What the Container Produces

At the end of each run, BidOptic writes three files to your client/output/ directory. All files are written locally — they never leave your VPC.

client/output/
├── market_intelligence.json   ← Segment profiles, economic viability verdict, dataset summary
├── latency_profile.json       ← Bid latency distribution, timeout analysis, forfeited revenue
└── simulation_history.csv     ← Per-step spend, wins, clicks, conversions, and ROAS (captured from the first seed run, seed 42)

market_intelligence.json is the primary calibration output. It contains three sections:

  • dataset_summary — row count, historical win rate, conversion count, days of data, and average daily impression volume. Note: historical_win_rate_pct reflects your market's observed win rate from the training data. Your strategy's win rate in simulation will differ based on your bidding logic — this is expected and is the signal the simulation is designed to surface.
  • economic_viability — a plain-English viability verdict, global CTR and CVR rates, the implied auction-to-conversion rate, and your calibrated daily spend and suggested 7-day budget guardrail.
  • segment_intelligence — one entry per user segment, sorted by average order value descending. Each entry includes a label (Whale, High Intent, Engaged, High Value, Volume, Toxic, or Standard), a one-sentence strategy recommendation, population share, CTR, CVR, and average order value.
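As a sketch of how the segment data can feed back into your bid logic, the snippet below filters out segments you may want to suppress. The JSON structure is illustrative, built only from the field names described above; the exact keys and shapes in your container version may differ.

```python
import json

# Illustrative sample of the market_intelligence.json structure described
# above; real field names and shapes may differ in your container version.
sample = json.loads("""
{
  "dataset_summary": {"row_count": 1200000, "historical_win_rate_pct": 31.4},
  "segment_intelligence": [
    {"label": "Whale", "avg_order_value": 240.0, "population_share_pct": 1.2,
     "ctr": 0.021, "cvr": 0.051},
    {"label": "Toxic", "avg_order_value": 3.1, "population_share_pct": 4.0,
     "ctr": 0.002, "cvr": 0.0004},
    {"label": "Standard", "avg_order_value": 38.5, "population_share_pct": 70.0,
     "ctr": 0.009, "cvr": 0.012}
  ]
}
""")

# Flag segments worth suppressing: labelled Toxic, or with near-zero CVR.
suppress = [s["label"] for s in sample["segment_intelligence"]
            if s["label"] == "Toxic" or s["cvr"] < 0.001]
print(suppress)  # -> ['Toxic']
```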

latency_profile.json contains the latency audit results: total auctions simulated, bids exceeding the timeout threshold, latency leak percentage, and forfeited revenue in USD.

simulation_history.csv contains the full per-step telemetry from Seed 42, suitable for plotting spend curves and ROAS trajectories over the simulation period.
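A minimal sketch of working with that telemetry, using a made-up three-row slice: the column names below follow the per-step fields listed above, but the actual headers in your simulation_history.csv may differ.

```python
import csv
import io

# Hypothetical slice of simulation_history.csv; headers are illustrative.
sample_csv = """step,spend,wins,clicks,conversions,revenue
1,120.0,300,9,1,95.0
2,110.0,280,8,2,190.0
3,130.0,310,10,1,88.0
"""

total_spend = total_revenue = 0.0
for row in csv.DictReader(io.StringIO(sample_csv)):
    total_spend += float(row["spend"])
    total_revenue += float(row["revenue"])

# Cumulative ROAS over the slice: total revenue divided by total spend.
cumulative_roas = total_revenue / total_spend
print(round(cumulative_roas, 3))  # -> 1.036
```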


The Three Deliverable Metrics

At the conclusion of the 30-day evaluation, BidOptic will present a findings report structured around three primary metrics.

1. Latency Leak Percentage

The fraction of technically winnable auctions lost because your bidding stack's response time exceeded the exchange timeout threshold.

The simulation measures your actual enrich + bid wall-clock time at every step. It then applies the calibrated Infrastructure Latency Twin (trained on your bid_latency_ms column, or synthesised from market priors if absent) to derive a total round-trip time distribution. Auctions where the modelled total latency exceeds the BID_TIMEOUT_MS threshold are counted as latency losses, even if your bid price would have won.
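The counting rule above can be sketched in a few lines. The latency samples and timeout value below are invented for illustration; in a real run the total round-trip times come from the calibrated latency twin, not a hard-coded list.

```python
# Sketch of the latency-loss counting rule. BID_TIMEOUT_MS and the sample
# auctions are placeholders, not values from a real BidOptic run.
BID_TIMEOUT_MS = 100.0

# (total_latency_ms, would_have_won_on_price) per simulated auction
auctions = [(42.0, True), (97.0, True), (131.0, True),
            (88.0, False), (145.0, True), (103.0, False)]

# Only technically winnable auctions count toward the leak.
winnable = [a for a in auctions if a[1]]
latency_losses = [a for a in winnable if a[0] > BID_TIMEOUT_MS]
leak_pct = 100.0 * len(latency_losses) / len(winnable)
print(leak_pct)  # -> 50.0
```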

A Latency Leak of 3–8% is common in production DSP deployments. Reducing it is often the highest-leverage engineering intervention available. The forfeited revenue figure in latency_profile.json translates this percentage into a USD figure against your calibrated market.

2. Strategy Funnel Accuracy

The simulation reconstructs your bidding funnel (Strategy Win Rate, CTR, CVR, and resulting CPA/ROAS) based on the decisions made by your ClientStrategy.

The pre-tuning baseline compares these simulated funnel metrics against your actual DSP logs; the post-tuning figure reflects the funnel improvement after one or more iterations informed by segment-level calibration data. Together they quantify exactly where your existing strategy is leaking value (e.g., a high win rate with zero clicks, or winning expensive inventory with low CVR).
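The funnel arithmetic itself is simple; the sketch below uses invented counts and the standard DSP conventions (win rate over bids, CTR over won impressions, CVR over clicks), which is an assumption about how the report defines them.

```python
# Invented counts for illustration; metric definitions assume the standard
# DSP conventions, not necessarily BidOptic's exact report definitions.
bids, wins, clicks, conversions = 10_000, 3_200, 48, 6
spend, revenue = 1_600.0, 1_920.0

win_rate = wins / bids            # fraction of bids that won
ctr = clicks / wins               # clicks per won impression
cvr = conversions / clicks        # conversions per click
cpa = spend / conversions         # cost per acquisition
roas = revenue / spend            # return on ad spend

print(win_rate, round(ctr, 4), cvr, round(cpa, 2), roas)
```

A funnel like this one (32% win rate but 1.5% CTR) is exactly the "winning expensive inventory that does not click" leak the report is designed to surface.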

3. Calibrated ROAS Projection

The ROAS trajectory produced by the simulation under your configured budget, KPI mode, and bidding strategy — expressed as a mean and variance across the 3-seed multi-reality run.

The multi-seed design intentionally introduces market variance between runs. A tight variance band indicates a robust strategy; a wide band indicates strategies that are sensitive to market noise and warrant further tuning before production deployment.
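One way to summarise that variance band, with hypothetical final ROAS values per seed (the seed numbers and figures are made up; whether a band is "tight" is a judgment your team makes against its own risk tolerance):

```python
import statistics

# Hypothetical final ROAS from each seed of a 3-seed run.
roas_by_seed = {42: 2.10, 1337: 2.04, 7: 2.22}

values = list(roas_by_seed.values())
mean = statistics.mean(values)
stdev = statistics.stdev(values)      # sample standard deviation
spread_pct = 100.0 * stdev / mean     # coefficient of variation, in %
print(round(mean, 2), round(spread_pct, 1))
```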

This figure is a calibrated estimate grounded in your own data. It is not a guarantee of production performance.


Commercial Trigger

A signed Evaluation Agreement includes a defined, objective commercial trigger based on simulation accuracy — not a subjective benchmark comparison.

The evaluation protocol is:

  1. Calibrate the simulation on your first 8 weeks of historical auction data.
  2. Implement your production ClientEnricher and ClientStrategy — the same models and logic you ran during week 9 of live traffic.
  3. Run the simulation against the calibrated environment for the equivalent of week 9.
  4. Compare the simulation's output (spend, win rate, conversions, ROAS) to your actual week-9 DSP logs.
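Step 4's comparison can be sketched as a per-metric relative-error check. The metric names, values, and the 10% threshold below are placeholders; the real threshold and metric set are fixed in the signed agreement before the evaluation begins.

```python
# Placeholder threshold and figures; the real values are agreed in writing.
THRESHOLD_PCT = 10.0

simulated = {"spend": 48_200.0, "win_rate": 0.31, "conversions": 410, "roas": 2.05}
actual    = {"spend": 51_000.0, "win_rate": 0.33, "conversions": 447, "roas": 2.18}

# Relative error of each simulated metric against the week-9 actuals, in %.
errors = {k: 100.0 * abs(simulated[k] - actual[k]) / actual[k] for k in actual}
within = all(e <= THRESHOLD_PCT for e in errors.values())
print(within)  # -> True
```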

If the error between the simulation's predictions and your actual week-9 results is within the pre-agreed accuracy threshold (defined in writing before the evaluation begins), the 30-day trial automatically converts to a paid licence. We offer flexible quarterly, annual, or multi-year subscription terms, with significant early-adopter discounts applied for our Design Partners.

If the error exceeds the threshold, the evaluation closes. You provide BidOptic with a small error-summary file (containing only the delta metrics, not your raw logs) so the calibration models can be improved. No licence fee is charged.

The trigger threshold and licence terms are agreed in writing before the evaluation starts. Nothing in the evaluation output is shared with BidOptic — you run the container, you own the results.

Design Partner Analytics Sharing

Because BidOptic is strictly Zero-Egress, the container cannot and will not transmit telemetry, logs, or results back to us. For clients participating in our Design Partner program, sharing evaluation analytics (such as market_intelligence.json and simulation_history.csv) for our case studies is a manual, opt-in process handled directly between your engineering team and ours at the conclusion of the 30-day trial.


Frequently Asked Questions

Do we need to share our bidding logic with BidOptic? No. Your ClientEnricher and ClientStrategy implementations run entirely inside your environment. BidOptic never sees your code.

Can we run multiple simulation episodes during the evaluation? Yes. The container is not episode-limited. You can re-run with different config parameters, different creative scenarios, or revised bidding logic as many times as you choose within the evaluation period. Calibration is cached after the first run, so subsequent runs go straight to simulation.

What happens to the trained models after the evaluation? They remain in your client/output/ directory. BidOptic has no copy. If you choose not to proceed to a licence, you can delete them. If you proceed, they form the baseline for the production deployment.

Can we test a strategy we have not yet built? Yes. You can implement a stub ClientStrategy that encodes any bid logic you want to evaluate, including approaches that are not yet integrated into your production stack.
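As a sketch of what such a stub might look like — the real ClientStrategy interface ships with the Python SDK bundle and will differ; the class name, method signature, and request fields below are all hypothetical:

```python
# Hypothetical stub; the actual SDK interface may differ.
class StubClientStrategy:
    """Flat-bid strategy with a per-segment multiplier, for what-if runs."""

    def __init__(self, base_bid_usd=1.50, segment_multipliers=None):
        self.base_bid = base_bid_usd
        self.multipliers = segment_multipliers or {}

    def bid(self, enriched_request):
        # enriched_request is assumed to carry a segment label from your
        # ClientEnricher; unknown segments fall back to the base bid.
        label = enriched_request.get("segment_label", "Standard")
        return self.base_bid * self.multipliers.get(label, 1.0)


strategy = StubClientStrategy(segment_multipliers={"Whale": 3.0, "Toxic": 0.0})
print(strategy.bid({"segment_label": "Whale"}))  # -> 4.5
print(strategy.bid({"segment_label": "Toxic"}))  # -> 0.0
```

A stub like this lets you price-test a segmentation idea (e.g., suppressing Toxic traffic entirely) before any production integration work.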

Why is the simulation win rate lower than the calibration report win rate? The calibration win rate reflects your historical market data — the fraction of auctions you won across all bids in your logs. The simulation win rate reflects what your specific ClientStrategy bids in the simulated market. If your strategy bids below the calibrated floor price on many auctions, it will win fewer. This discrepancy is diagnostic information, not a model error. Use the segment data in market_intelligence.json to identify the publishers and segments where your current bids are losing, then adjust your enricher or bid logic accordingly.

Ready to Proceed?

Once you have run the Open-Source Schema Validator against a sample of your logs and generated a passing receipt, you are ready to begin the evaluation.

Next Steps

Once your dataset passes validation, follow these steps to begin your evaluation:

  1. Submit your Receipt: Send your bidoptic_receipt_*.json file to berlik@bidoptic.com.
  2. Provide Host Specs: Include your target Linux host specifications (machine-id and CPU core count). This allows us to mint your hardware-locked Evaluation License.
  3. Initialize Sandbox: We will securely deliver your bidoptic.tar.gz container and Python SDK bundle. You can begin your 30-day trial immediately upon receipt.

Technical Support & Inquiries

If you encounter issues during validation or have questions regarding the architecture:

Email: berlik@bidoptic.com
LinkedIn: Csanád Berlik
Main Site: bidoptic.com