
Deployment & System Requirements

Hardware Requirements

BidOptic is a CPU-bound simulation workload. No GPU is required at any stage.

| Resource | Minimum | Recommended |
|---|---|---|
| RAM | 16 GB | 32 GB |
| CPU | 8 cores | 16 cores |
| Disk (free space) | 10 GB | 20 GB |
| Operating System | Linux (x86-64) | Ubuntu 20.04 LTS or later |
| Docker Engine | 20.10+ | 24.x |
| GPU | Not required | Not required |

On CPU cores. The calibration pipeline parallelises across cores during model training. An 8-core machine completes calibration of a 3-million-row dataset in approximately 60–90 seconds. Simulation episode runtime scales with AUCTIONS_PER_STEP and SIMULATION_PERIOD_DAYS; the default 7-day configuration with 3 seed runs completes in under 2 minutes on any compliant machine.

On RAM. The 16 GB minimum assumes your input dataset does not exceed roughly 2 million rows. For larger datasets (up to the supported 10 million row ceiling), 32 GB is required to avoid OOM during the high-dimensional model training phase.

On operating system. The container image is built for Linux/amd64. macOS with Docker Desktop (Apple Silicon or Intel) is supported for development purposes only and is not recommended for production evaluation runs due to virtualisation overhead affecting latency benchmarks.


What You Will Receive

At the start of an Evaluation Agreement you will receive:

  • bidoptic.tar.gz — the encrypted Docker image archive containing the C-compiled BidOptic core engine.
  • client/ — the SDK directory containing:
      • license.bin — your licence token, scoped to your evaluation period and hardware-locked to your machine ID
      • contracts.py — the two abstract interfaces (ClientEnricher, ClientStrategy)
      • config.yaml — your campaign parameter file
      • enricher.py and strategy.py — your implementation templates
      • data/ — place your validated Parquet or CSV file here
      • output/ — all results are written here after each run
  • This documentation

Loading and Running the Container

Step 1 — Load the image

docker load -i bidoptic.tar.gz

This registers the image locally. It does not start a container or open any network connections.

Step 2 — Run calibration and simulation

docker run --rm \
  --network none \
  -v /path/to/your/client:/app/client \
  -v /path/to/your/data:/app/data \
  -v /etc/machine-id:/app/host/machine-id:ro \
  bidoptic:latest \
  --data /app/data/logs.parquet

Replace /path/to/your/client with the absolute path to the client/ folder you received. On Windows with Git Bash, prefix the command with MSYS_NO_PATHCONV=1.

All outputs are written to client/output/ on your host filesystem.

Parameter reference

| Flag | Description |
|---|---|
| --rm | Removes the container automatically when it exits. |
| --network none | Disables all network interfaces. Required — the container will abort if a non-loopback interface is detected. |
| -v .../client:/app/client | Bind-mounts your client folder. Must contain license.bin, config.yaml, enricher.py, strategy.py, and your data file. |
| -v /etc/machine-id:/app/host/machine-id:ro | Provides the host machine ID for hardware-lock validation. Required. |
| --data | Path inside the container to your input Parquet or CSV file. |

Plugging In Your Strategy

The Observation Dictionary (obs)

When the simulation calls your enrich(obs) and bid(enriched_obs) methods, you receive a dictionary of NumPy arrays. Each array has a shape of (batch_size,).

Supply

| Key | Type | Description |
|---|---|---|
| visible_floor | Float32 | SSP floor price. ~80% of auctions expose this. Bidding below = silent loss. |
| ad_sizes | Int32 | Encoded ad format. |
| publisher_ids | Int32 | 0 = highest-traffic publisher (descending volume). |
| pub_avg_clearing | Float32 | Rolling avg clearing price. Use for bid shading. |
| pub_win_rate | Float32 | Your historical win rate on this publisher. |
| pub_avg_cvr | Float32 | Calibrated publisher CVR. Best CVR proxy without an ML model. |

Audience

| Key | Type | Description |
|---|---|---|
| user_segments | Int32 | Cluster ID. 0 = Whale (highest LTV). |
| user_ids | Int32 | Label-encoded user identity. |
| user_impression_count | Int32 | Rolling frequency count. ≥5 = likely saturated. |
| time_since_last_impression | Float32 | Seconds. Large = cold/re-engageable user. |
| user_click_history | Int32 | Total clicks this campaign. Strong intent signal. |

Temporal & Pacing

| Key | Type | Description |
|---|---|---|
| hour_of_day | Int32 | 0–23 |
| day_of_week | Int32 | 0–6 |
| budget_remaining | Float32 | $ budget remaining. |
| time_remaining_pct | Float32 | 1.0 = just started, 0.0 = end of flight. |
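Because every key is a NumPy array of shape (batch_size,), signals from these tables combine naturally with vectorised operations. A minimal sketch, using key names from the tables above but synthetic stand-in values:

```python
import numpy as np

# Synthetic stand-in for one observation batch (real values come from the SDK).
obs = {
    "visible_floor": np.array([0.05, 0.00, 0.20], dtype=np.float32),
    "user_impression_count": np.array([1, 6, 2], dtype=np.int32),
    "pub_avg_cvr": np.array([0.004, 0.0005, 0.010], dtype=np.float32),
}

# Frequency cap: users at >= 5 impressions are likely saturated.
saturated = obs["user_impression_count"] >= 5

# Inventory quality filter: skip publishers with near-zero calibrated CVR.
low_quality = obs["pub_avg_cvr"] < 0.001

# Combined bid mask; every intermediate stays shape (batch_size,).
biddable = ~(saturated | low_quality)
print(biddable)  # [ True False  True]
```

Keeping everything as whole-batch array operations (rather than Python loops over auctions) also matters for the measured latency described below the contract descriptions.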

After calibration completes, the container immediately runs the simulation using your current enricher.py and strategy.py. You interact with the simulation by implementing two Python classes defined in client/contracts.py:

ClientEnricher — implement the enrich(obs) method. This is where your pCTR, pCVR, or LTV model runs. The method receives an auction observation dictionary and must return it with a predicted_value key added — one expected-value float per auction (e.g. pCVR × expected_payout).

ClientStrategy — implement the bid(enriched_obs) method. This receives the enriched observation and must return a NumPy array of bid prices in USD (one per auction). Return 0.0 to pass on an auction.
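A minimal sketch of both implementations, assuming the batch-array observation layout above. In a real run these would subclass the abstract interfaces in client/contracts.py; the standalone class names (MyEnricher, MyStrategy), the $50 payout constant, and the half-value shading rule are illustrative assumptions, placeholders for your own models:

```python
import numpy as np

class MyEnricher:  # would subclass contracts.ClientEnricher in a real run
    def enrich(self, obs):
        # Cheapest possible value model: calibrated publisher CVR times an
        # assumed $50 payout per conversion (placeholder for your pCVR model).
        obs["predicted_value"] = obs["pub_avg_cvr"] * 50.0
        return obs

class MyStrategy:  # would subclass contracts.ClientStrategy in a real run
    def bid(self, enriched_obs):
        value = enriched_obs["predicted_value"]
        floor = enriched_obs["visible_floor"]
        # Shade to half of predicted value, but clear the visible floor
        # (bidding below it is a silent loss). Pass (0.0) when even the
        # shaded value cannot justify the floor.
        bids = np.maximum(value * 0.5, floor * 1.01)
        return np.where(value * 0.5 >= floor, bids, 0.0).astype(np.float32)
```

Returning one bid per auction as a single Float32 array keeps the round trip cheap, which matters given how latency is measured (see below).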

The SDK measures the wall-clock time consumed by your enrich and bid calls and feeds that measured latency directly into the simulation physics. Your latency is not estimated — it is observed.
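Since latency is observed rather than estimated, it is worth profiling your enricher locally before a full run. A hedged sketch using only the standard library timer; profile() and my_enrich are hypothetical helpers, not part of the SDK:

```python
import time
import numpy as np

def profile(fn, obs, repeats=100):
    """Average wall-clock seconds per call over `repeats` calls."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(dict(obs))  # shallow copy so enrich() mutations don't accumulate
    return (time.perf_counter() - start) / repeats

# Synthetic 10k-auction batch; real batch sizes come from the simulator.
obs = {"pub_avg_cvr": np.random.rand(10_000).astype(np.float32)}

def my_enrich(o):
    o["predicted_value"] = o["pub_avg_cvr"] * 50.0
    return o

avg_s = profile(my_enrich, obs)
print(f"avg enrich latency: {avg_s * 1e3:.3f} ms per batch")
```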

Each time you update your strategy or enricher, re-run the same Docker command. Calibration is cached — the container detects that models already exist and skips straight to simulation, so iteration is fast.


Campaign Configuration

Before running a simulation episode, edit client/config.yaml to reflect your campaign parameters:

# ================================================================
# CAMPAIGN TARGETS & BUDGET
# ================================================================
TOTAL_BUDGET:
  value: 10000.0   # USD
  override: true

KPI_MODE:
  value: roas      # Options: roas | cpa | revenue
  override: true

TARGET_ROAS:
  value: 2.0
  override: true

TARGET_CPA:
  value: 15.0
  override: true

MIN_SPEND_BEFORE_KILL:
  value: 2000.0    # Spend threshold before early KPI kill engages
  override: true

KPI_KILL_MULTIPLIER:
  value: 5.0       # Kills episode if CPA > 5x target or ROAS < target/5
  override: true

# ================================================================
# SIMULATION RUNTIME SETTINGS
# ================================================================
SIMULATION_PERIOD_DAYS:
  value: 7         # Increase to 14 if conversion delays cause zero attribution
  override: true

NUM_SIMULATION_SEEDS:
  value: 3         # Default: 3. Higher = tighter variance estimates, longer runtime.
  override: true

# ================================================================
# COUNTERFACTUAL SCENARIO: MARKET STRUCTURE
# ================================================================
EXPLICIT_ARCHETYPE_MARKET_SHARE:
  value: 0.30      # 0.40-0.60: Consolidated market | 0.05-0.15: Highly fragmented
  override: true

AUCTION_TYPE_DISTRIBUTION:
  value:
    first_price: 1.0
    second_price: 0.0
  override: true

# ================================================================
# COUNTERFACTUAL SCENARIO: CREATIVE PERFORMANCE
# ================================================================
ACTIVE_CREATIVE_ID:
  value: Premium_Video_Ad
  override: true

CREATIVES:
  value:
    Premium_Video_Ad:
      base_ctr_multiplier: 1.5   # 50% more clicks
      fatigue_rate: 0.000002
      appeal_segment: 2          # targets segment 2 specifically
      appeal_multiplier: 4.0
  override: true

Troubleshooting: Revenue = $0.00 & Early Terminations

Work through this checklist in order if your simulation is failing:

1. Win Rate = 0% — bids are below every floor. Check visible_floor range in your data (typical: $0.01–$0.50). The baseline strategy already floors against visible_floor * 1.01.

2. Wins but zero clicks — winning low-quality inventory with near-zero CTR. Filter out auctions where pub_avg_cvr < 0.001.

3. Clicks but zero conversions — conversions have realistic post-click delays (~2 days avg). Check the Attribution Bridge line in the logs. If it also shows $0, try SIMULATION_PERIOD_DAYS: 14.

4. "BINARY CONVERSION MODE" warning — your calibration data had no conversion_value variance. Each conversion is fixed at $1.00 in this mode. ROAS = conversions per dollar, not revenue ROAS. Add a conversion_value column to your training data and re-calibrate to unlock true ROAS.

5. Simulation terminates early — KPI kill guard firing before conversions accumulate. If your CPA exceeds the safety threshold too early, the episode dies. Set MIN_SPEND_BEFORE_KILL: 2000.0 and KPI_KILL_MULTIPLIER: 5.0 in your config.yaml to give the simulation more breathing room.
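For item 4 above, a hedged pandas sketch of attaching a conversion_value column before re-calibrating. Only the conversion_value column name comes from the container's warning; the converted flag, the lognormal order-value model, and its parameters are illustrative assumptions about your training data:

```python
import numpy as np
import pandas as pd

# Stand-in training data; `converted` is a hypothetical per-row flag.
df = pd.DataFrame({"converted": [0, 1, 0, 1, 1]})

rng = np.random.default_rng(0)
# Give converting rows a realistic, varying order value (here lognormal,
# roughly $20 median); non-converting rows stay at 0.0. Any non-constant
# value column is enough to leave BINARY CONVERSION MODE.
df["conversion_value"] = np.where(
    df["converted"] == 1,
    rng.lognormal(mean=3.0, sigma=0.5, size=len(df)).round(2),
    0.0,
)
# df.to_parquet("client/data/logs.parquet")  # then re-run the container
```

If the column is constant (every conversion worth the same amount), calibration will still detect zero variance and fall back to binary mode.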