Approving and deploying
A reviewed wave isn't a deployed wave. Forge runs validations first — automated checks that confirm the wave is internally consistent, the target connection is live, and the translations behave the same as the source for sample inputs. Only after every validation passes does the approval gate unlock; only then can you deploy.
This page covers the validate → approve → deploy sequence and what to expect at each step.
Goal
You finish this page with a wave deployed to BigQuery, the source-system artifacts cut over (or in shadow mode), and the deploy state visible in Forge.
Prerequisites
- A wave where every artifact is in Approved or Skipped status. See Reviewing translations.
- Write access to the target BigQuery dataset (the service account Forge uses needs bigquery.dataEditor on the dataset; a quick check is sketched below).
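The Target permissions validation in step 1 will catch a missing grant, but if you want to confirm it up front, here is a minimal sketch using the google-cloud-bigquery client. The project, dataset, and service-account names are placeholders, and only dataset-level grants are visible this way.

```python
# Minimal sketch (not part of Forge): list the target dataset's access entries
# and confirm the Forge service account has write-level access.
from google.cloud import bigquery

PROJECT = "my-target-project"       # placeholder
DATASET = "forge_wave_target"       # placeholder
FORGE_SA = "forge-deployer@my-target-project.iam.gserviceaccount.com"  # placeholder

client = bigquery.Client(project=PROJECT)
dataset = client.get_dataset(f"{PROJECT}.{DATASET}")

# bigquery.dataEditor granted directly on a dataset shows up as a WRITER entry.
# Grants made at the project level will not appear in dataset access entries.
can_write = any(
    entry.entity_type == "userByEmail"
    and entry.entity_id == FORGE_SA
    and entry.role in ("WRITER", "OWNER")
    for entry in dataset.access_entries
)
print(f"Forge service account has dataset-level write access: {can_write}")
```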
Steps
1. Run wave validations
Open the wave and click Run Validations. Forge runs a battery of checks:
| Validation | What it confirms |
|---|---|
| Translation parse | All approved SQL is valid BigQuery dialect. |
| Dependency graph | Artifacts referenced by other artifacts deploy in the right order. |
| Sandbox replay | Each translated artifact runs against synthetic inputs in a sandbox project; outputs match source within tolerance. |
| Target permissions | The Forge service account can write to the target dataset. |
| Quota | Estimated deploy operations fit within BigQuery quota. |
Each validation surfaces as pass / fail / warning in the wave's Validations tab. Failures block the gate; warnings don't block but are flagged for review.
Validations are idempotent — re-run as many times as you want.
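For a feel of what the Translation parse check verifies, the sketch below runs the equivalent check with a plain BigQuery dry run via the public client library. It is not Forge's validator, and the project name is a placeholder.

```python
# Minimal sketch (not Forge's validator): a BigQuery dry run rejects SQL that
# isn't valid BigQuery dialect without actually executing anything.
from google.cloud import bigquery
from google.api_core.exceptions import BadRequest

def parses_in_bigquery(client: bigquery.Client, sql: str) -> bool:
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    try:
        client.query(sql, job_config=job_config)  # dry runs return immediately
        return True
    except BadRequest as exc:                     # syntax / semantic errors land here
        print(f"parse check failed: {exc}")
        return False

client = bigquery.Client(project="my-target-project")  # placeholder project
print(parses_in_bigquery(client, "SELECT 1 AS ok"))
```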
2. Clear the gate
Once every validation passes, the Approve Wave button becomes active. This is the gate: explicit human approval acknowledging that automated checks passed.
Click Approve Wave. The wave moves from Validated → Approved. At this point the wave is locked: artifacts can't be added, removed, or re-translated without first reverting to Draft.
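To make the lock concrete, here is a hypothetical sketch of the wave lifecycle as a small state machine. The state names follow the UI labels on this page; the transition table is illustrative, not Forge's implementation.

```python
# Hypothetical sketch of the approval gate as a state machine. State names
# mirror the UI labels; the transitions are illustrative only.
from enum import Enum

class WaveState(Enum):
    DRAFT = "Draft"
    VALIDATED = "Validated"
    APPROVED = "Approved"
    DEPLOYED = "Deployed"

ALLOWED = {
    WaveState.DRAFT: {WaveState.VALIDATED},
    WaveState.VALIDATED: {WaveState.APPROVED, WaveState.DRAFT},
    # Approved waves are locked: the only ways out are deploying or reverting to Draft.
    WaveState.APPROVED: {WaveState.DEPLOYED, WaveState.DRAFT},
    WaveState.DEPLOYED: set(),
}

def transition(current: WaveState, target: WaveState) -> WaveState:
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target

print(transition(WaveState.VALIDATED, WaveState.APPROVED).value)  # Approved
```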
3. Deploy
Click Deploy Wave. Forge orchestrates:
- Tables migrated by direct copy — CREATE TABLE in target, INSERT rows.
- Tables migrated by Lakeflow — Forge generates the Lakeflow pipeline (already configured at wave-creation time), runs it, waits for completion.
- Translated artifacts — CREATE OR REPLACE in dependency order (sketched below). If any deploy fails, Forge runs the bounded self-heal loop one more time on the failing artifact, with the deploy error fed to the LLM.
- Smoke tests — quick post-deploy queries against each artifact to confirm it's live.
You watch all of this happen in the wave's Deploy tab. Each artifact moves through Pending → Deploying → Deployed (or Failed).
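For intuition about the translated-artifact pass, the sketch below deploys a couple of placeholder views with CREATE OR REPLACE in dependency order and fires a trivial smoke query after each one. It is a simplified illustration, not Forge's orchestrator; the project, dataset, and artifact names are placeholders, and there is no self-heal retry here.

```python
# Simplified sketch of the translated-artifact pass: CREATE OR REPLACE in
# dependency order, then a trivial smoke query per artifact.
from graphlib import TopologicalSorter
from google.cloud import bigquery

client = bigquery.Client(project="my-target-project")  # placeholder

# artifact -> (artifacts it depends on, translated DDL); all names are placeholders
artifacts = {
    "stg_orders": (set(), "CREATE OR REPLACE VIEW ds.stg_orders AS SELECT 1 AS id"),
    "fct_orders": ({"stg_orders"},
                   "CREATE OR REPLACE VIEW ds.fct_orders AS SELECT * FROM ds.stg_orders"),
}

deploy_order = TopologicalSorter(
    {name: deps for name, (deps, _) in artifacts.items()}
).static_order()

for name in deploy_order:
    _, ddl = artifacts[name]
    client.query(ddl).result()                                      # deploy, wait for completion
    client.query(f"SELECT COUNT(*) AS n FROM ds.{name}").result()   # smoke test
    print(f"deployed and smoke-tested {name}")
```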
4. (Optional) Cutover
Forge doesn't cut over by default — deployed artifacts live in BigQuery alongside the source, which keeps operating in Teradata / Oracle. This shadow mode lets you compare the two for a sanity period.
When you're confident, there are two cutover paths:
- Manual — point downstream consumers (BI tools, jobs, dashboards) at the BigQuery target. Forge doesn't manage this.
- Forge-orchestrated — for tenants that have configured the cutover hook, click Cut Over in the wave to redirect specified consumers automatically. Tenant-specific feature.
What happens if deploy fails
For tables, deploy failure is rare and almost always a permission / quota issue — fix and retry.
For translated artifacts, Forge runs the bounded self-heal loop once more on the failing artifact, with the deploy error context. If self-heal succeeds, deploy continues. If self-heal exhausts its budget, the artifact lands in Failed state and the wave's deploy is partial:
- All artifacts deployed before the failure stay deployed.
- The failing artifact is marked for manual review.
- Subsequent dependent artifacts are Skipped (blocked) — you fix the failed one and re-deploy the wave to pick up where it left off.
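The blocking rule is mechanical: anything downstream of the failed artifact in the dependency graph is held back until you fix it and re-deploy. A hypothetical sketch with placeholder artifact names:

```python
# Hypothetical sketch of the blocking rule: everything downstream of a failed
# artifact in the dependency graph is Skipped until the wave is re-deployed.
deps = {                      # artifact -> artifacts it depends on (placeholders)
    "stg_orders": set(),
    "fct_orders": {"stg_orders"},
    "rpt_orders": {"fct_orders"},
}

def blocked_by(failed: str) -> set[str]:
    blocked: set[str] = set()
    changed = True
    while changed:
        changed = False
        for artifact, upstream in deps.items():
            if artifact not in blocked and (failed in upstream or upstream & blocked):
                blocked.add(artifact)
                changed = True
    return blocked

print(blocked_by("fct_orders"))   # {'rpt_orders'}
```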
Metrics, dashboards, alerts
Every wave produces metrics under the qry_forge_* prefix — exposed to Prometheus, displayed in the qry-forge Grafana dashboard, and integrated into the qry.forge alert group. Useful ones to watch:
- qry_forge_wave_duration_seconds — how long deploys are taking.
- qry_forge_translation_self_heal_attempts — translations needing more than one self-heal pass; rising trend = LLM struggling with your dialect quirks.
- qry_forge_deploy_failures_total — fail counter; alerts page on > 0.
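These are ordinary Prometheus series, so you can also pull them outside Grafana. A minimal sketch against Prometheus's HTTP query API; the Prometheus URL is a placeholder.

```python
# Minimal sketch: read qry_forge_deploy_failures_total straight from the
# Prometheus HTTP API instead of the Grafana dashboard. URL is a placeholder.
import requests

PROM_URL = "http://prometheus.internal:9090"   # placeholder

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": "sum(qry_forge_deploy_failures_total)"},
    timeout=10,
)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    _, value = result["value"]                 # [unix_timestamp, "value"]
    print(f"deploy failures so far: {value}")
```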
Common issues
Validation hangs at Sandbox replay.
The sandbox project hasn't been provisioned for your tenant. Ask an admin to check forge.sandbox.project_id in tenant settings.
Approve Wave is greyed out even though validations all show green. Refresh — sometimes the UI doesn't update the gate state in real time. If it's still greyed out, an admin can check the Forge backend's wave-state machine.
Deploy succeeded but BI tools are still hitting Teradata. Cutover is a separate step. Forge doesn't redirect downstream consumers automatically unless your tenant configured the cutover hook.
Failed artifact, self-heal exhausted. The translation needs human work. Open the artifact, edit the SQL manually, then Re-deploy Wave — only the failed and blocked artifacts run again.
Emergency: deploy is producing wrong results.
Use the FORGE_GLOBAL_KILL_SWITCH environment variable to disable Forge translation tenant-wide while you investigate. The feature flag runtime_config.forge_translation.enabled is the gentler alternative.
See also
- Creating a migration wave — prerequisite walkthrough.
- Reviewing translations — prerequisite review.
- Forge reference — full feature reference, including all metrics, alerts, and the Migration Guide / Runbook links.