CUSTOMER DATA INFRASTRUCTURE
Iterable expects userId or email, catalog events, and `dataFields` in a specific shape. BigQuery has nested `STRUCT` and `ARRAY` fields, per-byte billing, and internal user IDs that don't match. Meiro Pipes resolves the identity gap, flattens nested BigQuery fields in the transform layer, translates your schema into Iterable's API format, and keeps profiles enriched in both directions — without custom ETL you'll spend quarters maintaining.
Free trial · No credit card · Live in minutes
Identity is the first obstacle. Iterable uses email as the canonical identifier in most deployment configurations. BigQuery stores records by internal user ID, Firebase installation ID, or Salesforce account ID depending on the source. When these don't resolve to an Iterable email, syncs silently fail, create duplicate profiles, or land on the wrong user. No standard connector handles cross-system identity resolution.
Iterable expects flat, typed payloads: dataFields as a key/value dictionary and Track API events with typed properties. BigQuery data is often nested — STRUCT and ARRAY fields must be flattened before they can map to Iterable's schema, and that flattening SQL must be maintained every time an upstream schema evolves. BigQuery also bills per byte scanned, so naive change-detection queries create a GCP cost problem on top of the engineering one. Catalog event types like order.purchased have strict schemas; a payload in the wrong shape is silently rejected, breaking revenue attribution and suppression.
List management is the final gap. Iterable is list-first: syncing audiences means computing membership deltas and calling subscribe/unsubscribe APIs separately from profile updates. Getting Iterable behavioral data into BigQuery for enrichment requires an S3 export pipeline — not native integration. The complete loop needs multiple tools, each with its own failure modes.
Problem
BigQuery keys records by internal user_id. Iterable uses email as the primary key in most configurations. When the two don't match, syncs create duplicate profiles, land records on the wrong user, or silently drop them. BigQuery SQL alone can't resolve this cross-system gap.
Meiro solves it
Pipes resolves identity across every identifier type — email, user_id, phone, external ID, CRM account ID — using deterministic matching with configurable merge limits. One unified profile, regardless of which system the identifier originated from.
Problem
BigQuery stores nested STRUCTs and repeated ARRAYs. Iterable requires flat JSON. Flattening in BigQuery SQL means unnesting ARRAY fields and expanding STRUCTs at query time — expensive operations that scan more bytes and increase your BigQuery bill.
Meiro solves it
Pipes transform functions receive BigQuery rows and flatten nested fields in the JavaScript sandbox — not in the warehouse query. Your SQL stays simple and cheap. The transform layer handles struct traversal, array mapping, and type coercion before delivery to Iterable.
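A minimal sketch of what transform-layer flattening looks like, assuming a generic recursive flattener (the handling choices here, like joining ARRAY values into a string, are illustrative assumptions, not Pipes' actual behavior):

```javascript
// Sketch: flatten a nested BigQuery row in the transform layer instead
// of unnesting in SQL. Field names in the usage below are hypothetical.
function flattenRow(row, prefix = "") {
  const flat = {};
  for (const [key, value] of Object.entries(row)) {
    const name = prefix ? `${prefix}_${key}` : key;
    if (Array.isArray(value)) {
      // Repeated (ARRAY) fields become a count plus a joined string,
      // since Iterable dataFields expect scalar values.
      flat[`${name}_count`] = value.length;
      flat[name] = value.join(",");
    } else if (value !== null && typeof value === "object") {
      // STRUCT fields are expanded with a prefixed key.
      Object.assign(flat, flattenRow(value, name));
    } else {
      flat[name] = value;
    }
  }
  return flat;
}

// flattenRow({ user: { tier: "pro" }, tags: ["a", "b"] })
// → { user_tier: "pro", tags_count: 2, tags: "a,b" }
```

Because the traversal happens in the sandbox, the warehouse query stays a plain `SELECT`, and a new nested field upstream never forces a SQL rewrite.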
Problem
BigQuery bills per byte scanned. Full-table scans to detect changed records — the default approach for most reverse ETL tools — directly cost money on every sync run, even when few records have changed.
Meiro solves it
Pipes uses partition-aware and watermark-based queries designed for BigQuery's billing model. Only changed records are processed. Your sync runs on the data that actually changed — not a full table scan on every schedule tick.
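The watermark idea in sketch form (a hypothetical helper; in practice the last-sync watermark would be persisted by the sync runner between runs):

```javascript
// Sketch of watermark-based change detection: instead of scanning the
// full table, only rows updated since the last successful sync are
// fetched. `lastWatermark` is assumed to be stored between runs.
function buildIncrementalQuery(table, lastWatermark) {
  // Filtering on a partitioned or clustered timestamp column keeps
  // BigQuery's bytes scanned (and therefore cost) proportional to the
  // change volume, not the table size.
  return (
    `SELECT * FROM \`${table}\`\n` +
    `WHERE updated_at > TIMESTAMP('${lastWatermark}')`
  );
}
```

On a partitioned table, this predicate lets BigQuery prune untouched partitions entirely, which is where the per-byte savings come from.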
Problem
Iterable campaigns are list-based. Syncing BigQuery audiences to Iterable means computing current list membership, calculating deltas (who joined, who left), and calling subscribe/unsubscribe APIs separately from profile updates. Standard connectors don't handle this.
Meiro solves it
Pipes treats list membership as a first-class sync operation. Define audience logic in your data model. Pipes computes membership changes and sends the correct list subscribe or unsubscribe calls automatically — no custom delta computation required.
Problem
Your product team builds feature adoption scoring or churn propensity models in BQML. The output lands in a BigQuery table. Getting those scores into Iterable to trigger lifecycle sequences requires a pipeline that doesn't exist out of the box.
Meiro solves it
Pipes connects directly to the BigQuery table where BQML outputs land. Model scores become Iterable dataFields on user profiles. Qualifying users are subscribed to the correct Iterable list. The lifecycle sequence fires automatically — without a custom sync job.
Iterable engagement data — email opens, clicks, conversions, custom events — flows into Pipes via webhook or export. Events land without replacing your existing Iterable setup.
Events land in BigQuery automatically. Pipes connects directly — browse datasets, map columns, join with product analytics or BQML model outputs. BigQuery remains your source of truth for user intelligence.
Pipes stitches profiles across Iterable userIds, email addresses, BigQuery user_ids, and any other identifier. Deterministic matching with configurable limits. No duplicate profiles. No dropped records.
Enriched profiles push back to Iterable with correctly formatted dataFields, properly shaped catalog events, and list membership changes — all via the right Iterable API endpoints. Scheduled or real time. BigQuery syntax respected throughout.
Your product team uses BigQuery ML to score users on feature adoption likelihood — a model that runs weekly and writes output scores back to a BigQuery table. Users who score below a threshold haven't activated a key feature in their first 14 days. You want Iterable to trigger a targeted onboarding sequence for those users.
The problem: the BQML output table is keyed by user_id and contains nested feature-flag STRUCTs. Iterable identifies those users by email, and the nested fields must be flattened before they can become Iterable dataFields.
Without Meiro: You'd write a BigQuery SQL job that unnests STRUCT fields (scanning more bytes per run), resolves email from user_id via a join, formats a flat dataFields payload for each user, calls Iterable's user update API in batches, and separately subscribes qualifying users to the correct Iterable list. Any schema change from the BQML pipeline breaks the job. Every sync run scans the full table.
With Meiro Pipes: The BQML output table is connected directly. A simple BigQuery query using DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY) fetches recent records efficiently. The Pipes transform flattens nested feature flag STRUCTs in the JavaScript sandbox — not in SQL. Pipes resolves user_id to Iterable email using the identity graph. The enriched profile — including BQML scores and activation flags — pushes to Iterable in the correct dataFields format. Qualifying users are subscribed to the right Iterable list automatically.
Time from BQML model output to live Iterable campaign: hours, not sprints.
Your BigQuery table
SELECT
user_id,
email,
signup_date,
churn_risk_score,
account_tier,
last_active_date,
updated_at
FROM `project.analytics.user_scores`
WHERE updated_at > TIMESTAMP(DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY))

Pipes transform
// Pipes send function (Event Destination)
async function send(payload, headers) {
  return payload.events.map(row => ({
    email: row.email,
    userId: row.user_id,
    dataFields: {
      churn_risk_score: row.churn_risk_score,
      account_tier: row.account_tier,
      last_active_date: new Date(row.last_active_date).toISOString(),
      signup_date: new Date(row.signup_date).toISOString()
    }
  }));
}

What Iterable receives
{
  "email": "[email protected]",
  "userId": "usr_8472",
  "dataFields": {
    "churn_risk_score": 0.82,
    "account_tier": "enterprise",
    "last_active_date": "2026-03-15T00:00:00.000Z",
    "signup_date": "2025-11-02T00:00:00.000Z"
  }
}

No raw API construction. No byte-expensive unnesting in SQL. Pipes handles nested field flattening in the transform layer, identity resolution, schema compliance, and delivery — and adapts when your BigQuery schema changes.
The standard stack
Meiro Pipes
A reverse ETL tool syncs rows. It doesn't resolve identity, manage list membership, flatten nested BigQuery fields cheaply, or handle the enrichment loop. Meiro Pipes does all of that — and the pipeline that remains is one you can actually maintain.
You want to build Iterable campaigns that trigger based on BQML scores and product behavior — data that lives in BigQuery and isn't available in Iterable today.
You're tired of maintaining the BigQuery → Iterable pipeline. The email resolution logic. The STRUCT unnesting that scans extra bytes. The list delta computation that breaks whenever a new segment is added.
Native connector. Pushes user profile updates, custom events, and catalog events (order.purchased, cart.abandon, etc.) to Iterable in the correct API format. Handles dataFields serialization, ISO 8601 date formatting, and list subscribe/unsubscribe calls.
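As an illustration, a track-style event payload for order.purchased might be assembled like this (field names follow Iterable's track-event conventions, but treat the exact required shape as something to confirm against Iterable's API reference; all row fields here are hypothetical):

```javascript
// Sketch: building a catalog/commerce event payload from a warehouse
// row. The input row fields (ordered_at, order_id, total, currency)
// are illustrative assumptions, not a real schema.
function buildTrackEvent(row) {
  return {
    email: row.email,
    eventName: "order.purchased",   // catalog event type from the warehouse
    createdAt: Math.floor(new Date(row.ordered_at).getTime() / 1000), // epoch seconds
    dataFields: {
      order_id: row.order_id,
      total: Number(row.total),     // coerce to a typed numeric property
      currency: row.currency,
    },
  };
}
```

Getting the shape right matters because, as noted above, a malformed catalog event is rejected silently rather than with a hard error.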
Direct warehouse connection supporting backtick-quoted table references and BigQuery-native SQL syntax including DATE_SUB, TIMESTAMP_DIFF, and CAST. Browse datasets, map identifier columns to Meiro identity types. Model warehouse data as profile attributes, events, or audience definitions.
Deterministic stitching across email, userId, user_id, phone, device ID, and CRM IDs. Configurable maxIdentifiers and merge priority to prevent false merges. Resolves the BigQuery user_id → Iterable email gap automatically.
Sandboxed JavaScript functions for schema translation. Flatten nested BigQuery STRUCT and ARRAY fields without expensive SQL unnesting. Map fields, coerce types, construct dataFields dictionaries, format catalog events. 47 allowlisted packages available.
Scheduled or real-time Live Profile Sync. Partition-aware change detection to minimize BigQuery bytes scanned. Push enriched profiles, events, and list membership changes to Iterable. Full delivery history and retry logic.
Model BigQuery-derived audiences as Iterable list memberships. Pipes computes membership deltas between runs and issues the correct subscribe/unsubscribe API calls. No manual delta logic required.
Identity is the first obstacle. Iterable's data model is built around email as the canonical identifier in many deployment configurations. BigQuery stores customers by internal user_id, project-scoped account identifiers, or email depending on the upstream system. When these identifiers diverge, syncs silently fail, create duplicate profiles, or associate data with the wrong user. No standard reverse ETL connector reconciles cross-system identity.
BigQuery's schema model is the second obstacle. BigQuery supports nested STRUCT and ARRAY fields natively — features that data engineers rely on for efficient storage and query performance. Iterable requires flat JSON. Flattening nested fields in BigQuery SQL means unnesting ARRAY fields and expanding STRUCTs at query time, which increases bytes scanned and drives up cost. The right approach is to flatten in the transform layer, keeping the warehouse query simple and cheap.
Per-byte billing is the third obstacle. BigQuery charges per byte scanned. Full-table or wide date-range scans — the approach most reverse ETL tools use for change detection — directly cost money on every sync run. On large tables, the billing impact of inefficient sync queries compounds over time.
List management adds a fourth layer. Iterable is a list-first platform. Getting BigQuery-derived audiences into Iterable as list memberships means computing current state, calculating deltas, and issuing subscribe and unsubscribe API calls separately from the profile update API. Most reverse ETL tools don't provide this.
The BQML enrichment loop closes the gap. Product teams running BigQuery ML models — churn propensity, feature adoption scoring, LTV prediction — produce outputs that should drive Iterable lifecycle sequences. Getting those scores from BigQuery into Iterable, and getting Iterable engagement data back into BigQuery to improve the models, is not a configuration task. It is infrastructure work.
Connect BigQuery and Iterable through Meiro Pipes. Identity-resolved. Schema-aware. Bidirectional. Start free.