
CUSTOMER DATA INFRASTRUCTURE

The missing link between BigQuery and Iterable

Iterable expects userId or email, catalog events, and `dataFields` in a specific shape. BigQuery has nested `STRUCT` and `ARRAY` fields, per-byte billing, and internal user IDs that don't match. Meiro Pipes resolves the identity gap, flattens nested BigQuery fields in the transform layer, translates your schema into Iterable's API format, and keeps profiles enriched in both directions — without custom ETL you'll spend quarters maintaining.

Talk to a Consultant

Free trial · No credit card · Live in minutes

BigQuery → Meiro Pipes → Iterable
Identity-resolved · Schema-aware · Bidirectional

Everyone says BigQuery and Iterable integrate. Nobody warns you about what breaks.

Identity is the first obstacle. Iterable uses email as the canonical identifier in most deployment configurations. BigQuery stores records by internal user ID, Firebase installation ID, or Salesforce account ID depending on the source. When these don't resolve to an Iterable email, syncs silently fail, create duplicate profiles, or land on the wrong user. No standard connector handles cross-system identity resolution.

Iterable expects flat, typed payloads: dataFields as a key/value dictionary, Track API events with typed properties. BigQuery data is often nested — STRUCT and ARRAY fields that must be flattened before they can map to Iterable's schema. That flattening SQL must be maintained every time upstream schemas evolve. BigQuery also bills per byte scanned, so naive change-detection queries create a GCP cost problem on top of the engineering one. Catalog event types like order.purchased have strict schemas; a mis-shaped payload is silently rejected, which breaks revenue attribution and suppression.

List management is the final gap. Iterable is list-first: syncing audiences means computing membership deltas and calling subscribe/unsubscribe APIs separately from profile updates. Getting Iterable behavioral data into BigQuery for enrichment requires an S3 export pipeline — not native integration. The complete loop needs multiple tools, each with its own failure modes.

Five ways the BigQuery → Iterable pipeline breaks

01

Identity mismatch

Problem

BigQuery has user_id. Iterable uses email as the primary key in most configurations. When they don't match, records create duplicate profiles, land on the wrong user, or are silently dropped. BigQuery SQL can't resolve this cross-system gap.

Meiro solves it

Pipes resolves identity across every identifier type — email, user_id, phone, external ID, CRM account ID — using deterministic matching with configurable merge limits. One unified profile, regardless of which system the identifier originated from.
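To make the idea concrete, here is a minimal sketch of deterministic identity stitching — illustrative only, not Meiro's implementation, and the function and field names are assumptions. Records that share any identifier value are merged into a single profile using a union-find over identifiers:

```javascript
// Minimal sketch of deterministic identity stitching (illustrative only,
// not Meiro's implementation). Records sharing any identifier value are
// merged into one profile via union-find over "type:value" keys.
function resolveIdentities(records) {
  const parent = new Map(); // identifier key -> representative key

  const find = (k) => {
    if (!parent.has(k)) parent.set(k, k);
    while (parent.get(k) !== k) k = parent.get(k);
    return k;
  };
  const union = (a, b) => parent.set(find(a), find(b));

  // Link every identifier in a record to the record's first identifier
  for (const rec of records) {
    const ids = Object.entries(rec).map(([type, val]) => `${type}:${val}`);
    for (let i = 1; i < ids.length; i++) union(ids[0], ids[i]);
  }

  // Group records by their resolved representative and merge attributes
  const profiles = new Map();
  for (const rec of records) {
    const [type, val] = Object.entries(rec)[0];
    const root = find(`${type}:${val}`);
    profiles.set(root, { ...(profiles.get(root) || {}), ...rec });
  }
  return [...profiles.values()];
}
```

A record keyed by BigQuery `user_id` and a record keyed by email collapse into one profile as soon as any identifier overlaps, which is the behavior that prevents duplicate Iterable profiles.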

02

Nested `STRUCT` and `ARRAY` fields

Problem

BigQuery stores nested STRUCTs and repeated ARRAYs. Iterable requires flat JSON. Flattening in BigQuery SQL means unnesting ARRAY fields and expanding STRUCTs at query time — expensive operations that scan more bytes and increase your BigQuery bill.

Meiro solves it

Pipes transform functions receive BigQuery rows and flatten nested fields in the JavaScript sandbox — not in the warehouse query. Your SQL stays simple and cheap. The transform layer handles struct traversal, array mapping, and type coercion before delivery to Iterable.
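The shape of such a transform-layer flatten can be sketched as follows — hypothetical helper code, not the actual Pipes SDK. Nested objects become underscore-delimited keys and arrays are serialized to JSON strings, so the result is the flat key/value dictionary Iterable expects:

```javascript
// Illustrative flatten helper of the kind a transform function might use
// (hypothetical code, not the actual Pipes SDK).
function flattenRow(row, prefix = '') {
  const flat = {};
  for (const [key, value] of Object.entries(row)) {
    const name = prefix ? `${prefix}_${key}` : key;
    if (Array.isArray(value)) {
      flat[name] = JSON.stringify(value);           // ARRAY -> JSON string
    } else if (value !== null && typeof value === 'object') {
      Object.assign(flat, flattenRow(value, name)); // STRUCT -> recurse
    } else {
      flat[name] = value;                           // scalar passes through
    }
  }
  return flat;
}
```

A row like `{ plan: { tier: 'pro', seats: 5 }, tags: ['a', 'b'] }` becomes `{ plan_tier: 'pro', plan_seats: 5, tags: '["a","b"]' }` — flat, typed, and ready to map into dataFields, with no UNNEST in the warehouse query.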

03

Per-byte billing and change detection

Problem

BigQuery bills per byte scanned. Full-table scans to detect changed records — the default approach for most reverse ETL tools — directly cost money on every sync run, even when few records have changed.

Meiro solves it

Pipes uses partition-aware and watermark-based queries designed for BigQuery's billing model. Only changed records are processed. Your sync runs on the data that actually changed — not a full table scan on every schedule tick.
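The watermark pattern is simple to sketch. The snippet below is illustrative only — the table and column names are assumptions, not Meiro's internals. Each run reads only rows modified since the last successful sync, and filtering on a partitioned timestamp column lets BigQuery prune partitions so the query bills only for recent data:

```javascript
// Sketch of watermark-based change detection (illustrative; table and
// column names are assumptions). Each run reads only rows modified since
// the last successful sync instead of scanning the whole table.
function changedRowsQuery(table, watermarkColumn, lastSyncedAt) {
  // A predicate on the (partitioned) watermark column lets BigQuery
  // prune partitions, so the query bills only for recent bytes.
  return [
    `SELECT * FROM \`${table}\``,
    `WHERE ${watermarkColumn} > TIMESTAMP('${lastSyncedAt}')`,
  ].join('\n');
}
```

After a successful run, the sync engine persists the maximum watermark value it saw and uses it as `lastSyncedAt` on the next schedule tick.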

04

List membership management

Problem

Iterable campaigns are list-based. Syncing BigQuery audiences to Iterable means computing current list membership, calculating deltas (who joined, who left), and calling subscribe/unsubscribe APIs separately from profile updates. Standard connectors don't handle this.

Meiro solves it

Pipes treats list membership as a first-class sync operation. Define audience logic in your data model. Pipes computes membership changes and sends the correct list subscribe or unsubscribe calls automatically — no custom delta computation required.
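The delta computation described above amounts to two set differences. This sketch is illustrative only, not Meiro's implementation: given the audience computed this run and the membership pushed last run, it derives who to subscribe and who to unsubscribe:

```javascript
// Sketch of list-membership delta computation (illustrative only).
// subscribe = in the current audience but not yet on the list;
// unsubscribe = on the list but no longer in the audience.
function membershipDelta(currentEmails, previousEmails) {
  const current = new Set(currentEmails);
  const previous = new Set(previousEmails);
  return {
    subscribe: [...current].filter((e) => !previous.has(e)),
    unsubscribe: [...previous].filter((e) => !current.has(e)),
  };
}
```

The two resulting arrays map directly onto separate subscribe and unsubscribe API calls, which is exactly the bookkeeping that standard row-sync connectors leave to you.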

05

BQML scores stuck in BigQuery

Problem

Your product team builds feature adoption scoring or churn propensity models in BQML. The output lands in a BigQuery table. Getting those scores into Iterable to trigger lifecycle sequences requires a pipeline that doesn't exist out of the box.

Meiro solves it

Pipes connects directly to the BigQuery table where BQML outputs land. Model scores become Iterable dataFields on user profiles. Qualifying users are subscribed to the correct Iterable list. The lifecycle sequence fires automatically — without a custom sync job.

One pipeline. Identity-resolved. Schema-aware.

1

Collect from Iterable

Iterable engagement data — email opens, clicks, conversions, custom events — flows into Pipes via webhook or export. Events land without replacing your existing Iterable setup.

→
2

Load & Model in BigQuery

Events land in BigQuery automatically. Pipes connects directly — browse datasets, map columns, join with product analytics or BQML model outputs. BigQuery remains your source of truth for user intelligence.

→
3

Resolve Identity

Pipes stitches profiles across Iterable userIds, email addresses, BigQuery user_ids, and any other identifier. Deterministic matching with configurable limits. No duplicate profiles. No dropped records.

→
4

Activate Back to Iterable

Enriched profiles push back to Iterable with correctly formatted dataFields, properly shaped catalog events, and list membership changes — all via the right Iterable API endpoints. Scheduled or real time. BigQuery syntax respected throughout.

Use case: Onboarding sequence powered by BQML feature adoption scoring

Your product team uses BigQuery ML to score users on feature adoption likelihood — a model that runs weekly and writes output scores back to a BigQuery table. Users who score below a threshold haven't activated a key feature in their first 14 days. You want Iterable to trigger a targeted onboarding sequence for those users.

The problem: the BQML output table has user_id-keyed records with nested feature flag STRUCTs. Iterable identifies those users by email. The nested fields must be flattened before they can become Iterable dataFields.

Without Meiro: You'd write a BigQuery SQL job that unnests STRUCT fields (scanning more bytes per run), resolves email from user_id via a join, formats a flat dataFields payload for each user, calls Iterable's user update API in batches, and separately subscribes qualifying users to the correct Iterable list. Any schema change from the BQML pipeline breaks the job. Every sync run scans the full table.

With Meiro Pipes: The BQML output table is connected directly. A simple BigQuery query using DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY) fetches recent records efficiently. The Pipes transform flattens nested feature flag STRUCTs in the JavaScript sandbox — not in SQL. Pipes resolves user_id to Iterable email using the identity graph. The enriched profile — including BQML scores and activation flags — pushes to Iterable in the correct dataFields format. Qualifying users are subscribed to the right Iterable list automatically.

Time from BQML model output to live Iterable campaign: hours, not sprints.

Pipes speaks Iterable's schema so your BigQuery doesn't have to

Your BigQuery table

SELECT
  user_id,
  email,
  signup_date,
  churn_risk_score,
  account_tier,
  last_active_date,
  updated_at
FROM `project.analytics.user_scores`
WHERE updated_at > DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)

Pipes transform

// Pipes send function (Event Destination)
// Maps each BigQuery row in the batch to an Iterable user-update payload.
async function send(payload, headers) {
  return payload.events.map((row) => ({
    email: row.email,        // Iterable's canonical identifier
    userId: row.user_id,     // keep the BigQuery ID alongside it
    dataFields: {
      churn_risk_score: row.churn_risk_score,
      account_tier: row.account_tier,
      // Iterable expects ISO 8601 date strings
      last_active_date: new Date(row.last_active_date).toISOString(),
      signup_date: new Date(row.signup_date).toISOString()
    }
  }));
}

What Iterable receives

{
  "email": "[email protected]",
  "userId": "usr_8472",
  "dataFields": {
    "churn_risk_score": 0.82,
    "account_tier": "enterprise",
    "last_active_date": "2026-03-15T00:00:00.000Z"
  }
}

No raw API construction. No byte-expensive unnesting in SQL. Pipes handles nested field flattening in the transform layer, identity resolution, schema compliance, and delivery — and adapts when your BigQuery schema changes.

The cost of bolting it together

The standard stack

  • Custom ETL job — query BigQuery with backtick table names, resolve email from `user_id`, batch API calls
  • Unnesting `STRUCT` and `ARRAY` fields in BigQuery SQL — expensive, scans more bytes per sync run
  • Full-table change-detection scans — every sync run bills against your BigQuery quota
  • No identity resolution, silent failures when `user_id` and email diverge
  • Separate list subscribe/unsubscribe logic — not handled by any standard connector
  • Iterable S3 export → BigQuery load for engagement data — multi-step with schema drift
  • Breaks on every BigQuery schema change or BQML output format update

Meiro Pipes

  • Native connectors for Iterable and BigQuery
  • Partition-aware queries — change detection without full table scans
  • Nested field flattening in the transform layer, not in BigQuery SQL
  • Deterministic identity matching across `userId`, email, `user_id`, CRM ID
  • List membership sync as a first-class operation — computes deltas automatically
  • Bidirectional: Iterable engagement events land in BigQuery automatically
  • Correct `dataFields` format, correct types, every sync

A reverse ETL tool syncs rows. It doesn't resolve identity, manage list membership, flatten nested BigQuery fields cheaply, or handle the enrichment loop. Meiro Pipes does all of that — and the pipeline that remains is one you can actually maintain.

One platform. Two problems solved.

For the Lifecycle Marketer

You want to build Iterable campaigns that trigger based on BQML scores and product behavior — data that lives in BigQuery and isn't available in Iterable today.

  • Describe the audience you need — Piper builds it
  • BQML scores and product attributes appear in Iterable without engineering tickets
  • Feature adoption flags, churn risk scores, LTV — all available as Iterable `dataFields`
  • Build campaigns on complete customer context, not just Iterable engagement data
  • Audience sync to Iterable lists happens automatically — no manual CSV exports

For the Data Engineer

You're tired of maintaining the BigQuery → Iterable pipeline. The email resolution logic. The STRUCT unnesting that scans extra bytes. The list delta computation that breaks whenever a new segment is added.

  • Connect BigQuery and Iterable once — Pipes handles schema translation
  • Nested field flattening in the JavaScript sandbox, not in expensive BigQuery SQL
  • Partition-aware change detection — sync only what changed
  • Identity resolution across `userId`, email, `user_id`, CRM ID
  • CI/CD-native config management via mpcli — version-control your pipeline

Under the hood

Iterable Event Destination

Native connector. Pushes user profile updates, custom events, and catalog events (order.purchased, cart.abandon, etc.) to Iterable in the correct API format. Handles dataFields serialization, ISO 8601 date formatting, and list subscribe/unsubscribe calls.

BigQuery Connector

Direct warehouse connection supporting backtick-quoted table references and BigQuery-native SQL syntax including DATE_SUB, TIMESTAMP_DIFF, and CAST. Browse datasets, map identifier columns to Meiro identity types. Model warehouse data as profile attributes, events, or audience definitions.

Identity Resolution

Deterministic stitching across email, userId, user_id, phone, device ID, and CRM IDs. Configurable maxIdentifiers and merge priority to prevent false merges. Resolves the BigQuery user_id → Iterable email gap automatically.

Transform Sandbox

Sandboxed JavaScript functions for schema translation. Flatten nested BigQuery STRUCT and ARRAY fields without expensive SQL unnesting. Map fields, coerce types, construct dataFields dictionaries, format catalog events. 47 allowlisted packages available.

Reverse ETL / Profile Sync (Customer Studio)

Scheduled or real-time Live Profile Sync. Partition-aware change detection to minimize BigQuery bytes scanned. Push enriched profiles, events, and list membership changes to Iterable. Full delivery history and retry logic.

List Membership Sync

Model BigQuery-derived audiences as Iterable list memberships. Pipes computes membership deltas between runs and issues the correct subscribe/unsubscribe API calls. No manual delta logic required.

Why connecting BigQuery and Iterable requires more than a connector

Identity is the first obstacle. Iterable's data model is built around email as the canonical identifier in many deployment configurations. BigQuery stores customers by internal user_id, project-scoped account identifiers, or email depending on the upstream system. When these identifiers diverge, syncs silently fail, create duplicate profiles, or associate data with the wrong user. No standard reverse ETL connector reconciles cross-system identity.

BigQuery's schema model is the second obstacle. BigQuery supports nested STRUCT and ARRAY fields natively — features that data engineers rely on for efficient storage and query performance. Iterable requires flat JSON. Flattening nested fields in BigQuery SQL means unnesting ARRAY fields and expanding STRUCTs at query time, which increases bytes scanned and drives up cost. The right approach is to flatten in the transform layer, keeping the warehouse query simple and cheap.

Per-byte billing is the third obstacle. BigQuery charges per byte scanned. Full-table or wide date-range scans — the approach most reverse ETL tools use for change detection — directly cost money on every sync run. On large tables, the billing impact of inefficient sync queries compounds over time.

List management adds a fourth layer. Iterable is a list-first platform. Getting BigQuery-derived audiences into Iterable as list memberships means computing current state, calculating deltas, and issuing subscribe and unsubscribe API calls separately from the profile update API. Most reverse ETL tools don't provide this.

The BQML enrichment loop closes the gap. Product teams running BigQuery ML models — churn propensity, feature adoption scoring, LTV prediction — produce outputs that should drive Iterable lifecycle sequences. Getting those scores from BigQuery into Iterable, and getting Iterable engagement data back into BigQuery to improve the models, is not a configuration task. It is infrastructure work.

Stop debugging the pipeline. Start activating the data.

Connect BigQuery and Iterable through Meiro Pipes. Identity-resolved. Schema-aware. Bidirectional. Start free.

Talk to a Consultant
© 2026 - Meiro Pte. Ltd. All rights reserved.