CUSTOMER DATA INFRASTRUCTURE
Braze CDI syncs rows. Meiro Pipes resolves identity, transforms events into Braze's schema, and keeps profiles enriched in both directions — without Hightouch, Census, or a custom pipeline you'll regret building.
Free trial · No credit card · Live in minutes
The core problem is identity. Snowflake stores records keyed on whatever upstream systems use — Salesforce IDs, internal user IDs, emails. Braze expects an external_id. When these don't align, CDI syncs silently create duplicate profiles or drop records. No standard Snowflake connector reconciles cross-system identity — IDs pass through and the system assumes they match.
Braze adds two more layers. Its event model is strict: every custom event needs a name, an ISO 8601 timestamp, and a typed JSON properties object under 100 KB — one event per row, no reserved key names. CDI requires a PAYLOAD column containing a handcrafted JSON string, which means writing change-detection SQL against Snowflake Streams, handling insert/update/delete separately via UNION ALL, and rebuilding that payload every time a source schema changes. This is custom ETL, not configuration.
Every attribute sync costs a Braze data point; events count against your contract. Teams overspend because attribute-versus-event tradeoffs are made in SQL rather than at the data model layer. Braze CDI is also one-directional — closing the loop from Braze behavioral data back into Snowflake requires a separate vendor (Hightouch, Census) or additional custom plumbing.
Problem
Snowflake has email. Braze has external_id. Your CRM has Salesforce account ID. No tool in the standard pipeline reconciles them. Duplicate profiles, dropped records, broken segments.
Meiro solves it
Pipes resolves identity across every identifier type — email, user_id, device_id, phone, CRM ID — using deterministic matching with configurable merge limits. One unified profile, regardless of which system the identifier came from.
Problem
Braze requires events in a specific JSON shape — name, ISO 8601 time, properties under 100 KB, one event per row, no reserved key names, consistent data types across syncs. Your Snowflake tables don't look like that.
Meiro solves it
Pipes transform functions convert your Snowflake data into Braze-compatible event payloads automatically. Sandboxed JavaScript transforms handle schema mapping, field renaming, type coercion, and property formatting — without writing raw SQL in Snowflake.
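A minimal sketch of what such a transform looks like. This is an illustration, not Meiro's actual API: the function name and field names are hypothetical, and Snowflake's habit of uppercasing unquoted column names is the kind of mapping it handles.

```javascript
// Hypothetical sketch of a schema-mapping transform: rename Snowflake
// columns, coerce types, and format dates as ISO 8601. Names are
// illustrative, not Meiro's actual API.
function toBrazeEvent(row) {
  return {
    external_id: String(row.USER_ID),        // Snowflake uppercases unquoted columns
    name: "purchase_completed",
    time: new Date(row.PURCHASED_AT).toISOString(), // ISO 8601, as Braze requires
    properties: {
      amount: Number(row.AMOUNT),            // coerce NUMBER-as-string to float
      tier: row.ACCOUNT_TIER || "standard",  // default missing values
    },
  };
}
```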
Problem
Braze CDI needs a PAYLOAD column with a precise JSON string. Building it means writing SQL to detect changes, construct minimal payloads, and handle insert/update/delete separately via Snowflake Streams and UNION ALL queries. It's custom ETL that breaks on every schema change.
Meiro solves it
Pipes constructs the payload from your data model — you define what to sync, Pipes handles the serialization, change detection, and delivery. No raw SQL payload construction.
Problem
Every attribute sync costs a data point. Events count against your contract. Event properties don't — but they're invisible on the user profile and limited to 20 segmentable properties. Teams overspend because they can't optimize what to send as attributes vs. events vs. properties.
Meiro solves it
Pipes lets you model your data before it reaches Braze. Decide what becomes an attribute (persistent, segmentable, costs data points) vs. an event (timestamped, trigger-able) vs. a property (free, contextual). Optimize your data point spend at the infrastructure layer, not in Snowflake SQL.
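To make the tradeoff concrete, here is the same fact ("user upgraded to enterprise") shaped three ways. The object shapes follow Braze's /users/track request body; the cost notes in the comments restate the pricing tradeoff above, and the identifier values are illustrative.

```javascript
// 1. Attribute: persistent and segmentable, consumes a data point per sync.
const asAttribute = {
  external_id: "usr_8472",
  account_tier: "enterprise",
};

// 2. Event: timestamped and Canvas-triggerable, counts against the event contract.
const asEvent = {
  external_id: "usr_8472",
  name: "tier_changed",
  time: "2026-03-15T09:30:00Z",
};

// 3. Event properties: free and contextual, but invisible on the profile
//    and capped at 20 segmentable properties per event.
const asEventWithProperties = {
  external_id: "usr_8472",
  name: "tier_changed",
  time: "2026-03-15T09:30:00Z",
  properties: { new_tier: "enterprise", previous_tier: "growth" },
};
```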
Problem
Braze CDI is one-directional per sync. You can pull data in, but you can't run a Snowflake → Braze → Snowflake → Braze enrichment loop without adding Hightouch or Census. Each adds cost, config, and a failure surface.
Meiro solves it
Pipes collects from both directions. Events from Braze flow into Snowflake. Snowflake data enriches profiles. Enriched profiles flow back to Braze via scheduled or real-time sync. One platform, bidirectional, identity-resolved.
Braze engagement data — opens, clicks, conversions, custom events — flows into Pipes via Currents or webhook. Events land without replacing your Braze SDK.
Events land in Snowflake automatically. Pipes connects directly — browse tables, map columns, join with CRM data, billing records, or any warehouse source. Snowflake stays your source of truth.
Pipes stitches profiles across Braze external_ids, Snowflake user_ids, CRM emails, device fingerprints — any identifier. Deterministic matching with configurable limits. No duplicate profiles. No dropped records.
Enriched profiles push back to Braze in the exact schema Braze expects — attributes as JSON payloads, events with proper formatting, properties correctly typed. Scheduled or real-time. No Hightouch. No Census.
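The stitching step above can be sketched as a deterministic exact-match merge. This is a hypothetical illustration, not Meiro's implementation: fragments merge only when they share an identifier value, and `maxIdentifiers` mirrors the configurable merge limit mentioned above.

```javascript
// Hypothetical sketch of deterministic identity stitching. Two profile
// fragments merge only on an exact shared identifier; a merge cap guards
// against runaway "mega-profiles". Not Meiro's actual implementation.
function stitch(profiles, { maxIdentifiers = 10 } = {}) {
  const byIdentifier = new Map(); // "type:value" -> unified profile
  const unified = [];
  for (const p of profiles) {
    // Deterministic match: look for an existing profile sharing any identifier.
    let target = null;
    for (const [type, value] of Object.entries(p.ids)) {
      target = byIdentifier.get(`${type}:${value}`) || target;
    }
    if (!target) {
      target = { ids: {} };
      unified.push(target);
    }
    const merged = { ...target.ids, ...p.ids };
    // Refuse merges that would exceed the identifier cap (false-merge guard).
    if (Object.keys(merged).length > maxIdentifiers) continue;
    target.ids = merged;
    for (const [type, value] of Object.entries(merged)) {
      byIdentifier.set(`${type}:${value}`, target);
    }
  }
  return unified;
}
```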
Your data science team builds a churn propensity model in Snowflake. It combines product usage data (from Braze events landed in the warehouse) with commercial data (contract value, support ticket volume, NPS scores from Salesforce).
The model produces a churn_risk_score for every customer.
Without Meiro: Getting that score back into Braze means writing a Snowflake view that formats the score as a JSON payload in Braze CDI's exact shape, setting up a CDI sync, hoping identity matches, and praying the data type doesn't change between runs. Or paying Hightouch $10K+/yr to do it.
With Meiro Pipes: The churn_risk_score is modeled as an attribute in Meiro. Pipes resolves the identity between the Snowflake user_id and the Braze external_id. The enriched profile — including the score — pushes to Braze as a custom attribute in the correct format. Your lifecycle team builds a Canvas that triggers a retention campaign for anyone with churn_risk > 0.7. No SQL. No CDI payload debugging. No middleware vendor.
Time from model output to live Braze campaign: hours, not sprints.
Your Snowflake table
SELECT
  user_id,
  email,
  churn_score,
  last_purchase_date,
  account_tier,
  updated_at
FROM analytics.customer_scores
WHERE updated_at > CURRENT_DATE - 1
Pipes transform
// Pipes send function (Event Destination)
async function send(payload, headers) {
  return payload.events.map(row => ({
    external_id: row.user_id,
    attributes: {
      churn_risk_score: row.churn_score,
      account_tier: row.account_tier,
      last_purchase_date: new Date(row.last_purchase_date)
        .toISOString()
    }
  }));
}
What Braze receives
{
  "external_id": "usr_8472",
  "attributes": {
    "churn_risk_score": 0.82,
    "account_tier": "enterprise",
    "last_purchase_date": "2026-03-15T00:00:00.000Z"
  }
}
No `PAYLOAD` column. No UNION ALL queries. No change detection SQL. Pipes handles serialization, schema compliance, and delivery — and adapts when your Snowflake schema changes.
The standard stack
Meiro Pipes
Braze CDI is a data pipe. Hightouch is a sync tool. Neither resolves identity or transforms schema. Meiro Pipes does all three — and the pipeline that remains is one you can actually maintain.
You want to build a Braze Canvas that targets high-value customers at risk of churning — using data from Snowflake you can't currently access.
You're tired of maintaining the Snowflake → Braze pipeline. The JSON payload SQL. The Streams-based change detection. The CDI config that breaks when marketing adds a new field.
Native connector. Pushes attributes, events, and purchases to Braze in the exact /users/track API format. Handles JSON serialization, ISO 8601 date formatting, and property type validation.
Direct warehouse connection. Browse schemas, tables, columns. Map identifier columns to Meiro identity types. Model warehouse data as attributes or audiences.
Deterministic stitching across email, external_id, user_id, device_id, phone — any identifier. Configurable maxIdentifiers and priority to prevent false merges. Cross-system, not per-tool.
Sandboxed JavaScript functions for schema translation. Convert Snowflake rows to Braze payloads. Filter, enrich, rename fields. No raw SQL in Snowflake. 47 allowlisted packages available.
Scheduled or real-time Live Profile Sync. Push enriched profiles and segments to Braze or any destination. On-demand exports for backfills. Full delivery history and retry.
Model data before it reaches Braze. Decide at the infrastructure layer what becomes an attribute (costs data points), event (costs events), or event property (free). Stop overspending on Braze's pricing model.
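For reference, a representative /users/track request body that the connector's output would conform to. The field values are illustrative; Braze's documented limits cap each of the three arrays at 75 objects per request.

```javascript
// Illustrative /users/track request body: attribute updates, custom
// events, and purchases travel in three parallel arrays.
const trackRequest = {
  attributes: [
    { external_id: "usr_8472", churn_risk_score: 0.82, account_tier: "enterprise" },
  ],
  events: [
    {
      external_id: "usr_8472",
      name: "churn_score_updated",
      time: "2026-03-15T00:00:00Z", // ISO 8601 required
      properties: { model_version: "v3" },
    },
  ],
  purchases: [],
};
```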
In practice, every team that runs this pipeline hits the same problems. Braze CDI requires data in a precise format — a PAYLOAD column containing a JSON string that matches the /users/track API shape. Building and maintaining that JSON in Snowflake SQL means writing change detection queries, handling insert/update/delete separately, and constructing minimal payloads that include only modified fields. This isn't configuration. It's custom ETL work that breaks every time a source schema changes or a new attribute is added.
Identity is the harder problem. Snowflake stores customer records with whatever identifiers your upstream systems provide — CRM IDs, internal user IDs, emails. Braze identifies users by external_id, with optional email or phone. When these don't match — and they frequently don't — syncs create duplicate profiles, drop records, or silently associate data with the wrong user. Neither Braze CDI nor Hightouch resolves cross-system identity. They pass identifiers through and assume they match.
Braze's event model adds another layer of complexity. Custom events require specific formatting: ISO 8601 timestamps, one event per row, properties as typed JSON under 100 KB, no reserved key names. Inconsistent types between syncs cause silent rejection. The 20-property segmentation limit means teams have to carefully decide what becomes a segmentable property vs. a regular event property — a decision that's hard to change after the fact.
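The checks a sync layer has to run before every send can be sketched roughly as follows. The reserved-name list here is illustrative (consult Braze's documentation for the actual list), and the timestamp check is a loose parseability test rather than strict ISO 8601 validation.

```javascript
// Hedged sketch of pre-send validation for a Braze custom event.
// RESERVED is illustrative, not Braze's actual reserved-key list.
const RESERVED = new Set(["time", "name", "external_id"]);

function validateEvent(event) {
  const errors = [];
  if (typeof event.name !== "string" || !event.name) errors.push("missing name");
  // Loose check: Date.parse accepts ISO 8601 (and more); NaN means unparseable.
  if (isNaN(Date.parse(event.time))) errors.push("time is not a valid timestamp");
  const props = event.properties || {};
  for (const key of Object.keys(props)) {
    if (RESERVED.has(key)) errors.push(`reserved property key: ${key}`);
  }
  // The 100 KB cap applies to the serialized properties object.
  if (new TextEncoder().encode(JSON.stringify(props)).length > 100 * 1024) {
    errors.push("properties exceed 100 KB");
  }
  return errors;
}
```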
The data point pricing model makes this worse. Every attribute sync costs a data point. Sending the wrong data shape — attributes instead of events, or syncing unchanged values — directly increases Braze costs with no additional value.
Connect Snowflake and Braze through Meiro Pipes. Identity-resolved. Schema-aware. Bidirectional. Start free.