Use Case

Deliver Data Into Iceberg Tables with Zero Operational Overhead

Continuously ingest, transform, and write streaming data into managed Apache Iceberg tables — no orchestrators, no compactors, no extra services to manage.

Live Lakehouse Pipeline (continuous)
- Iceberg Tables: 127 (all healthy)
- Rows Written/sec: 285K (steady throughput)
- Compaction Queue: 0 (auto-managed)
- Data Freshness: 45s (end-to-end latency)
See Live Demo

Trusted by 1,000+ Data-Driven Organizations for Real-time Analytics

The Problem

Are Your Iceberg Tables Hours Behind Because ETL Runs Overnight?

Most Iceberg architectures rely on batch Spark jobs to land data. You schedule them every few hours, manage compaction separately, and pray the orchestrator doesn't fail. Your lakehouse data is never truly fresh.

With RisingWave

Write to Iceberg Continuously — Compaction Included

RisingWave writes to Apache Iceberg tables continuously as data arrives. Automated compaction, schema evolution, and exactly-once delivery are built in. Your queries always see fresh data.

Continuous Ingestion
Stream data from Kafka, CDC, and APIs directly into Iceberg tables. No batch jobs, no scheduling.
Managed Compaction
Automatic small-file compaction and data optimization. No manual Spark jobs, no maintenance windows.
Query-Ready in Seconds
Data lands in Iceberg within seconds, not hours. Your Snowflake, Trino, and Spark queries always see fresh results.
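The pipeline described above can be sketched in RisingWave streaming SQL. The schema, topic name, broker address, and Iceberg connector parameters below are illustrative assumptions; exact parameter names depend on your RisingWave version and catalog setup:

```sql
-- Illustrative schema and connection settings, not a drop-in config.
CREATE SOURCE dex_swaps (
  signature   VARCHAR,
  slot        BIGINT,
  program     VARCHAR,
  wallet      VARCHAR,
  token_in    VARCHAR,
  token_out   VARCHAR,
  amount_in   DOUBLE PRECISION,
  amount_out  DOUBLE PRECISION,
  ts          TIMESTAMPTZ
) WITH (
  connector = 'kafka',
  topic = 'dex-swaps',                          -- assumed topic name
  properties.bootstrap.server = 'broker:9092'   -- assumed broker address
) FORMAT PLAIN ENCODE JSON;

-- Continuously deliver the stream into an Iceberg table; RisingWave
-- handles commits and compaction, so no batch job is needed.
CREATE SINK swaps_to_iceberg FROM dex_swaps WITH (
  connector = 'iceberg',
  type = 'append-only',
  warehouse.path = 's3://lake/warehouse',       -- assumed warehouse location
  database.name = 'analytics',
  table.name = 'dex_swaps'
);
```

Once the sink is created there is nothing to schedule: every event that arrives on the topic is committed to the Iceberg table continuously.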
See RisingWave in Action: Streaming Lakehouse
See how RisingWave processes real data in real time — not a recording, not a simulation.

A crypto trading desk monitors Solana DEX activity across Jupiter, Raydium, and Orca. They need to detect whale movements and arbitrage opportunities before the rest of the market reacts — every block (~400ms) matters.

By the time an analyst pulls data from a block explorer, the arbitrage window has closed and the whale has already moved.
dex_swaps (LIVE)

| signature | slot | program | wallet | token_in | token_out | amount_in | amount_out | ts |
| 5vGn...kQ3m | 250847291 | JUP6LkbZbjS1jKKwapdHNy74zcZ3tLUZoi5QNyVTaV4 | 7xKXa9m2pVEb3R8gT4wNdJfLqYsPnpR4n | SOL | BONK | 14.2 | 1142857 | 2024-03-15T14:32:01.412Z |
| 3kRp...wN8j | 250847293 | 675kPX9MHTjS2zt1qfr1NYHuzeLXfQM9H24wFSUt1Mp8 | 7xKXa9m2pVEb3R8gT4wNdJfLqYsPnpR4n | USDC | BONK | 8500 | 680000 | 2024-03-15T14:32:02.204Z |
| 9mTx...bL5e | 250847295 | whirLbMiicVdio4qvUfM5KAg6Ct8VwpYzGff3uctyCc | 7xKXa9m2pVEb3R8gT4wNdJfLqYsPnpR4n | SOL | BONK | 6.8 | 537142 | 2024-03-15T14:32:03.891Z |
| 2hYd...pK7v | 250847294 | JUP6LkbZbjS1jKKwapdHNy74zcZ3tLUZoi5QNyVTaV4 | Bq4rJ7pVEcSan3k9dTx5NLzRmA8HYfJp2 | SOL | JUP | 245.5 | 18412.5 | 2024-03-15T14:32:02.687Z |
| 7fWm...cR2a | 250847296 | 675kPX9MHTjS2zt1qfr1NYHuzeLXfQM9H24wFSUt1Mp8 | DxP8rNfvYk4G2mBz5QLr9TJa3nYhR6wKm | USDC | WIF | 42000 | 14000 | 2024-03-15T14:32:04.118Z |
| 4nBx...hT9q | 250847297 | JUP6LkbZbjS1jKKwapdHNy74zcZ3tLUZoi5QNyVTaV4 | DxP8rNfvYk4G2mBz5QLr9TJa3nYhR6wKm | WIF | SOL | 14000 | 251.3 | 2024-03-15T14:32:04.802Z |
Streaming SQL (Running)
Track whale accumulation patterns
CREATE MATERIALIZED VIEW whale_accumulation AS
SELECT
  wallet,
  token_out AS token,
  SUM(amount_out) AS total_amount,
  COUNT(*) AS swap_count,
  COUNT(DISTINCT program) AS dex_count,
  window_start
FROM TUMBLE(dex_swaps, ts, INTERVAL '30 SECONDS')
GROUP BY wallet, token_out, window_start
HAVING
  SUM(amount_in) > 10000
  OR COUNT(*) > 3;
Detect cross-DEX arbitrage
whale_accumulation (auto-updating)

| wallet | token | total_amount | swap_count | dex_count | window_start |
| 7xKX...pR4n | BONK | 2613967 | 4 | 3 | 2024-03-15T14:32:00.000Z |
| DxP8...6wKm | WIF | 14000 | 1 | 1 | 2024-03-15T14:32:00.000Z |
| Bq4r...Jp2 | JUP | 18412.5 | 1 | 1 | 2024-03-15T14:32:00.000Z |
RisingWave detects wallet 7xKX...pR4n accumulating 2.3M BONK across 3 DEXes in 12 seconds. The desk repositions before the price impact fully propagates.
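The demo's second tab, "Detect cross-DEX arbitrage", is not shown above; a plausible sketch against the same dex_swaps stream follows. The 5-second window, the 1% spread threshold, and the view names are assumptions for illustration:

```sql
-- Per-DEX implied price for each token pair over a short window.
CREATE MATERIALIZED VIEW dex_prices AS
SELECT
  program,
  token_in,
  token_out,
  AVG(amount_out / amount_in) AS avg_price,
  window_start
FROM TUMBLE(dex_swaps, ts, INTERVAL '5 SECONDS')
GROUP BY program, token_in, token_out, window_start;

-- Flag pairs whose implied price diverges by more than 1% between
-- two DEXes in the same window.
CREATE MATERIALIZED VIEW cross_dex_arbitrage AS
SELECT
  a.token_in,
  a.token_out,
  a.program AS cheap_dex,
  b.program AS rich_dex,
  (b.avg_price - a.avg_price) / a.avg_price AS spread,
  a.window_start
FROM dex_prices AS a
JOIN dex_prices AS b
  ON a.token_in = b.token_in
 AND a.token_out = b.token_out
 AND a.window_start = b.window_start
 AND a.program <> b.program
WHERE (b.avg_price - a.avg_price) / a.avg_price > 0.01;
```

Both views update incrementally as swaps arrive, so the spread check runs continuously rather than on a schedule.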
Why RisingWave

Build a Lakehouse That's Always Fresh

Replace batch ETL with continuous streaming into Apache Iceberg — and eliminate the infrastructure that comes with it.

Eliminate Batch ETL Complexity
Stop managing Airflow DAGs, Spark clusters, and cron schedules just to land data in your lakehouse. RisingWave replaces all of it with a single streaming SQL statement.
Seconds-Fresh Data (Not Hours)
Data arrives in your Iceberg tables within seconds of being produced. Analysts and ML models work with the freshest data possible, not yesterday's snapshot.
Zero Infrastructure to Manage
No separate compaction service, no orchestrator, no manual maintenance. RisingWave handles ingestion, transformation, compaction, and delivery as one managed system.
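As a sketch of what "one managed system" looks like in practice, the whale_accumulation view from the demo could be delivered straight into Iceberg with a single statement. The warehouse path, database, and table names below are illustrative, and connector parameters vary by catalog configuration:

```sql
-- Assumed names and warehouse path; upsert keyed on the view's
-- GROUP BY columns so the Iceberg table mirrors the live view.
CREATE SINK whale_accumulation_to_iceberg FROM whale_accumulation WITH (
  connector = 'iceberg',
  type = 'upsert',
  primary_key = 'wallet,token,window_start',
  warehouse.path = 's3://lake/warehouse',
  database.name = 'analytics',
  table.name = 'whale_accumulation'
);
```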

Ready to Build a Streaming Lakehouse?

Best-in-Class Event Streaming
for Agents, Apps, and Analytics
GitHub · X · LinkedIn · Slack · YouTube
Sign up for our newsletter to stay updated.