Use Case
Continuously ingest, transform, and write streaming data into managed Apache Iceberg tables — no orchestrators, no compactors, no extra services to manage.
The Problem
Most Iceberg architectures rely on batch Spark jobs to land data. You schedule them every few hours, manage compaction separately, and pray the orchestrator doesn't fail. Your lakehouse data is never truly fresh.
With RisingWave
RisingWave writes to Apache Iceberg tables continuously as data arrives. Automated compaction, schema evolution, and exactly-once delivery are built in. Your queries always see fresh data.
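As an illustrative sketch (table and column names are invented, and exact syntax depends on your RisingWave version and catalog configuration), a table can be declared on the Iceberg table engine so RisingWave itself manages the underlying Iceberg files:

```sql
-- Illustrative only: assumes an Iceberg connection (catalog, warehouse
-- path, credentials) has already been configured for the cluster.
CREATE TABLE events_iceberg (
    event_id VARCHAR,
    payload  VARCHAR,
    ts       TIMESTAMPTZ
) ENGINE = iceberg;  -- stored as an Apache Iceberg table, compacted by RisingWave
```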
A crypto trading desk monitors Solana DEX activity across Jupiter, Raydium, and Orca. They need to detect whale movements and arbitrage opportunities before the rest of the market reacts; every Solana block (~400 ms) matters. A sample of the raw swap events:
| signature | slot | program | wallet | token_in | token_out | amount_in | amount_out | ts |
|---|---|---|---|---|---|---|---|---|
| 5vGn...kQ3m | 250847291 | JUP6LkbZbjS1jKKwapdHNy74zcZ3tLUZoi5QNyVTaV4 | 7xKXa9m2pVEb3R8gT4wNdJfLqYsPnpR4n | SOL | BONK | 14.2 | 1142857 | 2024-03-15T14:32:01.412Z |
| 3kRp...wN8j | 250847293 | 675kPX9MHTjS2zt1qfr1NYHuzeLXfQM9H24wFSUt1Mp8 | 7xKXa9m2pVEb3R8gT4wNdJfLqYsPnpR4n | USDC | BONK | 8500 | 680000 | 2024-03-15T14:32:02.204Z |
| 9mTx...bL5e | 250847295 | whirLbMiicVdio4qvUfM5KAg6Ct8VwpYzGff3uctyCc | 7xKXa9m2pVEb3R8gT4wNdJfLqYsPnpR4n | SOL | BONK | 6.8 | 537142 | 2024-03-15T14:32:03.891Z |
| 2hYd...pK7v | 250847294 | JUP6LkbZbjS1jKKwapdHNy74zcZ3tLUZoi5QNyVTaV4 | Bq4rJ7pVEcSan3k9dTx5NLzRmA8HYfJp2 | SOL | JUP | 245.5 | 18412.5 | 2024-03-15T14:32:02.687Z |
| 7fWm...cR2a | 250847296 | 675kPX9MHTjS2zt1qfr1NYHuzeLXfQM9H24wFSUt1Mp8 | DxP8rNfvYk4G2mBz5QLr9TJa3nYhR6wKm | USDC | WIF | 42000 | 14000 | 2024-03-15T14:32:04.118Z |
| 4nBx...hT9q | 250847297 | JUP6LkbZbjS1jKKwapdHNy74zcZ3tLUZoi5QNyVTaV4 | DxP8rNfvYk4G2mBz5QLr9TJa3nYhR6wKm | WIF | SOL | 14000 | 251.3 | 2024-03-15T14:32:04.802Z |
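One way to model this feed (the Kafka topic, broker address, and column types here are placeholders, not part of the original example) is a RisingWave source over the swap stream:

```sql
-- Hypothetical ingestion of the Solana swap feed shown above.
CREATE SOURCE dex_swaps (
    signature  VARCHAR,
    slot       BIGINT,
    program    VARCHAR,
    wallet     VARCHAR,
    token_in   VARCHAR,
    token_out  VARCHAR,
    amount_in  DOUBLE PRECISION,
    amount_out DOUBLE PRECISION,
    ts         TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'solana.dex.swaps',              -- placeholder topic name
    properties.bootstrap.server = 'broker:9092'
) FORMAT PLAIN ENCODE JSON;
```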
```sql
-- Flag wallets accumulating a token within 30-second tumbling windows:
-- either a large notional inflow or rapid swaps across multiple DEXes.
CREATE MATERIALIZED VIEW whale_accumulation AS
SELECT
    wallet,
    token_out AS token,
    SUM(amount_out) AS total_amount,
    COUNT(*) AS swap_count,
    COUNT(DISTINCT program) AS dex_count,
    window_start
FROM TUMBLE(dex_swaps, ts, INTERVAL '30 SECONDS')
GROUP BY wallet, token_out, window_start
HAVING
    SUM(amount_in) > 10000
    OR COUNT(*) > 3;
```

The view emits one row per wallet, token, and window, updated as swaps arrive:

| wallet | token | total_amount | swap_count | dex_count | window_start |
|---|---|---|---|---|---|
| 7xKX...pR4n | BONK | 2613967 | 4 | 3 | 2024-03-15T14:32:00.000Z |
| DxP8...6wKm | WIF | 14000 | 1 | 1 | 2024-03-15T14:32:00.000Z |
| Bq4r...fJp2 | JUP | 18412.5 | 1 | 1 | 2024-03-15T14:32:00.000Z |
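To land these alerts in the lakehouse, the view can be sunk continuously into Iceberg. A minimal sketch, assuming an S3-backed storage catalog; the warehouse path, database, and table names are placeholders, and credentials are omitted:

```sql
-- Hypothetical Iceberg sink: each committed checkpoint becomes a new
-- Iceberg snapshot, so downstream query engines always see fresh data.
CREATE SINK whale_accumulation_iceberg FROM whale_accumulation
WITH (
    connector = 'iceberg',
    type = 'upsert',
    primary_key = 'wallet,token,window_start',
    warehouse.path = 's3://lakehouse/warehouse',  -- placeholder path
    catalog.type = 'storage',
    database.name = 'trading',
    table.name = 'whale_accumulation'
);
```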
Replace batch ETL with continuous streaming into Apache Iceberg — and eliminate the infrastructure that comes with it.