
Apache Iceberg vs. Delta Lake – benchmarked

1 point
1 comment

Mood

thoughtful

Sentiment

positive

Category

tech

Key topics

Apache Iceberg

Delta Lake

data engineering

benchmarking

Debate intensity

10/100

The author shares their experience benchmarking Apache Iceberg and Databricks Delta Lake, highlighting the strengths of both formats and sparking a discussion about the choice between the two.

Snapshot generated from the HN discussion

Discussion Activity

Light discussion

First comment

N/A

Peak period

1 comment in Hour 1

Avg / period

1

Comment distribution: 1 data point

Based on 1 loaded comment

Key moments

  1. Story posted: 11/19/2025, 7:22:17 AM (2h ago)
  2. First comment: 11/19/2025, 7:22:17 AM (0s after posting)
  3. Peak activity: 1 comment in Hour 1, the hottest window of the conversation
  4. Latest activity: 11/19/2025, 7:22:17 AM (2h ago)


Discussion (1 comment)
Cappybara12
2h ago
Every data engineer (or someone higher up the hierarchy) sooner or later comes to the choice between Apache Iceberg and Databricks Delta Lake, so we went ahead and benchmarked both systems. Just sharing our experience here.

TL;DR Both formats have their perks: Apache Iceberg offers an open, flexible architecture with surprisingly fast query performance in some cases, while Databricks Delta Lake provides a tightly managed, all-in-one experience where most of the operational overhead is handled for you.

Setup & Methodology

We used the TPC-H 1 TB dataset (about 8.66 billion rows across 8 tables) to compare the two stacks end-to-end: ingestion and analytics.

For the Iceberg setup:

We ingested data from PostgreSQL into Apache Iceberg tables on S3, orchestrated through OLake’s high-throughput CDC pipeline, with AWS Glue as the catalog and EMR Spark as the query engine. Ingestion used 32 parallel threads with chunked, resumable snapshots to keep throughput high. On the query side, we tuned Spark similarly to Databricks: we raised shuffle partitions to 128 and disabled vectorised reads due to Arrow buffer issues (a rough sketch of that session config is below).

For the Databricks Delta Lake setup: data was loaded via the JDBC connector from PostgreSQL into Delta tables in 200k-row batches (a sketch of the load is below). Databricks’ managed runtime automatically applied file compaction and optimized writes. Queries were run using the same 22 TPC-H analytics queries for a fair comparison.

This setup made sure we were comparing both ingestion performance and analytical query performance under realistic, production-style workloads.

What We Found

Ingestion: OLake ingesting into Iceberg was about 2x faster, 12 hours vs. 25.7 hours on Databricks, thanks to parallel chunked ingestion.

Query performance: Iceberg ran the full 22-query TPC-H suite 18% faster than Databricks.

Cost: Infra cost was 61% lower on Iceberg + OLake (around $21.95 vs $50.71 for the same run).

Here are the overall results and our take on this:

Databricks still wins on ease-of-use: you just click and go. Cluster setup, Spark tuning, and governance are all handled automatically. That’s great for teams that want a managed ecosystem and don’t want to deal with infrastructure.

But if your team is comfortable managing a Glue/AWS stack and handling a bit more complexity, Iceberg + OLake’s open architecture wins on the pure numbers: faster at scale, lower cost, and full engine flexibility (Spark, Trino, Flink) without vendor lock-in; see the sketch below for what that flexibility looks like in practice.

Read our article for more detail on the steps we followed, the overall benchmarks, and the numbers behind them. Curious what you all think. Of course these are just numbers; a lot also depends on your own experience and how you adopted these tools in your org.

ID: 45976740 | Type: story | Last synced: 11/19/2025, 7:24:17 AM
