IBM to Acquire Confluent
Key topics
The tech world is abuzz with IBM's acquisition of Confluent, sparking a heated debate about the fate of Confluent employees and the future of Kafka. While some commenters predict a lucrative short-term payday followed by layoffs, others counter that essential employees may receive retention bonuses worth 100-300% of their base salary. As the discussion unfolds, it becomes clear that the outcome depends on various factors, including the employees' roles and the motivations of IBM's divisional leaders. Meanwhile, the acquisition has also triggered a chorus of "enshittification" warnings, with some commenters touting Kafka alternatives and sparking a side debate about the merits of Kafka itself.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 48m after posting
Peak period: 148 comments (Day 1)
Avg / period: 40
Based on 160 loaded comments
Key moments
- Story posted: Dec 8, 2025 at 8:43 AM EST (26 days ago)
- First comment: Dec 8, 2025 at 9:32 AM EST (48m after posting)
- Peak activity: 148 comments in Day 1, the hottest window of the conversation
- Latest activity: Dec 16, 2025 at 8:50 PM EST (17 days ago)
Some redundant departments (HR, finance, accounting and the like) will be downsized after the acquisition.
Engineering and product will mostly be unaffected in the short term, but in a year or two the IBM culture will start to seep in, and that would be a good time for tenured employees to start planning their exits. That's also when lock-up agreements will expire and the existing leadership of Confluent will depart and be replaced by IBM execs.
IBM will likely give Confluent employees a large pay package, and then let them go after the merger.
edit: btw, it's typical for any acquisition/merger
IBM is a really big and diverse company, in a way fundamentally different from most other big tech. In a sense, it is completely incoherent to refer to them as a singular entity.
My opinions are my own. I worked at IBM like a decade ago in a role where I could see the radically different motivations of divisions.
I'll start.
https://github.com/tansu-io/tansu
kevstev wrote just above about Kafka being written to run on spinning disks (HDDs), while Redpanda was written to take advantage of the latest hardware (local NVMe SSDs). He has some great insights.
As well, Apache Kafka was written in Java, back in an era when you weren't quite sure what operating system you might be running on. For example, when Azure first launched they had a Windows NT-based system called Windows Azure. Most everyone else had already decided to roll Linux. Microsoft refused to budge on Linux until 2014, and didn't release its own Azure Linux until 2020.
Once everyone decided to roll Linux, the "write once, run anywhere" promise of Java was obviated. But because you were still locked into a Java Virtual Machine (JVM), your application couldn't optimize itself for the underlying hardware and operating system you were running on.
Redpanda, for example, is written in C++ on top of the Seastar framework (seastar.io). The same framework at the heart of ScyllaDB. This engine is a thread-per-core shared-nothing architecture that allows Redpanda to optimize performance for hardware utilization in ways that a Java app can only dream of. CPU utilization, memory usage, IO throughput. It's all just better performance on Redpanda.
It means that you're actually getting better utility out of the servers you deploy. Less wasted / fallow CPU cycles — so better price-performance. Faster writes. Lower p99 latencies. It's just... better.
Now, I am biased. I work at Redpanda now. But I've been a big fan of Kafka since 2015. I am still bullish on data streaming. I just think that Apache Kafka, as a Java-based platform, needs some serious rearchitecture.
Even Confluent doesn't use vanilla Kafka. They rewrote their own engine, Kora. They claim it is 10x faster. Or 30x faster. Depending on what you're measuring.
1. https://www.confluent.io/confluent-cloud/kora/
2. https://www.confluent.io/blog/10x-apache-kafka-elasticity/
Kind of like how people use Docker for everything, when what you really should be doing is learning how to package software.
Agree on the Kafka thing though. I've seen so many devs trip over Kafka topics, partitions and offsets when their throughput is low enough that RabbitMQ would do fine.
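For readers tripping over those terms, here is roughly how topics, partitions, and offsets show up in a minimal consumer. This is only a sketch assuming the kafka-python client; the topic name, broker address, and group id are made up, not anything from the thread.

```python
# Minimal consumer sketch (assumes the kafka-python package; topic, broker,
# and group id below are placeholders).
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                            # topic: a named stream of records
    bootstrap_servers="localhost:9092",  # broker to connect to
    group_id="billing",                  # consumer group sharing the partitions
    auto_offset_reset="earliest",        # where to start if no committed offset
)

for msg in consumer:
    # Each record carries the partition it landed on and its offset within
    # that partition; consumers track offsets to know what they've processed.
    print(msg.topic, msg.partition, msg.offset, msg.value)
```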
The people distributing software should shut up about how the rest of the system it runs in is configured. (But not you, your job is packaging full systems.)
That said, it seems to me that this is becoming less of a problem.
So you are stuck with some really terrible tradeoffs: go with Confluent Cloud, pay a fortune, and still likely have some issues to deal with. Or go with Confluent Platform, still have to pay people to operate it, while Confluent the company focuses most of its attention on Cloud and still charges you a fortune. Or go completely open source and forgo anything Confluent, and risk being really up the river when something inevitably breaks, or learn the hard way that librdkafka has poor support for a lot of the shiny features discussed in the release notes.
Redpanda has surpassed them from a technical quality perspective, but Kafka has them beat on the ecosystem and the sheer inertia of moving from one platform to another. Kafka, for example, was built in a time of spinning-rust hard disks and expects to be run on general-purpose compute nodes, where Redpanda will actually look at your hardware and optimize the number of threads it spawns for the box it is on, assuming it is going to be the only real app running there, which is true for anything but a toy deployment.
This is my experience from running platform teams and being head of messaging at multiple companies.
Not a drop in replacement, but worth looking at.
https://www.redpanda.com/compare/redpanda-vs-kafka
Sigh.
I used ZMQ to connect nodes: the worker nodes would connect to an indexer/coordinator node that effectively did a `SELECT FROM ORDER BY ASC`.
It's easier than you may think and the bits here ended up with probably < 1000 SLOC all told.
Dead simple design, extremely robust, very high throughput. ZMQ made it easy to connect the remote threads to the centralized coordinator. It was effectively "self-balancing" because the workers would only re-queue their thread once it finished work. Very easy to manage, but it did not have hot failovers since we kept the materialized, "2D" work queue in memory. Though very rarely did we have issues with this.
Generally I say, "Message queues are for tasks, Kafka is for data." But in the latter case, if your data volume is not huge, a message queue for async ETL will do just fine and give better guarantees as far as FIFO goes.
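A rough sketch of that coordinator/worker shape, assuming pyzmq; the port, message format, and work items are illustrative stand-ins rather than the original system.

```python
# Coordinator hands out one work item per request; workers only ask when idle,
# which is what makes the scheme self-balancing. Sketch only: assumes pyzmq,
# and the endpoint/payloads are made up.
import zmq

def coordinator(endpoint="tcp://*:5555", items=range(100)):
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)            # each worker connects with a REQ socket
    sock.bind(endpoint)
    queue = list(items)                   # stand-in for the in-memory "2D" work queue
    while queue:
        sock.recv()                       # a worker signals it is free
        sock.send_json({"item": queue.pop(0)})
    sock.recv()
    sock.send_json({"item": None})        # shutdown signal (draining all workers omitted)

def worker(endpoint="tcp://localhost:5555"):
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect(endpoint)
    while True:
        sock.send(b"ready")               # ask for work only once the previous item is done
        msg = sock.recv_json()
        if msg["item"] is None:
            break
        print("processing", msg["item"])  # the actual per-item work would go here
```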
In essence, Kafka is a very specialized version of much more general-purpose message queues, which should be your default starting point. It's similar to replacing a SQL RDBMS with some kind of special NoSQL system - if you need it, okay, but otherwise the general-purpose default is usually the better option.
https://docs.streamnative.io/cloud/build/kafka-clients/kafka...
The second you approach any kind of scale, this falls apart and/or you end up with a more expensive and worse version of Kafka.
I was surprised how far SQLite goes with some sharding on modern SSDs for those in-between-scale services/SaaS.
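As a rough illustration of that kind of in-between-scale setup, here is a hash-sharded SQLite layout using Python's built-in sqlite3; the shard count, schema, and file naming are assumptions for the example, not anything the commenter described.

```python
# Route each key to one of N SQLite files via a cheap hash; on a modern SSD
# this keeps any single database small and limits write contention per file.
import sqlite3
import zlib

SHARDS = 8  # assumed shard count

def shard_path(key: str) -> str:
    return f"events_{zlib.crc32(key.encode()) % SHARDS}.db"

def init_shards():
    for i in range(SHARDS):
        con = sqlite3.connect(f"events_{i}.db")
        con.execute(
            "CREATE TABLE IF NOT EXISTS events (key TEXT, ts INTEGER, payload TEXT)"
        )
        con.commit()
        con.close()

def insert(key: str, ts: int, payload: str):
    con = sqlite3.connect(shard_path(key))
    con.execute("INSERT INTO events VALUES (?, ?, ?)", (key, ts, payload))
    con.commit()
    con.close()
```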
Kafka already solves this problem and gives me message durability, near infinite scale out, sharding, delivery guarantees, etc out of the box. I do not care to develop, reshard databases or production-alize this myself.
My main point is, I have zero interest in creating novel solutions to a solved problem. It just artificially increases the complexity of my work and the learning curve for contributors.
Not everything needs to be big and complicated.
(SELECT * from EVENTS where TIMESTAMP > LAST_TS LIMIT 50) for example
But yeah, for a lot of implementations you don't need streaming. For pull-based apps you design your architecture differently; some things are a lot easier than with a DB, some things are harder.
Have a table-level seqno as a monotonically increasing number stamped on every mutation. When a subscriber connects, it asks for rows > the subscriber's last-handled seqno.
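A minimal sketch of that scheme, with sqlite3 standing in for whatever database you actually run; the table and column names are assumptions.

```python
# Pull-based change feed: every mutation gets a monotonically increasing seqno,
# and each subscriber remembers the last seqno it has handled.
import sqlite3

con = sqlite3.connect("feed.db")
con.execute(
    "CREATE TABLE IF NOT EXISTS events ("
    " seqno INTEGER PRIMARY KEY AUTOINCREMENT,"  # table-level sequence number
    " payload TEXT)"
)

def publish(payload: str):
    con.execute("INSERT INTO events (payload) VALUES (?)", (payload,))
    con.commit()

def poll(last_seqno: int, limit: int = 50):
    """Return (rows, new_last_seqno) for everything after last_seqno."""
    rows = con.execute(
        "SELECT seqno, payload FROM events WHERE seqno > ? ORDER BY seqno LIMIT ?",
        (last_seqno, limit),
    ).fetchall()
    return rows, (rows[-1][0] if rows else last_seqno)
```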
https://newsroom.ibm.com/2025-12-08-ibm-to-acquire-confluent...
I don't understand how this acquisition is relevant for AI.
AI is just the latest buzzword. Everyone has it, because they have to. Don't look behind the curtain.
/s
For Red Hat, there's no longer an official "public" distribution of RHEL, but apart from that they seemingly have been left alone and able to continue to develop their own products. But that's only my POV as a user of OSS Red Hat products at home and of RHEL and OpenShift at work.
GTK is still alive. It seems like Cosmic desktop with GTK apps will be a reasonable path forward. Of course there's KDE and QT, but I mean as an alternative to those.
Slow and boring is a pretty nice place to be.
Yes, two decades: https://adtmag.com/articles/2003/08/04/solaris-gets-a-gnome-...
https://www.centos.org/centos-stream/
And Fedora is still the upstream of RHEL, nothing changed there.
CentOS Stream employs a rolling-release model, which is much less stable than RHEL.
The previous main selling point of CentOS was bug-for-bug compatibility with RHEL. Red Hat is just killing the distro by moving its focus to a non-existent market. Enthusiasts will choose RHEL, while enterprises will choose either the more stable RHEL, which Red Hat can earn money from, or alternatives like Alma or Rocky.
CentOS Stream has major versions and EOL dates, and thus is not a rolling release. It functions as the RHEL major version branch and follows the RHEL compatibility rules, so it's the same major version stability as RHEL.
While you may have considered bug-for-bug compatibility the main feature, it was a major point of frustration for many users and the maintainers. That model means you can't fix any bugs or accept contributions from the community. CentOS finally fixed both problems by moving to the Stream model.
The culture makes the company. Everyone on the lower rungs of the org chart knows this, because it's what they live and breathe every day. A positive, supportive workplace culture with clear goals and relative autonomy is a thing of beauty. You routinely find people doing more work than they really have to because they believe in the mission, or their peers, or the work is just fun. People join the company (and stay) because they WANT to not because they have to.
Past a certain company size, upper management NEVER sees this. They are always looking outward: strategy, customers, marketing, competition. Never in. They've been trained to give great motivational speeches that instill a sense of company pride and motivation for about 30 seconds. After that, employee morale is HR's job.
I have worked in a company that got acquired while it was profitable. The culture change was slow but dramatic. We went from a fun, dynamic culture with lots of teamwork and supportive management, to one step or two above Office Space. As far as the acquiring company was concerned, everything we were doing didn't matter, even if it worked. We had to conform to their systems and processes, or find new jobs. Most of us eventually did the latter.
Somehow Red Hat seems to be a notable exception. Although IBM owns Red Hat, they seem to have mostly left it alone instead of absorbing it. The name "IBM" doesn't even appear on redhat.com. Because I'm an outsider, I can't say whether IBM meddled in Red Hat's HR or management, but I would guess not.
1. https://www.cio.com/article/4084855/ibm-to-cut-thousands-of-...
2. https://www.newsobserver.com/news/business/article312796900....
HashiCorp also changed their licenses to non-open-source licenses, but again I think this was technically pre-acquisition (I think as they were gearing up to be a more attractive target for an exit).
Client-side state encryption was one of the things which HashiCorp always gatekept for HashiCorp Cloud and never implemented in the Open Source / Source Available versions.
A common conspiracy theory, but not true.
j/k Love ghostty!
An "exit" from the public market?
Hopefully mitchellh will write a book about Hashicorp some time. Would be fascinating to read the inside take.
Red Hat has far more autonomy. We are not structured the same.
On the HR side — many good people are leaving; new hires have to be on-site for 3 days and based in one of 4 "strategic" locations in the US.
What do you mean by that, like "centos/stream" (aka https://www.centos.org/download/ ) ?
From my perspective, as someone who is deeply suspicious of IBM in general, that's a plus.
https://www.confluent.io/blog/confluent-acquires-warpstream/
RedPanda was a huge win for us. Confluent never made sense to us since we were always so cost conscious, but the complexity/risk of managing a critical part of our infra was always something I worried about. RedPanda was able to handle both for us: cheaper than Kafka hosting vendors, with significantly better performance. We were pretty early customers, but it was a huge win for us.
I've been pretty happy with RP performance/cost/functionality wise. It isn't Kafka though, it's a proprietary C++ rewrite that aims for 100% compatibility. This hasn't been an issue in the 2+ years since we migrated prod, but YMMV.
Ok, so does anyone remember 'Watson'? It was the ChatGPT before ChatGPT. They built it in house. Why didn't they compete with OpenAI like Google and Anthropic are doing, with in-house tools? They have a mature PowerPC (Power9+ now?) setup, lots of talent to make ML/LLMs work, and lots of existing investment in datacenters and getting GPU-intense workloads going.
I don't disagree that this acquisition is good strategy; I'm just fascinated (schadenfreude?) to witness the demise of Confluent now. I think economists should study this; it might help avert larger problems.
I'll believe that when I see it. They had a decade headstart with all of this, and yeah, could have been at the forefront. But they're not, and because of the organization itself, they're unlikely to have a shot at even getting close. Seems they know this themselves too, as they're targeting the lower end of the market now with their Granite models, rather than shooting for the stars and missing, like they've done countless times before.
Well, in Confluent's case I'm not so sure that's true given that their CEO is also the company founder as well as one of the original authors of Apache Kafka.
Leadership at IBM also thought that Watson was like what OAI/Anthropic/Google are doing now. It wasn't. Watson was essentially an ML pipeline over-optimized on Jeopardy, which is why it failed in literally every other domain.
Outside of Jeopardy, Watson was just a brand.
For my package on one VERSION alone: https://gitlab.com/redhat/centos-stream/src/kernel/centos-st...
I don't know if you were trying to be funny, or simply don't understand how much change really goes on.
Although I think they just don't know how to adapt these to a market that isn't an enterprise behemoth, rather than develop/price them so more devs can take hold of them and experiment.
I do. I remember going to a chat once where they wanted to get people on-board in using it. It was 90 minutes of hot air. They "showed" how Watson worked and how to implement things, and I think every single person in the room knew they were full of it. Imagine we were all engineers and there were no questions at the end.
Comparing Watson to LLMs is like comparing a rock to an AIM-9 Sidewinder.
It really is probably the strangest company in tech, one you'd think could be mysterious and intriguing. But no one cares. It's like no one wants to look behind the boring suit and see wtf. From my low point on that bell curve I can't see how they are even solvent.
IBM has a ton of Enterprise software, backed by a bunch of consultants hiding in boring businesses/governments.
They also do a ton of outsourcing work where they will be a big enterprise IT support desk and various other functions. In fact, that side has gotten so big that IBM now has more employees in India than in any other country.
199 more comments available on Hacker News