AMD Open Source Driver for Vulkan Project Is Discontinued
Posted 4 months ago · Active 4 months ago
Source: github.com · Tech story · High profile
Key topics: AMD, Vulkan, Open Source, Linux
AMD has discontinued its open-source Vulkan driver project, AMDVLK, in favor of RADV, which is seen as a positive development by the community due to RADV's popularity and collaborative efforts.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 3h after posting · Peak period: 33 comments in the 12-24h window · Average per period: 11.2
Comment distribution: 67 data points (based on 67 loaded comments)
Key moments
- Story posted: Sep 16, 2025 at 8:31 PM EDT (4 months ago)
- First comment: Sep 16, 2025 at 11:10 PM EDT (3h after posting)
- Peak activity: 33 comments in the 12-24h window, the hottest stretch of the conversation
- Latest activity: Sep 22, 2025 at 4:08 PM EDT (4 months ago)
ID: 45270087 · Type: story · Last synced: 11/20/2025, 5:11:42 PM
Per AMD
>Notably, AMD's closed-source Vulkan driver currently uses a different pipeline compiler, which is the major difference between AMD's open-source and closed-source Vulkan drivers.
Hardware vendors: Stop writing software. Instead write and publish hardware documentation sufficient for others to write the code. If you want to publish a reference implementation that's fine, but your assumption should be that its primary purpose is as a form of documentation for the people who are going to make a better one. Focus on making good hardware with good documentation.
Intel had great success for many years by doing that well and have recently stumbled not because the strategy doesn't work but because they stopped fulfilling the "make good hardware" part of it relative to TSMC.
So a lot of the complexity of what the hardware is doing gets relegated to firmware as that is easier to patch and, especially relevant for wifi hardware before the specs get finalized, extend/adapt later on.
The problem with that, in turn, is patents and trade secrets. What used to be hideable in the ASIC masks is now computer code that is more or less trivial to disassemble or reverse engineer (see e.g. nouveau for older NVIDIA cards and Alyssa Rosenzweig's work on Apple), and if you want true FOSS support, you sometimes can't fulfill other requirements at the same time (see the drama surrounding HDMI 2.x/HDCP support for AMD on Linux).
And for anything RF, the FCC throws rocks on top of that. For some years now, the unique combination of RF devices (wifi, BT, 4G/5G), antenna, and OS-side driver has had to be certified. That's why you get Lenovo devices refusing to boot when a non-Lenovo USB network adapter is attached at boot time, or when you swap the Sierra Wireless modem for an identical modem from a Dell (which differs only in VID/PID), and why you need old, long-outdated Lenovo/Dell/HP/... drivers for RF devices while the "official" manufacturer ones will not work without patching.
I would love a world in which everyone in the ecosystem were forced to provide interface documentation, datasheets, errata and ucode/firmware blobs with source for all their devices, but unfortunately, DRM, anti-cheat, anti-fraud and overeager RF regulatory authorities have a lot of influence over lawmakers, way more than FOSS advocates.
It also contains quirks for Intel x86 core platform features: https://github.com/torvalds/linux/blob/master/arch/x86/kerne...
For the now-fashionable LLMs-on-GPUs world, it's pretty much just matrix multiplications. How many patents can reside in that? I don't expect Google to sell TPUs because that's not the business they're in, but AMD could put them in their SoCs without writing drivers: https://cloud.google.com/tpu/docs/system-architecture-tpu-vm...
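The "pretty much just matrix multiplications" claim can be sketched concretely. Here is a minimal, numpy-only single-head attention layer (shapes, weight names, and the lack of masking/batching are simplifications for illustration): the projections and the attention itself are all matmuls, with only a softmax and a scalar scale in between.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, Wq, Wk, Wv):
    # Three projection matmuls, then two more matmuls for the attention
    # itself; the only non-matmul work is a softmax and a scalar scale.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax((q @ k.T) / np.sqrt(k.shape[-1]))
    return scores @ v

rng = np.random.default_rng(0)
seq_len, d = 4, 8
x = rng.standard_normal((seq_len, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = attention(x, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Five matmuls, one softmax: hardware that does dense matrix multiplication fast (like a TPU's systolic array) covers almost all of the FLOPs here.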
How would/should this work? Release hardware that doesn't have drivers on day one and then wait until someone volunteers to do it?
> Intel had great success for many years by doing that well
Not sure what you're referring to but Intel's open source GPU drivers are mostly written by Intel employees.
Intel and AMD did this in the past for their CPUs and accompanying chipsets, when any instruction set extensions or I/O chipset specifications were published some years in advance, giving time to the software developers to update their programs.
Intel still somewhat does it for CPUs, but for GPUs their documentation is delayed a lot in comparison with the product launch.
AMD now has significant delays in publishing the features actually supported by their new CPUs, even longer than for their new GPUs.
In order to have hardware that works on day one, most companies still have to provide specifications for their hardware products to various companies that must design parts of the hardware or software that are required for a complete system that works.
The difference between now and how this was done a few decades ago, is that then the advance specifications were public, which was excellent for competition, even if that meant that there were frequently delays between the launch of a product and the existence of complete systems that worked with it.
Now, these advance specifications are given under NDA to a select group of very big companies, which design companion products. This ensures that now it is extremely difficult for any new company to compete with the incumbents, because they would never obtain access to product documentation before the official product launch, and frequently not even after that.
When the fears are unfounded the reason isn't "Nvidia/Intel could find out things about the hardware", it's "incompetence rooted in believing something that isn't true". Which is an entirely different thing because in one case they would have a proper dilemma and in the other they would need only extricate their cranium from their rectum.
Good luck trying to explain that to Legal. The problem at the core with everything FOSS is the patent and patent licensing minefield. Hardware patents are already risky enough to get torched by some "submarine patent" troll, the US adds software patents to that mix. And even if you think you got all the licenses you need, it might be the case that the licensing terms ban you from developing FOSS drivers/software implementing the patent, or that you got a situation like the HDMI2/HDCP situation where the DRM <insert derogatory term here> insist on keeping their shit secret, or you got regulatory requirements on RF emissions.
And unless you got backing from someone very high up the chain, Corporate Legal will default to denying your request for FOSS work if there is even a slight chance it might pose a legal risk for the company.
Don't let Legal run the company. It's there to support the company, not the other way around. (unless it's Oracle, I guess)
Software patents are indeed a scourge, but not publishing source code doesn't get you out of it. Patent trolls file overly broad patents or submarine patents on things they get included into standards so that everyone is infringing their patent because the patent covers the abstract shape of every solution to that problem rather than any specific one, or covers the specific one required by the standard. They can still prove that using binary software because your device is still observably doing the thing covered by the patent.
Meanwhile arguing that this makes it harder for them to figure out that you're infringing their patent actually cuts the other way, because if plaintiffs are clever they're going to use that exact reasoning to argue for willful infringement -- that concealing the source code is evidence that you know you're infringing and trying to hide it.
> And even if you think you got all the licenses you need, it might be the case that the licensing terms ban you from developing FOSS drivers/software implementing the patent, or that you got a situation like the HDMI2/HDCP situation where the DRM <insert derogatory term here> insist on keeping their shit secret, or you got regulatory requirements on RF emissions.
To my knowledge there is no actual requirement that you not publish the source code for radio devices, only some language about not giving the user the option to exceed regulatory limits. But if that can be done through software then it could also be done by patching the binary or using a binary meant for another region, so it's not clear how publishing the code would change that one way or the other. More relevantly, it's pretty uncommon for a GPU to have a radio transceiver in it anyway, isn't it? On top of that, this would only be relevant to begin with for firmware and not drivers.
And the recommended way of implementing DRM is to not, but supposing that you're going to do it anyway, that would only apply to the DRM code and not all the rest of it. A GPU is basically a separate processor running its own OS which is separated into various libraries and programs. The DRM code is code that shouldn't even be running unless you're currently decoding DRM'd media and could be its own optional tiny little blob even if the other 98% of the code is published.
Is this still the case? I.e. why shut down the open amdvlk project then? They could just make it focused on Windows only.
So at best it'll be of limited utility as a reference; I can see why they might decide that's just not worth the engineering time of maintaining and verifying their cleaning-for-open-source-release process (the MS stuff wasn't the only thing "stripped" from the internal source either).
I assume the llvm work will continue to be open, as it's used in other open stacks like rocm and mesa.
Though I think a lot of it might be considered "Legacy" - it still existed.
I believe the intent was to slowly deprecate the internal closed compiler, and leave it more as a fallback for older hardware, with most new development happening on LLVM. Though my info is a few months out of date now, I'd be surprised if the trajectory changed that quickly.
llama.cpp and other inference servers work fine on the kernel driver.
AMDVLK was always a weird regression in the openness of the development model compared to that. Maybe understandable that the bean counters wanted to share the effort between the Windows and AMD drivers but throwing away the community aspect in order to achieve that made that approach doomed from the start IMO. The initial release being incredibly late (even though Vulkan was modeled after AMD's own Mantle) was the cherry on top that allowed RADV to secure the winning seat but probably only accelerated the inevitable.
I thought Mesa was always the default, since I use Fedora KDE.
Note that this is only about the user-space portion of the driver - the kernel part of the Linux drivers is shared by all of these as well as the OpenGL drivers - there used to be a proprietary kernel driver from AMD as well but that was abandoned with the switch to the "amdgpu-pro" package.
So they did use that for Windows as well now, right?
So Valve and the OSS community made a better driver than AMD themselves??? That's a new low.
You thought wrong. From [0]
> Mesa is primarily developed and used on Linux systems. But there’s also support for Windows, other flavors of Unix and other systems such as Haiku.
Also check out [1]
[0] <https://docs.mesa3d.org/systems.html>
[1] <https://docs.mesa3d.org/systems.html#deprecated-systems-and-...>
That you were "forced" to switch away from the old proprietary driver for some reason does not negatively implicate AMD's contribution to the open source drivers.
I think they are blaming the vendor who received their money not the nebulous and non-specific Linux community.
Despite being lauded compared to closed-source Nvidia, AMD has had painful support issues as well.
Almost no one is scared anymore to buy AMD for Linux desktops and servers, knowing that it normally works well; and the same kind of person will be the one making recommendations for their families, relatives, and companies, even if those are using Windows.
This is a matter of AMD no longer wasting time on a pointless duplicate project no-one is really interested in. They can allocate more resources for amdgpu and radv and ultimately do less overall by getting rid of the redundant project.
Win-win.
Maybe I'm just naive but the downsides of doing this seem absolutely minimal and the upsides quite large.
https://news.ycombinator.com/item?id=39543291
> Maybe I'm just naive
Yep.
There are things hidden in the design of very widely used hardware that would make people's heads explode from how out there they are. They are trade secrets, and used to maintain a moat in which people can make money. (As opposed to patents which require publishing publicly).
If you live in open source land you cannot make money from selling software. If there is no special sauce in the hardware you won't be able to make money from that either. Then we can all act surprised that the entire tech landscape is taken over by ads and fails to meaningfully advance.
The dirty open secret in the tech industry is that the special sauce almost always just isn't all that special.
Because AMD didn't actually care about the Linux driver, since it didn't make them money.
> The dirty open secret in the tech industry is that the special sauce almost always just isn't all that special.
Only in areas where that's true. In the computer industry, just look at the M series of chips, where it's very clear that their direct competitors can't establish why it does what it does.
This is weird Apple fanboy head-in-the-sand thinking. The Mx chips have been dug into plenty and are just good engineering, not magic. AMD's horribly-named "Ryzen AI Max+ 395" chip is definitely moving in the same direction.
Bonus points for ones without ex-Apple employees involved in their design, because maybe those people might know something about it.
Qualcomm have never actually caught up with Apple performance-wise since the introduction of Arm64. They had a very nice 32-bit implementation and were completely caught off guard. Prior to the NuVia acquisition, their 64-bit efforts were barely improvements on what you can license from Arm directly, to the point that for a while that is all they were.
Nvidia got around this in their kernel driver by moving most of it to the card's firmware.
Just yesterday, I tried getting ROCm working to see if I could use Stable Diffusion. In the end, kernel 6.16 is currently unsupported; after a few hours of failure, I managed to get the in-box kernel module working again and gave up. It is emphatically nice that many/most games now run through Mesa/Vulkan+Proton without issue... but it would also be nice to actually use some of the vaunted AI features of AMD's top current card on the leading-edge Linux kernel release with their platform.
Hopefully, sooner rather than later, this will all mostly "just work" and won't be such an exercise in frustration for someone who hasn't been actively in the AI culture. I could create a partition for a prior distro/kernel or revert back, but I shouldn't have to; in general I expect leading-edge releases to work in the Linux ecosystem, or at least to be patched up relatively quickly.
I'll dig into this over the weekend when I invariably try again.
There's definitely a lot of variation in experiences. In my case, on my box with an RX 7900 XTX, installing ROCm via apt did "just work" and I can compile and run programs against the GPU, and things like Ollama work with GPU acceleration with no weird fiddling or custom setup. And from what I hear, I'm definitely not the only person having this kind of experience.
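For what it's worth, the kind of post-install sanity check being described can be sketched like this. This is a hedged example, not official ROCm procedure: the `rocminfo` tool and the `amdgpu` module name are standard, but the package name shown is a Debian/Ubuntu assumption and varies by distro.

```shell
# Sketch of ROCm sanity checks after an apt install (assumed Ubuntu-ish box).
if command -v rocminfo >/dev/null 2>&1; then
    # rocminfo lists HSA agents; a supported GPU shows up as a gfx* target
    rocminfo | grep -i 'gfx' || echo "rocminfo ran, but no GPU agents found"
else
    echo "rocminfo not found (on Ubuntu: sudo apt install rocminfo)"
fi

# The kernel side is independent of the ROCm userspace:
# the amdgpu module should be loaded either way.
if lsmod 2>/dev/null | grep -q '^amdgpu'; then
    echo "amdgpu kernel module loaded"
else
    echo "amdgpu kernel module not loaded"
fi
```

If the first check passes but the second fails (or vice versa), that usually localizes whether the problem is in the userspace stack or the kernel driver.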
> This is a good but long overdue decision by AMD. RADV has long been more popular with gamers/enthusiasts on Linux than their own official driver. Thanks to Valve, Google, Red Hat, and others, RADV has evolved very nicely.