AMD and Sony's PS6 Chipset Aims to Rethink the Current Graphics Pipeline
Key topics
AMD and Sony are collaborating on a new chipset for the PS6, focusing on efficient machine-learning-based neural networks and rethinking the graphics pipeline, but the community is divided on its potential impact and value.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 2h after posting
- Peak period: 121 comments in 0-12h
- Avg / period: 26.7 comments
Based on 160 loaded comments
Key moments
- Story posted: Oct 11, 2025 at 12:36 AM EDT (3 months ago)
- First comment: Oct 11, 2025 at 2:44 AM EDT (2h after posting)
- Peak activity: 121 comments in 0-12h (hottest window of the conversation)
- Latest activity: Oct 17, 2025 at 4:17 PM EDT (3 months ago)
Games written for the PlayStation exclusively get to take advantage of everything, but there is nothing to compare the release to.
Alternatively, if a game is released cross-platform, there's little incentive to tune the performance past the benchmarks of comparable platforms. Why make the PlayStation game look better than the Xbox version if it involves rewriting engine-layer stuff to take advantage of the hardware for one platform only?
Basically all of the most interesting utilization of the hardware comes at the very end of the console's lifecycle. It's been like that for decades.
I’m intrigued.
For PS2, game consoles didn't become the centre of home computing; for PS3, programming against the GPU became the standard way of doing real-time graphics, not some exotic processor, plus home entertainment moved on to take other forms (like watching YouTube on an iPad instead of having a media centre set up around the TV); for PS4, people didn't care if the console does social networking; PS5 has been practical, it's just that the technology/approach ended up adopted by everyone, so it lost its novelty later on.
PS3's edge was generally seen as the DVD player.
That's why Sony went with Blu-ray in the PS4, hoping to capitalize on the next medium, too. While that bet didn't pay off, Xbox kind of self-destructed, consequently making them the dominant player anyway.
Finally:
> PS5 has been practical, it's just the technology/approach ended up adopted by everyone, so it lost its novelty later on.
PS5 did not have any novel approach that was consequently adopted by others. The only thing "novel" in the current generation is frame generation, and that was already being pushed for years by the time Sony jumped on that bandwagon.
The PS2 was the DVD console. The PS3 was the Blu-ray console.
The PS4 and PS5 are also Blu-ray consoles, but Blu-rays are too slow now, so they're just a medium for movies or for installing the game from.
DualSense haptics are terrific, though the Switch kind of did them first with the Joy-Cons. I'd say haptics and adaptive triggers are two features that should become standard. Once you have them you never want to go back.
PS5's fast SSD was a bit of a game changer in terms of load time and texture streaming, and everyone except Nintendo has gone for fast m.2/nvme storage. PS5 also finally delivered the full remote play experience that PS3 and PS4 had teased but not completed. Original PS5 also had superior thermals vs. PS4 pro, while PS5 pro does solid 4K gaming while costing less than most game PCs (and is still quieter than PS4 pro.) Fast loading, solid remote play, solid 4K, low-ish noise are all things I don't want to give up in any console or game PC.
My favorite PS5 feature however is fast game updates (vs. PS4's interminable "copying" stage.) Switch and Switch 2 also seem to have fairly fast game updates, but slower flash storage.
Technically the PS4 supported 2.5" SATA or USB SSDs, but yeah PS5 is first gen that requires SSDs, and you cannot run PS5 games off USB anymore.
Consoles also partly had to be quirky dragsters because of Moore's Law - they had to be ahead of PCs by years, because they had to be at least comparable to PC games at the end of their lifecycle, not utterly obsolete.
But we've all moved on. IMO that is a good thing.
But how about a practical argument instead. Enabling raytracing in games tends to suck. The graphical improvements on offer are simply not worth the performance cost.
A common argument is that we don't have fast enough hardware yet, or that developers haven't been able to use raytracing to its fullest yet, but it's been a pretty long damn time since this hardware went mainstream.
I think the most damning evidence of this is the just-released Battlefield 6. This is a franchise that previously had raytracing as a top-level feature. This new release doesn't support it and doesn't intend to.
And in a world where basically every AAA release is panned for performance problems, BF6 has articles like this: https://www.pcgamer.com/hardware/battlefield-6-this-is-what-...
Pretty much this - even in games that have good ray tracing, I can't tell when it's off or on (except for the FPS hit) - I cared so little I bought a card not known to be good at it (7900XTX) because the two games I play the most don't support it anyway.
They oversold the technology/benefits and I wasn't buying it.
Ray tracing looks almost indistinguishable from really good rasterized lighting in MOST conditions. In scenes with high amounts of gloss and reflections, it's a little more pronounced. A little.
From my perspective, you're getting, like, a 5% improvement in only one specific aspect of graphics in exchange for a 200% cost.
It's just not worth it.
CP2077 rasterization vs ray tracing vs path tracing is like night and day. Rasterization looks "gamey". Path tracing makes it look pre-rendered. Huge difference.
As soon as you remove the ridiculous amounts of gloss, the difference is almost imperceptible.
But that's a game design change that takes longer
(sorry if obvious / already done)
Because enabling raytracing means the game has to support non-raytracing too, which limits how the game's design can take advantage of raytracing being realtime.
The only exception to this I've seen is The Finals: https://youtu.be/MxkRJ_7sg8Y . Made by ex-Battlefield devs, the dynamic environment they had 2 years ago is on a whole other level even compared to Battlefield 6.
With raytracing, lighting a scene goes from taking hours or days to just designating which objects emit light.
Ray tracing is solving the light transport problem in the hardest way possible. Each additional bounce adds exponentially more computational complexity. The control flows are also very branchy when you start getting into the wild indirect lighting scenarios. GPUs prefer straight SIMD flows, not wild, hierarchical rabbit hole exploration. Disney still uses CPU based render farms. There's no way you are reasonably emulating that experience in <16ms.
The closest thing we have to functional ray tracing for gaming is light mapping. This is effectively just ray tracing done ahead of time, but the advantage is you can bake for hours to get insanely accurate light maps and then push 200+ fps on moderate hardware. It's almost like you are cheating the universe when this is done well.
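To put rough numbers on that intuition, here is a minimal sketch; the branching factor, bake time, and frame budget below are illustrative assumptions, not measurements from any engine:

```python
# Back-of-the-envelope only: every number below is an illustrative
# assumption, not a measurement.

PIXELS = 1920 * 1080            # one 1080p frame
SAMPLES_PER_BOUNCE = 4          # assumed branching factor (shadow/GI/reflection rays)
FRAME_BUDGET_S = 0.016          # ~16 ms per frame at 60 fps
BAKE_BUDGET_S = 4 * 3600        # an assumed 4-hour offline light-map bake

# Branching ray tracing: ray count grows as samples_per_bounce ** depth.
for depth in range(1, 6):
    rays = PIXELS * SAMPLES_PER_BOUNCE ** depth
    print(f"depth {depth}: ~{rays:.2e} rays per frame")

# An offline bake can spend vastly more time on the same light transport
# than a real-time renderer gets for a single frame.
print(f"offline vs realtime time budget: {BAKE_BUDGET_S / FRAME_BUDGET_S:,.0f}x")
```

Even with this modest branching factor the per-frame ray count passes two billion by five bounces, which is the gap that light-map baking buys back by spending hours offline and shipping the result as textures.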
The human brain has a built in TAA solution that excels as frame latencies drop into single digit milliseconds.
edit: not Doom Eternal, it's Doom The Dark Ages, the latest one.
Light mapping is a cute trick and the reason why Mirror's Edge still looks so good after all these years, but it requires doing away with dynamic lighting, which is a non-starter for most games.
I want my true-to-life dynamic lighting in games thank you very much.
Most modern engines support (and encourage) use of a mixed lighting mode. You can have the best of both worlds. One directional RT light probably isn't going to ruin the pudding if the rest of the lights are baked.
I would say the closest we can get are workarounds like radiance cascades. But everything other than raytracing is just an ugly workaround that falls apart in dynamic scenarios. And don't forget that baking times and storing those results, leading to massive game sizes, are a huge negative.
Funnily enough raytracing is also just an approximation to the real world, but at least artists and devs can expect it to work everywhere without hacks (in theory).
640Kb surely is enough!
2. People turn on RT in games not designed with it in mind and therefore observe only minor graphical improvements for vastly reduced performance. Simple chicken-and-egg problem, hardware improvements will fix it.
On a more subjective note, you get less interesting art styles because studios somehow have to cram raytracing in as a value proposition.
Even without modern deep-learning based "AI", it's not like the pixels you see with traditional rendering pipelines were all artisanal and curated.
Given netflix popularity, most people obviously don’t value image quality as much as other factors.
And it’s even true for myself. For gaming, given the choice of 30fps at a higher bitrate, or 60fps at a lower one, I’ll take the 60fps.
But I want high bitrate and high fps. I am certainly not going to celebrate the reduction in image quality.
What about perceived image quality? If you are just playing the game chances of you noticing anything (unless you crank up the upscaling to the maximum) are near zero.
I am playing on a 55” TV at computer monitor distance, so the difference between a true 4K image and an upscaled one is very significant.
When I was a kid people had dozens of CDs with movies, while pretty much nobody had DVDs. DVD was simply too expensive, while Xvid allowed you to compress an entire movie onto a CD while keeping good quality. Of course the original DVD release would've been better, but we were too poor, and watching ten movies at 80% quality was better than watching one movie at 100% quality.
DLSS allows you to effectively quadruple FPS with minimal subjective quality impact. Of course a natively rendered image would've been better, but most people are simply too poor to buy a gaming rig that plays the newest games at 4K 120 FPS on maximum settings. You can keep arguing as much as you want that the natively rendered image is better, but unless you send me money to buy a new PC, I'll keep using DLSS.
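As a rough sketch of where that "quadruple" figure comes from: the 4x is just the pixel-count ratio between the internal and output resolutions, and the real gain is lower because the upscaling pass itself costs time. The frame times below are hypothetical, not benchmarks:

```python
# Illustrative arithmetic for "performance"-style upscaling:
# render internally at 1080p, upscale the result to 4K output.

native_4k_pixels = 3840 * 2160
internal_pixels = 1920 * 1080
pixel_ratio = native_4k_pixels / internal_pixels   # = 4.0

# Hypothetical costs: 30 ms to shade a native 4K frame, plus an assumed
# fixed 1.5 ms for the upscaling pass itself.
native_frame_ms = 30.0
upscaled_frame_ms = native_frame_ms / pixel_ratio + 1.5

print(f"pixel ratio:  {pixel_ratio:.1f}x")
print(f"native 4K:    {1000 / native_frame_ms:.0f} fps")
print(f"upscaled:     {1000 / upscaled_frame_ms:.0f} fps")
# ~33 fps native vs ~111 fps upscaled in this toy model: a large win,
# though short of a literal 4x once the upscaler's own cost is counted.
```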
Some [0] are seeing a 20 to 30% drop in actual frames when activating DLSS, and that means correspondingly more latency as well.
There are still games where it should be a decent tradeoff (racing or flight simulators? Infinity Nikki?), but it's definitely not a no-brainer.
[0] https://youtu.be/EiOVOnMY5jI
In accelerated compute, the largest areas of interest for advancement are 1) simulation and modeling and 2) learning and inference.
That's why this doesn't make sense to a lot of people. Sony and AMD aren't trying to extend current trends, they're leveraging their portfolios to make the advancements that will shape future markets 20-40 years out. It's really quite bold.
Seems they didn’t learn from the PS3, and that exotic architectures don't drive sales. Gamers don’t give a shit and devs won’t choose it unless they have a lucrative first party contract.
Now, shackling yourself to AMD and expecting a miracle... that I cannot say is a good idea. Maybe Cerny has seen something we haven't, who knows.
TL;DW - it's not quite the full-fat CNN model, but it's also not a uselessly pared-back upscaler. It seems to handle antialiasing and simple upscaling well at super low TDPs (<10 W).
Since Mark Cerny became the hardware architect of PS they have not made the mistakes of the PS3 generation at all.
every year, PlayStation ranks very high when it comes to GOTY nominations
just last year, PlayStation had the most nominations for GOTY: https://x.com/thegameawards/status/1858558789320142971
not only that, but PS5 has more 1st party games than Microsoft's Xbox S|X
1053 vs 812 (and the latter got inflated by the recent Activision acquisition)
https://en.wikipedia.org/wiki/List_of_PlayStation_5_games
https://en.wikipedia.org/wiki/List_of_Xbox_Series_X_and_Seri...
It's important to check the facts before spreading random FUD
PS5 had the strongest lineup of games this generation, hence why they sold this many consoles
Still today, consumers are attracted to PS5's lineup, and this is corroborated by facts and data https://www.vgchartz.com/
In August for example, the ratio between PS5 and Xbox is 8:1; almost as good as the new Nintendo Switch 2, and the console is almost 5 years old!
You say "underwhelming", people are saying otherwise
Also, to my knowledge, the PS5 still lags behind the PS4 in terms of sales, despite the significant boost that COVID-19 provided.
Its sequel Saros is coming out next year too.
There’s also Spider-Man 2, Ratchet and Clank Rift Apart, Astro Bot, Death Stranding 2, Ghost of Yotei…
Their output hasn’t been worse than the PS4 at all imo.
Each generation has around half the number of games as the previous. This does get a bit murky with the advent of shovelware in online stores, but my point remains.
I think all this proves is that games are now ridiculously expensive to create while meeting the quality standards expected. Maybe AI will improve this in the future. Take-Two has confirmed that GTA6's budget has exceeded US$1 billion, which is mind-blowing.
And why wouldn't they? In many cases they're some compiler settings and a few drivers away from working.
The main goal of Direct3D 12, and subsequently Vulkan, was to allow for better use of the underlying graphics hardware as it had changed more and more from its fixed-pipeline roots.
So maybe the time is ripe for a rethink, again.
Particularly the frame generation features, upscaling and frame interpolation, have promise but need to be integrated in a different way, I think, to really be of benefit.
You aren't seeing them adopted that much because the hardware still isn't deployed at a scale that games can count on them being available, and that adoption can't yet feed back into improving the developer experience of using them.
However, I'm pessimistic about how this can keep evolving. RT already takes a non-trivial amount of the transistor budget, and now those high-end AI solutions require another considerable chunk of it. If we are already reaching the limits of what non-generative AI upscaling and frame-gen can do, I can't see where a PS7 can go other than using generative AI to interpret a very crude low-detail frame and generate a highly detailed photorealistic scene from it, but that will, I think, require many times more transistor budget than what will likely ever be economically achievable for a whole PS7 system.
Will that be the end of consoles? Will everything move to the cloud, with a power-guzzling 4 kW machine taking care of rendering your PS7 game?
I really can only hope there is a breakthrough in miniaturization and we can go back to a pace of improvement that can actually give us a new generation of consoles (and computers) that makes the transition from an SNES to an N64 feel quaint.
I’d be absolutely shocked if in 10 years, all AAA games aren’t being rendered by a transformer. Google’s veo 3 is already extremely impressive. No way games will be rendered through traditional shaders in 2035.
The images rendered in a game need to accurately represent a very complex world state. Do we have any examples of Transformer based models doing something in this category? Can they do it in real-time?
I could absolutely see something like rendering a simplified and stylised version and getting Transformers to fill in details. That's kind of a direct evolution from the upscaling approach described here, but end to end rendering from game state is far less obvious.
The main rendering would be done by the transformer.
Already in 2025, Google Veo 3 is generating pixels far more realistic than AAA games. I don't see why this wouldn't be the default rendering mode for AAA games in 2035. It's insanity to think it won't be.
Veo3: https://aistudio.google.com/models/veo-3
That’s because games are "realtime", meaning with a tight frame-time budget. AI models are not (and are even running on multiple cards each costing 6 figures).
I’m so excited to be charged AAA prices for said wonderful experience.
For Genie to exceed AAA graphics in 2035 at 60 to 120 fps would require an efficiency breakthrough of at least an order of magnitude, and much more than that for it to be cost-effective.
AAA titles take at least 3-4 years to make, which means studios would need to start working on them in 2031. The possibility that every AAA game in production in 2031 is built on such a model is practically zero.
We are talking 10 years from now.
And unfortunately 10 years isn't that long a time in many industries. We are barely talking about 3 cycles.
Traditional rendering techniques can also easily exceed the quality of AAA games if you don't impose strict time or latency constraints on them. Wake me up when a version of Veo is generating HD frames in less than 16 milliseconds, on consumer hardware, without batching, and then we can talk about whether that inevitably much smaller model is good enough to be a competitive game renderer.
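To put rough numbers on that 16-millisecond bar, here is a minimal sketch; the assumed video-model generation speed is a hypothetical placeholder, not a measured figure for any particular model:

```python
# Rough frame-budget arithmetic; the video-model generation speed below
# is a hypothetical placeholder, not a measured figure for any model.

budget_60_ms = 1000 / 60      # ~16.7 ms per frame
budget_120_ms = 1000 / 120    # ~8.3 ms per frame

# Assume a video model needs ~60 s of datacenter compute to produce an
# 8-second clip at 24 fps.
clip_seconds, clip_fps, gen_seconds = 8, 24, 60
ms_per_generated_frame = gen_seconds * 1000 / (clip_seconds * clip_fps)

print(f"60 fps budget:         {budget_60_ms:.1f} ms/frame")
print(f"120 fps budget:        {budget_120_ms:.1f} ms/frame")
print(f"assumed model speed:   {ms_per_generated_frame:.0f} ms/frame")
print(f"shortfall vs 60 fps:   {ms_per_generated_frame / budget_60_ms:.0f}x too slow")
# ~313 ms per generated frame against a 16.7 ms budget: roughly a 19x gap
# even under these generous assumptions, before latency, batching, or cost.
```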
You conflate the challenge of generating realistic pixels with the challenge of generating realistic pixels that represent a highly detailed world state.
So I don't think your argument is convincing or complete.
It will be AI all the way down soon. The model's internal world view could be multiple passes and multi-layer with different strategies... In any case, it's safe to say more AI will be involved in more places ;)
I think it's reasonable to assume we won't see this tech replace game engines without significant further breakthroughs...
For LLMs agentic workflows ended up being a big breakthrough to make them usable. Maybe these World Models will interact with a sort of game engine directly somehow to get the required consistency. But it's not evident that you can just scale your way from "visual memory extending up to one minute ago" to 70+ hour game experiences.
I’m really curious why this would be preferable for a AAA studio game outside of potential cost savings. Also imagine it’d come at the cost of deterministic output / consistency in visuals.
10 years ago people were predicting VR would be everywhere, it flopped hard.
10 years ago, people were predicting that deep learning will change everything. And it did.
Why just use one example (VR) and apply it to everything? Even then, a good portion of people did not think VR would be everywhere by now.
It is odd how many people don't realize how developed self-driving taxis are.
I think most people will consider self driving tech to be a thing when it's as widespread as TVs were, 20 years after their introduction.
Fully autonomous in select defined cities owned by big corps is probably a reasonable expectation.
Fully autonomous in the hands of an owner applied to all driving conditions and working reliably is likely still a distant goal.
Those with the real vested interest don't care if that flops, while zealous worshippers of the next brand-new disruptive tech are just a free vehicle to that end.
The other major success of recent years not discussed much so far is Gaussian splats, which tear up the established production pipeline again.
Now that will be peak power efficiency and a real solution for the world where all electricity and silicon are hogged by AI farms.
/s or not, you decide.
https://youtu.be/NyvD_IC9QNw
The answer is clearly transformer based.
Competition works wonders.
In the past a game console might launch at a high price point, and after a few years the price would go down and they could release a new console at a price close to where the last one started.
Blame crypto, AI, or COVID, but there has been no price drop for the PS5, and if there were going to be a PS6 that was really better, it would probably have to cost upwards of $1000, and you might as well get a PC. Sure, there are people who haven't tried Steam + an Xbox controller and think PC gaming is all unfun and sweaty, but they will come around.
You don’t have to install any drivers or anything and with the big screen mode in Steam it’s a lean back experience where you can pick out your games and start one up without using anything other than the controller.
The "it just works" factor and not having to mess with drivers is a huge advantage of consoles.
Apple TV could almost be a decent game system if Apple ever decided to ship a controller in the box and stopped breaking App Store games every year (though live service games rot on the shelf anyway.)
Even in the latest versions of Unreal and Unity you will find the classic tools. They just won't be advertised, and the engine vendor might even frown upon them during a tech demo to make their fancy new temporal slop solution seem superior.
The trick is to not get taken for a ride by the tools vendors. Real time lights, "free" anti aliasing, and sub-pixel triangles are the forbidden fruits of game dev. It's really easy to get caught up in the devil's bargain of trading unlimited art detail for unknowns at end customer time.
257 more comments available on Hacker News