Blender 5.0
Mood: excited
Sentiment: positive
Category: tech
Key topics: Blender, 3D modeling, open-source software
The release of Blender 5.0 has generated significant excitement among users, with many praising its new features and improvements, while also discussing the future of 3D modeling and the impact of AI.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 25m after posting
Peak period: 42 comments in Hour 2
Avg / period: 8.9 comments
Based on 160 loaded comments
Key moments
- 01 Story posted: 11/18/2025, 9:39:18 PM (21h ago)
- 02 First comment: 11/18/2025, 10:04:02 PM (25m after posting)
- 03 Peak activity: 42 comments in Hour 2 (hottest window of the conversation)
- 04 Latest activity: 11/19/2025, 7:12:09 PM (15m ago)
Now I want to look into it more, but I'd imagine that "Blackbody" and sky generation nodes might still assume a linear sRGB working space.
Since people are always asking for “real world examples”, I have to point out this is a great place to use an agent like Claude Code or Codex. Clone the source, have your coding assistant run its /init routine to survey the codebase and get a lay of the land, then turn “thinking” to max and ask it “Do the Blackbody attribute for volumes and the sky generation nodes still expect to be working in linear sRGB? Or do they take advantage of the new ACES 2.0 support? Analyze the codebase, give examples and cite lines of code to support your conclusions.”
The best part: I’m probably wrong to assert that linear sRGB and ACES 2.0 are some sort of binary, but that’s exactly the kind of knowledge a good coding agent will have, and it will likely fold an explanation of the proper mental model into its response.
If you make a color space for a display, the intent is that you can (eventually) get a display that can show all of those colors. However, given the shape of the human color gamut, you can't choose three color primaries whose triangle exactly matches the human color gamut. With a display color space, you want to pick primaries that live inside the gamut; otherwise you'd be wasting your display on colors that people can't see. For a working space, you want to pick primaries whose triangle contains the entire human color gamut, including some colors people can't see (since it can be helpful when rendering to avoid clipping).
Beyond that, ACES isn't just one color space; it's several. ACEScg, for example, uses a linear transfer function and is useful for rendering applications. A colorist would likely transform ACEScg colors into ACEScc (or something of that ilk) so that the response curves of their coloring tools are closer to what they're used to (i.e. they have a logarithmic response similar to old-fashioned analogue telecine machines).
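For the curious, the ACEScc encoding the colorist would move into is essentially a log2 curve with a couple of special cases near black. A minimal Python sketch of the commonly published formula (treat it as illustrative, not a reference implementation):

```python
import math

def lin_ap1_to_acescc(x: float) -> float:
    """Encode a linear ACES AP1 value into ACEScc's logarithmic encoding
    (formula as commonly published; a sketch, not a reference implementation)."""
    if x <= 0.0:
        return (math.log2(2.0 ** -16) + 9.72) / 17.52
    if x < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + x * 0.5) + 9.72) / 17.52
    return (math.log2(x) + 9.72) / 17.52

# Mid-grey (0.18) lands around 0.41, and each doubling of exposure moves the
# encoded value by a constant step (1/17.52), which is what makes log spaces
# comfortable for grading tools.
print(lin_ap1_to_acescc(0.18), lin_ap1_to_acescc(0.36))
```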
Or are you saying that if some intermediate transform pushes a color beyond P3, it will get clipped? Then I understand...
Exactly! The conversion between ACES (or any working color space) and the display color space benefits from manual tweaking to preserve artistic intent.
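To make the clipping concern concrete, here is a rough Python sketch using the commonly published (approximate) matrix for ACEScg → linear sRGB; the specific color values are made up, but the pattern is real: a perfectly legal working-space color can land on negative display-space components, which a naive transform would simply clip.

```python
import numpy as np

# Approximate ACEScg (AP1, D60) -> linear sRGB (Rec.709, D65) matrix, as
# commonly published; exact values depend on the chromatic adaptation used.
AP1_TO_SRGB = np.array([
    [ 1.70505, -0.62179, -0.08326],
    [-0.13026,  1.14080, -0.01055],
    [-0.02400, -0.12897,  1.15297],
])

# A strongly saturated green that is perfectly legal in the ACEScg working space...
acescg_color = np.array([0.05, 0.80, 0.05])

srgb_linear = AP1_TO_SRGB @ acescg_color
print(srgb_linear)  # the red and blue channels come out negative: out of gamut

# A naive display transform just clips, which is exactly where a hand-tuned
# output transform (or smarter gamut mapping) earns its keep.
print(np.clip(srgb_linear, 0.0, 1.0))
```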
Seems like in 10 years AI will basically make it pointless to use a tool like this, at least for people working on average projects.
What do folks in the industry think? What’s the long term outlook?
The fact that it “seems easy” is a great flag that it probably isn’t.
Really no one can predict the future.
It's like 2D art with more complexity and less training data. Non-AI 2D art and animation tools haven't been made irrelevant yet, and don't look like they will be soon.
What blender and other CGI software gets for free is continuity. The 3D model does not change without explicitly making it change.
Until we get AI which can regenerate the same model from one scene to the next, the use of AI in CGI will be severely limited.
Recent news of major AI scientists starting "world AI" companies confirms this trend.
So 3D will soon become even more important tech than it is today.
I recently used WAN to generate a looping clip of clouds moving quickly, something that’s difficult to do in CGI and impossible to capture live action. It worked out because I didn’t have specific demands other than what I just said, and I wasn’t asking for anything too obscure.
At this point, I expect the quality of local video models (the only kind I’m willing to work with professionally) to go up, but prompt adherence seems like a tough nut to crack, which makes me think it may be a while before we have prosumer models that can replace what I do in Blender.
A lot of the editing functions for 3D art play some role in achieving verisimilitude in the result - that it looks and feels believably like some source reference, in terms of shapes, materials, lights, motion and so on. For the parts of that where what you really want to say is "just configure A to be more like B", prompting and generative approaches can add a lot of value. It will be a great boost to new CG users and allow one person to feel confident in taking on more steps in the pipeline. Every 3D package today resembles an astronaut control panel because there is too much to configure and the actual productions tend to divvy up the work into specialty roles where it can become someone's job to know the way to handle a particular step.
However, the actual underlying pipeline can't be shortcut: the consistency built by traditional CG algorithms is the source of the value within CG, and still needs human attention to be directed towards some purpose. So we end up in equilibriums where the budget for a production can still go towards crafting an expensive new look, but the work itself is more targeted - decorating the interior instead of architecting the whole house.
You are asking for industry predictions from industry professionals in an industry you know nothing about while assuming a lot about that industry.
Why do you think they should do all the heavy lifting for you?
You might as well ask ChatGPT what it thinks because it seems you already have an idea of what you want the answer to be.
AI coding agents didn't make IDEs obsolete. They just added plugins to some existing IDEs and spawned a few new ones.
I'm very excited to see the addition of structs and closures/higher-order functions to blender nodes! (I've also glanced at the shader compiler they're using to lower it to GLSL; neat stuff!) Not only is this practically going to be helpful, the PL researcher in me is tickled by seeing these features get added to a graphical programming language.
If you haven't heard of Blender before, or if you think AI will replace all the work done in it, fair enough. But I'd still strongly suggest looking into what it is and how it works.
Always nice to see these updates though, Blender has really come a long long way.
Inkscape is good for typing dimensions into rectangles tho
I’ll check out Inkscape as well. I’ve tried using some raster graphics tools in the past, but I couldn’t type dimensions and had to use rulers and guides with snapping. It mostly worked, but was a bit annoying.
What I'd do is:
- Spreadsheet workbench --> Create spreadsheet (name it "measurements"). (This is optional)
- Switch to Part design workbench --> Create body (name it "layout") --> select XY plane --> Create sketch --> Create Polyline
- Zoom out, start drawing the rooms in your house, approximately to scale.
- Before going into too much detail, add a dimension (select line --> "Constrain Distance") to the first line you draw, so that you can do the rest of your drawing approximately to scale. Then the general shape won't get messed up when you add dimensions to everything else.
- (If you have a photo or picture, you can import that to sketch over).
- Add constraints to match your room measurements, mostly vertical or horizontal distance constraints. Be careful not to overconstrain the sketch. (You can put the measurements directly into the sketch constraints, or you can put them into the top-level spreadsheet, create an alias for each cell, and then set the dimensions to reference those cells).
- Once the rooms are drawn, close the sketch and create a new sketch on the xy plane called "furniture".
- Draw some rectangles for your sofas / tables / etc, delete any horizontal and vertical constraints that get automatically added, and instead apply perpendicularity constraints. Dimension your rectangles using only the "constrain distance" tool. Now you can drag them around the room and rotate them freely.
- If you want to make 3D models for these too, create new Part Design bodies for each room and each piece of furniture, create a shape binder referencing the master sketches in the Layout body, and then extrude the sketches using the "Pad" operation.
That's about as much tutorial as it makes sense to pack into a HN comment. If you give it a try, I hope it works out for you!
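If you'd rather script those first steps than click through them, FreeCAD exposes the same workbenches through Python. A minimal sketch of the idea (the document name, the single-wall example, and the 4000 mm dimension are made up for illustration):

```python
# Run in FreeCAD's Python console: one wall line, anchored at the origin,
# with its length pinned by a constraint.
import FreeCAD as App
import Part
import Sketcher

doc = App.newDocument("floorplan")
sketch = doc.addObject("Sketcher::SketchObject", "layout")

# One 4000 mm wall along X (hypothetical measurement).
i = sketch.addGeometry(
    Part.LineSegment(App.Vector(0, 0, 0), App.Vector(4000, 0, 0)), False)

# Pin the start point to the origin, keep the line horizontal, and fix its
# length so later edits don't drag the whole layout around.
sketch.addConstraint(Sketcher.Constraint("Coincident", i, 1, -1, 1))
sketch.addConstraint(Sketcher.Constraint("Horizontal", i))
sketch.addConstraint(Sketcher.Constraint("DistanceX", i, 1, i, 2, 4000.0))

doc.recompute()
```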
FreeCad is rapidly evolving and quite a few tutorials are already using the v1.1 dev builds. Pay attention to the version used in tutorials as you can run into trouble following them if you are on an older release.
But of course the built-in intro of SolidWorks was a much better UX.
> Mom, may we have SolidWorks?
> We have SolidWorks at Home.
> <SolidWorks at Home>
This is in contrast to the example the parent comment brought up, and the one I added: Blender and KiCad do not have this concern; there are free (or, you could say, inexpensive) high-quality tools in their spaces. This is notably not the case for traditional CAD.
Many people complain about it being a mesh editor but it works for me. The sheer variety of tooling and flexibility in Blender is insane, and that's before you get to the world of add-ons.
I want to learn Geometry nodes and object generation, as I think they will address a lot of the "parametric" crowd's concerns. V5 is meant to be a big step forward in ease of use here.
Also, I'm not sure if the different tooling lets me see all the flaws of online "parametric" models, or whether I'm being pedantic. They get frustrating. I have Gordon-Ramsay-screamed "How can you fuck up a circle!".
>Mesh formats like STL cannot represent a circle by its position and radius, while a parametric format like STEP can.
This is where I think the Geometry nodes can help. A node (function) can be used to represent the circle with inputs and outputs set or changed as required.[0]
[0] https://docs.blender.org/manual/en/latest/modeling/geometry_...
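As a rough illustration of that idea (not taken from the linked docs, and assuming Blender 4.x-style node-group scripting; the group name and values are hypothetical), a "parametric" circle can be built from Python so that its radius stays a single editable input rather than baked-in vertices:

```python
# A rough sketch of a "parametric circle" built with geometry nodes from
# Python; run inside Blender.
import bpy

group = bpy.data.node_groups.new("ParametricCircle", "GeometryNodeTree")
group.interface.new_socket("Geometry", in_out="OUTPUT",
                           socket_type="NodeSocketGeometry")

circle = group.nodes.new("GeometryNodeMeshCircle")
circle.inputs["Radius"].default_value = 0.05    # the one number that matters
circle.inputs["Vertices"].default_value = 64    # only the mesh export cares

output = group.nodes.new("NodeGroupOutput")
group.links.new(circle.outputs["Mesh"], output.inputs["Geometry"])

# Changing the radius later regenerates the mesh, which is about as close to a
# STEP-style "position + radius" circle as a mesh-based tool gets.
circle.inputs["Radius"].default_value = 0.10
```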
Maybe it is the export or something. I run the 3D Print Toolbox and often models are not manifold.
I see things like two circles in slightly different positions, but both connected in different ways to the surrounding "single" instance model. Things like this mean you end up with "infinitely small volumes", like a Klein bottle with infinitely thin walls. There is no fully enclosed volume, and so mathematically there is "nothing to 3D print".
As a model this makes no sense to do, and so it irks me.
But clearly the slicer software doesn't care or autocorrects and people make their 3D print happen just fine.
I have used it to make quite a few functional prints, with the help of a CAD plugin and making sure my scene units are correct.
If I put some holes in something that are 1mm from the edge, but then I print it and see it doesn't line up and needs to be 1.5mm, in Fusion I can just change one number and it all updates. Doing the same thing in Blender would likely be very difficult.
KiCad was also a meh FOSS ECAD alternative 7-8 years ago; now it is by far the tool of choice for regular ECAD designs. I can see FreeCAD getting there by 2030.
It seems like it has lots of capability but still "punch your monitor" levels of difficulty just trying to do the most basic stuff.
I'm sure I could grind harder and learn more and make FreeCAD work, but I'm not sure why I'd bother.
Deltahedra is a great YouTube channel for getting the basics.
MangoJelly has done an amazing job in churning out high quality tutorials for FreeCAD: https://www.youtube.com/watch?v=t_yh_S31R9g&list=PLWuyJLVUNt...
(this is just one playlist, there's a lot more on his channel).
And it presents nonsensical problems, like offering to create a sketch on the face of an object and then complaining that the sketch doesn't belong to any object. So you have to manually drag it under the object in the treeview. So gallingly DUMB.
Despite all that, I will wrestle with its ineptitude before giving Autodesk a penny.
Parasolid is powering practically every major CAD system. Its development started in 1986 and it's still actively developed. The amount of effort that goes into those things is immense (39 years of commercial development!) and I don't believe it can be done pro-bono in someone's spare time. What's worse, with this kind of software there is no "graceful degradation": while something like a MIP solver can be useful even if it's quite a bit slower than Gurobi, a kernel that can't model complex lofts and fillets is not particularly useful.
3D CAD is much harder than Blender and less amenable to open source development.
Fornjot has been attempting this: https://www.fornjot.app
It's going to be years or decades before it's competitive though. Also, it looks like they switched to keeping progress updates private except to sponsors, which means I don't actually have any easily-accessible information about it anymore which is sad.
The tricky bit is having a G2 fillet that intersects a complex shape built from surface patches and thickened, with both projected into a new sketch, and keeping the workflow sane if I go and adjust the original fillet. I hope one day we'll see a free (as in speech) kernel that can enable that, until then it's just Parasolid, sadly.
Can you help me understand why this problem is so hard?
Now, generally speaking, in a CAD model most surfaces will be "analytic" (planes, tori, cones, arcs, lines, etc.). But whenever some complex surface is required to join these surfaces, (NUR)B-splines are the principal technique for "covering" the gap.
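For a sense of what that covering machinery looks like at the very bottom, here is de Boor's algorithm for evaluating a point on a B-spline curve, sketched in plain Python from the textbook recurrence (the knot vector and control points below are arbitrary examples). Everything a kernel does with trimmed NURBS surfaces, intersections, and projections sits on top of evaluations like this.

```python
def de_boor(x, t, c, p):
    """Evaluate a degree-p B-spline with knot vector t and control points c
    at parameter x (de Boor's algorithm, textbook form)."""
    # Find the knot span containing x.
    k = next(i for i in range(p, len(t) - p - 1) if t[i] <= x <= t[i + 1])
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = [(1 - alpha) * a + alpha * b for a, b in zip(d[j - 1], d[j])]
    return d[p]

# A quadratic (p = 2) curve over four 2D control points.
knots = [0, 0, 0, 1, 2, 2, 2]
ctrl = [[0, 0], [1, 2], [3, 2], [4, 0]]
print(de_boor(1.0, knots, ctrl, 2))   # -> [2.0, 2.0]
```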
This is already complex and fiddly enough. Just having a stable 2D drawing environment that uses a constraint solver but also behaves predictably and doesn't run into numerical instability issues is already an achievement. You don't want a spline blowing up while the user is applying constraints one by one! And yet it's trivial compared to the rest of the problem.
Having 3D features analytically (not numerically!) interacting with each other means someone needs to write code that handles the interactions. When I click on a corner and apply a G2 fillet to it, it means that there's now a new 3D surface where every section is a spline with at least 4 control points. When I then intersect that corner with a sphere, the geometric kernel must be able to analytically represent the resulting surface (intersecting that spline-profiled surface with a sphere). If I project that surface into a sketch, the kernel needs to represent its outline from an arbitrary angle — again, analytically. Naturally, there is an explosion of special cases: that sphere might either intersect the fillet, just touch it (with a single contact point), or not touch it at all, maybe after I made some edits to the earlier features.
Blender at its core is comparatively trivial. Polygons are just clumps of points, they can be operated on numerically. CAD is hell.
Firstly, you probably have a variety of analytic shapes to represent — things like lines and circles in 2D or cubes and spheres in 3D. Even seemingly simple questions, like whether two such shapes intersect or not, can require a significant amount of logic to calculate the answer. That logic will often be specific to the exact combination of shapes you have, because the number of freedoms and nature of any symmetries in the shapes you’re working with can mean you would use completely different algorithms for superficially similar situations.
Secondly, while you’re probably going to implement a lot of analytic calculations, in realistic models you’re probably going to end up using numerical methods as well. That can be because you need to work with geometry like Bézier curves or NURBS surfaces that has many freedoms. It can be because even if you start with convenient analytic shapes, new geometry that you derive from those shapes, for example by offsetting a single shape or by combining details from multiple shapes as in constructive solid geometry, won’t in general have an analytic shape itself.
By the time you allow for the numerous different types of constraint that you might want to enforce between different types of geometry and the numerous different ways you can construct new geometry from geometry you already have, the scale of the problem explodes. And on top of that, almost everything you do is going to have numerical sensitivity issues, and all but the simplest algorithms are going to need detailed, careful analysis to make sure you really have covered all the possibilities. In this field, “edge case” and “corner case” are literal terms and not just figures of speech!
To give a practical example, without looking up how to do it, could you confidently calculate whether two arbitrary cuboids are completely separate or they touch or intersect somewhere? As another example, given an arbitrary parametric surface, a sphere in a position just resting on that surface, and the constraint that the surface of the sphere must remain tangent to the parametric surface without intersecting it anywhere, how would you calculate the path the centre of the sphere will follow if you introduce gravity to start the sphere rolling in a certain direction along the surface?
These are relatively simple problems in the field, but each already has some subtlety that leaves the “obvious” solutions incomplete. Solve a few thousand problems like that, each unique and with its own calculation strategy, and now you’re starting to get a practically useful geometric modelling system. (You’ve also probably had a team of dozens of mathematicians and developers working on it for decades.)
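To give a flavour of the cuboid question, here is a sketch of the standard separating-axis test for two oriented boxes (a well-known algorithm, not anyone's production code): it only answers "disjoint or overlapping", and it already needs fifteen candidate axes plus care around near-parallel edge pairs.

```python
import numpy as np

def obb_overlap(c1, axes1, half1, c2, axes2, half2, eps=1e-9):
    """Separating-axis test for two oriented boxes.
    c: centre (3,), axes: 3x3 matrix of unit edge directions (rows),
    half: half-extents along those directions.
    Returns True if the boxes overlap (touching counts as overlap)."""
    # Candidate separating axes: 3 face normals of each box plus the 9
    # pairwise edge cross products.
    candidates = [a for a in axes1] + [b for b in axes2]
    for a in axes1:
        for b in axes2:
            cross = np.cross(a, b)
            if np.linalg.norm(cross) > eps:   # skip near-parallel edge pairs
                candidates.append(cross)
    for axis in candidates:
        axis = axis / np.linalg.norm(axis)
        # Radius of each box's projection onto the axis vs. centre distance.
        r1 = sum(h * abs(np.dot(axis, a)) for h, a in zip(half1, axes1))
        r2 = sum(h * abs(np.dot(axis, b)) for h, b in zip(half2, axes2))
        if abs(np.dot(axis, c2 - c1)) > r1 + r2 + eps:
            return False    # found a separating axis: the boxes are disjoint
    return True

# Two unit cubes, the second rotated 45 degrees about Z and nudged along X.
a_axes = np.eye(3)
theta = np.radians(45)
b_axes = np.array([[np.cos(theta), np.sin(theta), 0],
                   [-np.sin(theta), np.cos(theta), 0],
                   [0, 0, 1]])
print(obb_overlap(np.zeros(3), a_axes, [0.5] * 3,
                  np.array([1.2, 0.0, 0.0]), b_axes, [0.5] * 3))  # True
```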
So it doesn't really represent meaningful progress towards FOSS CAD because ultimately it uses the same proprietary, expensive library to do the heavy lifting as most of its competitors.
It must be difficult when so much management is short sighted and focused on delivering short term profits for shareholders. Even academia is run like a business now.
Unless a privately held rogue company like Valve gets interested, it's probably going to have to wait for a government/NGO/scientific institution. Industry, particularly the tech industry, is notorious for leeching off free and open source software and in some cases building entire businesses on it and not giving back.
Management just reacts to environments created by governments. When ZIRP was around money was very easy to get hold of - too easy. Now it's really hard because businesses have to beat government bond interest rates, which are guaranteed, to get debt/investment.
> Unless a privately held rogue company like Valve
Valve is not a rogue company.
> Industry, particularly the tech industry, is notorious for leeching off free and open source software and in some cases building entire businesses on it and not giving back
Your premise is wrong. It's impossible to leech off something that is freely given. This is like being angry because people don't all tip a street performer. The deal is: it's free.
And your facts are wrong. Businesses fund a giant amount of OSS work.
It would be interesting to see if they would license that out further for some amount of money.
https://github.com/sandialabs/sgm
Originally open-source, but since taken back in-house. As I understand it (and this should not be construed as an accurate accounting), Sandia wants to flesh out the basics further before (potentially) open-sourcing it again.
I can't recall a single CAD system which did this differently. Has modern solidworks figured this out?
The algorithms PGA (projective geometric algebra) enables are fundamentally more capable and robust than traditional kernels based on linear algebra (vectors and matrices). You can do really fancy things like interpolating in space and time robustly, finding extrema in high-dimensional phase spaces, etc...
This could potentially allow straightforward and robust solvers for kinematics, optimal shape finding, etc...
Every few decades there's a "step change" where some new algorithm or programming paradigm sweeps away the old approach because suddenly a hobbyist can do the same thing solo that took dozens of developers a decade in the past. I suspect (but cannot prove) that PGA is one of those things.
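As a toy taste of the kind of thing PGA makes uniform: in the projective/homogeneous view it builds on, "intersect two 2D lines" is a single product of their coefficient vectors, and parallel lines fall out naturally as a point at infinity rather than a division-by-zero special case. (This sketch uses the plain homogeneous cross product rather than a full geometric algebra library.)

```python
import numpy as np

def meet(l1, l2):
    """'Meet' of two 2D lines given as homogeneous coefficients (a, b, c)
    for ax + by + c = 0. The result is a homogeneous point; w == 0 means
    the lines are parallel and meet at infinity."""
    x, y, w = np.cross(l1, l2)
    return (x, y, w)

# x = 1 meets y = 2 at (1, 2); two parallel horizontals meet "at infinity".
print(meet([1.0, 0.0, -1.0], [0.0, 1.0, -2.0]))   # -> w != 0, point (1, 2)
print(meet([0.0, 1.0, -1.0], [0.0, 1.0, -2.0]))   # -> w == 0, no finite point
```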
Previously discussed here:
https://news.ycombinator.com/item?id=30597061
take a look at https://Plasticity.xyz. It's not open-source, but it's got a small, highly dedicated team behind it. It's built on Parasolid, the same kernel SolidWorks uses, so it's quite robust.
Also take a look at SolveSpace, caligula, FreeCAD, ...
Hard nope.
I was going to suggest solvespace. It is very barebones, but was much easier to use than FreeCad for me. It also has constraints in 3D space, which I use a lot: https://solvespace.com/index.pl
I'd like to hear someone's perspective on how difficult it would be to unify OpenUSD and CAD file formats so that they are portable between programs.
You're on point that there's a tremendous amount of money captured by Autodesk for CAD software that could be better directed at the open source community instead.
Software like OpenSCAD and FreeCAD is obviously not suitable for much commercial work and has very irritating limitations for hobbyist work. In my mind a big part of that is the UI. Blender has a good, established UI at this point, so I'd love to see the open-source CAD that provides an alternative to vendor lock-in come from a Blender add-on instead of a separate program.
I am no expert, but as I understand it the primary difficulty in developing good alternatives to commercial CAD software lies in the development of an effective geometric kernel.
It seems to me that if a developer of an open-source CAD program builds it as a Blender add-on, they can effectively outsource the rest of the development effort to the Blender community while focusing on the CAD kernel itself.
A viable OSS alternative, particularly one that prioritizes simplicity and being a gentle on-ramp for hobbyists, would be fantastic.
It wouldn't need (and I would argue shouldn't attempt) to compete with the big for-profit outfits to be useful either. Offering a simplified UX for the most-used features of the pro software would have a ton of utility, while also being a great place to build the foundational skills you need in order to master the more complex stuff.
Furthermore, a project with the mission of complementing the pro tools rather than replacing them would probably be far more likely to succeed, IMO. As long as projects could be exported to a variety of formats and brought into some other software when a specific use case arises, you'd have all your bases covered.
That said, I use FC as my main CAD driver and not only tolerate it but enjoy using it. I had to watch several hours of introductory videos to get the hang of things initially, but now I'm quite fast and proficient.
The initial pains and common complaints about the UI and such are basically non-issues for me now, and when I model, my cognitive energy is basically devoted to the design problem itself rather than issues with the UI or the behavior of the software.
It's necessary to put the time into learning it, but it's worth it.
Speaking with over a decade of experience as a developer in industrial CAD (but still just one random guy's point of view): the question _isn't_ about the availability of a 3D kernel.
3D kernel is not the "moat".
You can cross that with money.
You can purchase an ACIS or Parasolid license and you are off to the races. Or even use OpenCascade if you know what you are doing.
The more interesting question is: OK hotshot, you have a 3D kernel and $10M of investor money (or equivalent resources).
What's your next move? What industry are you going to conquer? What are the problems you are going to solve better than the current tools do?
What's the value you provide to the users except price?
What are you going to do better than the incumbent software in the relevant specific design industries?
Which industry is your go-to-market?
Etc etc.
The programmer's view is "I will build a CAD". The industrial user on the other hand does _NOT_ want a "general CAD software".
They want a tool with a specific, trainable workflow for their specific industrial use case.
So "build it and they will come" will require speaking to a specific engineering/designer audience.
You can of course build a generic tool (it's all watertight manifolds in the end), but success in the market depends on the usual story about market forces. What's your go-to-market/beachhead? Does it enable you to move to other markets? And the answer usually is: no. You need to build market share in _each_ domain separately.
Meanwhile, in other niches, Microsoft Office still beats open source office suites like LibreOffice; Photoshop isn't about to give up its crown to GIMP; Lightroom isn't losing to Darktable; and FreeCAD isn't even in the rear view mirror of Solidworks.
I wonder what will be the next category of open source to pull ahead? Godot is rapidly gaining users/mindshare while Unity seems to be collapsing, but Unreal is still the king of game engines for now. Krita is a viable alternative for digital painting.
OBS is on line 2 ....
https://www.blender.org/user-stories/japanese-anime-studio-k...
Nobody talks about how Linux dominates the server space anymore. Nobody talks about how “git is winning” or getting “battle tested”. These are mundane and banal facts.
I don’t believe the same has happened to Blender yet.
So while Maya is currently the standard, I don't believe that it's growing. It'll probably be around still in 20 years, with lots of studios having built their pipelines and tooling around it, with lots of people being trained in it, and because it's at the moment still better than Blender in some aspects like rigging and animation (afaik).
IMHO that's only still true because large studios can't afford to move their entire highly customized production pipeline, which they have built around Maya for nearly three decades, to any other tool (Blender or not), even if they desperately want a divorce from Autodesk. Autodesk basically has them locked in and can milk them until all eternity or until the studios go bankrupt.
I bet that the next generation of CGI and game studios will be built around Blender (and not based on the quality of those tools, but because of Autodesk business practices).
(edit: somehow my brain switched Adobe and Autodesk - forgivable though, because both use the same 'milking existing customers' strategy heh)
And also, how can you say Blender is not battle-proven? I mean, the big studios use Maya like Fortune 500 companies use Microsoft Windows - that doesn't mean Linux isn't battle-proven.
Super small nit (or info tidbit), but it doesn't take away from your overall message regarding production and scene scale.
Pixar does not and has not used Maya as the primary studio application, it's really only used for asset modeling and some minor shading tasks like UV generation and some Ptex painting. The actual studio app is Presto, which is an in-house tool Pixar has developed over the years since its earliest productions. All other DCCs are team/task specific.
Dreamworks is similar with their tool, Premo, IIRC. Walt Disney Animation Studios (WDAS) does use Maya as the core app last I saw, but I don't know if they've made any headway with evaluating Presto since 2019...
DaVinci Resolve is probably competitive with Premiere, but while free it's not actually open source. Either a viable competitor catching up or DaVinci publishing the code could change that fast, though.
I don't have the ability to compare these things in intimate detail, but Lightworks has at least been used for "real" productions [2] so I think it's production-ready.
Tbf, everything starts somewhere and all the proprietary apps you listed were not instant market leaders.
I can and do use all those FOSS tools just fine, both as a hobbyist and professionally; my needs are met. Others may not find the same, but I suspect there's just a lot of stickiness preventing even trying new workflows.
Mine aren't: GIMP is okay, FreeCAD is a complete joke. It is painfully obvious that their development is done primarily by F/LOSS enthusiasts rather than by industry professionals and UX designers. They are closer to being a random collection of features than a professional workhorse. You might eventually get the job done, but compared to the proprietary competition it is woefully incomplete, overly complicated, and significantly buggier.
The poor quality of FreeCAD is the main reason my 3D printer is collecting dust. As a Linux-only user the proprietary alternatives mostly aren't available to me, and FreeCAD is bad enough that I'd rather not do CAD at all. The Ondsel fork was looking promising for a while, but sadly that died off.
If you want to limit standard Office productivity to ones that were written with the GUI in mind, MS Office was the leader on the Mac before it came to PCs and crushed WordPerfect and Lotus early on.
KiCad, for PCB design. They have been making massive improvements over the last few years, and with proprietary solutions shutting down (Eagle) or being unaffordable (Altium) Kicad is now by far the best option for both hobbyists and small companies.
With the release of KiCad 5 in 2018 it went from being "a pain to use, but technically sufficient" to being a genuine option for less-demanding professionals. Since then they've been absolutely killing it, with major releases happening once a year and bringing enough quality-of-life improvements that it is actually hard to keep track of all of them.
From the type of new features it is very obvious that a lot of professional users are now showing interest in the application, and as we've seen with Blender a trickle of professional adoption can quickly turn into a flood which takes over the entire market.
KiCad still has a long way to go when it comes to complex high-speed boards (nobody in their right mind would use it to design an EPYC motherboard, for example), but it is absolutely going to steamroll the competition when it comes to the cookie-cutter 2/4/6 layer PCBs in all the everyday consumer products.
It is very kludgy and cumbersome to split a project into several PCBs (for example, a stack of PCBs connected by a backplane or headers, like an Arduino and a shield for it) and/or to have variations of a PCB for one schematic, like TH and SMD variants of the board for exactly the same schematic.
Even in my very modest almost-electrical (as opposed to electronic) projects I need one or the other from time to time.
As far as I understand, this is a limitation which is not easy to fix, because KiCad's entire architecture is based on this 1-1-1 principle.
The thing I miss is being able to rotate an IC by 45 degrees.
Guess what: user adoption increased dramatically, because it became pleasant (or tolerable) to use for people who had used literally any other program.
V8 brought into the core many things that had been plugins before, and replaced the old utilities that the neckbeards in the forum were crying to keep, or else! (Or else what? More adoption, as it turned out.)
V9 had many more improvements over V8, but also many regressions. V10 might be the release that truly consolidates the core of the suite, and then they can start really focusing on high-speed designs.
I've navigated many programs over my career, and unless a future employer mandates Altium, or purely technical reasons (8+ layer, high-speed designs) require Cadence, it's only KiCad for me.
Incidentally, it feels like these past two years FreeCAD, GIMP and Inkscape have started moving away from listening to the noisy members of the community and toward the useful ones. I'm seeing slow but steady progress that will eventually accelerate and make these tools true alternatives, as happened with KiCad (though it will really be tough for GIMP: even if it's perfectly usable for many, many tasks, any graphic designer will kick and scream if they're not given the Adobe suite, pity).
Myself, I do a little basic graphics work, like replicating buttons and such so as not to bother my colleague, or applying corrections to my photos; I proudly do that in GIMP and Inkscape.
I think FreeCAD might be on a distant hilltop in their rearview these days, check it out again.
The most important improvement is the toponaming heuristic solver spearheaded by Realthunder.
Since that was merged into mainline, it seems the devs keep breaking the UX and shortcuts without rhyme or reason, while the fundamentals are broken beyond repair.
I would never recommend FreeCAD to anybody, even though it is the only CAD I use, and I actually write Python for it for some automation.
I cannot live without freecad. But damn it's a mess.
Unity and Unreal are dinosaurs that target the shrinking console market. Godot is being built in their image. My hope is that something more versatile like Bevy becomes common so that we have something that could potentially compete with the next generation of Roblox.
Hadn’t heard that. How many AAA vfx studios have left Maya for Blender?
Really? I haven't done 3D rendering in a long time, admittedly, but back then Maya and Lightwave were miles ahead of Blender. Rhino3D too. Even 3DS Max was better. Lightwave seems to have sadly fallen off (unfortunately; IMO it was the best at one point and had excellent ray tracing). I didn't realize Blender had come such a long way -- that's great.
Since you mention niches: Adobe InDesign has no OSS competition at all, and Illustrator is still much better than Inkscape.
Radial tiling, my beloved, and a seemingly far more straightforward array modifier <3 Plus faster volume scattering for non-homogeneous volumes.
For those wondering "where the AI is", the new Convolve Node might be it :) Convolutions are a pretty generic signal-processing operation (equivalent to a pointwise, i.e. Hadamard, product in the frequency domain) which is also used in neural networks that work with images. Realistically though, this will be mostly useful for wonky hand-crafted blurs.
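For anyone who hasn't met convolution outside of neural networks, a small sketch of what such a node computes under the hood: a hand-rolled blur, plus the frequency-domain equivalence mentioned above (boundary handling differs slightly between the two, and the FFT version is circularly shifted by the kernel offset).

```python
import numpy as np

def convolve2d(img, kernel):
    """Direct 2D convolution of a single-channel image with a small kernel
    (zero-padded at the borders)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    k = kernel[::-1, ::-1]   # flip the kernel: convolution, not correlation
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * k)
    return out

# A 5x5 box blur: every output pixel is the average of its neighbourhood.
img = np.random.rand(64, 64)
box = np.ones((5, 5)) / 25.0
blurred = convolve2d(img, box)

# Convolution theorem: the same operation is a pointwise (Hadamard) product
# of the two spectra in the frequency domain.
F = np.fft.fft2(img)
K = np.fft.fft2(box, s=img.shape)
blurred_fft = np.real(np.fft.ifft2(F * K))
```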
The new sequencer looks fantastic, too. I always went to DaVinci Resolve, but I might be able to go full Blender. Compositor modifiers in the sequencer are also very welcome.
This is incredible for me.
156 more comments available on Hacker News