A Visualization of the RGB Space Covered by Named Colors
Posted 2 months ago · Active about 2 months ago
Source: codepen.io · Tech story · High profile
Sentiment: calm, positive · Debate: 40/100
Key topics
Color Theory
Data Visualization
Web Development
A visualization of the RGB color space covered by named colors sparks discussion on color naming conventions, perception, and cultural differences.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 5d after posting. Peak period: 66 comments (Day 6). Avg per period: 18.2.
Based on 91 loaded comments.
Key moments
- Story posted: Oct 29, 2025 at 12:09 PM EDT (2 months ago)
- First comment: Nov 3, 2025 at 10:11 AM EST (5d after posting)
- Peak activity: 66 comments in Day 6, the hottest window of the conversation
- Latest activity: Nov 6, 2025 at 11:18 PM EST (about 2 months ago)
ID: 45748816 · Type: story · Last synced: 11/20/2025, 7:55:16 PM
Adding: looking some more, I think it would be nice if the rotation could be stopped. Labeling the axes would be nice also.
The author said he fixed that; interacting will stop it now.
I like sharing descriptive names with designers instead of naming everything "light blue" "dark blue" "not quite as light but still not dark blue" etc.
This new thing is tons of fun but seems a bit less practically useful.
Another dev, Daniel Flück, extended the app to help color blind users: https://www.color-blindness.com/color-name-hue/
I'm trying to figure it out.
They’ve done this! It’s shown on a “chromaticity diagram”, and is useful for comparing what colors different screens/printers/etc can reproduce. (It’s 2D not 3D cause it’s normalized for luminance or brightness.) Color science is weirdly fascinating:
https://en.wikipedia.org/wiki/Color_space?wprov=sfti1#
Only change is that lines 421 + 422, which sloooowly rotated the cube, are commented out in the JavaScript; otherwise it should act the same!
I've long been interested in the names of colors and their associations. If I may plug my own site a bit, check out the "color thesaurus" feature on OneLook that organizes color names more linearly. Start with mauve, as an example: https://onelook.com/?w=mauve&colors=1 (It also lets you see the words evoked by the color and vice versa, which was a fun LLM-driven analysis.)
If so, would it be possible to put a "namespace" in front (like html.violet, or html::violet)? That way you'd see which source each name comes from (though I realize this may cause multiple "hits" on the same value/name, or the same name having different values).
Either way, pretty cool. I agree, it would be nice to have a button or mode to stop spinning without having to hack it manually.
Now I'm not sure how many colors there are in that list, but it feels like there are too many to be practically useful. How do you see this?
I started with about 1,600 names scraped from Wikipedia, but with only that many there were a lot of redundancies, and when you disallow duplicates, you end up with colors being labeled as "orange" even though they don't actually look orange. On top of that, many of the names were racist or at least questionable (so are many names on colornames.org).
Other large lists, like the Pantone one, don't have a permissive license.
So for the past ten years or so, I’ve been collecting color names in a very unscientific way. It slowly turned into a hobby—something I often do on vacation, especially when I’m surrounded by unfamiliar places, dishes, or objects where color is used in unexpected ways.
Tools I made that benefit from using the names:
- https://meodai.github.io/poline/
- https://words.github.io/color-description/
- https://farbvelo.elastiq.ch/
- https://codepen.io/meodai/pen/PoaRgLm
- https://parrot.color.pizza/
- https://meodai.github.io/rampensau/
And probably some that I forgot about...
also, beautiful site! https://elastiq.ch/
Saying it’s the insight that led to JPEG seems wrong though, as DCT + quantization was (don’t quote me on this) the main technical breakthrough?
I guess there exist multiple names for the same colors, per https://www.w3schools.com/cssref/css_colors.php, and for some reason "Aqua" takes precedence in this display.
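For reference, CSS really does define several exact aliases (aqua/cyan, fuchsia/magenta, gray/grey all share one sRGB value), so any plot keyed by value has to pick one winning name per point. A tiny sketch of the collision, using a hand-picked subset of the named colors:

```javascript
// A few CSS named colors that alias the same sRGB value.
const named = {
  aqua: "#00ffff", cyan: "#00ffff",
  fuchsia: "#ff00ff", magenta: "#ff00ff",
  gray: "#808080", grey: "#808080",
};

// Group names by hex value to make the collisions visible.
const byHex = {};
for (const [name, hex] of Object.entries(named))
  (byHex[hex] = byHex[hex] || []).push(name);

console.log(byHex["#00ffff"]); // ["aqua", "cyan"]
```

Whichever name sorts or loads first ("Aqua" here) ends up labeling the point.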
Note that the headline gets this wrong but the page linked to gets this right.
sRGB or Rec2020 or ACEScg etc. are color spaces with known primaries and a known whitepoint. This is not nit-picky. Almost everyone doing CGI for the first time w/o reading up on some theory beforehand gets this wrong (because of gamma and then premultiplication, usually in that order).
Then there are color models which are also color spaces. CIE XYZ is an example.
[1] https://en.wikipedia.org/wiki/Color_model
[2] https://en.wikipedia.org/wiki/Color_space
Most of my career was somehow related to graphics programming, and I always thought it's a bit weird that most quantization algorithms operate in the RGB model despite the fact that it was designed for hardware, not for faithful color manipulation.
The easiest way to see that is to imagine a gradient between two colors and trying to make it in RGB. It doesn't seem right most of the time.
If so, then why would we consider distance in 3D space between two colors as faithful representation of their likeness?
Well, lo and behold, it's 2025 and everyone is finally accepting LAB as the new standard. :)
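The gradient/likeness point above can be made concrete. Here is a minimal sketch (mine, not from the thread) that converts sRGB to CIE Lab via XYZ and compares Euclidean (CIE76) distances: the naive RGB midpoint of blue and yellow is a muddy grey that sits perceptually much closer to yellow than to blue, which is exactly the failure mode of measuring likeness in RGB.

```javascript
// sRGB -> linear -> CIE XYZ (D65) -> CIE Lab, then CIE76 distance.
function srgbToLinear(c) {
  // Inverse sRGB transfer function, c in [0, 1]
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function srgbToLab([r, g, b]) {
  const [lr, lg, lb] = [r, g, b].map(srgbToLinear);
  // Linear sRGB -> XYZ with D65 whitepoint
  const x = 0.4124 * lr + 0.3576 * lg + 0.1805 * lb;
  const y = 0.2126 * lr + 0.7152 * lg + 0.0722 * lb;
  const z = 0.0193 * lr + 0.1192 * lg + 0.9505 * lb;
  // XYZ -> Lab, normalized by the D65 white point
  const f = t => (t > 0.008856 ? Math.cbrt(t) : 7.787 * t + 16 / 116);
  const [fx, fy, fz] = [x / 0.95047, y / 1.0, z / 1.08883].map(f);
  return [116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)];
}

function deltaE(lab1, lab2) {
  // CIE76: plain Euclidean distance in Lab
  return Math.hypot(...lab1.map((v, i) => v - lab2[i]));
}

const blue = [0, 0, 1], yellow = [1, 1, 0];
const mid = blue.map((v, i) => (v + yellow[i]) / 2); // [0.5, 0.5, 0.5], a grey

// Roughly 135 vs 106: the RGB "midpoint" is perceptually
// much closer to yellow than to blue.
console.log(deltaE(srgbToLab(blue), srgbToLab(mid)).toFixed(1));
console.log(deltaE(srgbToLab(yellow), srgbToLab(mid)).toFixed(1));
```

(Modern work often prefers OKLab over CIE Lab for gradients, but the argument is the same.)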
It's definitely not something you can plug into a three-value model. Those are good stimuli-encoding spaces, however.
The distinction between brain-color and physical-color is what screws everyone up.
What do you mean? And what is screwed up? We use 3 dimensions because most of us are trichromats, and because (un-coincidentally) most digital display devices have 3 primaries. The three-value models definitely are sufficient for many color tasks & goals. Three-value models work so well that outside of science and graphics research it’s hard to find good reasons to need more, especially for art & design work. It’d be more interesting to identify cases where a 3d color model or color space doesn’t work… what cases are you thinking of? 3D cone response is neither physical (spectral) color nor perceptual (“brain”) color, and it lands much closer to the physically-based side of things, but completely physically justifies using 3D models without needing to understand the brain or perception, does it not?
Printing instead uses colors that are in the range we can perceive well, and whenever you want a color that is beyond what a combination of the chosen CMYK tones can represent you just add more colors to widen your gamut. Also printed media arguably prints more information than just color (e.g. "metal" colors with different reflectivity, or "neon" colors that convert UV to visible light to appear unnaturally bright)
CMYK always has a dramatic color shift from any on-screen colorspace. Vivid green is really hard to get. Neons are (kinda obviously) impossible. And, hilariously/ironically (given how prevalent they are), all manner of skin tones are tough too.
Photoshop and Illustrator let you work in CMYK, and that's directionally correct. Ask your printer if they accept those natively.
https://www.amazon.com/Xeepton-Cartridge-Replacement-PFI4100...
There’s only 1 extra color there: red. There are multiple blacks, multiple cyans, multiple yellows, and multiple magentas. The reason printers use more than 3 inks is for better tone in gradations, better gloss and consistency. It’s not because there’s anything wrong with 3D color models. It’s because they’re a different medium than TVs. Note that most color printers take 3D color models as input, even when they use more than 3 inks.
My fav part: if you're preparing an ad for a newspaper, you need to keep the sum of all of your CMYK components under 120 or so, otherwise the ink will dissolve the paper and go through.
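As a rough illustration of the two points above, here is the textbook (non-ICC) RGB-to-CMYK conversion plus a total-ink-coverage check. Real prepress workflows use ICC profiles and press-specific limits, so treat the helper names and the limit handling as placeholder sketches, not a production conversion:

```javascript
// Textbook RGB -> CMYK: black (K) replaces the component common to all inks.
function rgbToCmyk([r, g, b]) {
  // r, g, b in [0, 1]
  const k = 1 - Math.max(r, g, b);
  if (k === 1) return [0, 0, 0, 1]; // pure black
  return [
    (1 - r - k) / (1 - k), // cyan
    (1 - g - k) / (1 - k), // magenta
    (1 - b - k) / (1 - k), // yellow
    k,
  ];
}

// Total area coverage as a percentage; presses impose a maximum
// (the exact limit depends on paper stock and press).
function totalInk(cmyk) {
  return cmyk.reduce((sum, ch) => sum + ch, 0) * 100;
}

const navy = rgbToCmyk([0.1, 0.1, 0.4]); // [0.75, 0.75, 0, 0.6]
console.log(totalInk(navy)); // 210 — too much ink for newsprint
```

Dark saturated colors blow past low coverage limits fast, which is why prepress software offers under-color removal rather than the naive formula above.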
For example, the ETC Source4 LED Lustr X8 has: Deep Red, Red, Amber, Lime, Green, Cyan, Blue, Indigo[0]
RGB LEDs are pretty crappy at rendering colours as they miss quite a lot of the colour spectrum, so the solution is just add more to fill in the gaps!
[0] https://www.etcconnect.com/WorkArea/DownloadAsset.aspx?id=10...
Akiyoshi's color constancy demonstrations are good examples of this. The RGB model (and any three-values "perceptual" model) fails to predict the perceived color here. You are seeing different colors but the RGB values are the same.
https://www.psy.ritsumei.ac.jp/akitaoka/marie-eyecolorconsta...
It's excellent at compressing the visible part of the EM spectrum, however. This is what I meant by stimuli encoding.
I find it confusing to claim that cone response isn’t color yet, that’s going to get you in trouble in serious color discussions. Maybe better to just be careful and qualify that you’re talking about perception than say something that is highly contestable?
The claim that a color model must model perception is also inaccurate. Whether to incorporate human perception is a choice that is a goal in some models. Having perceptual goals is absolutely not a requirement to designing a practical color model, that depends entirely on the design goals. It’s perfectly valid to have physical color models with no perceptual elements.
And so when I say "color" I only mean it to be the construction that we make out of the physical thing.
We project these constructions back outside of us (e.g. the apple is red), but we must not fool ourselves that the projection is the thing, especially when we try to be more precise about what is happening.
This is why I'm saying a 3D model of color (brain thing) is very far from modelling color (brain thing) at all. But! It's not purely physical either, otherwise it would just be a spectral band or something. So this is pseudo-perceptual. It's the physical stuff, tailored for the very first bits of anatomy that we have to read this physical stuff. It's stimuli encoding.
If you build a color model, it's therefore always perceptual, and needs to be evaluated against what you are trying to model: perception. You create a model to predict things. RGB and all the other models based on three values in a vacuum will always fail at predicting color (brain!) when the stimulus's surround is more complex.
It’s fine for you to think of perception when you say color, but that’s not what everyone means, and therefore, you’re headed for miscommunication when you make assumptions and/or insist on non-standard definitions of these words.
Physical color is of course a thing. (BTW, it seems funny to say it’s not a thing after you introduced the term physical-color to this thread.) Physical color can mean, among other things, the wavelength distribution of light power. A physical color model is also a thing, it can include the quantized numerical representation of a spectral power distribution. Red can mean 700nm light. Some people, especially researchers and scientists, use physical color models all the time. You’re talking about meanings that are more specific than the general terms you’re using, so maybe re-familiarizing yourself with the accepted definitions of color and color model would help? https://en.wikipedia.org/wiki/Color_model
Again, it’s fine to talk about perception and human vision, but FWIW the way you’re talking about this makes it seem like you’re not understanding the specific goals behind 3D color spaces like LAB. Nobody is claiming or fooling themselves to think they solve all perception problems or meet all possible goals, so it seems like a straw man to keep insisting on something that was never an issue in this thread. If you want to talk about 3D models not being good enough for perception, then please be more precise about your goals. That’s why I asked what use cases you’re thinking of, and we haven’t discussed a goal that justifies needing something other than a 3D color model - color constancy illusions do not make that point.
As with most recent technological breakthroughs, it uses math from a 1931 paper to magically blend colors in ways that seem so realistic it's almost uncanny.
RGB just means that color is expressed as a triplet of specific wavelengths. But what is red? And what does red = 1.0 mean w/o context (aka primaries & whitepoint)? What about HDR? What does green = 2.0 convey? Etc.
For context, I worked in VFX production from the 90's to the early 2010's. About 25 years.
And in commercially available VFX-related software, until the early 2000's, mostly, RGB meant non-linear sRGB, unfortunately (or actually: "whatever" would be more true).
And it shows. We have VFX composited in non-linear color space with blown-out, oversaturated colors in highlights, fringes from resulting alpha blending errors, etc. A good compositor can compensate for some of these issues, but only so far. If the maths are wrong, stuff will look shitty to some extent. Or as people in VFX say: "I have comments."
After that, SIGGRAPH courses etc. ensured people were developing an understanding on how much this matters.
And after that we had color spaces and learned to do everything in linear. And never looked back.
Games, as always, caught up a decade after. But they, too, did, eventually.
I saw a BBC? documentary about this years ago and it showed how some cultures had the ability to clearly identify different colours where I couldn't see any difference.
It turns out that knowing subtle differences in colours can have a strong impact on your daily life, so cultures pick unique parts of the colour spectrum to assign names to.
VOX : The surprising pattern behind color names around the world https://youtu.be/gMqZR3pqMjg
If you're interested in this is as a board game - https://boardgamegeek.com/boardgame/302520/hues-and-cues
11 lines of JavaScript thanks to AFrame, threejs and some of my own tinkering :
As I learned more about color models, I kept adding different ones over time. The perceptual models helped me understand the “missing” areas much better.
Later, after building an API around the list (https://github.com/meodai/color-name-api ), I started including other lists with permissive licenses too.
Appreciate all the thoughts and feedback here. I’ve also changed it so the cube stops spinning once you interact with it.
If you want to extend your color naming game by being able to say: This looks like Afghanistan-Water, or this looks like Ecuador-Forest
Page is here: https://landshade.com
I've been researching the way classic Macs quantize colors to limited palettes:
https://rezmason.net/retrospectrum/color-cube
This cube is the "inverse table" used to map colors to a palette. The animated regions are tints and shades of pure red, green, and blue. Ideally, this cube would be a voronoi diagram, but that would be prohibitively expensive for Macs of the late eighties. Instead, they mapped the palette colors to indices into the table, and expanded the regions assigned to those colors via a simultaneous flood fill, like if you clicked the Paint Bucket tool with multiple colors in multiple places at the same time. Except in 3D.
Feature request: I want the name of the color I'm hovering over to pop up next to the color. I don't want to have to look in the top left to see the name, especially with the board spinning. Also, I want the specific circle I'm hovering over to get a bit bigger so that I can see its exact color better and know that I've selected it.
4 more comments available on Hacker News