Adobe Photoshop 1.0 Source Code (1990)
Key topics
The release of Adobe Photoshop 1.0's source code has sparked a lively discussion about open-source alternatives, with GIMP being a prime example. While some commenters praised GIMP's capabilities, others pointed out its lackluster text editing features, prompting a debate about the limitations of open-source software. One commenter cheekily suggested submitting a pull request to address the issue, while others questioned what a "good text tool" even means, revealing a range of interpretations. As the conversation unfolded, it became clear that even with source code available, motivation and usage frequency play a significant role in driving contributions to open-source projects.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 5d after posting
- Peak period: 68 comments in 120-132h
- Avg / period: 20.4
- Based on 143 loaded comments
Key moments
- Story posted: Dec 18, 2025 at 10:37 AM EST (18 days ago)
- First comment: Dec 23, 2025 at 4:46 AM EST (5d after posting)
- Peak activity: 68 comments in 120-132h (hottest window of the conversation)
- Latest activity: Dec 26, 2025 at 12:24 AM EST (10 days ago)
and having the source available didn't help so far either :-))
And that's the irony covered in my post: even though the source is available, it hasn't motivated anyone enough so far to create a better version of the build.
Could you please show me a good text tool plugin for GIMP, then?
You can check their forums and other sites: the text tools are at the top of their discussion lists.
Reddit: https://www.reddit.com/r/GIMP/comments/1fecr6u/suggestion_im...
It's just the first two results from the top of Google.
Maybe the tool was improved in version 3.0; I'm running an older 2.x version. I will check it next time.
The versions I used were difficult in a few areas: applying font sizes, random loss/reset of settings, issues with the preview when editing, no font preview before selection, etc.
The strange font sizes and setting reset was mostly fixed as part of the 2020 massive refactor [0]. There are still some minor inconsistencies between the two font editor panels, but they're being worked on.
Thankfully, you shouldn't have seen any random setting changes since roughly the 2018 builds.
[0] https://gitlab.gnome.org/GNOME/gimp/-/issues/344
It's not intuitive. It's actually possibly my most hated widget in the entire FOSS ecosystem.
I feel like that has changed? Even Blender felt good the last time I used it, Firefox became kinda fine, though these are probably bad examples as they are both mainstream software. But what about OSS that is used primarily by OSS enthusiasts? What about GIMP now?
> To change GIMP to single-window mode (merging panels into one window), go to "Windows" in the top menu and select or check "Single-Window Mode"; this merges all elements like the Toolbox, Layers, and History into one unified view.
Whereas Photoshop and other "mainstream" software use terms and procedures non-programmers are more likely to be familiar with: heal this area with a patch, clone something with a clone stamp, scissors/lasso to cut something out (not saying GIMP doesn't have those)...
Unfortunately, designers are rare among the FOSS community. You can't attract real casual or professional users if you don't recognize the value of professional UI/UX.
> 2. Restrictions. Except as expressly specified in this Agreement, you may not: (a) transfer, sublicense, lease, lend, rent or otherwise distribute the Software or Derivative Works to any third party; or (b) make the functionality of the Software or Derivative Works available to multiple users through any means, including, but not limited to, by uploading the Software to a network or file-sharing service or through any hosting, application services provider, service bureau, software-as-a-service (SaaS) or any other type of services. You acknowledge and agree that portions of the Software, including, but not limited to, the source code and the specific design and structure of individual modules or programs, constitute or contain trade secrets of Museum and its licensors.
Edit: Disappointed is really not the right word but I am failing at finding the right word.
Your disappointment seems to be a form of FOMO, but there isn't actually anything that you're MO here.
> When will we get the linux port of Photoshop 1.0?
1) These historical source code releases really are largely of historical interest only. The original programs had constraints of memory and CPU speed that no modern use case does; the set of use cases for any particular task today is very different; what users expect and will tolerate in UI has shifted; and the programming languages and tooling available today are much better than the pragmatic options of decades past. If you were trying to build a Unix clone today, there is no way you would want to start with the historical release of Sixth Edition. Even xv6 is only "inspired by" it, and gets away with that because of its teaching focus. Similarly, if you wanted to build some kind of streamlined, lightweight Photoshop-alike, starting from scratch would be more sensible than starting with somebody else's legacy codebase.
2) In this specific case the licence agreement explicitly forbids basically any kind of "running with it" -- you cannot distribute any derivative work. So it's not surprising that nobody has done that.
I think Doom and similar old games are one of the few counterexamples, where people find value in being able to run the specific artefact on new platforms.
I think Adobe decided to release the code because they knew it was only valuable from a historical standpoint and wouldn't let anyone actually compete with Photoshop. If you wanted to start a new image editor project from an existing codebase, it would be much easier to build off of something like Pinta: https://www.pinta-project.com/
My personal thoughts are: open-source software is great, probably the ideal condition, but I wish the general software distribution environment were not effectively all-or-nothing, either open source or compiled binary. I wish that protected-source software were considered a more valid distribution model: you can compile, inspect, fix, and run the software, but are not allowed to distribute it. Trying to diagnose a problem when all you have is a compilation artifact is a huge pain. You see some enterprise software like this, but for the most part it's either open source or no source.
I am a bit surprised that there is no third-party patch to get Photoshop 1.0 to run under modern Linux or Windows, not for any real utility (at this point MS Paint probably has better functionality), but for the fun of it: "This is what it feels like to drive Photoshop 1."
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
This makes the license transitive so that derived works are also MIT licensed.
[1] https://en.wikipedia.org/wiki/MIT_License?wprov=sfti1#Licens...
AGPL and GPL are, on the other hand, as you describe.
You also could not legally remove the MIT license from those files and distribute with all rights reserved. My original granting of permission to modify and redistribute continues downstream.
On the contrary: https://opensource.org/osd
> Need more of a citation to understand that..?
This nonsense is not at all relevant to the claim for which I asked for a citation: "No, the original definition of open-source is source code that is visible (open) to the public."
No support for that claim has ever been offered.
[1] https://www.gnu.org/philosophy/free-sw.en.html
[2] https://opensource.org/osd
https://fsck.technology/software/Silicon%20Graphics/Software...
https://github.com/autc04/executor
https://github.com/jjuran/metamage_1/
Just as an experiment, I fed the resource fork to GPT-5.2 to see whether it could render the windows/dialogs in the resource fork. It did a fairly okay job. I think the fundamental limit it ran against (and acknowledges) is that a lot of the classic Mac look and feel was defined programmatically, literally down to calls to RoundRect(...).
https://chatgpt.com/s/t_694dddb290308191babcb07a72367e97
Thanks for posting your experience.
And, for purity/completeness, avoid Maxx Desktop and/or NSCDE; EMWM with XMToolbar is close enough to SGI's Irix desktop.
https://fastestcode.org/emwm.html
Just supporting a modern OS's graphical API (The pre-OSX APIs are long dead and unsupported) is a major effort.
I think all floppies are magical :)
https://computerhistory.org/wp-content/uploads/2019/08/photo...
E.g: https://c7.alamy.com/comp/2AA9BC4/ajaxnetphoto-2019-worthing...
It wasn't even broadband that destroyed that experience, when CDs came around developers realised they had space to just stick a PDF version of the manual on the CD itself and put in a slip that tells you to stick in the CD, run autorun.exe if it didn't already, and refer to the manual on the CD for the rest!
The Office 4.3 set of manuals were large too, but didn't have the information density the AutoCAD ones did.
Even some well-documented modern software is obviously documented by the programmers and programmer-adjacent.
They weren’t like textbooks, which have knowledge that tends to be relevant for decades. You’d get a new set with every software release, making the last 5-20 lbs of manuals obsolete.
You did lose some of the readability of an actual book. Hard-copy manuals were better for that. But for most software manuals, I did more “look up how to do this thing” than reading straight through. And with a pdf on a CD you had much better search capabilities. Before that you’d have to rely on the ToC, the book index and your own notes. For many manuals, the index wasn’t great. Full text search was a definite step up.
Even the good ones, like the 1980s IBM 2-ring binder manuals, which had good indexes, were a pain to deal with and couldn’t functionally match a PDF or text file on a CD for searchability.
You might expect now and again to get some optional updates/patches later, but that was rare - and rarer still for most people to even know about them.
These days, software is never complete. Nothing is done. It's just a point-in-time state with a laundry list of bugs and TODOs that just roll out whenever. The software is just whatever git tag we're pointing to today.
I understand how/why it has become like this - but it still makes me sad.
For trivial CRUD apps, and maintaining modified versions of the generated code was a nightmare.
It is also a great way to document existing architectures.
It is like the YAML junk that gets pushed nowadays to the detriment of proper schemas and the validation tools we have in XML.
"There are only a few comments in the version 1.0 source code, most of which are associated with assembly language snippets. That said, the lack of comments is simply not an issue. This code is so literate, so easy to read, that comments might even have gotten in the way."
"This is the kind of code I aspire to write.”
I'm looking at the code and just cannot agree. If I look at a command like "TRotateFloatCommand.DoIt" in URotate.p, it's 200 lines long without a single comment. I look at a section like this and there's nothing literate about it. I have no idea what it's doing or why:
Every comment is a line of code, and every line of code is a liability; worse, comments are a liability waiting to rot, to be missed in a refactor, and to become a source of confusion. They're an excuse to name things poorly, because "there's a good comment." The purpose of a variable should be in its name, including units if it's a measurement. Parameters and return values should only be documented when not obvious from the name or type, for example if you're returning something like a generic Pair, especially if left and right have the same type. We've been living with autocomplete for decades; you don't need to keep variable names short to type.
The problem with AI-generated code is that the myth that good code is thoroughly commented code is so pervasive that the default output mode is to comment every darn line. After all, in software education they don't deduct points for needless comments, and students think their code is better with the comments, because schools almost never teach writing good code. Usually you get kudos for extensive comments. And then you throw away your work. The computer science field is littered with math-formula-influenced, space-saving one- or two-letter identifiers with barely any recognizable semantic meaning.
> comments are a liability waiting to rot, to be missed in a refactor, and waiting to become a source of confusion
This gets endlessly repeated, but it's just defending laziness. It's your job to update comments as you update code. Indeed, they're the first thing you should update. If you're letting comments "rot", then you're a bad programmer. Full stop. I hate to be harsh, but that's the reality. People who defend no comments are just saying, "I can't be bothered to make this code easier for others to understand and use". It's egotistical and selfish. The solution for confusing comments isn't no comments -- it's good comments. Do your job. Write code that others can read and maintain. And when you update code, start with the comments. It's just professionalism, pure and simple.
(Please note: I'm not arguing against comments. I'm simply arguing that trusting comments is problematic. It is understandable why some people would prefer to have clearly written code over clearly commented code.)
That doesn't justify matching their sloth.
Lead by example! Write comments half a page long or longer, explaining things, not just expanding identifier names by adding spaces in between the words.
That, and I have mixed feelings about commenting code. (Thankfully I don't manage developers; I simply apply it personally, since it's a skill that I have.) I understand why we do it. I especially appreciate well-documented libraries and development tools. On the other hand, I fully understand that comments only work if they are written, read, and updated. The order is important here, since documentation will only be updated if it is read, and it will only be read if it is (well) written. Even then you are lucky if well-written documentation is read.
The flip side is that comments are duplication. Duplication is fine if they are consistent with each other. In some respects, duplication is better since it offers multiple avenues for understanding. Yet there is also a high probability that they will get out of sync. Sometimes it is "intentional" (e.g. someone isn't doing their job by updating it). Sometimes it is "unintentional", since the interpretation of human languages is not as precise as the compiler's translation of source code into object code. (Which is a convoluted way of saying that sometimes comments are misinterpreted.)
I like to add myself as a mandatory reviewer of all PRs and then reject changes that don't come with some explanatory comment or fail to update comments.
Even if huge swaths of the codebase are undocumented boring boilerplate, you still have to draw the line somewhere, otherwise you get madness like ten pages of authentication and authorization spaghetti logic without a single descriptive comment.
I've worked at places (early on) that were basically cowboy coding -- zero code review, global variables everywhere, not a comment or test to be seen. Obviously you can't enforce good comments there.
And I've worked at places that were 100% professional -- design documents, full code review, proper design, tests, full comments and comments kept fully up-to-date just like code.
It's just the culture and professionalism. If proper comments are enforced through code review, they happen. Ultimately, the head of engineering just decides whether it's part of policy or not. It's not hard. It's just a top-down decision.
This is exactly my view. Comments, while they can be helpful, can also interrupt the reading of the code. They are also not verified by the compiler; it's curious that in an era when everyone goes crazy for Rust's safety, there is nothing more unsafe than comments, because they are completely ignored.
I don't oppose comments. But they should be used only when needed.
A name and signature is often not sufficient to describe what a function does, including any assumptions it makes about the inputs or guarantees it makes about the outputs.
That isn't to say that it isn't necessary to have good names, but that isn't enough. You need good comments too.
And if you say that all of that information should be in your names, you end up with very unwieldy names, that will bitrot even worse than comments, because instead of updating a single comment, you now have to update every usage of the variable or function.
ORD4 = cast to a 32-bit integer.
BSR(x,1) simply meant x divided by 2. This was a very common coding idiom back in the days when compilers didn't do any optimization and a bitwise shift was much faster than division.
The snippet in C would be:
If I understand it correctly, it was calculating the top-left point of the bounding box.
pt == point, r == rect, h, v == horizontal, vertical, BSR(...,1) is a fast integer divide by 2, ORD4 promotes an expression to an unsigned 4 byte integer
The algorithms are extremely common for 2D graphics programming. The first is to find the center of a 2D rectangle, the second offsets a point by half the size, the third clips a point to be in the range of a rectangle, and so on.
Converting the idiomatic math into non-idiomatic words would not be an improvement in clarity in this case.
(Mac Pascal didn't have macros or inline expressions, so inline expressions like this were the way to go for performance.)
It's like using i, j, k for loop indexes, or x, y, z for graphics axes.
There's no context in those names to help you understand them, you have to look at the code surrounding it. And even the most well-intentioned, small loops with obvious context right next to it can over time grow and add additional index counters until your obvious little index counter is utterly opaque without reading a dozen extra lines to understand it.
(And i and j? Which look so similar at a glance? Never. Never!)
> (And i and j? Which look so similar at a glance? Never. Never!)
This I agree with.
> There's no context in those names to help you understand them, you have to look at the code surrounding it.
Hard disagree. Using "meaningful" index names is a distracting anti-pattern, for the vast majority of loops. The index is a meaningless structural reference -- the standard names allow the programmer to (correctly) gloss over it. To bring the point home, such loops could often (in theory, if not in practice, depending on the language) be rewritten as maps, where the index reference vanishes altogether.
The issue isn't the names themselves, it's the locality of information. In a 3-deep nested loop, i, j, k forces the reader to maintain a mental stack trace of the entire block. If I have to scroll up to the for clause to remember which dimension k refers to, the abstraction has failed.
Meaningful names like row, col, cell transform structural boilerplate into self-documenting logic. ijk may be standard in math-heavy code, but in most production code bases, optimizing for a 'low-context' reader is not an anti-pattern.
That was my "vast majority" qualifier.
For most short or medium sized loops, though, renaming "i" to something "meaningful" can harm readability. And I don't buy the defensive programming argument that you should do it anyway because the loop "might grow bigger someday". If it does, you can consider updating the names then. It's not hard -- they're hyper local variables.
But once you nest three deep (as in the example that kicked off this thread), you're defining a coordinate space. Even in a 10-line block, i, j, k forces the reader to manually map those letters back to their axes. If I see grid[j][i][k], is that a bug or a deliberate transposition? I shouldn't have to look at the for clause to find out.
You seem to be missing my point. It's not about improving "clarity" about the math each line is doing -- that's precisely the kind of misconception so many people have about comments.
It's about, how long does it take me to understand the purpose of a block of code? If there was a simple comment at the top that said [1]:
then it would actually be helpful. You'd understand the purpose, and understand it immediately. You wouldn't have to decode the code; you'd just read the brief remark and move on. That's what literate programming is about, in spirit: writing code to be easily read at every level of the hierarchy, and very specifically not having to read every single line to figure out what it's doing.
[1] https://news.ycombinator.com/item?id=46366341
To help understand, you need to see this code as math. As an algorithm. Graphics programming algorithms are literally math.
You're asking for training wheels comments, which just get in the way.
The comments are domain specific. I'm sure a few graphics programming engineers might want react useState(), useEffect(), etc. to be documented, yet a react programmer would scoff at the idea.
Part of figuring out a reasonable level of commenting (and even variable naming) is a solid understanding of your audience. When in doubt aiming low is good practice, but keep in mind that this was 2D graphics software written at a 2D graphics software company.
It all depends on who your professional peer is that you are writing the code for.
Also, breaking things down to more atomic functions wasn't the best idea for performance-sensitive things in those days, as compilers were not as good about knowing when to inline and not: compiler capabilities are a lot better today than they were 35 years ago...
Because it's quite clear, everything is well named, and the filename also gives the context.
I'm sure the code would be immediately obvious to anyone who would be working on it at the time.
Comments aren't unnecessary; they can be very helpful, but they also come with a high maintenance cost that should be considered when using them. They are a long-term maintenance liability because, by design, the compiler ignores them, so it's very easy to change or refactor code, miss changing a comment, and end up with a comment that is misleading or just plain wrong.
These days one could make some sort of case (though I wouldn't entirely buy it, yet) that an LLM-based linter could be used to make sure comments do not get disconnected from the code they are documenting, but in 1990? not so much.
Clamps the result so it doesn’t go outside the document.
If the region is bigger than the document, it re-centers instead of snapping to (0,0).
Note this is a toxic license. Accepting it and/or reading the code has potential for legal liability.
Still, applaud releasing the source code, even if encumbered. Preservation is most important, and any legal teeth will eventually expire with the copyright.
How would this potentially expose you to legal liability?
Taking his contribution to Photoshop into account, one could say that if you saw mainstream motion or still pictures in the Western world in the last three decades, you probably saw something influenced by him in one way or another.
FYI: the version I used was registered to Apple. Apparently, the Knoll brothers demoed PS to Apple, and they promptly shared it amongst themselves and their buddies. Almost all illegitimate copies of it are derived from that pirated copy.