Email Was the User Interface for the First AI Recommendation Engines
Posted 3 months ago · Active 3 months ago
buttondown.com · Tech · story
Sentiment: calm, positive · Debate: 40/100
Key topics
AI History
Email Interface
Recommendation Systems
The article discusses how email served as the user interface for early AI recommendation engines such as Ringo, and the discussion highlights both the nostalgia around that approach and its continued relevance today.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 48m after posting
Peak period: 24 comments in 0-12h
Average per period: 6.8 comments
Comment distribution: 34 data points (based on 34 loaded comments)
Key moments
1. Story posted: Oct 3, 2025 at 1:27 PM EDT (3 months ago)
2. First comment: Oct 3, 2025 at 2:15 PM EDT (48m after posting)
3. Peak activity: 24 comments in 0-12h, the hottest window of the conversation
4. Latest activity: Oct 8, 2025 at 12:44 PM EDT (3 months ago)
The first time I tried to have users fill out a form, I sent them an .exe file containing a Windows application that showed the form and saved the replies to a file. In the email I asked users to send me back that file. But no matter how I worded the email, 50% of users sent me back the .exe file instead.
That problem was what triggered me to learn about server side code and databases.
And when that worked, it hit me: I could make a form that asked users about their favorite bands and suggested new bands to them right away. That way the system would learn about all the bands of the world on its own and get better and better at suggesting music. This is how Gnoosic [1] was born. Later I adapted it for movies and called that Gnovies [2]. And for literature and called that Gnooks [3].
All 3 are still alive and keep learning every day:
[1] https://www.gnoosic.com
[2] https://www.gnovies.com
[3] https://www.gnooks.com
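Gnod's internals aren't spelled out in the thread, so the following is only a rough illustration of the self-learning idea described above -- count which bands visitors name together and suggest the most frequent companions -- and not Gnoosic's actual code:

    # Illustration only (not Gnoosic's actual code): a self-learning
    # recommender that counts which bands visitors name together.
    from collections import defaultdict

    co_counts = defaultdict(lambda: defaultdict(int))  # band -> band -> count

    def learn(favorites):
        """Record one visitor's list of favorite bands."""
        for a in favorites:
            for b in favorites:
                if a != b:
                    co_counts[a][b] += 1

    def suggest(band, n=5):
        """Suggest the bands most often named alongside `band`."""
        ranked = sorted(co_counts[band].items(), key=lambda kv: -kv[1])
        return [other for other, _ in ranked[:n]]

    learn(["Radiohead", "Portishead", "Massive Attack"])
    learn(["Radiohead", "Portishead", "Bjork"])
    print(suggest("Radiohead"))  # ['Portishead', 'Massive Attack', 'Bjork']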
But over time, it learns that two names can mean the same author.
You can help it by voting for typos here:
https://www.gnooks.com/vote_typos
If Gnoosic thinks a name is a band when it is actually a song, reporting the song as a typo of the band helps:
https://www.gnoosic.com/vote_typos
I got around this issue by not giving any help; the user was presented with a form with blanks for five artist-and-song-name pairs. That resulted in a crazy amount of typos, which I fixed "by hand" using a pretty lame interface, since I wasn't a good enough programmer to create a better way in 1996. I spent a LOT of time correcting typos.
It used to be a form without typeaheads for years on Gnoosic too.
I never manually fixed typos, but I have put quite some work into making the system figure them out on its own. For example by asking people "Hey, you entered 'The Beatled' which sounds a lot like the popular band 'The Beatles'. Could it be you meant 'The Beatles'?" and then offering buttons for yes/no etc.
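The thread doesn't say how Gnoosic spots these near-misses; one plausible way to drive that kind of prompt is fuzzy string matching against names already in the database, for example with Python's difflib. The sketch below is an assumption about the approach, not the real implementation:

    # Sketch of the "Could it be you meant ...?" check using fuzzy
    # matching (an assumed approach, not Gnoosic's actual code).
    import difflib

    known_bands = ["The Beatles", "The Beach Boys", "The Byrds"]

    def likely_typo(entered, cutoff=0.85):
        """Return a known band the input probably meant, or None."""
        match = difflib.get_close_matches(entered, known_bands, n=1, cutoff=cutoff)
        if match and match[0].lower() != entered.lower():
            return match[0]
        return None

    print(likely_typo("The Beatled"))  # The Beatles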
Let me get my post for this thread finished then I'll try to look you up.
Oliver Mtukudzi, Melting Palms
I've never heard of either. Enjoy both. Sent the link to my son. Brilliant! Thank you.
But I guess implementing this would take away a lot of the current implementation's simplicity? Are these available without violating copyright left and right? Is there even a straightforward way to map movie titles to, say, their primary IMDB poster image?
I'm surprised that something I'd never heard of appears to work that well.
I had The Similarities Engine up on the very early web for a couple of years. I joined some guys and we created a startup with a good bit of angel financing. It failed, but at the last minute, Firefly bought out the tech and code that we had developed, so our investors got a little back of what they put in.
Now I met my wife when I travelled to Poland to work with the programmers that were developing our product, and she is the best thing that ever happened to me, so I don't have ALL regrets about how the startup failed, but let me tell you about the big regret that I do have.
The code that was written in Poland was never used for anything, as far as I know. The only thing of value that Firefly got out of acquiring us was the patent that I wrote on my recommending algorithm. (Yeah yeah, go ahead and give me a bunch of grief here for applying for, and having it issue, a software patent. It was 1997, a different world, for one thing.)
Anyway, long story short, it all comes down to this one 20 minute phone call with my CEO, where I had to decide whether to take my patent back and kill the deal where NetAngels would sell to Firefly, or give up my patent in the hopes that Firefly stock would be worth something. Firefly didn't succeed and soon after sold to Microsoft for not much. But if I had killed the deal, I really think my then-fiancée and I would not have gotten married, so I think I chose wisely. However...
That patent of mine. It didn't look like much at the time maybe. But what happened is, over time, the Internet came to it. Big time. The patent describes what is basically THE basic collaborative filtering algorithm, and it issued. Microsoft never did anything with it as far as I know, they just kept it in a drawer so nobody could ever use it against them.
So here's the thing. Recently I asked ChatGPT and Claude how much my patent would have been worth, if I had held on to it. If you have regrets in your life about business deals, if you had a few early shares of Google, or some early bitcoin or something but you sold early, let me tell you, I have you beat. I'll let you see the numbers yourself--type this query into your favorite AI (and maybe do me a favor and post a comment here about what numbers it tells you):
A single individual created patent US 5,749,081. He sold it when it had barely issued. How much money do you think he could have made from the patent, if he had held on to it and effectively monetized it, as the early Internet grew to have so many companies using collaborative filtering systems to sell products to users and make money?
LLMs, on their own, can't really math.
Bob: "You shouldn't do that. Those don't function in a reliable way."
Alice: "... But have you tried one?"
__________
The fact that you're appealing to the faux-authority of chatbots suggests you didn't do anything to verify that the what-if prediction is plausible. If you had, that process would've given you something much more convincing to use.
https://patents.google.com/patent/US5749081A/en
Folks, we are just having fun here. Making fun of me, that is. But not for making HN comments that aren't as well written as you think they should be; for having created, owned, and then let slip through my hands a patent that ChatGPT, Claude, and Gemini say may have been worth a billion dollars. So what if the estimate is off by 100x and I only let $10M slip through my fingers. It is still ridiculous and wild. But seriously, who cares about the language I use to describe the thing.
https://www.whiteis.com/similarities-engine
Ok the title inflation to "AI" is dubious. But then much use of the term "AI" is dubious.
As music recommendation was already being done, I developed MORSE, short for MOvie Recommendation SystEm, shortly after Ringo appeared. Like Ringo and Firefly, it was a collaborative filtering system, i.e. it worked by comparing how similar your tastes were to the tastes of other users, and took no account of other information (e.g. genre, director, cast). As it was a purely statistical algorithm, I didn't call it, or other collaborative filtering systems, AI. It was different to symbolic AI (which I was previously working on, in Prolog and Common Lisp), didn't use neural networks, and wasn't Nouvelle AI (actually the oldest approach to AI) either. I wrote it in C (it had to run fast and was just processing numbers) and used CGI (Common Gateway Interface) to collect data and give recommendations on the WWW.
In a nutshell, to predict the rating for a film a user hasn't seen yet, it plotted the ratings given by other users for that film against how well their ratings correlated with the user's, found the best-fitting straight line through them and extrapolated it, estimating the rating of a hypothetical user whose tastes exactly matched the user's for the film. It also calculated the error on this estimate, which it took into account when giving recommendations. Other collaborative filtering systems used simpler algorithms which ignored the ratings of users whose tastes were different. When I used those simpler algorithms on the same data, recommendation accuracy got worse.
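Translating that description into a rough Python sketch (not the original MORSE code; the error estimate and other refinements are left out): regress other users' ratings for the film against their taste correlation with the target user, then extrapolate the fitted line to a correlation of 1.

    # Rough sketch of the approach described above (not the original
    # MORSE code; the error estimate and other refinements are omitted).
    import numpy as np

    def taste_correlation(u, v):
        """Pearson correlation over the films both users have rated."""
        common = [f for f in u if f in v]
        if len(common) < 2:
            return None
        a = np.array([u[f] for f in common], dtype=float)
        b = np.array([v[f] for f in common], dtype=float)
        if a.std() == 0 or b.std() == 0:
            return None
        return float(np.corrcoef(a, b)[0, 1])

    def predict(target, others, film):
        """Predict target's rating for `film` via regression over correlation."""
        xs, ys = [], []
        for other in others:
            if film in other:
                r = taste_correlation(target, other)
                if r is not None:
                    xs.append(r)            # similarity of tastes
                    ys.append(other[film])  # their rating of the film
        if len(xs) < 2:
            return None
        slope, intercept = np.polyfit(xs, ys, 1)  # best-fitting straight line
        return slope * 1.0 + intercept            # hypothetical perfectly matching user

    me = {"Alien": 9, "Blade Runner": 8, "Titanic": 3}
    others = [
        {"Alien": 8, "Blade Runner": 9, "Titanic": 2, "Heat": 9},
        {"Alien": 3, "Blade Runner": 2, "Titanic": 9, "Heat": 4},
        {"Alien": 9, "Blade Runner": 7, "Titanic": 4, "Heat": 8},
    ]
    print(predict(me, others, "Heat"))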
MORSE was released on the BT Labs' web site in 1995. It survived a few years there, but was later taken off the server. As BT weren't going further with it, I asked if the source code could be released. This was agreed, but it wasn't on any machine, and they couldn't find the backup tape. The algorithm is described in detail here: https://fmjlang.co.uk/morse/morse.pdf and more general information is here: https://fmjlang.co.uk/morse/MORSE.html
I had written a user database called "Hitbase" (a very primitive Facebook) on a Fidonet network that responded to Netmail messages to a given node and sent the responses to the requesting address. That was in the '90s, before the Internet was accessible from homes.
They had a fairly smart UI for following links. They would appear as footnotes and IIRC you could just hit reply, type the footnote number, and then send.
I think there’s a lot of alpha in classic RFCs.
This was collaborative filtering. Which is great and useful, but it's just simple statistics. You can write it in a SQL query a few lines long.
Or you can run more robust models using SVD (singular value decomposition) to reduce dimensionality and enable a form of statistical inference. I can't tell if Ringo/Firefly were using that or not. (If you have enough users, and a relatively limited set of objects like musical artists, you don't need this.)
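Whether or not Ringo/Firefly used it, the SVD variant mentioned above amounts to a low-rank reconstruction of the rating matrix. A toy numpy sketch, with missing ratings naively imputed by each user's mean (real systems handle this far more carefully):

    # Toy SVD sketch of the dimensionality-reduction idea mentioned above.
    import numpy as np

    R = np.array([            # users x artists, 0 = not rated
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
        [0, 1, 4, 5],
    ], dtype=float)

    rated = R > 0
    user_means = (R * rated).sum(axis=1) / rated.sum(axis=1)
    filled = np.where(rated, R, user_means[:, None])   # impute with user mean

    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    k = 2                                              # keep top-k latent factors
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]     # low-rank reconstruction

    # Predicted score for user 0 on the artist they haven't rated (column 2):
    print(approx[0, 2])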
But nobody calls these AI -- not now, and nobody did at the time. They're very clearly in the realm of statistics.
So the article is fun, but not everything needs to jump on the AI train, c'mon.
You'd submit query sequences as an email, and get an email back with predictions.
The input format has not changed, still FASTA.
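For anyone who hasn't seen one: a FASTA record is just a ">" header line followed by the sequence, so the whole email body is plain text. A made-up illustration:

    # Illustration only: a FASTA-formatted query as it would appear in
    # the body of such an email (the header and sequence are made up).
    query = (
        ">example_query hypothetical protein fragment\n"
        "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ\n"
    )
    print(query)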