AI note-taking startup Fireflies was really two guys typing notes by hand
Mood: skeptical
Sentiment: mixed
Category: tech
Key topics: AI, startup, transcription services, deception
Fireflies, a $1B AI note-taking startup, initially relied on its two co-founders manually transcribing meetings for its transcription service, contradicting its AI-powered marketing claims.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h after posting
Peak period: 58 comments (Day 1)
Avg / period: 58
Based on 58 loaded comments
Key moments
- 01 Story posted: 11/15/2025, 2:11:43 AM (4d ago)
- 02 First comment: 11/15/2025, 3:13:47 AM (1h after posting)
- 03 Peak activity: 58 comments in Day 1 (hottest window of the conversation)
- 04 Latest activity: 11/15/2025, 5:18:51 PM (3d ago)
Claiming that the transcripts were generated by a nonexistent AI is fraud and should be treated as such.
> "Good luck with all the lawsuits," added another. "This might read like a gritty founder hustle story," said software engineer Mauricio Idarraga. "But it's actually one of the most reckless and tone-deaf posts I've seen in a while."
> "We told our customers there's an 'AI that'll join a meeting'," said Udotong. "In reality it was just me and my co-founder calling in to the meeting sitting there silently and taking notes by hand."
They charged $100/month for this. If it were free then whatever, but lying to paying customers about the service is not okay.
How do you get from 'AI that'll join a meeting' to 'an MIT engineering grad as your note taker'?
The rest about note takers is irrelevant when the problem is lying about the "note taker", since that could be the deciding factor for choosing a service, not price.
Erm, to the customer, what is the difference between a bunch of humans transcribing your meetings or an AI doing it? If I'm paying $100/month to get my meetings transcribed, why do I care whether it's the founder, an AI, or magic pixie dust (but I repeat myself)?
Misleading investors is a different problem.
Why you, personally, care or not is your business. If you were one of the customers who bought an automated (AI) service and instead got 2 guys gathering info during meetings, and are okay with it and see no difference between them -- then cool-emojis.
or for a more pointed response, see: https://news.ycombinator.com/item?id=45935354
If, for example, I have found some way to make some equivalent part cheaper, it is NOT incumbent upon me to disclose how I did it to you or my competitors. In order to protect trade secrets that may give away the answer or process to a competitor, I may lie straight to your face to misdirect you or possibly to throw a competitor down the wrong trail. As long as I'm not causing you harm and am delivering to spec, I consider myself to have a lot of latitude.
In any AI implementation, the company principals would have access to the data anyway so security wasn't compromised. In fact, because of the fallibility of human memory, security is probably better than running the audio through possibly compromised systems running an AI in a data center god knows where. So, data security and provenance was not harmed.
Sure, if you specified not to use indentured labor and I subcontract it to Bangladeshi orphans, you have a right to be upset as the bad PR could harm you. That didn't happen in this instance--the labor was company principals.
I see lots of posturing, but nobody pointing to what violations occurred other than not using <jazz hands> "AI". At no point has anyone in this thread demonstrated any harm to the end user beyond some very vague insinuations of "wrongdoing".
Instead, what I see is a bunch of people getting upset that they used "Disgusting Pixie Dust (direct employee labor)" instead of "Delicious Pixie Dust (AI)". If I'm being snide, I would point out that when the direction of substitution is reversed (AI instead of labor), the HN legions would be celebrating the cleverness. If I'm being particularly snide, I would chalk the anger up to the fact that it nicely demonstrated that the AI Emperor isn't wearing any clothes, which angers a bunch of people whose paychecks depend upon no one noticing.
Now, if they were pitching their "Custom AI" to investors while hand transcribing, that is a very, very different kettle of fish.
Probably depends upon how sensitive the information is, i.e., "was PII involved?" would be a fairly clear example.
AND you didn't have context or interest in the content?
AND you were required to write an essay at the end proving that you paid attention?!
Wait...
But when it’s a SaaS product, it becomes an inspirational hustle-culture story.
I would bet the TOS mentioned manual reviews.
If I invest in your AI startup and find out it's really people doing the work, I'm going to be pissed.
Seems to be a good example of today’s zeitgeist.
Many of the comments on this very post, seem to take the same position.
I’m not horrified about what they did. This kind of shysterism is pretty common, these days.
What does disturb me, though, is an “end justifies the means” acceptance of these practices.
In law (and law enforcement), there is the "fruit of the poisonous tree" doctrine, where starting with something wrongful immediately nullifies everything that follows, even if it solves the case.
Coming from a perspective of wanting a lot more ethics and integrity in technology, I think we might be well-served to consider something like this. I’m deeply disturbed by the blatant moral decay in tech. I keep flashing on Dr. Malcolm, talking about "could" and "should."
What this startup did isn't that, AFAICT. It wasn't manual work in service of learning...it was just fraud as a business model, no? Like, they were pretending the technology existed before it actually did. There's a bright line between unscalable hustle and misleading customers about what your product actually is.
Doing unscalable things is about being scrappy and close to the problem. Pretending humans are AI is just straight up deceiving people.
A similar example is "Make something people want". This is generally good advice for focusing your efforts on solving customers' problems, yet it is disastrous if taken literally to the fullest extent (you can only imagine).
> this was for our first few beta customers from 2017 and we made it clear that there was a human in the loop of the service. LLMs didn't exist yet. It was like offering an EA for $100/mo - several other startups did that as well, but obviously it doesn't scale.
So not necessarily fraud unless they deceived investors. Or he’s covering up his mistake. Getting the popcorn!!
2. Their startup now does what it says on the tin, and it's a unicorn.
3. To those claiming this was "unethical" - a large company providing this service would still record calls and have QA / engineers listening to calls to improve the service.
What the later post from the CEO describes (and presents as a clarification, though it conflicts with rather than clarifies the initial description) is not fraud. The question is: was the CTO being loose with the truth to paint a rebel image, or is the CEO being loose with the truth to protect the company image after the CTO’s post got picked up by multiple news outlets and people correctly pointed out that it described fraud?
Your quote seems likely to be from an after-the-fact damage-control "clarification" post by the CEO [1], describing the early users as close friends who knew the service was human-assisted rather than machine transcription. (I say "seems likely" because it expresses something similar to what you claim, but slightly more distant from the original story, and doesn't contain the quote you present; it is marked as edited, though, so it seems plausible that it once had your quote but was rewritten into an even stronger revision of the narrative for PR reasons.)
[0] https://www.linkedin.com/posts/sudotong_we-charged-100month-...
[1] https://www.linkedin.com/posts/krishramineni_we-charged-100m...
p.s. -- I already put this in a chain, but the majority of comments are just claiming this is fraud. Thought it might be worth posting something slightly more visible.
The privacy implications alone make the difference between a human sitting in on your meeting and an actual AI enough to call this fraud. Giving it a fancy name doesn't change that.
The expectation is that sensitive meetings run through a pipeline without being exposed to actual people (and when they are, for very specific reasons, there are audit trails).
Here, they literally listen to sensitive information and can act on it.
How do you trust they won't do it again to "enhance summaries" or something in the future?
Bias towards bullshit
https://en.wikipedia.org/wiki/Elizabeth_Holmes
Also, there was this, which also originally claimed to be AI:
https://spectrum.ieee.org/untold-history-of-ai-mechanical-tu...
Don't they realize the company will store their whole conversation in the cloud, and a rogue employee/founder can just as easily pull it up and listen after the fact?
I guess it just depends on your perspective...
CS courses really have to start placing a bit more emphasis on ethics.