Effective Altruists Use Threats and Harassment to Silence Their Critics
Posted about 1 month ago · Active 27 days ago
Source: realtimetechpocalypse.com · News story
Tone: heated, negative · Type: debate · Score: 80/100
Key topics: Effective altruism, Harassment, Criticism
Discussion activity: first comment 2m after posting · peak 13 comments in the 0-12h window · avg 4 comments per period
Key moments
1. Story posted — Nov 30, 2025 at 4:50 PM EST (about 1 month ago)
2. First comment — Nov 30, 2025 at 4:53 PM EST (2m after posting)
3. Peak activity — 13 comments in 0-12h (hottest window of the conversation)
4. Latest activity — Dec 7, 2025 at 11:18 AM EST (27 days ago)
ID: 46100784 · Type: story · Last synced: 11/30/2025, 10:06:07 PM
I just saved you several steps and opportunities for graft and corruption. Let's call it "immediate altruism."
[1]: What I mean is, I don't want to build my own company, and if I did, it would be in a very niche area that wouldn't directly benefit the people that most need help.
Ah, well for you, we have "regular altruism." Just pick a charity and send them money or donate your time to volunteer efforts in your community.
> What I mean is
Completely understandable. I was responding to the idea that being a cutthroat capitalist who treads on your customers and workers to make a bunch of money, and then exports some fraction of it into "effective altruism," is probably missing the point of altruism entirely. I think it creates more suffering than it solves.
Workers represent more of an investment in time and training, so they represent long-term value. Customers are fickle, as they should be, but if I get beat on prices today they're gone tomorrow.
> customers who believed in the company enough to give them money
You seem to be describing a donor, or possibly a member of a co-op. A customer simply receives an object of value in exchange for money. As long as they're getting good value on a quality product, their belief in the company is not material.
Not really: post a job opening and you'll likely get plenty of applicants, many of whom are indeed qualified. Taking the time to vet them and choose one is a benefit of having too many options. Getting customers is harder: you have to advertise and market your product, and "acquisition cost" is a real thing.
> As long as they're getting a good value on a quality product
But especially early on, how do new customers know the product is quality? Someone has to be the first to eat at a restaurant, or to hire you to paint their house. Even with established companies, most purchases and decisions are made with far-from-perfect information: ordering clothes online when you can't actually feel the material, picking a dentist when you don't actually know how he or she will treat you, letting Uber decide who will drive you to the airport, judging how a pair of skis will perform by looking at them on a carpeted floor. Customers just have to put faith in the seller or service provider, and that faith is what I'm suggesting is worth future compensation.
> if I get beat on prices today they're gone tomorrow
If this is the case, you really haven't built much of a business; you're just selling commodities, and your employees have failed to differentiate your company from your competitors.
When you look at some of the most well-known industrial companies, their founders basically did this.
Difficulty: give away too much of the company trying to raise capital and most investors won't let you do this. Of course, you aren't really the owner then anymore, are you?
I think that's the allure of effective altruism. You founded a company or were early enough in a company to have enough shares to sell to investors. Those investors want big returns. The company is now at their mercy, but hey, they gave you a pile of cash so you can spend it on feeling good.
I really do think that people should be careful about what they say in public and measure their words. And further, I think that the author of that book ought to be silent on that particular subject.
On one hand, I think that people should check before publication and not publish shit. That goes for posting on the internet, and also about publishing books.
Separately and orthogonally, I think that someone who doesn't check before publication and publishes shit should refrain from complaining about other people's shit, even though other people's shit really is shit.
That's what I mean by unbalanced scrutiny.
She made several mistakes, of which I'll describe two. One (modestly serious) was to confuse units and compute the wrong number. The second (against my religion) was to publish without sanity-checking. You and I both know she didn't check, because her estimate for the average water use of one building was 20% of the water use of the continent. Any sort of check would uncover that mistake.
We in the rational camp are supposed to behave differently from Alex Jones, and part of that is to check before we publish.
She's "making arrangements with her editor to rectify the situation". If she fixes every reported error, not just one of them, I'll have a lot of respect for her.
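The kind of pre-publication sanity check described above can be sketched in a few lines: compare an estimate against a known order-of-magnitude bound before publishing it. All figures and names below are illustrative placeholders, not the book's actual numbers.

```python
def sanity_check(estimate, known_total, label, max_fraction=0.01):
    """Flag an estimate that is an implausibly large fraction of a
    known total -- e.g. one building allegedly using 20% of a
    continent's water. The 1% threshold is an arbitrary assumption."""
    fraction = estimate / known_total
    if fraction > max_fraction:
        return f"IMPLAUSIBLE: {label} is {fraction:.0%} of the known total"
    return f"plausible: {label} is {fraction:.2%} of the known total"

# Made-up magnitudes: one building's estimate vs. a continental total.
print(sanity_check(estimate=2.0e11, known_total=1.0e12,
                   label="building water use"))
```

Any check of this shape would have caught the unit-confusion error: a single building landing at 20% of a continent's water use fails even the crudest plausibility bound.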
Post a story about him to HN and I'll either comment or miss the thread, both are possible.
Why would you ever want to demand that someone "stay silent" about anything? Taking away somebody's voice is the lowest of the low. You don't have to read it or interact with it if you don't like it. And how would you want to be treated when you make a mistake? Can't you see how that leads straight to a world of no progress, where people are afraid to do anything because it could turn out to be a mistake and they will be shunned for it? Are you not aware of the research into how bad punishment is for learning and the advancement of society?
Williams, K. D., & Nida, S. A. (2022). Ostracism and social exclusion: Implications for separation, social isolation, and loss. Current Opinion in Psychology, 47, 101353. https://doi.org/10.1016/j.copsyc.2022.101353
Knapton, H. M. (2014). The recruitment and radicalisation of Western citizens: Does ostracism have a role in homegrown terrorism? Journal of European Psychology Students, 5(1), 38-48. https://doi.org/10.5334/jeps.bo
As it reads now, I'm not sure if this is an objective critique of EA or the gripes of someone who orbited the same social space and had a public falling-out.
They learned the wrong lesson from Death Note
Do these people not understand that crops need water? Higher temperatures mean higher evaporation rates. Vast swathes of Iran have become inhospitable due to water mismanagement. That will lead to millions of refugees fleeing the country. Climate change is like poverty in this respect. If you're poor in water, you can't afford to make any mistakes.
Longtermism is a curse to long term thinking. You're not allowed to think about the next ten thousand years of humanity, because apparently that's too short of a window.
Not just that. This type of thinking contradicts optimal control theory. Your model needs to produce an uninterrupted chain from the present to the future. Longtermism chops off the present, which means the initial state is in the future. You end up with an unknown initial state, to which the Longtermists respond with hacks: they add a minimal set of constraints back. That minimal set is the avoidance of extinction, which is to say they are fine with almost everything.
Based on that logic, you'd think that Longtermists would be primarily concerned with colonizing planets in the solar system and building resilient ecosystems on earth so that they can be replicated on other planets or in space colonies, but you see no such thing. Instead they got their brains fried by the possibility of runaway AI [0] and the earth is treated as a disposable consumable to be thrown away.
[0] The AI they worry about is extremely narrow. Tesla doors that can't be opened in an emergency due to battery loss don't count as runaway AI, but if you had to beg the Tesla car AI to open the door and the AI refused, that would be worthy of AI safety research. However, they wouldn't see the problem in the inappropriate use of AI where it shouldn't be used in the first place.
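The optimal-control point above can be sketched with a toy planning problem: with the initial state pinned to the present, every intermediate state of the trajectory is determined; leave the initial state free and infinitely many trajectories satisfy the same far-future constraint, so the plan says almost nothing about what to do now. The dynamics and numbers here are toy assumptions, not anyone's actual model.

```python
import numpy as np

def plan(x0, horizon, target):
    """Toy controller: step the state a fixed fraction of the way
    toward the target at each step, returning the full trajectory."""
    xs = [x0]
    for _ in range(horizon):
        xs.append(xs[-1] + 0.5 * (target - xs[-1]))
    return np.array(xs)

# Initial state anchored to the present: the whole chain is determined.
present = 0.0
trajectory = plan(present, horizon=10, target=1.0)

# "Chopping off the present" means x0 is free: wildly different
# starting points all reach (approximately) the same far future,
# so the far-future constraint alone fixes nothing about today.
alternatives = [plan(x0, horizon=10, target=1.0) for x0 in (-5.0, 0.0, 5.0)]
```

Each trajectory in `alternatives` ends within a hair of the target, which is the sense in which a future-only constraint is compatible with almost any present.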