A new book about the origins of Effective Altruism
Mood: heated
Sentiment: mixed
Category: culture
Key topics: Effective Altruism, Philosophy, Philanthropy
A new book about the origins of Effective Altruism (EA) has sparked a heated discussion on the movement's principles and practices, with some defending its rational approach to charity and others criticizing its potential flaws and misapplications.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 32m after posting
Peak period: 147 comments (Day 1)
Avg / period: 80 comments
Based on 160 loaded comments
Key moments
- 01 Story posted: 11/17/2025, 5:37:42 PM (2d ago)
- 02 First comment: 11/17/2025, 6:09:15 PM (32m after posting)
- 03 Peak activity: 147 comments in Day 1 (hottest window of the conversation)
- 04 Latest activity: 11/19/2025, 1:37:20 PM (5h ago)
This is sadly still true, given the percentage of money that goes toward getting someone help versus the amount dedicated to actually helping.
GiveWell (givewell.org) is probably the most prominent org recommended by many EAs; it conducts and aggregates research on charitable interventions and shows, with strong RCT evidence, that a marginal charitable donation can save a life for between $3,000 and $5,500. This estimate has uncertainty, but there's extremely strong evidence that money to good charities like the ones GiveWell recommends massively improves people's lives.
GiveDirectly is another org that's much more straightforward: giving money directly to people in extreme poverty, with very low overheads. The evidence that this improves people's lives is very, very strong (https://www.givedirectly.org/gdresearch/).
It absolutely makes sense to be concerned about "is my hypothetical charitable donation actually doing good", which is more or less a premise of the EA movement. But the answer seems to be "emphatically, yes, there are ways to donate money that do an enormous amount of good".
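For concreteness, a minimal back-of-envelope sketch of what the cited range implies (the $3,000-$5,500 cost-per-life figures are from the GiveWell estimate above; the donation amount is a hypothetical):

    # Back-of-envelope: lives saved per donation, using the cost-per-life
    # range cited above. The donation amount is hypothetical.
    donation = 10_000                    # USD, hypothetical
    low_cost, high_cost = 3_000, 5_500   # USD per life saved (GiveWell range)

    print(f"Lives saved: {donation / high_cost:.1f} to {donation / low_cost:.1f}")
    # -> Lives saved: 1.8 to 3.3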
When you see the return on money spent this way, other forms of aid start looking like gatekeeping and rent-seeking.
That said, I also think longer-term research and investment in things like infrastructure matter too and can't easily be measured with an RCT. GiveWell-style giving is great, and it's awesome that the evidence is so strong (it's most of my charitable giving), but that doesn't mean other charities with less easily researched goals are necessarily bad.
As the numbers get larger, it becomes easier and easier to argue that the organization's continued existence is still a net positive even as more and more is wasted on organizational bloat.
To be fair, that particular example was obvious from day 1.
Perhaps, but it's exactly the type of thinking the article is describing.
The arguments always feel to me too similar to "it is good Carnegie called in the Pinkertons to suppress labor, as it allowed him to build libraries." Yes, what Carnegie did later was good, but it doesn't completely paper over what he did earlier.
Is that an actual EA argument?
The value is all at the margins. Carnegie had legitimate, functional businesses that would have been profitable without the Pinkertons. So without the Pinkertons he could still have afforded probably every philanthropic thing he did, which means the philanthropy doesn't justify it.
I don't really follow the EA space, but the actual arguments I've heard are largely about working at a FAANG to make 3x the money you'd make outside of FAANG, which lets you donate 1x-1.5x that money. Which to me is very justifiable.
But to stick with the article: I don't think taking in billions via fraud to donate some of it to charity is a net positive for society.
A janitor at the CIA in the 1960s is certainly working at an organization that is disrupting the peaceful Iranian society and turning it into a "death to America" one. But I would not agree that they're doing a net-negative for society because the janitor's marginal contribution towards that objective is 0.
It might not be the best thing the janitor could do for society (as compared to running a soup kitchen).
It could be, though: if by first centralizing those billions you could donate more effectively than the previous holders of that money, then the fraud victims may never have donated in the first place, or may have donated to the wrong thing, or not enough to make the right difference.
You missed this part: "The arguments always feel to me too similar"
> The value is all at the margins. Carnegie had legitimate, functional businesses that would have been profitable without the Pinkertons. So without the Pinkertons he could still have afforded probably every philanthropic thing he did, which means the philanthropy doesn't justify it.
That isn't what the OP was engaging with, though. They aren't asking you to answer "what could Carnegie have done better"; they're saying "the philosophy seems to be arguing this particular thing."
(Peter Singer’s books are also good: his Hegel: A Very Short Introduction made me feel kinda like I understood what Hegel was getting at. I probably don’t of course, but it was nice to feel that way!)
I do not believe the EA movement to be recoverable; it is built on flawed foundations and its issues are inherent. The only way I see out of it is total dissolution; it cannot be reformed.
> A paradox of effective altruism is that by seeking to overcome individual bias through rationalism, its solutions sometimes ignore the structural bias that shapes our world.
Yes, this just about sums it up. As a movement they seem to be attracting listless contrarians who seem entirely too willing to dig up old demons of the past.
When they write "rationalism" you should read "rationalization".
It's at least 50% right in my experience.
It's the perfect philosophy for morally questionable people with a lot of money. Which is exactly who got involved.
That's not to say that all the work they're doing/have done is bad, but it's not really surprising why bad actors attached themselves to the movement.
I don't think this is a very accurate interpretation of the idea, even with how flawed the movement is. EA is about donating your money effectively, i.e., ensuring the donation gets used well. On its face, that's kind of obvious. But when you take it to an extreme you blur the line between "donation" and something else. It has selected for very self-righteous people. But the idea itself is not really about excusing your being a bad person, and the donation target is definitely NOT unimportant.
Given that contrast, I'd ask what evidence do you have for why OP's interpretation is incorrect, and what evidence do you have that your interpretation is correct?
I do agree that things like EA and Libertarianism have to answer for the in-the-wild proponents they tend to attract but not to the point of epistemic closure in response to its subject matter.
I will never take a proponent of The Bell Curve seriously who tries to say they're "just following the data", because I do hold them and the book responsible for their social and cultural entanglements and they would have to be blind to ignore it. But the book is wrong for reasons intrinsic to its analysis and it would be catastrophic to treat that point as moot.
Coincidentally, libertarian socialism is also a thing.
The fact they're notorious makes them a biased sample.
My guess is for the majority of people interested in EA - the typical supporter who is not super wealthy or well known - the two central ideas are:
- For people living in wealthy countries, giving some % of your income makes little difference to your life, but can potentially make a big difference to someone else's
- We should carefully decide which charities to give to, because some are far more effective than others.
That's pretty much it - essentially the message in Peter Singer's book: https://www.thelifeyoucansave.org/.
I would describe myself as an EA, but all that means to me is really the two points above. It certainly isn't anything like an indulgence that morally offsets poor behaviour elsewhere.
> they could have been 13% more effective
If you think the difference between ineffective and effective altruism is a 13% spread, I fear you have not looked deeply enough into either standard altruistic endeavors nor EA enough to have an informed opinion.
The gaps are actually astonishingly large. Which is the impetus for the entire train of thought.
Rich people's vanity foundations, especially, are mostly a channel for dodging taxes and channeling corruption.
I donate to a lot of different organisations, and I do check which do the most good. Red Cross and Doctors Without Borders are very effective and always worthy of your donation, for example. Others are more a matter of opinion. Greenpeace has long been the only NGO that can really take on giant corporations, but they've also made some missteps over the years. Some are focused on helping specific people, like specific orphans in poor countries. Does that address the general poverty and injustice in those countries? Maybe not, but it does make a real difference for somebody.
And if you only look at the numbers, it's easy to overlook the individuals. The homeless person on the street. Why are they homeless, when we are rich? What are we doing about that?
But ultimately, any charity that's actually done is going to be more effective than holding off because you're not sure how optimal it is. By all means optimise how you spend it, but don't let doubts hold you back from doing good.
EA is about much more than this. Even within the same cause area, different interventions can vary greatly in their efficacy. So say you're interested in helping the homeless in your area. You can think of a few different interventions for them, ranging from giving them free housing, to job training, to just handing them cash. The question EA asks is which of those options you should be spending your money on. It doesn't have to turn into "it's immoral to do anything other than give to the global poor" or "we need to consider the unborn masses" if you don't want to go there.
For sure this is the case. But just knowing what you are donating to doesn't need some sort of special designation. "A is in fact much better than B, so I'll donate to A instead of B" is no different from any other decision where you'd weigh options. It's like inventing "effective shopping." How is it different from regular shopping? Well, with ES, you evaluate the value and quality of the thing you are buying against its price; you might also read reviews or talk to people who have used the different products before. It's a new philosophy of shopping that no one has ever thought of before, and it's called "effective shopping." Only smart people are doing it.
Nobody said or suggested only smart people can or should or are “doing EA.” What people observe is these knee jerk reactions against what is, as you say, a fairly obvious idea once stated.
However, it being an obvious idea once stated does not mean people intuitively enact it, especially before hearing it. Thus the need to label the approach.
This has some truth to it and if EA were primarily about reminding people that not all donations to charitable causes pack the same punch and that some might even be deleterious, then I wouldn't have any issues with it at all. But that's not what it is anymore, at least not the most notable version of it. My knee jerk reaction to it comes from this version. The one where narcissistic tech bros posture moral and intellectual superiority not only because they give, but because they give better than you.
The core notions as you state them are entirely a good idea. But the good you do with part of your money does not absolve you for the bad things you do with the rest, or the bad things you did to get rich in the first place.
Mind you, that's how the rich have always used philanthropy; Andrew Carnegie is now known for his philanthropy, but in life he was a brutal industrialist responsible for oppressive working conditions, strike-breaking, and deaths.
Is that really effective altruism? I don't think so. How you make your money matters too. Not just how you spend it.
So basically everyone who has a lot of money to donate has questionable morals already.
The question is, are the large donators to EA groups more or less 'morally suspect' than large donors to other charity types?
In other words, everyone with a lot of money is morally questionable, and EA donors are just a subset of that.
You say this like it's fact beyond dispute, but I for one strongly disagree.
Not a fan of EA at all though!
You cannot make 1000x the average person's wealth by acting morally, except possibly by winning the lottery.
A person is not capable of creating that wealth. A group of people have created that wealth, and the 1000x individual has hoarded it to themselves instead of sharing it with the people who contributed.
If you are a billionaire, you own at least 5,000x the median (roughly $200k in the US). If you're a big tech CEO, you own somewhere around 50,000-100,000x the median. These are the biggest proponents of EA.
The bottom 50% now own only about 2% of the wealth, the top 10% own two thirds, the top 1% own a whole third, and it's only getting worse. Who is responsible for the wealth inequality? The people at the right edge of the Lorenz curve. They could fix it, but don't. That is why they are exploitative.
The risk profile of early startup founders looks a lot like "winning the lottery", except that the initial investment (in terms of time, effort and lost opportunities elsewhere as well as pure monetary ones) is orders of magnitude higher than the cost of a lottery ticket. There's only a handful of successful unicorns vs. a whole lot of failed startups.
For Google and Facebook, users' data is sold to advertisers, and their behaviour is manipulated to benefit the company and its advertising clients. For Amazon, the workers are squeezed for all the contribution they can give and let go once they burn out, and Amazon manipulates the marketplace it governs to benefit itself. If you make multiple hundreds of millions, you are either exploiting someone in one of these ways, or you are extracting rent from them.
Just looking at the wealth distribution is a good way to see how unicorns are immoral. If you suddenly shoot up into the billionaire class, you are making the wealth distribution worse, because your money is accruing from the less wealthy proportion of society.
That unicorns propagate this inequality is harmful in itself. The entire startup scene is also often a fishing pond for existing monopolies. The unicorns are sold to the big immoral actors, making them more powerful.
What is taken away when inequality becomes worse is political power and agency. Maybe other contributors close to the founders are better off, but society as a whole is worse off.
That's quite a claim, as there's a higher probability of unicorns screwing people over. If a unicorn lives long enough it ends up at the top of the wealth pyramid. As far as I can tell, all of the _big_ anti-social actors were once unicorns.
That most organizations engaging in bad behavior aren't unicorns says nothing, because by definition most companies aren't unicorns. If unicorns are less than 0.1% of all companies, then for any given bad behavior B, P(not a unicorn | B) > P(unicorn | B) is almost guaranteed to hold on base rates alone.
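A minimal sketch of that base-rate point (every rate below is invented for illustration): even if unicorns were ten times more likely than other companies to behave badly, the vast majority of bad actors would still be non-unicorns.

    # Bayes' rule with made-up rates, illustrating the base-rate argument above.
    p_unicorn = 0.001         # unicorns as a share of all companies (assumed)
    p_bad_if_unicorn = 0.10   # hypothetical rate of bad behavior for unicorns
    p_bad_if_other = 0.01     # hypothetical rate for everyone else

    p_bad = p_unicorn * p_bad_if_unicorn + (1 - p_unicorn) * p_bad_if_other
    p_unicorn_if_bad = p_unicorn * p_bad_if_unicorn / p_bad
    print(f"P(unicorn | bad) = {p_unicorn_if_bad:.3f}")   # ~0.010, i.e. ~1%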
He's far from the only example.
I understand the distribution of wealth. I agree that in the US in particular it is setup to exploit poor people.
I don't think being rich is immoral.
That's an interesting position. I would guess that in order to square these two beliefs you either have to think exploiting the poor is moral (unlikely) or that individuals are not responsible for their personal contributions to the wealth inequality.
I'm interested to hear how you argue for this position. It's one I rarely see.
To quote[1]:
> In Astronomical Waste, Nick Bostrom makes a more extreme and more specific claim: that the number of human lives possible under space colonization is so great that the mere possibility of a hugely populated future, when considered in an “expected value” framework, dwarfs all other moral considerations.
[1] https://blog.givewell.org/2014/07/03/the-moral-value-of-the-...
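A toy illustration of the expected-value framing in that quote; the numbers below are placeholders chosen for illustration, not Bostrom's:

    # Toy expected-value comparison. Every figure is a made-up placeholder.
    current_lives = 8e9     # roughly the people alive today
    future_lives = 1e16     # hypothetical lives under space colonization
    p_future = 1e-4         # hypothetical probability of that future

    ev_future = p_future * future_lives   # = 1e12
    print(ev_future > current_lives)      # True: the speculative term dominates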
This is an interesting take. So if we found out for certain that an action we are taking today is going to kill 100% of humans in 200 years, it would be immoral to consider that as a factor in making decisions? None of those people are living today, obviously, so that means we should not worry about their lives at all?
But to put future lives on the same scale (as in to allow for the possibility of measuring one against the other) of current lives is immoral.
Future lives are important, but balancing them against current lives is immoral.
An even worse trap is to prioritize a future utopia. Utopian ideals are dangerous. They push people towards "the ends justify the means". If the ends are infinitely good, there is no bound on how bad the "justified means" can be.
But history shows that imagined utopias seldom materialize. By contrast the damage from the attempted means is all too real. That's why all of the worst tragedies of the 20th century started with someone who was trying to create a utopia.
EA circles have shown an alarming receptiveness to shysters who are trying to paint a picture of utopia. For example look at how influential someone like Samuel Bankman-Fried was able to be, before his fraud imploded.
Just wait until you find out about vegetarianism's most notorious supporter.
For most, it seems EA is an argument that despite no charitable donations being made at all, and despite gaining wealth through questionable means, it's still all ethical, because it's theoretically "just more effective" if the person keeps claiming they will, in the far future, put some money towards hypothetical "very effective" charitable causes, which just never seem to materialize, and which of course shouldn't be pursued "until you've built your fortune."
Maybe you misinterpreted it? To me, it was simply saying that the flaw in the EA model is that a person can be 90% a dangerous sociopath, and as long as the other 10% goes (effectively) to charity, they are considered morally righteous.
It's the 21st century version of Papal indulgences.
A friend of mine used to "gotcha" any use of the expression "X is about Y", which was annoying but trained a useful intellectual habit. That may have been what EA's original stated intent was, but then you have to look at what people actually say and do under the name of EA.
But I want to take another tack. I never see anybody make the following argument. Probably that's because other people wisely understand how repulsive people find it, but I want to try anyway, possibly because I have undiagnosed autism.
EA-style donations have saved hundreds of thousands of lives. I know there are people who will quibble about the numbers, but I don't think you can sensibly dispute that EA has saved a lot of lives. This never seems to appear in people's moral calculus, like at all. Most of those are people who are poor, distant, and powerless, but nevertheless, do they not count for something?
I know I'm doing utilitarianism and people hate it, but I just don't get how these lives don't count for something. Can you sell me on the idea that we should let more poor people die of preventable diseases in exchange for a more spotless moral character?
Whether you agree that someone can put money into saving lives to make up for other moral faults or issues is the core question. And even from a utilitarian view, we'd have to show that more of these donations happened than would have without the movement, or with a different movement, which is difficult to measure. Consider the USAID thing: Elon Musk may have wiped out most of the EA community's gains by causing that defunding, and was probably supported by the community in some sense. How do we weigh all these factors?
For me the core issue is why people are so happy to advocate for the deaths of the poor because of things like "the community has issues". Of course the withdrawal of EA donations is going to cause poor people to die. I mean yes, some funding will go elsewhere, but a lot of it's just going to go away. Sorry to vent but people are so endlessly disappointing.
> Elon Musk may have wiped out most of the EA community's gains by causing that defunding
For sure!
> and was probably supported by the community in some sense
You sound fairly under-confident about that, presumably because you're guessing. It's wildly untrue.
And the rationalist community writ large is very much part of that. The whole idea that private individuals should get to decide whether or not to do charity, or where they can casually stop giving funds or etc, or that so much money needs to be tied up in speculative investments and so on, I find that all pretty distasteful. Should life or death matters be up to whims like this?
For sure, not quibbling with any of that. The part I don't get is why it's EA's fault, at least more than it's many, many other people and organizations' fault. EA gets the flak because it wants to take money from rich people and use it to save poor people's lives. Not because it built the Silicon Valley environment / tech culture / investing bubble.
> I find that all pretty distasteful. Should life or death matters be up to whims like this?
Referring back to my earlier comment, can you sell me on the idea that they shouldn't? If you think aid should come from taxes, sell me on the idea that USAID is less subject to the whims of the powerful than individual donations. Also sell me on the idea that overseas aid will naturally increase if individual donations fall. Or, sell me on the idea that the lives of the poor don't matter.
None of this will happen naturally though. We need to make it happen. So ultimately my position is that we need to aim efforts at making these changes, possibly at a higher priority than individual giving - if you can swing elections or change systems of government the potential impact is very high in terms of policy change and amount of total aid, and also in terms of how much money we allow the rich to play and gamble with. None of these are natural states of affairs.
They donate a significant percentage of their income to the global poor, and save thousands of lives every year (see e.g. https://www.astralcodexten.com/p/in-continued-defense-of-eff... )
Just because the market pays for one activity does not mean its externalities are equally solved by the market's valuation.
From basic physics, it's akin to saying you can drop a vase and return it to its pre-dropped state with equal effort.
Entropy alone prevents EA.
For instance -
If I find some sort of fraud that will harm X number of users, but make me Y dollars - if Y > (harm caused), not doing (fraud making me Y dollars) could be interpreted as being "inefficient" with your resources or causing more harm. It's very easy to use the philosophy in this manner, and of course many see it as a huge perk. The types of people drawn to it are all much the same.
I actually think EA is conceptually perfectly fine within its scope of analysis (once you start listing examples, e.g. mosquito nets to prevent malaria, I think they're hard to dispute), and the desire to throw out the conceptual baby with the bathwater of its adherents is an unfortunate demonstration of anti-intellectualism. I think it's like how some predatory pickup artists do the work of being proto-feminists (or perhaps more to the point, how actual feminists can nevertheless be people who engage in the very kinds of harms studied by the subject matter). I wouldn't want to make feminism answer for such creatures as definitionally built into the core concept.
Aiming directly at consequentialist ways of operating always seems to either become impractical in a hurry, or get fucked up and kinda evil. It's so consistent that anyone thinking they've figured it out needs to have a good hard think about it for several years before tentatively attempting action based on it, I'd say.
https://en.wikipedia.org/wiki/Virtue_ethics
EA being a prime example of consequentialism.
Like you’re probably not going to start with any halfway-mainstream virtue ethics text and find yourself pondering how much you’d have to be paid to donate enough to make it net-good to be a low-level worker at an extermination camp. No dude, don’t work at extermination camps, who cares how many mosquito nets you buy? Don’t do that.
And I think the best that can be said of evolution is that it mixes moral, amoral and immoral thinking in whatever combinations it finds optimal.
> Utility has the advantage of sustaining moral care toward people far away from you
Well, in some formulations. There are well-defined and internally consistent choices of utility function that discount or redefine “personhood” in anti-humanist ways. That was more or less Rawls’ criticism of utilitarianism.
Virtue ethics is open-loop: the actions and virtues get considered without checking if reality has veered off course.
Consequentialist is closed-loop, but you have to watch out for people lying to themselves and others about the future.
I may be missing something, but I've never understood the punch of the "down the road" problem with consequentialism. I consider myself kind of neutral on it, but I think if you treat moral agency as only extending so far as consequences you can reasonably estimate, there's a limit to your moral responsibility that's basically in line with what any other moral school of thought would attest to.
You still have cause-and-effect responsibility; if you leave a coffee cup on the wrong table and the wrong Bosnian assassinates the wrong Archduke, you were causally involved, but the nature of your moral responsibility is different.
If I want to give $100 to charity, some of the places that I can donate it to will do less good for the world. For example Make a Wish and Kids Wish Foundation sound very similar. But a significantly higher portion of money donated to the former goes to kids, than does money donated to the latter.
If I'm donating to that cause, I want to know this. After evaluating those two charities, I would prefer to donate to the former.
Sure, this may offend the other one. But I'm absolutely OK with that. Their ability to be offended does not excuse their poor results.
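A minimal sketch of the comparison being made (the program-share figures below are hypothetical stand-ins, not the two charities' actual numbers):

    # Hypothetical program-share comparison between two similar-sounding
    # charities; real figures would come from a charity evaluator.
    donation = 100  # USD
    program_share = {"Charity A": 0.90, "Charity B": 0.60}  # assumed shares

    for name, share in program_share.items():
        print(f"{name}: ${donation * share:.0f} of ${donation} reaches programs")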
The conclusion that many EA people seemed to reach is that keeping your high-paying job and hiring 10 people to do good deeds is more ethically laudable than doing the thing yourself, even though it may be inefficient. Which really rubs a lot of people the wrong way, as it should.
The argument of EA is that feelings can be manipulated (and often are) by the marketing work done by charities and their proponents. If we want to actually be effective we have to cut past the pathos and look at real data.
Secondly, you're missing the point I'm making, which is why many people find EA distasteful: it focuses entirely on outcomes rather than internal character, and it arrives at those outcomes by abstract formulae. This is how we ended up with increasingly absurd claims like "I'm a better person because I work at BigCo and make $250k a year, then donate 10% of it, than the person who donates their time toward helping their community directly." Or "AGI will lead to widespread utopia in the future, therefore I'm ethically superior because I'm working at an AI company today."
I really don't think anyone is critical of EA because they think being inefficient with charity dollars is a good thing, so that is a strawman. People are critical of the smarmy attitude, the implication that other altruism is ineffective, and the general detached, anti-humanistic approach that the people in that movement portray.
The problems with it are not much different from utilitarianism itself, which EA is just a half-baked shadow of. As someone else in this comment section said, unless you have a sense of virtue ethics underlying your calculations, you end up with absurd, anti-human conclusions that don't make much sense to anyone with common sense.
There's also the very basic argument that maybe directly helping other people leads to a better world overall, and serves as a better example, than just spending money abstractly. That counterargument never occurs to the EA/rationalist crowd, because they're too obsessed with some master rational formula for success.
No, I did not miss that point at all. I think it is WRONG to focus on character. That leads us down the dark path of tribalism and character assassination and culture war.
If we're going to talk about a philosophy and an ethics of behaviour, we have to talk about ACTIONS. That's the only way we'll ever accomplish any good.
"But putting any probability on any event more than 1,000 years in the future is absurd. MacAskill claims, for example, that there is a 10 percent chance that human civilization will last for longer than a million years."
Sam Bankman-Fried was all in with EA, but instead of putting his own money in, he put everybody else's in.
Also his choice of "good causes" was somewhat myopic.
https://www.mcsweeneys.net/articles/i-work-for-an-evil-compa...
It's really amazing when reading this kind of stuff how many people don't appear to realize others don't buy into their cult. The way I see it: "I work for a company that intellectual descendants of the 2nd (or the 1st) most evil ideology invented by man consider evil."
EA-the-brand turned into a speed run of the failure cases of utilitarianism. Because it was simply too easy to make up projections for how your spending was going to be effective in the future, without ever looking back at how your earning was damaging in the past. It was also a good lesson in how allowing thought experiments to run wild would end up distracting everyone from very real problems.
In the end an agency devoted to spending money to save lives of poor people globally (USAID) got shut down by the world's richest man, and I can't remember whether EA ever had anything to say about that.
But again, I recognize the appeal of your narrative so you're on safer ground than I am as far as HN popularity goes.
I have a lot of sympathy for the ideas of EA, but I do think a lot of this is down to EA-as-brand rather than whatever is happening at grassroots level. Perhaps it's in the same place as Communism; just as advocates need a good answer to "how did this go from a worker's rights movement to Stalin", EA needs an answer to "how did EA become most publicly associated with a famous fraudster".
EA had a fairly easy time in the media for a while which probably made its "leadership" a bit careless. The EA foundation didn't start to seriously disassociate itself from SBF until the collapse of FTX made his fraudulent activity publicly apparent.
But mostly, people (especially rich people) fucking hate it when you tell them they could be saving thousands of lives instead of buying a slightly nicer house. That (it seems to me) is why MOMA / Harvard / The British Museum etc get to accept millions of dollars of drug dealer money and come out unscathed, whereas "EA took money from somebody who turned out to be a fraudster" is presented as a decisive moral hammer blow.
I feel like I need to say, there's also a whole thing about EA leadership being obsessed with AI risk, which (at least at the time) most people thought was nuts. I wasn't really happy with the amount of money (especially SBF money) that went into that, but a large majority of EA money was still going into very defensible life-saving causes.
I am not impressed with billionaires who dodge taxes and then give a few pennies to charity.
The government is quite literally all of us. Do better.
Doing that doesn’t buy you personal virtue. It doesn’t excuse heinous acts. But within the bounds of ordinary standards of good behavior, try to do the most good you can with the talents and resources at your disposal.
To an EA, what you said is as laughable of a strawman as if someone summarized your beliefs as "it makes no difference if you donate to starving children in africa or if you do nothing, because it's your decision and neither is immoral".
The popularity of EA is even more obvious than what you described. Here's why it's popular. A lot of people are interested in doing good, but have limited resources. EAs tried to figure out how to do a lot of good given limited resources.
You might think this sounds too obvious to be true, but no one before the EAs was doing this. The closest thing was charity rankings that just measured what percent of the money was spent on administration. (A charity that spends 100% of its donations on back massages for baby seals would be the #1 charity on that ranking.) Finding ways to do a lot of good given your budget is a pretty intuitively attractive idea.
And they're really all about this too. Go read the EA forum. They're not talking about how their hands are clean now because they donated. They're talking about how to do good. They're arguing about whether malaria nets or malaria chemotreatments are more effective at stopping the spread of the disease. They're arguing about how to best mitigate the suffering of factory farmed animals (or how to convince people to go vegan). And so on. EA is just people trying to do good. Yeah, SBF was a bad actor, but how were EA charities supposed to know that when the investors that gave him millions couldn't even do that?
I hope SBF doesn’t buy a pardon from our corrupt president, but I hope for a lot of things that don’t turn out the way I’d like. Apologies for USA-centric framing. I’m tired.
The perfect philosophy for morally questionable people would just be to ignore charity altogether (e.g. Russian oligarchs) or to use charity to strategically launder their reputations (e.g. Jeffrey Epstein). SBF would fall into that second category as well.
Don't outsource your altruism by donating to some GiveWell-recommended nonprofit. Be a human, get to know people, and ask if/how they want help. Start close to home where you can speak the same language and connect with people.
The issues with EA all stem from the fact that the movement centralizes power into the hands of a few people who decide what is and isn't worthy of altruism. Then similar to communism, that power gets corrupted by self-interested people who use it to fund pet projects, launder reputations, etc.
Just try to help the people around you a bit more. If everyone did that, we'd be good.
Which obviously has great appeal.
This describes a generally wealthy society with some people doing better than average and others worse. Redistributing wealth/assistance from the first group to the second will work quite well for this society.
It does nothing to address the needs of a society in which almost everyone is poor compared to some other potential aid-giving society.
Supporting your friends and neighbors is wonderful. It does not, in general, address the most pressing needs in human populations worldwide.
> Just try to help the people around you a bit more. If everyone did that, we'd be good.
That's why I was replying too. Obviously, if you are willing to "do more", then you can potentially get more done.
Tourism does redistribute money, but a lot of resources go to taking care of the tourists.
Utilitarianism suffers from the same problems it always had: time frames. What's the best net good 10 minutes from now might be vastly different 10 days, 10 months or 10 years from now. So whatever arbitrary time frame you choose affects the outcome. Taken further, you can choose a time frame that suits your desired outcome.
"What can I do?" is a fine question to ask. This crops up a lot in anarchist schools of thought too. But you can't mutual aid your way out of systemic issues. Taken further, focusing on individual action often becomes a fig leaf to argue against any form of taxation (or even regulation) because the government is limiting your ability to be altruistic.
I expect the effective altruists have largely moved on to transhumanism as that's pretty popular with the Silicon Valley elite (including Peter Thiel and many CEOs) and that's just a nicer way of arguing for eugenics.
I had assumed it was just simple mathematics and the belief that cash is the easiest way to transfer charitable effort. If I can readily earn 50 USD/hour, then rather than doing a volunteer job that I could pay someone 25 USD/hour to do, I simply do my job and pay for two people to volunteer.
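A minimal sketch of that arithmetic (the wage figures are the commenter's hypotheticals):

    # Earning-to-give arithmetic from the comment above.
    my_wage = 50         # USD/hour earned at my job (hypothetical)
    volunteer_wage = 25  # USD/hour to hire someone for the volunteer role

    volunteers_funded = my_wage / volunteer_wage
    print(volunteers_funded)  # 2.0: one hour of my work funds two volunteer-hours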
Effective altruism is a political movement, with all the baggage implicit in that.
An (effective) charity needs an accountant. It needs an HR team. It needs people to clean the office, order printer toner, and organise meetings.
Define "needs". Some overheads are part of the costs of delivering the effective part, sure. But a lot of them are costs of fundraising, or entirely unnecessary costs.
The rationalists thought they understood time discounting and thought they could correct for it. They were wrong. Then the internal contradictions of long-termism allowed EA to get suckered by the Silicon Valley crew.
Alas.
60 more comments available on Hacker News