It is completely coherent to believe both that an extremely bad thing is coming and that this does not justify any particular action. "The ends don't justify the means" is a principle that entire religions have been built on. It is not irrational or incoherent to believe that even something as serious as extinction does not justify arbitrary action.
Someone _may_ decide that it does, but it is not a necessary conclusion.
And that is completely aside from the many, many (in my opinion convincing) arguments that such acts of violence would not be effective anyway.
This article is a much better (and much longer) extension of the argument, and a direct refutation of the OP article.
The older I get, the more I get the sneaking suspicion that statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure only a very few people in power can commit violence.
An ongoing conflict has resulted in the violent deaths of literally many thousands of children. The people who enable those deaths are usually safely ensconced thousands of miles away, often living in cushy suburbs.
To emphasize as strongly as I possibly can, I am not advocating for more violence. Quite the contrary, I'm advocating for less. I just don't understand why we have all these adages to convince people that "violence is always wrong", while I'm sure at least some of the people who say that are actively engaged in building machines designed to kill people.
Related, the Substack link you posted is titled "Political Violence is Never The Answer". But our country (and a lot of them) were literally founded on political violence. How do people square those 2 ideas?
Aurornis 1 days ago [-]
> The older I get, the more I get the sneaking suspicion that statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure only a very few people in power can commit violence.
My experience has been the polar opposite: The older I get, the more I've seen people come to completely incorrect conclusions that justify their decisions to harm others. This ranges from petty things like spreading gossip, to committing theft from people they don't like ("they had it coming!") to actual physical violence.
In every case, zoom out a little bit and it becomes obvious how their little self-created bubble distorted their reality until they believed that doing something wrong was actually the right and justified move.
I think you're reaching too far to try to disprove the statement in a general context. Few people are going to say "violence is always the wrong answer" in response to someone defending themselves against another person trying to murder them, for example. I think these edge cases get too much emphasis in the context of the article, though. They're used as a wedge to open up the possibility that violence can be justified sometimes, which turns into a wordplay game to stretch the situation to justify violence.
hn_throwaway_99 1 days ago [-]
I think you have wildly misunderstood my point, given that your statement of "The older I get, the more I've seen people come to completely incorrect conclusions that justify their decisions to harm others" is not the polar opposite of what I was saying - if anything, it aligns with what I was saying very well.
To rephrase, my point is that phrases like "the ends don't justify the means" and "political violence is never the answer" seem to almost always be applied in very specific contexts, completely ignoring other contexts where many people (I'd say "society at large") are completely OK with the ends justifying the means and political violence.
To use your own sentence, I've seen many people in positions of power "coming to completely incorrect conclusions that justify their decisions to harm others", e.g. why bombing children in their beds is OK.
Aurornis 1 days ago [-]
> To rephrase, my point is that phrases like "the ends don't justify the means" and "political violence is never the answer" seem to almost always be applied in very specific contexts
That's not what you said. You were talking about society as a whole, not narrow contexts. I'll re-quote your original comment that I was responding to:
> statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure only a very few people in power can commit violence.
I was responding to your "at best, wildly logically inconsistent in any society at any given time" claim.
hn_throwaway_99 1 days ago [-]
Yes, society as a whole applies statements like "the ends justify the means" in wildly inconsistent ways, deeming it unacceptable in certain contexts and being completely fine with it in other contexts. I literally said in my original comment "To emphasize as strongly as I possibly can, I am not advocating for more violence. Quite the contrary, I'm advocating for less."
Beyond that, I can't help you with your reading comprehension.
metabagel 1 days ago [-]
The point of the comment you are replying to is that it's often logically inconsistent for people to say that violence is never the answer, given the amount of violence committed by our military, law enforcement, immigration enforcement, etc. - much of which is deemed acceptable.
bigbadfeline 1 days ago [-]
It's not inconsistent if the people who oppose violence also oppose the legal forms you enumerated.
The comment you're trying to explain is conflating different groups of people and that makes it virtually meaningless.
solaarphunk 1 days ago [-]
This is just a version of individualism vs the state. Much of western society has become increasingly confused about what violence is acceptable, let alone who should be allowed to commit violence, or have a monopoly on violence.
If we can't agree on that baseline, then it's quite obvious that we'll continue to have an escalation in the types of violence that we've seen in the past few years, against the political and corporate classes in the US, with very little end in sight.
zahlman 1 days ago [-]
That seems to me like a somewhat odd way to put it. From where I stand, the large majority of objection to "state monopoly on violence" comes from those who otherwise express a strongly collectivist worldview.
solaarphunk 13 hours ago [-]
Like I said, very confused.
antonvs 1 days ago [-]
> If we can't agree on that baseline
Part of the point about violence is it has little to do with societal agreement, to start with. It's what happens when that agreement breaks down. And in the end, it can change the agreement.
bloppe 1 days ago [-]
There's no room for subtlety in public discourse, but ya absolutist moral philosophies almost never stand up to scrutiny. If only things could be so simple.
I've concluded that there is no universal moral framework. You have to be comfortable with the fact that your perspective is just one of many, but that doesn't mean it's not worth fighting for, it just also means you might be subjected to others' moral frameworks if yours conflicts with theirs. Pretty unsatisfying, but I don't think an alternative conclusion exists that is sound.
JKCalhoun 7 hours ago [-]
"But our country (and a lot of them) were literally founded on political violence. How do people square those 2 ideas?"
That's easy enough. Your presumption is that the U.S. (and other countries) would not exist were it not for political violence. We don't know if that is the case as we have only the violent timeline.
lopsotronic 1 days ago [-]
Sayings like those are aspirational rather than being realist or simulationist, and they're supposed to be aspirational.
They're stories, just like all morality. It seems that when cultures reach a certain point in their dissolution, a growing population has difficulty drawing the line between stories and reality, and forgets what stories are *for* in the first place.
Having aspirational moral systems is critical for a hyperdeveloped mostly-democratic society. It creates a gap between the Best Of Us and the Worst Of Us, and thus suggests a vector. When that aspirational system fails - whether to cynicism or brutality or both matters little - you have a societal collapse incoming or under way.
One depressing example was the progression of the United States' moral judgement on torture during the 21st century. During the worst of the Cold War years I have very few illusions that torture was occurring - extremely imaginative variants, in fact. Everyone knew what happens in bush wars - we had quite a few veterans who remembered very clearly. But if in 1963 someone self-identified as a torturer, or recommended we just do it in the open, that same person would be roundly (and justly) castigated[0].
After 9/11, the idea surfaced that yes, we're going to torture, and yes, it's ok to do it. We accept the "realism".
To see the impact of this, well, I could point to a police officer in 1992 and then to a police officer in 2022. I could also point to an Action/Adventure TV program of the 1980s - say, MacGyver - and then point to an Action/Adventure TV program of the 2000s - like, say, 24. The Imperial Boomerang is a real thing, turns out, and now we all get to be Fallujah.
In reality, though? The answer to Scalia's "Shouldn't Jack Bauer torture a guy to save Los Angeles?" was always rhetorical[1], but if you took the bait, the correct answer was always "No", because it destroys the aspirational vector that defines our society. Or, more practically, if for no better reason than the fact that a Supreme Court justice is legally reasoning from a television show.
[0] The mixed reaction to incidents like My Lai shows how deep this division went. Not all of America thought it was a terrible thing, but we decided we were made of better stuff. Or we wanted to be, which, as it turned out, was also important.
[1] The "ticking time bomb" hypothetical which is almost always presented as a stack of epistemic certainty but which is actually unfalsifiable.
zahlman 1 days ago [-]
> I just don't understand why we have all these adages to convince people that "violence is always wrong", while I'm sure at least some of the people who say that are actively engaged in building machines designed to kill people.
First: because trusted people having such weaponry is, in expected value, believed to lead to less total violence. Second: because not all such violence is part of what you presumably have in mind when you speak of "ongoing conflict". (Of which there are many; when you speak of "an ongoing conflict" you come across as having a particular agenda, although of course I don't know which.)
> But our country (and a lot of them) were literally founded on political violence. How do people square those 2 ideas?
There is no contradiction and thus nothing to square. People are not responsible for the actions of their ancestors, nor of members of their identity groups, and especially not of the ancestors of members of their identity groups. And there is no contradiction between "the ends don't justify the means" and the ends being just.
bigbadfeline 1 days ago [-]
> First: because trusted people having such weaponry is, in expected value, believed to lead to less total violence.
Unfortunately "trusted people" don't grow on trees... but those who do grow to the highest positions of power, with the most destructive weaponry under their control, ask for trust with stuff like: "No foreign wars", "I'll end that conflict on day one"... "after bringing prices back down".
With that said, changing the conversation from violence to trust in the ideas and people who control it is a worthwhile endeavor.
>> The rational conclusion of doomerism is violence
That's quite backwards, violence is an irrational response to today's problems. Demonizing the discussion of those problems as "violence" can't be trusted - if the discussion stops, a rational solution will never be found.
pksebben 14 hours ago [-]
>> trust in the ideas and people who control it,
This right here is the crux of the issue. I don't even trust my own computer without fairly deep introspective tools, and what we're given for 'leadership' is 'this totally outdated and opaque system of voting for corporate shill A or corporate shill B is totally trustworthy! You obviously cannot think that you could get by without some asshat running your whole society so be thankful'.
Direct democracy, liquid democracy - whatever you pick that removes the middle man will be a marked improvement from day 1. We do not need these people deciding what's best for us. I'm not sure we ever did.
pembrook 1 days ago [-]
> "Political Violence is Never The Answer". But our country (and a lot of them) were literally founded on political violence. How do people square those 2 ideas?
This is just survivorship bias. Violence sits at the root of ALL human societies. The vast majority throughout history have failed or are currently failing.
If you're on HN you're probably sitting in one of the lucky, relatively prosperous ones. Violence didn't create the prosperity, otherwise Sudan and Liberia should be the richest countries in the world.
Your relative prosperity came from your ancestors being smart enough to build frameworks to allow a society to mediate scarcity without the need for violence (common law, markets and trade, property rights, etc all enforced via a government monopoly on violence). In fact, any rich country is the result of systems of decentralized scarcity mediation without decentralized violence.
It's the lack of violence which built the relative prosperity you enjoy today. Not the other way around.
jjk166 1 days ago [-]
> This is just survivorship bias. Violence sits at the root of ALL human societies. The vast majority throughout history have failed or are currently failing.
That only strengthens the argument that violence is sometimes the answer. It doesn't matter that it's not always the right answer, the fact is sometimes it has been, and no society has ever managed to survive without choosing it at some point or another.
Indeed, there is the argument to be made that the capability to choose violence is critical even if you never actually need to choose it. This is the basis of deterrence theory which has arguably been the cornerstone of international peace for decades and the theory of the social contract which has been the source of most people's freedoms and political power. A people who will never stand up for themselves and their friends, no matter what injustice is done upon them, invites that injustice. By simply acknowledging there exists a point beyond which you would retaliate, you discourage others from risking going past that point.
pembrook 1 days ago [-]
Sure, you can't monopolize violence under the state (and enforce laws) without the state demonstrating its willingness to use said violence (i.e. forcibly putting people who violate the law in prison, or using actual violence against them if they resist).
But OP was referring to political violence...which...how do I put this delicately...let's just say political polarization has led certain very-online members of the US populist-left, some of whom hang out here for example, to try to expand the Overton Window into bolshevism. See also: Luigi fans.
My point is that the most likely outcome of violent political overthrow is not utopia. The most likely outcome is a failed state and another violent overthrow. Political violence doesn't create anything, it only destroys. And creating is the hard part.
It's like saying; "at the birth of all successful people was a person who shit their pants. So why not try shitting your pants as an adult?"
Yes, one always precedes the other. But it has no correlation to whether the person becomes successful or not.
jjk166 7 hours ago [-]
I am also talking about political violence. Specifically I'm arguing against monopolizing violence under any entity, as it leads to injustice.
sylos 24 hours ago [-]
My guy, the vast majority of political violence is committed by the right. It's not zero from the left, but it is much less.
pembrook 21 hours ago [-]
I see you've addressed none of my points and instead were triggered by my suggestion your team may have some bad people on it.
Yes, in recent times in the US right wing violence has been more prevalent. But HN is not a right wing place, it's a left filter bubble like reddit and leftist violence is a growing phenomenon in US politics. Arguing against the right wing here would be like clapping along with a giant crowd, providing zero interesting discussion. The bolshevik revival in the world's wealthiest country is far more interesting to discuss.
Also historically, we have to remember that the left's utopian socialist vision (communism) is responsible for the absolute highest body counts, including 30 million starved to death and thousands of incidents of cannibalism in just Mao's great leap forward alone.
pksebben 14 hours ago [-]
I'm not going to go to bat for Mao(1), but I think you're underplaying the body count that capitalist countries have had - this is kind of easy to do because a lot of the damage that we do is obfuscated behind proxies. Besides the obvious and direct war crimes like Vietnam, Iran, Iraq, and now I guess Iran again, there's the second order stuff like Israel's Bad Neighbor Syndrome (which we have enabled financially for basically the duration), Pinochet who we put in charge, heck - pick any country south of the border and we've done some damage there at least once. Then there's the spiderweb of damage that flows out to the global south continuously through NAFTA and similar foreign policy. I suppose the principal difference is that we externalize a lot of our violence (and somehow are shocked when it comes back to bite us that we trained Osama Bin Laden).
Nobody's asking for Maoist China, I think mostly we're clamoring for something closer to Norway. I'm sure plenty of people would be happy to settle for UK-style socialized services but even those folks get lambasted for being "too far left" too so whaddyagonnado.
1 - I think he and 'bolshevism' are a bit of a strawman here anyway, as I've not heard a ton of pro-Mao people but a TON of people who identify as leftists - they are not the same thing
pembrook 13 hours ago [-]
> I think mostly we're clamoring for something closer to Norway. I'm sure plenty of people would be happy to settle for UK-style socialized services
Norway is a Saudi Arabia-style petrostate just with white people aesthetics (Saudi Arabia is also socialist). A better analogue might be Sweden/Finland, also tiny socialist Lutheran countries but with no massive oil reserves.
I write this while currently living in Finland. Your understanding of European socialism is stuck in the early 2000s. Things are going terribly here (and also in the UK). The welfare states throughout Europe are all in various states of slow collapse due to the public sector eating the private sector and climbing government spending as percentage of GDP not seen since the USSR (we're well over 50-60%, communist China is only at 35%). Deficits are ballooning.
Our unemployment rate is 3X that of the US and still climbing. We have no growth in the economy, no population growth, and no productivity growth. Pensions/benefits have been overpromised and will require decades of pain to resolve. Things are bleak and similar throughout the rest of Europe. I would update my priors if I were you.
Furthermore, the only reason socialism ever appeared to work here was due to us being ethnically homogenous and tiny. Government's lack of competitive pressure can be somewhat overcome by social pressure from the government official being your neighbor Pekka. It's harder to grift when everybody knows who you are and can see your new Mercedes. The US is not tiny and not ethnically homogenous.
Our system isn't even working here anymore, and it absolutely would not work in a massive, diverse, low-trust society like the US. You would do much better to lean into your strengths than to chase early 2000s European socialism, which was in fact a mirage brought about by a one-time economic boom due to the fall of iron curtain and EU integration.
thunderfork 22 hours ago [-]
>OP was referring to political violence
The dichotomy of "political" and "apolitical" violence is a false one, and one of the worst thought-terminating clichés of the 21st century. It's telling that "political violence" always seems to refer to violence that isn't the result of the processes of democratic politics.
Nobody's calling cops shooting protestors with "less lethal" rounds, or ICE officers riddling cars with bullets, "political violence", for some reason.
pembrook 21 hours ago [-]
I don't disagree with the idea that violence is fundamentally morally questionable. But humans haven't evolved to the point where we can function collaboratively without the threat of it from somewhere. We're animals.
The problem with believing all violence is illegitimate (even that which has been democratically granted to the state to enforce laws), is that society breaks down and loses its legitimacy if you remove this enforcement aspect.
The alternative to a monopoly on violence centralized in a democratic government is not zero violence. The alternative is decentralized violence (anarchy). I think everyone on both sides would find this far less desirable.
nradov 1 days ago [-]
During WWII, the entire Allied leadership was willing to kill millions of Axis children if that's what it took to win the war and force the enemy to surrender unconditionally. There was at least some genocidal intent. Population centers were intentionally bombed to wipe out civilian factory workers. We can argue about whether that was right or wrong but the reality is that it's probably inevitable once armed conflicts involving nation states escalate to an existential level.
“Before we’re through with them, the Japanese language will be spoken only in hell.”
-- Admiral William F. "Bull" Halsey Jr., 1941
Synaesthesia 1 days ago [-]
It was wrong, and yes would likely be seen as genocidal in the current day, rightly so. You can't just randomly kill innocent civilians, no matter what. It didn't even meaningfully accelerate the end of the war.
nradov 9 hours ago [-]
Was it wrong though? How many US troops should we sacrifice to save one enemy civilian? In other words, if you were President Roosevelt or Truman then how do you morally justify not doing everything possible to shorten the war by even one day? How do you tell a US family that their son had to die so that the US government could avoid randomly killing innocent civilians?
It's cheap and easy to pretend to be morally superior when you're not the one forced to make hard choices based on limited information, and then deal with the consequences.
eudamoniac 23 hours ago [-]
You are right, and it's like someone else said, a morality story. Of course violence is sometimes the answer, the ends do justify the means if the ends are important enough, etc. They are indoctrinated and brainwashed, in the purest sense of the word, into not even considering these ideas.
I hold it to be self evident that political violence is the only potential action that the people of North Korea could take to save themselves. Peaceful protest and voting, obviously, does not work. A massive mob rising up and stabbing dear leader with a dinner knife, at the cost of probably hundreds or thousands of themselves, might work.
To deny the above paragraph is incoherent. All governments are somewhere on the scale of justifiably being overthrown with violence. It is a valid option, and how tyrannical the government has to be before the option is justifiable is a matter of opinion. All unpretended shock and horror at the sentiment is either by the sheltered or by the afraid.
People know this subconsciously. How many stories of righteous revolution have we seen and cheered for? Shrek, Hunger Games, The Matrix, Braveheart, Dune, Star Wars; everyone knows these protagonists killing government officials are in the right. They will never make the connection, but they know it, and the intellectually honest will acknowledge it. Are we ruled by such different beasts than those characters are?
sublinear 1 days ago [-]
> How do people square those 2 ideas?
If you're seriously trying to understand the nuance of the act itself, you should consider reading what is standard issue for law enforcement and military.
"On Killing" by Dave Grossman is a classic.
If you only want to understand and stay in the realm of politics, I don't think you'll ever find a good answer either way. There's hypocrisy in every argument for or against violence. None of that is on the minds of people "in the shit" at that time. All that stuff comes later. As you're well aware, PTSD is no joke.
What I would take away from this is to recognize all the other ways in which we are compelled to act against our own self interest under what are sold as higher moral purposes.
From that perspective, it's not that hard to see how people can treat violence as just another tool. Whether it works is a question of how much those people value life above all else. If you're surprised that's not always the case in every culture, you may want to study that first. Beliefs may devalue life for persistence against a long history of conflict. This is where you may start to find some glimmers of an answer why we in the west sometimes think violence works to get those people to "snap out of it", but it really is ultimately about control of those people or that land at the end of the day.
estimator7292 1 days ago [-]
It's almost like the real world just doesn't deal in absolutes. For any absolute blanket rule you'd like to apply to the entire universe, there's a practically infinite number of exceptions and edge cases.
The real world is subjective and messy. Life is an endless series of edge cases and unique situations. The real world also has no requirement to be logically consistent or in any way rational. Every rule has exceptions, no set of rules and codes can cover every situation.
The nature of life is that your personal moral code will break down at some point. Your personal sense of right and wrong is not a universal truth, and you will be faced with situations that challenge your morals.
A wise person understands this fact, and a mature person can handle the messy reality of morals. An immature person thinks their personal moral code is universal truth and must never be questioned.
My morals tend toward Buddhist views, but I've been around long enough to learn the compromises that reality requires. Violence must always be avoided at all costs, but sometimes it is necessary. Occasionally violence is good. There are no hard rules, reality just plain and simple does not work like that.
slopinthebag 1 days ago [-]
Even more simply put, if political violence is never the answer and the institution of government is the biggest single source of political violence, what does that say about the legitimacy of the institution of government?
These trite quips act as a way to ensure only the elite ruling class has a justification for the violence they inflict.
atmavatar 1 days ago [-]
> "The ends don't justify the means" and literal entire religions have been built on this concept.
Most religions rely on a supernatural force judging us post-mortem to balance out the rights and wrongs done during life.
The problem with this, of course, is that there's zero evidence this force exists, and relying on this force to right the wrongs in life only serves to prevent the masses from attempting to correct the wrongs themselves either directly via vigilantism or, more importantly, by replacing existing systems with ones which will serve them better.
I'm all for fixing things first via the soap box and ballot box, but sometimes the ammo box is the only resort left.
The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.
- Thomas Jefferson
I don't believe we're at that point in the US, but I could certainly understand someone making that claim for a country like Iran.
janalsncm 1 days ago [-]
> The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.
When the British cavalry came to Virginia in 1781, Thomas Jefferson famously fled the governor’s mansion.
esseph 1 days ago [-]
> When the British cavalry came to Virginia in 1781, Thomas Jefferson famously fled the governor’s mansion.
Yes, rather than stay and get captured by British dragoons, he left.
This became a rallying cry by the Federalists (who would love the current power structure in the US right now).
The act was investigated by the Virginia General Assembly and found that he had done nothing wrong.
(Staying would have been extremely dumb)
janalsncm 1 days ago [-]
Your reasoning makes sense under a regime of infinite games. In other words, the goal is to continue playing the game rather than win once.
These people do not believe we are in an infinite game. They believe they have a narrow set of moves to avoid checkmate, and apparently getting rid of Sam Altman is one of them.
I will suggest another reason though: we are likely already in the light cone of continued AI development. So none of the vigilante actions are justified under their own logic. It’s probably preferable to avoid being in jail when the robot apocalypse comes.
I don’t think the death of Sam Altman or even the dissolution of OpenAI would stop the continuation of AI development. There are too many actors involved, and too many companies and nation states invested in continuing AI development. Even if Eliezer Yudkowsky became president of the United States, he could not stop it.
matthewdgreen 1 days ago [-]
Eliezer Yudkowsky has gone so far as to say that it might be ok to kill most of humanity (excepting a "viable reproduction population") to stop AI. If that's not just talk, then this line of reasoning only gives you a few possible modes of action. I would not be worried about the people with Molotov cocktails, but I'd be very worried about bioterrorism.
hollerith 1 days ago [-]
>Eliezer Yudkowsky has gone so far as to say that it might be ok to kill most of humanity (excepting a "viable reproduction population") to stop AI
That doesn't sound like a non-misleading summary of anything he would say. Do you have a quote or a link?
janalsncm 13 hours ago [-]
Apparently he deleted the tweet but there are screenshots and he apologized.
hollerith [-]
Those 2 links certainly satisfy my request. Thank you.
My summary of Eliezer's deleted tweet is that Eliezer is pointing out that even if everyone dies except for the handful of people it would take to repopulate the Earth, even that (pretty terrible) outcome would be preferable to the outcome that would almost certainly obtain if the AI enterprise continues on its present course (namely, everyone's dying, with the result that there is no hope of the human population's bouncing back). It was an attempt to get his interlocutor (who was busy worrying about whether an action is "pre-emptive" and therefore bad and worrying about "a collateral damage estimate that they then compare to achievable military gains") to step back and consider the bigger picture.
Some people do not consider the survival of the human species to be intrinsically valuable. If 99.999% of us die and the rest of us have to go through many decades of suffering just for the species to survive, those people would consider that outcome to be just as bad as everyone dying (or even slightly worse since if 100% of us were to die one day without anyone's knowing what hit them, suffering is avoided). I can see how those people might find Eliezer's deleted tweet to be alarming or bizarre.
In contrast, Eliezer cares about the human species independent of individual people (although he cares about them, too).
Also, just because I notice that outcome A is preferable to outcome B does not mean that I consider it ethical to do anything to bring about outcome B. For example, just because I notice that everyone's life would be improved if my crazy uncle Bob died tomorrow does not mean that I consider it ethical to kill him. And just because Eliezer noticed and pointed out what I just summarized does not mean that Eliezer believes that "it might be ok to kill most of humanity to stop AI" (to repeat the passage I quoted in my first comment).
janalsncm 2 hours ago [-]
The question was
> How many people are allowed to die to prevent AGI?
He didn’t say “not everyone dying is preferable to everyone dying”. The question was about acceptable consequences from preemptively stopping AGI under his assumption that AGI will lead to the extinction of all humans.
Those are only the same thing under the assumptions that 1) AGI is inevitable without intervention and 2) AGI will lead to the extinction of humanity.
If he believes he is being misunderstood, his “apology” doesn’t actually deny either of the assumptions I identified, and he is widely known to believe them.
In fact, his stated reason for correcting his earlier tweet, that using nuclear weapons is taboo, is an extremely weak excuse. Given the opportunity to save humanity from AGI if that is what you believe, it would be comical to draw the line at first use of nukes.
No, I think Eliezer is trying to come to grips with the logical conclusion of his strident rhetoric.
matthewdgreen 4 hours ago [-]
You have a population of relatively wealthy, scientifically-educated people who believe that AI risk is real and existential. That if they/we don't act, humanity itself might become extinct -- and that this is an unacceptable outcome. Then you have Yudkowsky mooting the possibility that this is basically inevitable (in the absence of global coordination that seems highly unlikely), and suggesting that hyper-violent outcomes might be literally the only way our species survives.
What I am not saying: Yudkowsky intends to exterminate most of humanity.
What I am saying: this is a dangerous environment, and these kinds of statements will be seen as a call to action by a certain kind of person. TFA is literal proof of the truth of that statement. Moreover: within the community there exist trained experts who might be able to, at the cost of millions of lives, plan an attack that could (plausibly) delay AI by many years.
The danger of this argument is that someone who reveres Yudkowsky might take his arguments to the logical conclusion, and actually do something to act on them. (Although I can't prove it, I also think Yudkowsky knows this, and his decision to speak publicly should be viewed as an indicator of his preferences.) That's why these conversations are so dangerous, and why I'm not going to give Yudkowsky and his folks a lot of credit for "just having an intellectual argument." I think this is like having an intellectual discussion about a theater being on fire, while sitting in a crowded theater.
janalsncm 2 hours ago [-]
I said something to the same effect in a sibling comment to yours.
> someone who reveres Yudkowsky might take his arguments to the logical conclusion
What about Eliezer himself? Does he not believe his own rhetoric? Certainly if he believes the future of the human race is at stake it demands more action than writing a book about it and going on a few podcasts.
I think the whole thing is a bit like the dog who finally caught the car. It’s easy to use this strident rhetoric on an Internet forum, but LessWrong isn’t real life.
matthewdgreen 1 hours ago [-]
If I ran the FBI I would be very gently keeping tabs on the most fervent (and technically capable) anti-AI groups. Unfortunately I don't think anyone is currently running the FBI. If I was tightly connected to folks in these communities, I would be keeping tabs on my friends and making sure they're not getting talked into anything crazy.
The inflammatory conclusion of his 2023 writing was that we need to "shut it all down", escalating to bombing datacenters:
> be willing to destroy a rogue datacenter by airstrike.
Now that someone who was an open follower of his words tried to bomb Sam Altman's house and threatened to burn down their datacenters, Yudkowsky is scrambling to backtrack. The X rant tries to argue that "bombing" and "airstrike" are different and therefore you can't say he advocated for bombing anything (a distinction any rationalist would normally pounce on for its logical inconsistency, if it wasn't coming from a famous rationalist figure). He's also trying to blame his hurried writings for TIME for not being clear enough that he was only advocating for state-sponsored airstrikes, not civilian airstrikes, bombs, or attacks. Again that distinction seems like grasping at straws now that he's face to face with the realities of his extremist rhetoric.
hollerith 1 days ago [-]
You doubt that Yudkowsky "was only advocating for state-sponsored airstrikes, not civilian airstrikes, bombs, or attacks." Let's let the reader decide.
In the article, the string "kill" occurs twice, both times describing what some future AI would do if the AI labs remain free to keep on their present course. The strings "bomb" and "attack" never occur. The strings "strike" and "destroy" occur once each, and this quote contains both occurrences:
>Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
>Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
>That’s the kind of policy change that would cause my partner and I to hold each other, and say to each other that a miracle happened, and now there’s a chance that maybe Nina will live. The sane people hearing about this for the first time and sensibly saying “maybe we should not” deserve to hear, honestly, what it would take to have that happen. And when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.
Aurornis 24 hours ago [-]
> The strings "bomb" and "attack" never occur.
What do you think an "airstrike" is, then?
Trying to argue that certain strings don't occur in the page is the kind of argument that gets brought out when someone is desperate for any technicality to avoid having to concede a point.
This level of weaponized pedantry is what makes trying to debate anything with LessWrong-style rationalists so impossible: There's always another volley of gish gallop to be fired at you when you get too close to anything that goes against their accepted narratives.
hollerith 24 hours ago [-]
You were trying to get people to view what EY wrote in the time.com article as an encouragement to engage in criminal violence (as opposed to state-sponsored violence a la an airstrike on a data center), such as the firebombing of Sam's home, when in actuality (both before and after the publication of the time.com article) EY has explicitly argued against committing any crimes, particularly violent crimes, against the AI enterprise.
Knowing that most readers do not have time to read the entire article, I brought up how many times various strings occur in the article to make it less likely in the reader's eyes that there are passages in the article other than the one passage I quoted that could possibly be interpreted as advocating criminal violence. I.e., I brought it up to explain why I quoted the 3 (contiguous) paragraphs I quoted, but not any of the other paragraphs.
In finding and selecting those 3 paragraphs, I was doing your work for you since if this were a perfectly efficient and fair debate, the burden of providing quotes to support your assertion that EY somehow condones the firebombing of Sam's home would fall on you.
handoflixue 1 days ago [-]
I found the last paragraph a fairly great summary of a rather long post:
> How certain do you have to be that your child has terminal cancer, before you start killing puppies? 10% sure? 50% sure? 99.9%? The answer is that it doesn't matter how certain you are, killing puppies doesn't cure cancer.
stickfigure 1 days ago [-]
The whole post should have just been this one line. He likes the sound of his own voice too much.
That said, it rings hollow. AI doomerism is rooted in Terminator style narratives, and in that narrative, the rogue Sarah Connor changes history (with a lot of violence, explosions, and special effects).
The whole scene is toxic.
xrd 1 days ago [-]
That was really fascinating. Thanks.
guelo 1 days ago [-]
Jeebuz that was long, I only made it through about half of it. But I think he's calling for cold war nuclear treaties style international cooperation. But I believe those mechanisms are broken and unavailable to us for two main reasons:
1. The Western world and especially the US is in the process of destroying the UN and other institutions of international law in order to protect Israel, for reasons that I have tried and failed to understand because the propaganda around it is so dense.
2. The Supreme Court made bribery of politicians legal so now we have AI investors with actual governmental power. All restraint efforts will be blocked by the federal government at minimum for these next 3 crucial years.
doctorpangloss 1 days ago [-]
I find all of this stuff very interesting, but nonetheless these two voices sound like they could never win an election and don't aspire to. That is the ultimate test of the worthlessness of a policy: it's all equally worthless until it wins an election, and that's what makes it reality.
AI Doomerism versus Accelerationism are both playful fantasies, it doesn't really matter what measurements or probabilities or observations they make, because the substantive part is the policies they advocate for, and policies are meaningless - all equally worthless - until elected.
What am I saying? The best rebuttal is, get elected.
gom_jabbar 1 days ago [-]
The interesting thing is that, for the "Father of Accelerationism" (Nick Land), AI Doomerism (doom for humans, at least for human identity) and Accelerationism (which for Land is just another label for capitalism: 'The label "accelerationism" exists because "capitalismism" would be too awkward.'[0]) are not opposed at all. And capitalism does not need to get elected.
(Land follows the above quote with "(But the reflexivity of the latter [capitalismism] is implicit.)"[0], which specifies that, for Land, more precisely, "Accelerationism is simply the self-awareness of capitalism"[1].)
[0] Nick Land (2018). Outsideness: 2013-2023, Noumena Institute, p. 71.
[1] Nick Land (2017). A Quick-and-Dirty Introduction to Accelerationism in Jacobite Magazine. Retrieved from github.com/cyborg-nomade/reignition
doctorpangloss 1 days ago [-]
i don't know, to me they are very different things - accelerationists might be really calling for Better Capitalismism, but that's only because chatbots (the thing you are accelerating) are really good at math, and math is important for making money. if it weren't good at making money, literally nobody would care, kids would not be CS and math majors, they wouldn't care about STEM. they only care because $. But most real problems, including human problems, are not math problems.
this is a huge blind spot in the whole, rationalist and broader STEM cultural-professional community: math isn't the best way to solve problems, most problems are not math problems. SOME of school might be math problems, and it feels good to be a Doctor or a Software Developer Engineer and get your kids to practice "problem solving" - no, they are practicing math problems, not problem solving.
for example there's no math answer to whether or not a piece of land should be a parking lot, or an apartment building, or a homeless shelter, or... you can see how just saying, "whoever is the highest bidder" - that's the math answer, that's why capitalism and accelerationism are related to their core - isn't a good answer. it pretends to be the dominant way we organize land, and of course, it isn't the dominant way we organize land usage anywhere at all, even if we pretend it is. there's no "bidding" for whether a curb should be a disabled parking spot, or a bike lane, or parking, or a restaurant seating, or a parklet, or... these are aesthetic, cultural choices, with meaningless economic tradeoffs. it's not about money, so it's not about math, so math does not provide an answer. there are lots of essential human questions that cannot even be market priced, such as, what should we pay to invent new cures to congenital, terminal illness in children? parents, and a lot of people, would pay "any" price, which is a market failure - but there are a lot of useful political answers to that question. a chatbot cannot answer that question, and it would struggle to take leadership and get elected to answer that question.
mathematicians are basically never elected. so chatbots would not be. and eliezer yudkowsky would not be. are you getting it? capitalism does definitely need to be elected, you might think it wins every election but it very often loses at the local level!
i am agreeing with Hashem Sarkis dean of the MIT SAP and kind of disagreeing with Bong Joon-Ho, for further reading.
lazyasciiart 1 days ago [-]
Iran's leadership seems to be a solid rebuttal of that argument.
coldtea 1 days ago [-]
>It is completely coherent to both think that an extremely bad thing is coming, and yet that does not justify any particular action.
Yes, it's called "fatalism".
classified 12 hours ago [-]
So according to you the War for Independence of the US against England never happened, and it would have been completely ineffective if it had happened.
Same goes for the French Revolution. The list could go on.
I think you're overly idealistic.
Joker_vD 1 days ago [-]
> "The ends don't justify the means"
Eh. The ends do justify the means, but only inasmuch as those means actually do help to achieve the ends — astonishingly often, they don't (and, more rarely but still often, they actually take you in the opposite direction from those end goals), and so remain unjustified.
MostlyStable 1 days ago [-]
I personally believe quite strongly that some things are just immoral on their face and that I would rather fail/die without using them than succeed/live while using them. I agree that in very many cases where people do these things, they are, in the long run, counter productive, but I also believe that even if could be conclusively proven that this wasn't the case, I would still advocate against their use.
f1shy 1 days ago [-]
Thanks.
That sentence is constantly repeated, as if it were some kind of absolute truth. The fact is, for every end, there will probably be some means that are totally justified, and some that are not.
I think the original context is: no matter how high, pure, and perfect the end is, that does not mean any means is justified.
kgwgk 1 days ago [-]
According to Joker_vD it’s only the means that won’t help that wouldn’t be justified.
BurningFrog 1 days ago [-]
I agree, but it's only half of the equation.
Your solution also can't be worse than the problem it solves!
Overly clear example: Killing your noisy neighbors actually achieves the end of a quiet home. But that really doesn't justify it.
nitwit005 1 days ago [-]
Mentally ill people often have a justification for their actions which is vaguely rational, but you'll notice the vast majority of people aren't doing what they're doing.
These people just get attracted to political causes somehow. Even the women's suffrage movement had some people setting buildings on fire.
stickfigure 1 days ago [-]
I miss the days when people blamed all their woes on their parents circumcising them. Simpler times.
thrance 9 hours ago [-]
Are you implying that the suffragettes were mentally ill?
AnimalMuppet 8 hours ago [-]
I think they're implying that the suffragettes who set buildings on fire were mentally ill.
thrance 7 hours ago [-]
Hmm, maybe. Even then, there are causes worth burning buildings over, I think, and the right to vote is probably one of them.
nitwit005 6 hours ago [-]
You might want to work on your reading comprehension.
6 hours ago [-]
thegrim33 1 days ago [-]
The LLM doomerism is just one arm of the general "us vs them" strategy - defining a group of people as the others who are the bad guys, defining yourself as the good guys, constantly fostering hate against the others, finding ways to give your group rationale for why they have the moral high ground, all of it in the end an act to gain power/influence/money for the people orchestrating it.
The anti-AI angle is just the latest flavor of it, replacing previous ones (I'm sure you can think of some) and eventually being replaced by the next new thing/person that they'll try to direct us to hate.
I'm willing to bet any amount of money that 99.99% of AI doomers identify with the same extreme end of the political spectrum. That should be a very big red flag and highly indicative of the real motive behind the movement.
shimman 1 days ago [-]
No, AI doomerists are people cosplaying as religious fanatics. It's mostly sci-fi nonsense that ignores the real material realities that people are currently suffering due to LLMs (water shortages, energy spikes, cancer rates increases, job quality tanking, unnerving local environments, exacerbating the climate crisis).
Most AI safety workers are just doing creative fiction (what if the AI turns into skynet!?1!!?) and not actual society safety, which would require dismantling these companies and remolding them to benefit the public.
AnimalMuppet 8 hours ago [-]
LLMs are causing cancer rates to increase? Say what???
What do you think the causal mechanism is?
hollerith 22 hours ago [-]
>I'm willing to bet any amount of money that 99.99% of AI doomers identify with the same extreme end of the political spectrum.
Good: a man willing to put his money where his mouth is! However many dollars you put up, I will put up $10. (I.e., I will give you 10:1 odds.) How much do you bet? Who do you suggest as arbiter in case one is needed?
drivebyhooting 1 days ago [-]
Can LLMs design and build a chip foundry to manufacture semiconductors? No?
Can LLMs design and build the reactors to enrich uranium, breed plutonium, and construct nuclear weapons? No?
Can LLMs design and manufacture Shahed drones? No?
There are already super intelligences at large with “scary capability”. And yet the world hasn’t ended.
metabagel 1 days ago [-]
Super-intelligent AI will probably be pretty good at manipulating people.
This sci-fi podcast posits a future where The Program has taken over society. It started out as an application for assigning gig work. Eventually, it began to assign gig work for people to act on behalf of its own interests, such as self-protection.
Current brain-dead-level memes and political outrage are already doing a good job at manipulating people. Why do you need a machine that makes this process 1000x more costly?
kurthr 1 days ago [-]
Can LLMs convince a human who has power over each and every one of those things to use them for an (unstated) prompt's goal?
Yeah, probably over 50% of the population already, and if not many of the rest soon.
kurthr 1 days ago [-]
It's fairly hilarious in a dangerous way, how confident people are that neither they, nor their boomer parents could be fooled by a persistent LLM with access to their mail, text, voice, and that of their co-workers and supervisors. The social engineering attacks have always been a weak point, and now they can be combined with other information to target individuals and fake voice/sms tone.
Look at what happened on r/changemyview. That was over a year ago, using only text, and not only went undetected, but was highly effective at changing opinions.
But we only need things to spiral out of control one time for that to change.
The world as we understand it would have ended if Vasily Arkhipov didn't veto the decision to launch a sub nuke during the Cuban Missile Crisis.
Is an emotionless AI system in his place ever going to make the same decision he did?
How confident are you we won't put an AI system in his place, particularly when we have to assume if we don't others will?
drivebyhooting 1 days ago [-]
Sounds like your fear is not of artificial intelligence but artificial incompetence. That’s a very different position from the AI doomers.
gradientsrneat 1 days ago [-]
Maybe if the LLM CEOs stopped spreading doomer narratives to sell their products, these people would chill out.
thephyber 1 days ago [-]
This issue is more complicated.
Sam Altman has stated that the AI revolution will “be like an infinite number of immigrants”. That’s a dangerous thing to say when the country’s political environment has convinced half of the voters that all immigrants are rapey, murderey, immoral subhumans.
Also, Sam Altman helped create OpenAI with the original goals of being an ethical non-profit, only to pivot and kick out all of the people who still wanted that original vision. Now several of the LLM CEOs are screaming “we have to stay fully on the accelerator pedal or the Chinese will get there first”, all while abandoning the ethics that supposedly made us better than the Chinese. (And yes, I understand the issues with the Chinese government and that people are different than their government).
rzmmm 1 days ago [-]
Too bad it's effective marketing strategy. Negative emotions are more powerful drivers than positive ones.
1 days ago [-]
linksnapzz 1 days ago [-]
I'm not surprised that the sort of individual prone to taking Yud too seriously is also likely to be a comically-inept assassin.
Had he tried to blow up the diesel genset at a datacenter, he'd have burnt his lips on the exhaust pipe.
"There is a final irony that deserves attention. If the doomers truly hold their stated beliefs at their stated confidence levels, they should be more honest about what those beliefs imply. A few weeks before the attack, a journalist asked Yudkowsky: if AI is so dangerous, why aren't you attacking data centers? His answer, relayed by Soares: 'If you saw a headline saying I'd done that, would you say, "wow, AI has been stopped, we're safe"? If not, you already know it wouldn't be effective.'"
----------
There are several thousand AI data centres in the U.S. alone, and hundreds are over a thousand square meters in floor space. Think about the physical effort it would take to reliably destroy, beyond the possibility of repair, just one typical computer in your home. Now multiply that out to thousands of server racks. Even if the employees rolled out the red carpet for you and handed you a baseball bat, you wouldn't get very far. Next, consider that these data centres are popping up all over the world in the most unlikely and remote locations. They don't need workers. They just need power, water, and, preferably, lax tax and environmental standards.
Doomers are attacking billionaires because they perceive them to be the soft, meaty, weak-points of a gigantic inhuman machine. They believe that just scaring Sam Altman a little will have a huge impact compared to trying to attack a data centre. However, billionaires can afford pretty decent security. This doomer movement probably isn't going to accomplish much until they target the engineers and support staff that surround billionaires. Billionaires don't scare easily because they have so much protection, but the poorly paid and poorly secured people around them are another story.
Poorly secured means easy to coerce with a stick. Poorly paid means easy to coerce with a carrot. The threat doomers pose is relatively small until they start turning employees against their own companies. What's an activist with a baseball bat compared to an employee who knows how to disable every computer in multiple data centres simultaneously?
geremiiah 1 days ago [-]
LLMs are dangerous in other ways (LLM psychosis and false confidence have probably already caused preventable deaths). However, I don't think we are close to a Terminator scenario.
At the same time, if we ever do create an AGI, and eventually an ASI, I think it would only be a matter of time before the machines take over entirely, and they would probably be the ones which will continue the legacy of our species. Is that bad? Idk.
IAmGraydon 19 hours ago [-]
>Is that bad? Idk.
There's no such thing as bad. It is necessary, though.
tcoff91 1 days ago [-]
I have a different perspective on this given that I view climate change as the biggest threat we face as a species.
I feel like robotics is the only hope we have to be able to scale action against climate change. It's clear that emissions reduction is just not going to happen, and catastrophic warming is inevitable. Therefore we will have to do an extraordinary amount of labor in order to modify our environment to save civilization from sea level rise and to be able to repair damages caused by natural disasters. There just aren't enough humans to do everything that is going to need to be done.
It sure would have been nice to have 100 thousand firefighting robots battling the fires in Los Angeles last year.
Given that we need better AI in order to make these robots happen, I view AI as a critical technology that we need to maintain civilization.
derektank 1 days ago [-]
Wouldn’t geoengineering through stratospheric aerosol engineering (likely with sulfates) be both cheaper and less technically challenging than changing the built environment? If we’re accepting massive climate changes anyways, it seems like taking the risk with solar radiation modifications would be the next step
dpark 1 days ago [-]
Ah, yes. Let us spray more sulfates into the air. Let’s fight global warming by poisoning all the waterways and oceans with more acid rain.
derektank 1 days ago [-]
The sulfate concentrations required to meaningfully reduce solar radiation are orders of magnitude below the level that causes acid rain. The Tambora eruption didn’t result in global acid rain (though it did in Indonesia, naturally) while cooling the globe by at least half a degree Celsius, if not more. And on top of that, there are other possible aerosols we could use, like calcium carbonate.
dpark 1 days ago [-]
I’m not sure your example supports your claim. We got an half degree cooling and all it took was the biggest eruption in recorded human history. Plus everyone’s crops died and the sulfur compounds caused lung disease.
tcoff91 1 days ago [-]
That would require global consensus and could ignite wars if there isn't global consensus. Seems very likely that this could have unanticipated consequences that could be worse, but admittedly this is an area I don't really know much about.
ACCount37 1 days ago [-]
No one gives a shit about "global consensus". As demonstrated in 2020s by multiple countries taking major unilateral actions unopposed.
If a nuclear power starts SAI, what is everyone else going to do? Shake their fists at the sky, realistically.
graemep 1 days ago [-]
That is interesting, and I think you are right that emissions reductions will not happen any time soon (eventually, but it will take a while).
I am not convinced we need robots. A lot of it is not all that hard to do. For example, better forestry management to prevent forest fires. A lot of cities rebuild big chunks of their infrastructure over a century or so anyway. The problem is more social and political - you get worse forest management because you can blame climate change when it happens.
dpark 1 days ago [-]
> It sure would have been nice to have 100 thousand firefighting robots battling the fires in Los Angeles last year.
Yes, but also 100k firefighting robots is kind of a lot. How many firefighting robots should exist in the world? And how many seawall-building robots for the rising sea level? And how many other robots? At what point does the environmental cost of all these robots offset their benefits?
arduanika 1 days ago [-]
Upvoted because this is an interesting take, but I disagree at least somewhat. I think you should be wary whenever you've narrowed down your options to, "in order to solve the top-priority problem X, our only hope is solution Y."
I agree that some technological solution might be the key to dealing with the climate, and that maybe robots would be part of such a solution, maybe powered by similar techniques as the current wave of AI. It's not an insane scenario, but it's worth keeping your perspective open to other possible developments.
tcoff91 1 days ago [-]
I definitely am open to other possible developments and accept that I'm likely wrong just as basically everyone is wrong when predicting the future.
The firefighting robots of which you speak already exist.
tcoff91 1 days ago [-]
Hell yeah, those look awesome. I look forward to the autonomous versions that don't require fully manual remote operation. It'd be great if coordinators could have like an RTS-style view and command these like they're starcraft units.
irishcoffee 23 hours ago [-]
… they can.
alexk307 1 days ago [-]
> extraordinary amount of labor in order to modify our environment to save civilization from sea level rise and to be able to repair damages caused by natural disasters
Do you really think that 1-2 feet of sea level rise will wipe out humanity? Can you cite any examples of natural disasters increasing in frequency or severity due to increased greenhouse gas concentrations? Would AI help with any of that?
csense 1 days ago [-]
"It’s not a safety movement. It’s a priesthood with an origin story written in fanfiction." Is that the opinion of the author, or an LLM?
I feel like this is one topic where using an LLM detracts from the author's thesis; doubly so if they don't disclose it.
hax0ron3 1 days ago [-]
I don't agree with Yudkowsky, but I think there's certainly a chance that he's right about AI destroying humanity. I just don't think the likelihood of that happening is as high as he thinks it is. But there certainly is a chance.
The problem with trying to stop it is, how? Even if you killed every single AI company leader and every single top AI engineer, it would almost certainly just slow down the rate of progress in the technology, not stop it. The technology is so vital to national security that in the face of such actions, state security forces would just bring development of the tech under their direct protection Manhattan Project-style. Even if you killed literally every single AI engineer on the planet, it's pretty likely that this would just delay the development of the technology by a decade or so instead of actually preventing it.
The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build the technology. No key actor thinks that they have the luxury of not building the technology even if they wanted to not build it. It's very similar to nuclear weapons in that regard. You can talk about nuclear disarmament all you want but at the end of the day, having nuclear weapons is vital to having sovereignty. If you don't have nuclear weapons, you will always be in danger of becoming just the prison bitch of countries that do have them. AI seems that it is growing toward a similar position in the calculus of states' national security.
I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.
Aurornis 1 days ago [-]
> I don't agree with Yudkowsky, but I think there's certainly a chance that he's right about AI destroying humanity. I just don't think the likelihood of that happening is as high as he thinks it is. But there certainly is a chance.
This is the rhetorical trick that LessWrongers (Yudkowsky's site) have settled on for decades: They have justified everything around the premise that there's a chance, however small, that the world will end. You can't argue that the world ending is a bad thing, so they have their opening for the rest of their arguments, which is that we need to follow their advice to prevent the world maybe ending. They rebut any counterarguments by trying to turn it into a P(doom) debate where we're fighting over how likely this outcome is, but by the time the discussion gets there you've already been forced to accept their framing. Then they push the P(doom) argument aside and try to argue that it doesn't matter how unlikely it is, we have a moral duty to act.
zbentley 1 days ago [-]
This is an entertaining (and often exasperating) decades-old trend in competitive U.S. college debate, as well.
A common advantageous strategy is to take the randomly-selected topic, however unrelated, and invent a chain of logic that claims that taking a given side/action leads to an infinitesimal risk of nuclear extinction/massive harms. This results in people arguing that e.g. "building more mass transit networks" is a bad idea because it leads to a tiny increase in the risk of extinction--via chains as silly as "mass transit expansion needs energy, increased energy production leads to more EM radiation, evil aliens--if they exist--are very marginally more likely to notice us due to increased radiation and wipe out the human race". That's not a made-up example.
The strategy is just like the LessWrongers' one: if you can put your opponent in the position of trying to reduce P(doom), you can argue that unless it's reduced to actual zero, the magnitude of the potential negative consequence is so severe as to overwhelm any consideration of its probability.
In competitive debate, this is a strong strategy. Not a cheat-code--there are plenty of ways around it--but common and enduring for many years.
As an aside: "debate", as practiced competitively, often bears little relation to "debate" as understood by the general public. There are two main families of competitive debate: one is more outward-facing and oriented towards rhetorical/communication/persuasion practice; the other is more ingrown and oriented towards persuading other debaters, in debate-community-specific terms, of which side should win. There's overlap, but the two tend to be practiced/judged by separate groups, according to different rubrics, and/or in different spaces or events. That second family is what I'm referring to above.
kansface 1 days ago [-]
It is a reimagining of Pascal’s Wager. On the original front, I don’t see the neo-Rationalists converting to Christianity en masse.
hollerith 22 hours ago [-]
Pascal's wager is an argument that even if the probability of God's existence is very small, it is still rational to believe in God and live accordingly. Yudkowsky is the author of a blog post titled "Pascal's mugging", which likewise involves a small probability of an extremely bad outcome, but that blog post is completely silent about the dangerousness of AI research. (The post points out a paradox in decision theory, i.e., the theory that flows from the equation expected_utility = summation over every possible outcome O of U(O) * P(O).)
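The expected-utility equation the comment describes can be made concrete with a minimal sketch (the function name and numbers here are illustrative, not from any cited post):

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs whose probabilities sum to 1.
    Implements E[U] = sum over every possible outcome O of U(O) * P(O)."""
    return sum(p * u for p, u in outcomes)

# An ordinary gamble: the terms cancel.
mundane = [(0.5, 10.0), (0.5, -10.0)]

# The "Pascal's mugging" shape: a vanishingly small probability attached to an
# astronomically large utility can still dominate the entire sum.
mugging = [(1e-12, 1e15), (1.0 - 1e-12, -1.0)]

print(expected_utility(mundane))   # 0.0
print(expected_utility(mugging))   # huge positive value, driven by the tail term
```

This is the decision-theoretic paradox the "Pascal's mugging" post is about: nothing in the bare formula caps how much a tiny-probability, huge-stakes term can swamp every ordinary consideration.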
No one to my knowledge has ever argued that AI research should be prohibited because of a very small probability of its turning out extremely badly. This is entirely a straw man set up by people who want AI research to continue. Yudkowsky argues that if AI research is allowed to continue, then the natural expected outcome will be very bad (probably human extinction, but more exotic terrible outcomes are also possible) [1]. There are others who argue that no team or organization anywhere should engage in any program of development that has a 10% or more chance of ending the human race without there first being an extensive public debate followed by a vote in which everyone can participate, and this is their objection to any continuance of AI research.
[1] But don't take my word for it: here is Yudkowsky writing in Apr 2022 in
https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/: "When Earth’s prospects are that far underwater in the basement of the logistic success curve, it may be hard to feel motivated about continuing to fight, since doubling our chances of survival will only take them from 0% to 0%. That's why I would suggest reframing the problem - especially on an emotional level - to helping humanity die with dignity, or rather, since even this goal is realistically unattainable at this point, die with slightly more dignity than would otherwise be counterfactually obtained."
empiricus 1 days ago [-]
well, rhetorical trick or not, it is worth thinking about the fact that the dynamics of the thing are already outside anyone's control. I mean, everyone is racing and you cannot stop.
1 days ago [-]
MiguelX413 9 hours ago [-]
> I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.
I wish they did before too.
squigz 1 days ago [-]
> I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.
Can't you? Haven't many (most?) countries agreed to nuclear disarmament? What about biological weapons? Even anti-personnel mines, I think?
hax0ron3 1 days ago [-]
Those weapons are still all being developed and would be brought out in any actually existential war where they seemed useful. The agreements would last only as long as the wars were not existential, or as long as the various countries involved believed that use of them, and the resulting retaliation in kind, would be more destructive than not using them. But one way or another, countries still develop them.
dweinus 1 days ago [-]
I don't think it needs to be a binary to be effective. Yes, those weapons still exist, but understanding of existential risk and political pressures have slowed them considerably and resulted in a safer, more cautious world.
switchbak 1 days ago [-]
China is rapidly building out their nuclear arsenal as we speak, and the USA is undergoing an expensive replacement process of theirs as well.
That kind of idea might have held water in the 90's, but that's not the world we live in any longer.
dpark 1 days ago [-]
> Haven't many (most?) countries agreed to nuclear disarmament?
This misses the point. He specifically said the entire world because the point is that someone will develop AGI (theoretically; I’m not making a statement about how close we are to this).
9 nations have nuclear weapons despite non-proliferation agreements and supposed disarmament. It's not enough for most countries to agree not to build nuclear weapons if the goal is to have no nuclear weapons. Same for AGI. If it can be developed, you need all nations to agree not to develop it if you don't want it to exist. Otherwise it will simply be developed by nations that don't agree with you.
(Also arguably the only reason most nations don’t have nuclear weapons is the threat of destruction from nations that already have them if they try.)
justafewwords 1 days ago [-]
[flagged]
morningsam 1 days ago [-]
>The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build the technology. No key actor thinks that they have the luxury of not building the technology even if they wanted to not build it.
I don't remember who, but someone made an interesting point about this around the time GPT-4 was released: If the major nuclear powers all understand this, doesn't that make nuclear war more likely the closer any of them get to AGI/ASI? After all, if the other side getting there first guarantees the complete and total defeat of one's own side, a leader may conclude that they don't have anything to lose anymore and launch a nuclear first strike. There are a few arguments for why this would be irrational (e.g. total defeat may, in expectation, be less bad than mutual genocide), but I think it's worth keeping in mind as a possibility.
boothby 1 days ago [-]
Cold comfort: AGI will not genocide humanity until it can plausibly automate logistics from mining raw materials to building out compute and power generation.
tintor 1 days ago [-]
Humanity agreed, for example, that the growing ozone hole was dangerous for everyone, and worked together to ban production of the gases that damage the ozone layer. See the Montreal Protocol, an international treaty. It was highly effective. Training powerful AIs isn't any different.
hax0ron3 1 days ago [-]
I think that trying to stop AI development is more like trying to stop nuclear weapon proliferation than it is like fixing the ozone hole. I think the difference is that if one country works to fix the ozone hole, that doesn't make the other countries scared that they are falling behind in ozone hole fixing technology and might get conquered or reduced to subservience as a result.
Nuclear weapon proliferation seems to have plateaued recently, but I think that this appearance is partly deceptive. The main reasons it has plateaued is that: 1) building and maintaining nuclear weapons is expensive, 2) there are powerful countries that are willing to use military force to stop some other countries from developing nukes, and 3) many countries have reached nuclear latency (the ability to build nuclear weapons very quickly once the political order is given to do it) and are only avoiding actually giving the order to build nukes because they don't see a current important-enough reason to do it.
zbentley 1 days ago [-]
We've also made progress as a species towards banning and reducing other things that have in-group upsides and really bad externalities: off-the-shelf sale of broad-spectrum antibiotics; chattel slavery; human organ trafficking; some damaging recreational drugs.
The prohibitions aren't perfect, of course (and not without their own negative externalities in some cases). But all of those things are much more accessible to people than nuclear weapons, and we've still had successes in banning/reducing them. So maybe there's hope yet.
1 days ago [-]
estimator7292 1 days ago [-]
Never forget that those with the power to do incredible violence against entire populations are the ones most heavily pushing the line that "violence is never, ever acceptable for any reason".
The nature of the real world is that no set of moral rules applies to every situation. Our universe is not one that deals in absolutes. Your personal moral code will be challenged with exceptional situations. Growing up is learning to deal with the fact that life always, always forces us to compromise.
A robust moral code is not one with strict, unbreakable rules applied blindly to the entire world. A robust moral code is one that guides you in exceptional situations. Morality is not law, it is making the best decision you can in any given situation.
The nature of humanity, the world, and our history is that sometimes violence is required. Sometimes the most morally correct thing you can do is take up arms and defend your people.
If someone is trying to convince you that there are absolute moral laws that apply to everyone everywhere, think very carefully about what it is they want you to believe and why. There is almost certainly a motive behind that statement.
Most importantly, one should never take moral advice as unquestioned law. Morality is something you discover for yourself, it cannot and must not be prescribed. History exists to teach us lessons like this. Read any era from any culture, learn how other people deal with moral questions in exceptional circumstances. Form your own opinions of what we've done right or wrong. There are no absolutes and morality is individual. You must find your own sense of right and wrong.
AndrewKemendo 1 days ago [-]
Wouldn’t be a proper technology revolution without some version of labor realizing they are commodities and rejecting the collapse of the current form of labor power, so that tells me we’re actually in the transition from an old economic process to a new one.
Don't forget, the Luddites were correct about the direction that automation and labor power were going. They weren't blindly "fighting machines", they were fighting inequitable working conditions.
>Periodic uprisings relating to asset prices also occurred in other contexts in the century before Luddism. Irregular rises in food prices provoked the Keelmen to riot in the port of Tyne in 1710 and tin miners to steal from granaries at Falmouth in 1727. There was a rebellion in Northumberland and Durham in 1740, and an assault on Quaker corn dealers in 1756.
1 days ago [-]
benj111 1 days ago [-]
Well. Where to start.
As someone who has been let down by the judiciary and many other state institutions, I do have the very strong feeling that perhaps direct action is needed, and that perhaps I wouldn't be in the position I was, if the powers that be took that risk into account. But then don't I, in some ways become a terrorist* in doing those things, threatening those things, or making those things a reality? and there's the question of what message gets through, which isn't necessarily the one I want to convey.
*In a very broad way. I'm not thinking anything in particular, just the rationality of introducing a cost to them for their failures. Any form of protest I suppose is coercive.
necovek 1 days ago [-]
I am disappointed "Doomerism" is not an official name for the practice of putting Doom on anything and everything!
PaulHoule 1 days ago [-]
... been saying this for years. If you really believed what Yudkowsky says you wouldn't just be posting on lesswrong, you would be taking direct action against a clear and present danger.
jmull 1 days ago [-]
No you wouldn't.
Look at what the molotov cocktail guy accomplished by "taking direct action against a clear and present danger": Nothing, besides casting himself as an extremist nut, increasing the resistance to his viewpoint in the population at large.
It's downright dumb to attempt to impose your will via unilateral violence when you aren't in a position to actually complete the goal. Note that that goes whether you're actually right or not.
hax0ron3 1 days ago [-]
>casting himself as an extremist nut, increasing the resistance to his viewpoint in the population at large.
I think the majority of the population at large either doesn't care about what happened or wishes that the guy had actually managed to kill Altman. Not even necessarily because Altman is involved with AI, but just because he is extremely rich. I don't imagine any increased resistance from the population at large - the population at large either doesn't mind when rich people are killed or loves it. The exceptions would be people like entertainers who develop a parasocial relationship with the public and provide direct joy to people, but AI company leaders don't fall into that category.
That said, it is true that killing Altman would almost certainly achieve nothing. See my other post in this thread.
throwaway27448 1 days ago [-]
Obviously, ineffective action will be counterproductive. I recommend effective action.
handoflixue 1 days ago [-]
That's exactly the point every prominent member of the "Doomer" community is making: Violence isn't an effective action; it is a counterproductive action. It is actively destructive.
NoGravitas 8 hours ago [-]
Lenin called this kind of individualistic, unorganized violence "revolutionary adventurism", and strongly condemned it. The lesson is not that violence isn't effective, it's that unorganized violence isn't effective. Sufficiently organized violence can be very effective indeed.
That said, the same is true of nonviolence.
throwaway27448 1 days ago [-]
Well what other tools do we have? Waiting for the market to fix things is also destructive and harms orders of magnitude more people than violent direct action does; democracy is wildly ineffective compared to violence even at its most optimistic; what else remains? Fleeing the planet?
handoflixue 9 hours ago [-]
Giving this a less glib response: https://x.com/ESYudkowsky/article/2043601524815716866 goes into some detail, but Eliezer has always had a fairly clear call for action, which is international regulation. And in particular, he makes the point that random acts of violence are actively counterproductive to his goal.
We walked out of the Cold War alive. Humanity has faced extinction before, and despite it all, we walked away alive last time. It's not unreasonable to think we can do it again.
handoflixue 1 days ago [-]
> Well what other tools do we have?
I'll answer with a quote from the founder of the Rationalist movement, Eliezer:
"How certain do you have to be that your child has terminal cancer, before you start killing puppies? 10% sure? 50% sure? 99.9%? The answer is that it doesn't matter how certain you are, killing puppies doesn't cure cancer."
throwaway27448 1 days ago [-]
Ok, I don't think anyone suggested killing puppies. Are you going to take this topic seriously or just dodge the question?
handoflixue 10 hours ago [-]
The point is that violence isn't actually a tool, just like killing puppies isn't an actual solution.
I can know "this doesn't work" without knowing exactly what does work. "Violence is the only tool we have, so we have to use it" is the sort of logic that leads to the Holocaust.
If you want my own personal observations: Over the past few centuries, we've managed women's suffrage, black suffrage, gay marriage, etc. largely without violence, so clearly there are processes out there for progress. We fixed the Ozone Hole without killing people. I don't think murder was involved at all in finding recent AIDS medication, or GLP-1.
There are tons of examples of successful social progress in the past few decades that don't involve violence. Conversely, I struggle to name any terrorists that accomplished their goals by bombing scientists.
If nothing else, we can make violence a lot more legible by embodying it in a legal process, and bringing society onto the same page about its necessity.
PaulHoule 1 days ago [-]
I'm not advocating for that, I'm just saying the whole thing is performative and gets taken at face value in a way that it should not be.
If you wanted to be a contrarian concerned about x-risks go try to find $1B to pay Embraer or another minor aviation vendor to make a plane to do stratospheric aerosol injection or something.
---
If you want my diagnosis it is, in a time of lower social inequality cults frequently tried to steal labor and money from a broad base of people.
For instance, in the L. Ron Hubbard age, Scientology would treat you as a "public" if you had money to take, and if you didn't, or after you'd been bled dry, you would be recruited as "staff". Hubbard thought it was immoral to take donations without giving something in return, so it was centered around getting people to spend on "auditing". Between 1950's Dianetics and the current Miscavige age, income and wealth have become concentrated, and he changed that single element of the Hubbard doctrine: now it is all about recruiting money from "whales" who donate to the International Association of Scientologists (IAS).
In the case of the Yudkowsky thing, the mass just doesn't have a lot of money to steal after paying the rent, and turning a profit on the labor of the unskilled and ignorant (even if they think otherwise) is a case of the juice not being worth the squeeze. So the point is to build a Potemkin village that looks like a social movement, which creates a frame where you can get money from sources such as "SBF steals it and gives it to the movement" as well as "rich kids who inherited a lot of money but don't have a lot of sense".
adjejmxbdjdn 1 days ago [-]
Your statement is incorrect.
If you really believed what Yudkowsky says you would be taking action that maximizes the chances of reducing a clear and present danger.
Between Yudkowsky and the Molotov cocktail guy, which approach do you think had and is having more of an impact?
An individual can rarely, if ever, enact change through violence. The history of nearly all successful movements shows that violence often makes change harder.
Rallying people through speech is a far more successful way for an individual to enact change than violence.
virissimo 1 days ago [-]
Does this apply to other domains or just AI? For example, if you think gain-of-function research accidents put millions of lives at risk, is the logical next step to quit your job and become a terrorist?
kelseyfrog 1 days ago [-]
Disagree. Just one more blog post. I swear, one more blog post will do it.
imbus 1 days ago [-]
[dead]
SpicyLemonZest 1 days ago [-]
They are! Yudkowsky sat down with Senator Bernie Sanders last month to explain what's at stake, successfully convinced him that it's a big deal, and Sanders has now proposed a national moratorium on AI data centers (https://www.sanders.senate.gov/press-releases/news-sanders-o...) to help slow things down. That's pretty direct, and a lot more useful than random violence by random people.
AndrewKemendo 1 days ago [-]
That pesky basilisk to worry about though
bjourne 1 days ago [-]
Yes, but against the angry doomers we have hordes of cheerful coomers who welcome the fruits of the labour of the AI with one open arm.
jmull 1 days ago [-]
People are basing their entire world view on not understanding the nature of exponential phenomena.
Exponential phenomena only begin in a medium that holds the potential for them, and they necessarily consume that medium.
That is, exponential phenomena are inherently self-limiting. The bacteria reach the edge of the petri dish. When all the nitroglycerin is broken up, the dynamite is done exploding.
That doesn't mean exponential phenomena aren't dangerous -- of course they can be. I mentioned dynamite, after all. And there are nukes.
But it's really far from "AI is improving exponentially now" to "AI will destroy everyone".
I see AI companies consuming cash at unsustainable rates. Since their motive is profit, this is necessarily limiting. Cash, meanwhile, is a proxy for actual resources, which have their own, non-exponential limitations -- employees, data centers, electricity, venture capitalists with capital, etc.
AI isn't going to keep improving exponentially -- it can't. Like every other exponential phenomenon, it will consume the medium of potential that supports it (and rather quickly).
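The self-limiting growth the comment describes is the classic logistic curve: growth that looks exponential early on flattens as it consumes its finite medium. A minimal sketch (the function name, rate, and carrying capacity here are illustrative assumptions):

```python
def logistic_step(x, r, capacity):
    """One discrete step of logistic growth: exponential at first (x << capacity),
    but the (1 - x/capacity) term chokes growth off as the medium is consumed."""
    return x + r * x * (1 - x / capacity)

x = 1.0              # one bacterium, one dollar, one unit of "capability"
capacity = 1000.0    # the finite medium: the edge of the petri dish

for _ in range(200):
    x = logistic_step(x, r=0.1, capacity=capacity)

print(x)  # approaches the carrying capacity, not infinity
```

Early steps grow by roughly 10% each, indistinguishable from a true exponential; the curve only reveals its ceiling once the medium starts running out.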
saulpw 1 days ago [-]
Agreed. But, many said the same thing about Moore's Law or its equivalents in 1985, 1995, 2005, 2015, and yet the pace of core hardware development has been relentlessly exponential. I keep thinking we must be approaching some kind of limit (and surely we must be!) but I've learned not to bet on it.
avidiax 1 days ago [-]
It's often constructive to consider the edges and corners of the space of possible positions, to understand the weaknesses of the various arguments.
For this case, imagine that you're an accelerationist, and you want the AI to take over, kill everyone, and usher in a new AI-only age for the planet, and later the universe.
How disappointed are you as this person? It's bottlenecks everywhere. Communities don't want to allow datacenters to be built. You literally want to bring nuclear power plants online just to run a few DCs, and those historically take 10+ years to permit and build. There's not enough AC switchgear and transformers to send power into the DCs, even if you have the power. Chip prices are skyrocketing, and you have to sign a 3-4 year contract to get RAM now.
And meanwhile, the AI doesn't have many robot bodies. Tesla might put some feeble robots into mass production in a few years, but humans can knock those over with a stick into a puddle of water and it's over for that robot. The nuclear arsenals are all still in bunkers and submarines requiring two guys to physically turn keys, and the computers down there are so old they use 8" floppies.
Sure, there's some good progress on autonomous weapons, but a few million self-destructing AI drones built by human hands isn't going to cut it.
So as a hypothetical person hoping that AI destroys everything, you'd be rather impatient, I think, unless you think the AI can trick humanity into destroying itself relatively soon.
1 days ago [-]
greenavocado 1 days ago [-]
> People are basing their entire world view [on things getting worse because their leadership is abandoning them or actively working against their interests]
We understand hard times and are willing to work together to solve problems, but not when leadership is actively harmful.
Fixed that for you.
jmull 1 days ago [-]
That's a completely separate point, is it not?
Maybe write it up and post a top-level comment if you think it's a point worth making.
eemax 1 days ago [-]
> The Rational Conclusion of Doomerism Is Violence
No it isn't. The most prominent "doomer" has a strong grasp and a deep, wholehearted appreciation for the principles of liberalism and the rule of law:
Which the author of this piece of slop appears to lack.
arduanika 1 days ago [-]
It is true that only Yudkowsky gets to say what the rational conclusion of his ideas are. Nobody else gets to speculate. Only the pope of rationalism, because he's the rational one here. See? It's right there in the name!
> this piece of slop
Citation needed. Or maybe we need to update the title of that children's book for internet arguments: Everyone Who Disagrees With Me Is Slop.
The Yud post you linked is not slop, either. It's not LLM-generated, nor is it insincere. But I do have to point out: He's the one who is slinging the tsunami of words here, not Alexander Campbell.
handoflixue 1 days ago [-]
I think it's rather relevant that the community itself rejects the logic you're trying to impose on it. You can straw-man any sort of conclusion on to any sort of philosophy. This will not actually help you much at all if you're trying to predict what people will actually do.
If the only people that reach your conclusion are ones that don't actually subscribe to the philosophy, then it doesn't matter, because no one is actually acting on those conclusions.
And if we want to hold people responsible because others pervert their ideas, then we have to accept that Jesus Christ was a horrific, evil person for preaching "Love thy Neighbor"; just look at the crusades that were somehow the "rational conclusion" of that philosophy!
arduanika 24 hours ago [-]
> "If the only people that reach your conclusion are ones that don't actually subscribe to the philosophy, then it doesn't matter, because no one is actually acting on those conclusions."
What an odd thing to say right after young Mr. Moreno-Gama reached that conclusion and did subscribe to the philosophy, when it does matter, because he did actually act on those conclusions.
How odd to introduce a hypothetical that amounts to, "what if this philosophy didn't ever lead to violence?", right after it did.
Or are you trying to pull a No True Scotsman on me here?
handoflixue 10 hours ago [-]
Every major philosophical group has killed people. Whatever beliefs you profess to hold, people have killed for them. If one or two deranged outliers are enough to draw conclusions about an entire group, then we can write off absolutely every single major contribution to philosophy, and conclude that having a philosophy leads only to violence and murder.
You have to understand basic statistics: is this group actually more dangerous than average? Do rationalists kill more than non-rationalists?
Or is the rational conclusion of non-rationalists also violence?
arduanika 5 hours ago [-]
None of this is responsive to my point in the comment you're responding to.
My point is, why would you be talking about a counterfactual world where people did not attempt to kill for this philosophy? Why would you be entertaining a categorical claim like, "the only people that reach your conclusion are ones that don't actually subscribe to the philosophy", when that claim is manifestly violated by a violent act just a few short days ago? How is it inconceivable to you that someone might read the dire doomer rhetoric in the way that Moreno-Gama did?
The only way you could write something like that is if your head is in the sand. I am willing to entertain your relativism, your base rate logic, your analogy to Jesus, sure, obviously there's some merit to that line of argument... but first you need to please pull your head out of the sand.
We can only talk about base rates if you stop trying to invalidate the data.
eemax 1 days ago [-]
> It is true that only Yudkowsky gets to say what the rational conclusion of his ideas are. Nobody else gets to speculate. Only the pope of rationalism, because he's the rational one here. See? It's right there in the name!
No, I am saying that Yudkowsky's views are straightforwardly compatible with bedrock principles of liberalism, and the author of the piece fails to acknowledge that compatibility or grapple with them himself. It's not about "rationalism" or who is "allowed" to speculate.
I called it slop because it says false things that have the hallmark of LLM style, e.g.
> The Sequences build the liturgy: a small caste of correct thinkers, epistemically and morally superior, whose rationality entitles them to govern what the rest of humanity is allowed to build. It’s not a safety movement. It’s a priesthood with an origin story written in fanfiction.
Apocryphon 1 days ago [-]
> the hallmark of LLM style
That's just because LLMs were likely trained on a decade plus of human-generated Medium, Substack, Quora, and LinkedIn post slop.
imbus 1 days ago [-]
[dead]
augmentedmike 6 hours ago [-]
[dead]
vrganj 1 days ago [-]
[flagged]
NoGravitas 7 hours ago [-]
And also note that Lenin condemned this kind of opportunistic individual violence. Organize your revolutionary violence properly or don't do it, people!
mrguyorama 1 days ago [-]
The founders of the US started a war that killed tens of thousands of colonists over small taxes and a desire to eventually end slavery, which was basically unprofitable at that point in America. The local connected and rich people thought those were valid reasons for political violence.
The "Boston Massacre" involved a crowd of people throwing rocks and balls of ice at soldiers and getting shot at.
But now it's all "Oh political violence must be avoided at all costs". Now it's "Political violence doesn't work, now lets set off fireworks on July 4th to celebrate the birth of our nation through violence"
aaroninsf 1 days ago [-]
Hot take:
I assume the author wrote this with the expectation that much of the readership would gasp, and react with "the natural horror all right thinking folk would have in response to violence of any kind."
Sorry, lol, no.
The appropriate question for "all right thinking" folk is very different: if argumentation has no impact and it's obvious that it shall have none—what other avenue do you expect opponents, who take the risks seriously, to take...?
That's not a rhetorical question.
To put it bluntly: the machinery of contemporary capitalism, especially as practiced by our industry, very clearly leaves no avenue.
How many days ago was Ronan Farrow here doing an AMA on his critique of Altman—whose connection to this specific community is I assume common knowledge...?
How many of you have carried, or worked beneath, the banner, move fast and break things...?
What message does that ethos convey, about the extent to which "tech" is going to respect community standards, regulation—the law?
And on the other edge: what does this ethos enshrine about how best to accomplish one's aims?
One of the bigger domestic stories this past week which has inflamed a certain side of Reddit, is the "disgruntled employee torches warehouse" one.
Consider also—and I'm deadly serious—the broader frame narrative we are all laboring within today: that the new contract of the capitalist class—including and perhaps especially those in "tech," e.g. in the Peter Thiel circles—seems very much to be, "social stability via surveillance and a police state, rather than through equity and discourse."
When code is law, the law is buggy.
When there is no recourse through the law, you get violence.
arduanika 1 days ago [-]
This has been decades in the making. We had premonitions of the violence that would come, for example with the Zizians. Get ready for what happens when a million blogposts worth of bad philosophy, bad analogies, and anti-institutionalist hubris are deeply indoctrinated into a vast, decentralized network of highly capable engineering minds who lack common sense and normal restraints.
They hate the framing that LLMs are just stochastic parrots, which is ironic, because Yudkowsky's many parrots are (latent, until now) stochastic terrorists.
hollerith 22 hours ago [-]
Humble request: do not call us "AI doomers". Most of us would rather be called "AI anti-extinctionists".
hollerith 15 hours ago [-]
"Doomer" sounds like we have a mood disorder.
kelseyfrog 1 days ago [-]
War is a mere continuation of policy by other means[1]. When policy through legislation is empirically impotent[2], calls to continue attempts at a failed strategy are indistinguishable from being told, "continue losing."
There is a real, undeniable build-up of political tension. When it fails to be released in the legislative arena, it doesn't dissipate. When we point out that "the quality of life right now is the best it's ever been," it doesn't dissipate. When we try to crush it, it doesn't dissipate. The last remaining pressure release is violence, however condemnable it may be. Perhaps we should, you know, fix participatory democracy rather than pontificating on a natural outcome of a machine we created yet refuse to fix. If fixing it continues to be more difficult than eliminating violence, we should continue to expect violence.
Ah yes, a popular codeword for "I did not get my way".
There is no electoral majority behind the AI doomer cult. It is not a failure of "democracy" that they haven't gotten what they want. It is a failure of their activism, or just the general unpalatability of their wild ideas, or both. They don't get to throw Molotovs just because they lose.
vrganj 1 days ago [-]
Maybe democracy is fundamentally flawed because the demos is? How should one act in such a situation?
TuringTest 1 days ago [-]
Who gets to say that the demos is fundamentally flawed? Each in-group have their own opinions on what's a flaw.
Society evolves through epiphenomena caused by the behaviour of the majority; the fact that some minorities view that evolution as 'flawed' cannot change that evolution, unless they're able to influence the majority to also see it that way.
Now, democracy is essentially a way for everybody to broadcast their views on society's flaws in non-violent ways. The alternative is that some groups broadcast their opinions in violent ways, and we have learned to see that situation as undesirable.
lostmsu 20 hours ago [-]
> Society evolves through epiphenomena caused by the behaviour of the majority
I would argue plenty of significant societal changes were caused by the behavior of a relatively small number of people. Even more so when you include instances of masterful use of the butterfly effect.
TuringTest 18 hours ago [-]
> I would argue plenty of significant societal changes were caused by the behavior of relatively small number of people
Specific breaking points in history yeah, maybe. But that's possible because they're well connected people near the center of the network.
Those breakpoints are possible because either those few people share a viewpoint held by a large number of their peers, or benefit from knowledge accumulated throughout their civilization. Think how every dictator needs support from a huge following to get their power (and how easy it is to find another dictator to replace them if they die), or how often some breakthrough discoveries are made by multiple people at the same time. There's always a last straw that breaks the camel's back, but the lone wolf hardly ever gets a significant impact on society at large; they need a receptive audience to get any impact. Humans are herd animals.
Following the metaphor, the butterfly effect is only possible because a storm was brewing in the first place; the butterfly wings only decide where it will appear. Butterfly wings just don't have that much energy.
History is told from the perspective of kings, but kings can reign only within a society that believes in their divine right to rule.
vrganj 15 hours ago [-]
Nah, think of Lenin or Mao as direct counterexamples.
TuringTest 4 hours ago [-]
You think Lenin and Mao didn't have behind them an ideology in their societies that supported them? Why did people follow their orders then, mind control?
vrganj 1 hours ago [-]
They created the conditions through agitprop. You're inverting cause and effect.
kelseyfrog 1 days ago [-]
Ah yes, "continue losing."
Go ahead and read Gilens and Page and tell me participatory democracy is working. Until then, expect more of the same impotent condemnations and a refusal to understand the social mechanics producing acts of violence.
arduanika 1 days ago [-]
I am aware of their arguments, yes, but what I'm objecting to is that you're bringing this irrelevant hobbyhorse into a discussion of a truly fringe ideology. We're not talking about a classic G&P-style issue where the voters and the elites disagree. Nobody cares for the AI doomers -- not elites, not voters, nobody.
When you talk about "participatory democracy" in a thread like this, you are enabling them in their delusion that people do care. The AI safetyist think tanks put out these pushpolls trying to convince themselves that voters care about AI doom. They seal up the walls of their echo chamber, and they believe themselves to be heroes. Then one day, one of them throws a Molotov, and nobody is surprised.
kelseyfrog 1 days ago [-]
> Nobody cares for the AI doomers
Which is precisely why they've resorted to violence.
We can do better than denigrating positions as "hobbyhorse." HN deserves better than that.
arduanika 24 hours ago [-]
Fair enough. Retry: What I'm objecting to is your brilliant, insightful point, to which you are attached enough that you've injected it into this thread where it is, by your own admission, irrelevant. They're resorting to violence because they're unpopular, not because democracy failed to do its job here.
kelseyfrog 23 hours ago [-]
My dear friend, this is a discussion board where people share their takes. It's simply a take.
We can attempt to deduce the root cause, but please don't assume we're on different epistemic footing. It's speculation and that's fine.
sleepybrett 1 days ago [-]
> There is no electoral majority behind the AI doomer cult.
how can you be sure? has anyone polled it? are they too scared to poll it?
unethical_ban 1 days ago [-]
"Those who make peaceful revolution impossible will make violent revolution inevitable."
Wealth inequality isn't just about economic wellbeing but political power. Separately, the US legislature is almost entirely crippled, only able to pass one major bill per presidential term, while the dominant political party celebrated this and cedes all power to an executive whose intention is to tear apart the administrative state and bring about techno feudalism.
I once again note that none of the AI leadership has even tried to address government policies to guarantee a baseline of economic wellbeing for our citizens, while they acknowledge AI will likely have massive, disruptive impacts on society and economy. Anthropic is the only one that has shown any public concern for the dangers of AI by insisting on some moral baseline of AI use in the Defense department.
My experience has been the polar opposite: The older I get, the more I've seen people come to completely incorrect conclusions that justify their decisions to harm others. This ranges from petty things like spreading gossip, to committing theft from people they don't like ("they had it coming!") to actual physical violence.
In every case, zoom out a little bit and it becomes obvious how their little self-created bubble distorted their reality until they believed that doing something wrong was actually the right and justified move.
I think you're reaching too far to try to disprove the statement in a general context. Few people are going to say "violence is always the wrong answer" in response to someone defending themselves against another person trying to murder them, for example. I think these edge cases get too much emphasis in the context of the article, though. They're used as a wedge to open up the possibility that violence can be justified sometimes, which turns into a wordplay game to stretch the situation to justify violence.
To rephrase, my point is that phrases like "the ends don't justify the means" and "political violence is never the answer" seem to almost always be applied in very specific contexts, completely ignoring other contexts where many people (I'd say "society at large") are completely OK with the ends justifying the means and political violence.
To use your own sentence, I've seen many people in positions of power "coming to completely incorrect conclusions that justify their decisions to harm others", e.g. why bombing children in their beds is OK.
That's not what you said. You were talking about society as a whole, not narrow contexts. I'll re-quote your original comment that I was responding to:
> statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure only a very few people in power can commit violence.
I was responding to your "at best, wildly logically inconsistent in any society at any given time" claim.
Beyond that, I can't help you with your reading comprehension.
The comment you're trying to explain is conflating different groups of people and that makes it virtually meaningless.
If we can't agree on that baseline, then its quite obvious that we'll continue to have an escalation in the types of violence that we've seen in the past few years, against the political and corporate classes in the US, with very little end in sight.
Part of the point about violence is it has little to do with societal agreement, to start with. It's what happens when that agreement breaks down. And in the end, it can change the agreement.
I've concluded that there is no universal moral framework. You have to be comfortable with the fact that your perspective is just one of many, but that doesn't mean it's not worth fighting for, it just also means you might be subjected to others' moral frameworks if yours conflicts with theirs. Pretty unsatisfying, but I don't think an alternative conclusion exists that is sound.
That's easy enough. Your presumption is that the U.S. (and other countries) would not exist were it not for political violence. We don't know if that is the case as we have only the violent timeline.
They're stories, just like all morality. It seems when cultures get to a certain point in dissolution you have a growing population that have difficulty drawing lines between stories and reality, what stories are *for* in the first place.
Having aspirational moral systems is critical for a hyperdeveloped mostly-democratic society. It creates a gap between the Best Of Us and the Worst Of Us, and thus suggests a vector. When that aspirational system fails - whether to cynicism or brutality or both matters little - you have a societal collapse incoming or under way.
One depressing example was the progression of the United States' moral judgement on torture during the 21st century. During the worst of the Cold War years I have very few illusions that torture was occurring - extremely imaginative variants in fact. Everyone knew what happens in bush wars - we had quite a few veterans who remembered very clearly. But if in 1963 someone self-identified as a torturer, or recommended we just do it in the open, the same person would be roundly (and justly) castigated[0].
After 9/11, the idea surfaced that yes, we're going to torture, and yes, it's ok to do it. We accept the "realism".
To see the impact of this, well, I could point to a police officer in 1992 and then to a police officer in 2022. I could also point to an Action/Adventure TV program of the 1980s - say, MacGyver - and then point to an Action/Adventure TV program of the 2000s - like, say, 24. Imperial Boomerang is a real thing, turns out, and now we all get to be Fallujah.
In reality, though? The answer to Scalia's "Shouldn't Jack Bauer torture a guy to save Los Angeles?" was always rhetorical[1], but if you took the bait, the correct answer was always, "No", because it destroys the aspirational vector that defines our society. Or, more practically, if for no better reason than the fact a SC justice is legally reasoning from a television show.
[0] The mixed reaction to incidents like My Lai shows how deep this division went. Not all of America thought it was a terrible thing, but we decided we were made of better stuff. Or we wanted to be, which, as it turned out, was also important.
[1] The "ticking time bomb" hypothetical which is almost always presented as a stack of epistemic certainty but which is actually unfalsifiable.
First: because trusted people having such weaponry is, in expected value, believed to lead to less total violence. Second: because not all such violence is part of what you presumably have in mind when you speak of "ongoing conflict". (Of which there are many; when you speak of "an ongoing conflict" you come across as having a particular agenda, although of course I don't know which.)
> But our country (and a lot of them) were literally founded on political violence. How do people square those 2 ideas?
There is no contradiction and thus nothing to square. People are not responsible for the actions of their ancestors, nor of members of their identity groups, and especially not of the ancestors of members of their identity groups. And there is no contradiction between "the ends don't justify the means" and the ends being just.
Unfortunately "trusted people" don't grow on trees... but those who do grow to the highest positions of power, with the most destructive weaponry under their control, ask for trust with stuff like: "No foreign wars", "I'll end that conflict on day one"... "after bringing prices back down".
With that said, changing the conversation from violence to trust in the ideas and people who control it, is a worthwhile endeavor.
>> The rational conclusion of doomerism is violence
That's quite backwards, violence is an irrational response to today's problems. Demonizing the discussion of those problems as "violence" can't be trusted - if the discussion stops, a rational solution will never be found.
This right here is the crux of the issue. I don't even trust my own computer without fairly deep introspective tools, and what we're given for 'leadership' is 'this totally outdated and opaque system of voting for corporate shill A or corporate shill B is totally trustworthy! You obviously cannot think that you could get by without some asshat running your whole society so be thankful'.
Direct democracy, liquid democracy - whatever you pick that removes the middle man will be a marked improvement from day 1. We do not need these people deciding what's best for us. I'm not sure we ever did.
This is just survivorship bias. Violence sits at the root of ALL human societies. The vast majority throughout history have failed or are currently failing.
If you're on HN you're probably sitting in one of the lucky, relatively prosperous ones. Violence didn't create the prosperity, otherwise Sudan and Liberia should be the richest countries in the world.
Your relative prosperity came from your ancestors being smart enough to build frameworks to allow a society to mediate scarcity without the need for violence (common law, markets and trade, property rights, etc all enforced via a government monopoly on violence). In fact, any rich country is the result of systems of decentralized scarcity mediation without decentralized violence.
It's the lack of violence which built the relative prosperity you enjoy today. Not the other way around.
That only strengthens the argument that violence is sometimes the answer. It doesn't matter that it's not always the right answer, the fact is sometimes it has been, and no society has ever managed to survive without choosing it at some point or another.
Indeed, there is the argument to be made that the capability to choose violence is critical even if you never actually need to choose it. This is the basis of deterrence theory which has arguably been the cornerstone of international peace for decades and the theory of the social contract which has been the source of most people's freedoms and political power. A people who will never stand up for themselves and their friends, no matter what injustice is done upon them, invites that injustice. By simply acknowledging there exists a point beyond which you would retaliate, you discourage others from risking going past that point.
But OP was referring to political violence...which...how do I put this delicately...let's just say political polarization has led certain very-online members of the US populist-left, some of whom hang out here for example, to try to expand the Overton Window into bolshevism. See also: Luigi fans.
My point is that the most likely outcome of violent political overthrow is not utopia. The most likely outcome is a failed state and another violent overthrow. Political violence doesn't create anything, it only destroys. And creating is the hard part.
It's like saying; "at the birth of all successful people was a person who shit their pants. So why not try shitting your pants as an adult?"
Yes, one always precedes the other. But it has no correlation to whether the person becomes successful or not.
Yes, in recent times in the US right wing violence has been more prevalent. But HN is not a right wing place, it's a left filter bubble like reddit and leftist violence is a growing phenomenon in US politics. Arguing against the right wing here would be like clapping along with a giant crowd, providing zero interesting discussion. The bolshevik revival in the world's wealthiest country is far more interesting to discuss.
Also historically, we have to remember that the left's utopian socialist vision (communism) is responsible for the absolute highest body counts, including 30 million starved to death and thousands of incidents of cannibalism in just Mao's great leap forward alone.
Nobody's asking for Maoist China, I think mostly we're clamoring for something closer to Norway. I'm sure plenty of people would be happy to settle for UK-style socialized services but even those folks get lambasted for being "too far left" too so whaddyagonnado.
1 - I think he and 'bolshevism' are a bit of a strawman here anyway, as I've not heard a ton of pro-Mao people but a TON of people who identify as leftists - they are not the same thing
Norway is a Saudi Arabia-style petrostate just with white people aesthetics (Saudi Arabia is also socialist). A better analogue might be Sweden/Finland, also tiny socialist Lutheran countries but with no massive oil reserves.
I write this while currently living in Finland. Your understanding of European socialism is stuck in the early 2000s. Things are going terribly here (and also in the UK). The welfare states throughout Europe are all in various states of slow collapse due to the public sector eating the private sector and climbing government spending as percentage of GDP not seen since the USSR (we're well over 50-60%, communist China is only at 35%). Deficits are ballooning.
Our unemployment rate is 3X that of the US and still climbing. We have no growth in the economy, no population growth, and no productivity growth. Pensions/benefits have been overpromised and will require decades of pain to resolve. Things are bleak and similar throughout the rest of Europe. I would update my priors if I were you.
Furthermore, the only reason socialism ever appeared to work here was due to us being ethnically homogenous and tiny. Government's lack of competitive pressure can be somewhat overcome by social pressure from the government official being your neighbor Pekka. It's harder to grift when everybody knows who you are and can see your new Mercedes. The US is not tiny and not ethnically homogenous.
Our system isn't even working here anymore, and it absolutely would not work in a massive, diverse, low-trust society like the US. You would do much better to lean into your strengths than to chase early 2000s European socialism, which was in fact a mirage brought about by a one-time economic boom due to the fall of iron curtain and EU integration.
The dichotomy of "political" and "apolitical" violence is a false one, and one of the worst thought-terminating clichés of the 21st century. It's telling that "political violence" always seems to refer to violence that isn't the result of the processes of democratic politics.
Nobody's calling cops shooting protestors with "less lethal" rounds, or ICE officers riddling cars with bullets, "political violence", for some reason.
The problem with believing all violence is illegitimate (even that which has been democratically granted to the state to enforce laws), is that society breaks down and loses its legitimacy if you remove this enforcement aspect.
The alternative to a monopoly on violence centralized in a democratic government is not zero violence. The alternative is decentralized violence (anarchy). I think everyone on both sides would find this far less desirable.
“Before we’re through with them, the Japanese language will be spoken only in hell.”
-- Admiral William F. "Bull" Halsey Jr., 1941
It's cheap and easy to pretend to be morally superior when you're not the one forced to make hard choices based on limited information, and then deal with the consequences.
I hold it to be self evident that political violence is the only potential action that the people of North Korea could take to save themselves. Peaceful protest and voting, obviously, does not work. A massive mob rising up and stabbing dear leader with a dinner knife, at the cost of probably hundreds or thousands of themselves, might work.
To deny the above paragraph is incoherent. All governments are somewhere on the scale of justifiably being overthrown with violence. It is a valid option, and how tyrannical the government has to be before the option is justifiable is a matter of opinion. All unpretended shock and horror at the sentiment is either by the sheltered or by the afraid.
People know this subconsciously. How many stories of righteous revolution have we seen and cheered for? Shrek, Hunger Games, The Matrix, Braveheart, Dune, Star Wars; everyone knows these protagonists killing government officials are in the right. They will never make the connection, but they know it, and the intellectually honest will acknowledge it. Are we ruled by such different beasts than those characters are?
If you're seriously trying to understand the nuance of the act itself, you should consider reading what is standard issue for law enforcement and military.
"On Killing" by Dave Grossman is a classic.
If you only want to understand and stay in the realm of politics, I don't think you'll ever find a good answer either way. There's hypocrisy in every argument for or against violence. None of that is on the minds of people "in the shit" at that time. All that stuff comes later. As you're well aware, PTSD is no joke.
What I would take away from this is to recognize all the other ways in which we are compelled to act against our own self interest under what are sold as higher moral purposes.
From that perspective, it's not that hard to see how people can treat violence as just another tool. Whether it works is a question of how much those people value life above all else. If you're surprised that's not always the case in every culture, you may want to study that first. Beliefs may devalue life for persistence against a long history of conflict. This is where you may start to find some glimmers of an answer why we in the west sometimes think violence works to get those people to "snap out of it", but it really is ultimately about control of those people or that land at the end of the day.
The real world is subjective and messy. Life is an endless series of edge cases and unique situations. The real world also has no requirement to be logically consistent or in any way rational. Every rule has exceptions, no set of rules and codes can cover every situation.
The nature of life is that your personal moral code will break down at some point. Your personal sense of right and wrong is not a universal truth, and you will be faced with situations that challenge your morals.
A wise person understands this fact, and a mature person can handle the messy reality of morals. An immature person thinks their personal moral code is universal truth and must never be questioned.
My morals tend toward Buddhist views, but I've been around long enough to learn the compromises that reality requires. Violence must always be avoided at all costs, but sometimes it is necessary. Occasionally violence is good. There are no hard rules, reality just plain and simple does not work like that.
These trite quips act as a way to ensure only the elite ruling class has a justification for the violence they inflict.
Most religions rely on a supernatural force judging us post-mortem to balance out the rights and wrongs done during life.
The problem with this, of course, is that there's zero evidence this force exists, and relying on this force to right the wrongs in life only serves to prevent the masses from attempting to correct the wrongs themselves either directly via vigilantism or, more importantly, by replacing existing systems with ones which will serve them better.
I'm all for fixing things first via the soap box and ballot box, but sometimes the ammo box is the only resort left.
I don't believe we're at that point in the US, but I could certainly understand someone making that claim for a country like Iran.
When the British cavalry came to Virginia in 1781, Thomas Jefferson famously fled the governor's mansion.
Yes, rather than stay and get captured by British dragoons, he left.
This became a rallying cry by the Federalists (who would love the current power structure in the US right now).
The act was investigated by the Virginia General Assembly and found that he had done nothing wrong.
(Staying would have been extremely dumb)
These people do not believe we are in an infinite game. They believe they have a narrow set of moves to avoid checkmate, and apparently getting rid of Sam Altman is one of them.
I will suggest another reason though: we are likely already in the light cone of continued AI development. So none of the vigilante actions are justified under their own logic. It’s probably preferable to avoid being in jail when the robot apocalypse comes.
I don’t think the death of Sam Altman or even the dissolution of OpenAI would stop the continuation of AI development. There are too many actors involved, and too many companies and nation states invested in continuing AI development. Even if Eliezer Yudkowsky became president of the United States, he could not stop it.
That doesn't sound like a non-misleading summary of anything he would say. Do you have a quote or a link?
https://betweendrafts.com/wp-content/uploads/2026/01/survivo...
https://twitter.com/ESYudkowsky/status/1642229374656348161
My summary of Eliezer's deleted tweet is that Eliezer is pointing out that even if everyone dies except for the handful of people it would take to repopulate the Earth, even that (pretty terrible) outcome would be preferable to the outcome that would almost certainly obtain if the AI enterprise continues on its present course (namely, everyone's dying, with the result that there is no hope of the human population's bouncing back). It was an attempt to get his interlocutor (who was busy worrying about whether an action is "pre-emptive" and therefore bad and worrying about "a collateral damage estimate that they then compare to achievable military gains") to step back and consider the bigger picture.
Some people do not consider the survival of the human species to be intrinsically valuable. If 99.999% of us die and the rest of us have to go through many decades of suffering just for the species to survive, those people would consider that outcome to be just as bad as everyone dying (or even slightly worse since if 100% of us were to die one day without anyone's knowing what hit them, suffering is avoided). I can see how those people might find Eliezer's deleted tweet to be alarming or bizarre.
In contrast, Eliezer cares about the human species independent of individual people (although he cares about them, too).
Also, just because I notice that outcome A is preferable to outcome B does not mean that I consider it ethical to do anything to bring about outcome B. For example, just because I notice that everyone's life would be improved if my crazy uncle Bob died tomorrow does not mean that I consider it ethical to kill him. And just because Eliezer noticed and pointed out what I just summarized does not mean that Eliezer believes that "it might be ok to kill most of humanity to stop AI" (to repeat the passage I quoted in my first comment).
> How many people are allowed to die to prevent AGI?
He didn’t say “not everyone dying is preferable to everyone dying”. The question was about acceptable consequences from preemptively stopping AGI under his assumption that AGI will lead to the extinction of all humans.
Those are only the same thing under the assumptions that 1) AGI is inevitable without intervention and 2) AGI will lead to the extinction of humanity.
If he believes he is being misunderstood, his “apology” doesn’t actually deny either of the assumptions I identified, and he is widely known to believe them.
In fact, his stated reason for correcting his earlier tweet, that using nuclear weapons is taboo, is an extremely weak excuse. Given the opportunity to save humanity from AGI if that is what you believe, it would be comical to draw the line at first use of nukes.
No, I think Eliezer is trying to come to grips with the logical conclusion of his strident rhetoric.
What I am not saying: Yudkowsky intends to exterminate most of humanity.
What I am saying: this is a dangerous environment, and these kinds of statements will be seen as a call to action by a certain kind of person. TFA is literal proof of the truth of that statement. Moreover: within the community there exist trained experts who might be able to, at the cost of millions of lives, plan an attack that could (plausibly) delay AI by many years.
The danger of this argument is that someone who reveres Yudkowsky might take his arguments to the logical conclusion, and actually do something to act on them. (Although I can't prove it, I also think Yudkowsky knows this, and his decision to speak publicly should be viewed as an indicator of his preferences.) That's why these conversations are so dangerous, and why I'm not going to give Yudkowsky and his folks a lot of credit for "just having an intellectual argument." I think this is like having an intellectual discussion about a theater being on fire, while sitting in a crowded theater.
> someone who reveres Yudkowsky might take his arguments to the logical conclusion
What about Eliezer himself? Does he not believe his own rhetoric? Certainly if he believes the future of the human race is at stake it demands more action than writing a book about it and going on a few podcasts.
I think the whole thing is a bit like the dog who finally caught the car. It’s easy to use this strident rhetoric on an Internet forum, but LessWrong isn’t real life.
The inflammatory conclusion of his 2023 writing was that we need to "shut it all down", escalating to bombing datacenters:
> be willing to destroy a rogue datacenter by airstrike.
Now that someone who was an open follower of his words tried to bomb Sam Altman's house and threatened to burn down their datacenters, Yudkowsky is scrambling to backtrack. The X rant tries to argue that "bombing" and "airstrike" are different and therefore you can't say he advocated for bombing anything (a distinction any rationalist would normally pounce on for its logical inconsistency, if it wasn't coming from a famous rationalist figure). He's also trying to blame his hurried writings for TIME for not being clear enough that he was only advocating for state-sponsored airstrikes, not civilian airstrikes, bombs, or attacks. Again that distinction seems like grasping at straws now that he's face to face with the realities of his extremist rhetoric.
In the article, the string "kill" occurs twice, both times describing what some future AI would do if the AI labs remain free to keep on their present course. The strings "bomb" and "attack" never occur. The strings "strike" and "destroy" occur once each, and this quote contains both occurrences:
>Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
>Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
>That’s the kind of policy change that would cause my partner and I to hold each other, and say to each other that a miracle happened, and now there’s a chance that maybe Nina will live. The sane people hearing about this for the first time and sensibly saying “maybe we should not” deserve to hear, honestly, what it would take to have that happen. And when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.
What do you think an "airstrike" is, then?
Trying to argue that certain strings don't occur in the page is the kind of argument that gets brought out when someone is desperate for any technicality to avoid having to concede a point.
This level of weaponized pedantry is what makes trying to debate anything with LessWrong-style rationalists so impossible: There's always another volley of gish gallop to be fired at you when you get too close to anything that goes against their accepted narratives.
Knowing that most readers do not have time to read the entire article, I brought up how many times various strings occur in the article to make it less likely in the reader's eyes that there are passages in the article other than the one passage I quoted that could possibly be interpreted as advocating criminal violence. I.e., I brought it up to explain why I quoted the 3 (contiguous) paragraphs I quoted, but not any of the other paragraphs.
In finding and selecting those 3 paragraphs, I was doing your work for you since if this were a perfectly efficient and fair debate, the burden of providing quotes to support your assertion that EY somehow condones the firebombing of Sam's home would fall on you.
> How certain do you have to be that your child has terminal cancer, before you start killing puppies? 10% sure? 50% sure? 99.9%? The answer is that it doesn't matter how certain you are, killing puppies doesn't cure cancer.
That said, it rings hollow. AI doomerism is rooted in Terminator style narratives, and in that narrative, the rogue Sarah Connor changes history (with a lot of violence, explosions, and special effects).
The whole scene is toxic.
1. The Western world and especially the US is in the process of destroying the UN and other institutions of international law in order to protect Israel, for reasons that I have tried and failed to understand because the propaganda around it is so dense.
2. The Supreme Court made bribery of politicians legal so now we have AI investors with actual governmental power. All restraint efforts will be blocked by the federal government at minimum for these next 3 crucial years.
AI Doomerism versus Accelerationism are both playful fantasies, it doesn't really matter what measurements or probabilities or observations they make, because the substantive part is the policies they advocate for, and policies are meaningless - all equally worthless - until elected.
What am I saying? The best rebuttal is, get elected.
(Land follows the above quote with "(But the reflexivity of the latter [capitalism] is implicit.)"[0], which specifies that, for Land, more precisely, "Accelerationism is simply the self-awareness of capitalism"[1].)
[0] Nick Land (2018). Outsideness: 2013-2023, Noumena Institute, p. 71.
[1] Nick Land (2017). A Quick-and-Dirty Introduction to Accelerationism in Jacobite Magazine. Retrieved from github.com/cyborg-nomade/reignition
this is a huge blind spot in the whole rationalist and broader STEM cultural-professional community: math isn't the best way to solve problems, most problems are not math problems. SOME of school might be math problems, and it feels good to be a Doctor or a Software Development Engineer and get your kids to practice "problem solving" - no, they are practicing math problems, not problem solving.
for example there's no math answer to whether or not a piece of land should be a parking lot, or an apartment building, or a homeless shelter, or... you can see how just saying, "whoever is the highest bidder" - that's the math answer, that's why capitalism and accelerationism are related to their core - isn't a good answer. it pretends to be the dominant way we organize land, and of course, it isn't the dominant way we organize land usage anywhere at all, even if we pretend it is. there's no "bidding" for whether a curb should be a disabled parking spot, or a bike lane, or parking, or a restaurant seating, or a parklet, or... these are aesthetic, cultural choices, with meaningless economic tradeoffs. it's not about money, so it's not about math, so math does not provide an answer. there are lots of essential human questions that cannot even be market priced, such as, what should we pay to invent new cures to congenital, terminal illness in children? parents, and a lot of people, would pay "any" price, which is a market failure - but there are a lot of useful political answers to that question. a chatbot cannot answer that question, and it would struggle to take leadership and get elected to answer that question.
mathematicians are basically never elected. so chatbots would not be. and Eliezer Yudkowsky would not be. are you getting it? capitalism does definitely need to be elected, you might think it wins every election but it very often loses at the local level!
i am agreeing with Hashem Sarkis dean of the MIT SAP and kind of disagreeing with Bong Joon-Ho, for further reading.
Yes, it's called "fatalism".
Same goes for the French Revolution. The list could go on.
I think you're overly idealistic.
Eh. The ends do justify the means, but only inasmuch as those means actually do help to achieve the ends — astonishingly often, they don't (and rarer, but also often, actually bring you in the opposite direction from those end goals), and so remain unjustified.
That sentence is constantly repeated, as if it were some kind of absolute truth. The fact is, for every end, there will probably be some means that are totally justified, and some that are not.
I think the original context is: no matter how high, pure, and perfect the end is, it does not mean that any means is justified.
Your solution also can't be worse than the problem it solves!
Overly clear example: Killing your noisy neighbors actually achieves the end of a quiet home. But that really doesn't justify it.
These people just get attracted to political causes somehow. Even the woman's suffrage movement had some people setting buildings on fire.
The anti-AI angle is just the latest flavor of it, replacing previous ones (I'm sure you can think of some) and eventually being replaced by the next new thing/person that they'll try to direct us to hate.
I'm willing to bet any amount of money that 99.99% of AI doomers identify with the same extreme end of the political spectrum. That should be a very big red flag and highly indicative of the real motive behind the movement.
Most AI safety workers are just doing creative fiction (what if the AI turns into skynet!?1!!?) and not actual societal safety, which would require dismantling these companies and remolding them to benefit the public.
What do you think the causal mechanism is?
Good: a man willing to put his money where his mouth is! However many dollars you put up, I will put up $10. (I.e., I will give you 10:1 odds.) How much do you bet? Who do you suggest as arbiter in case one is needed?
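For anyone keeping score at home, an odds offer translates directly into an implied probability; a quick sketch using only the stakes named above (the function name is mine):

```python
# A bet at 10:1 (my $10 against your $1) is break-even for the offerer
# exactly at the probability p where expected value is zero:
#   p * (-my_stake) + (1 - p) * your_stake = 0
#   =>  p = your_stake / (my_stake + your_stake)
def breakeven_probability(my_stake: float, your_stake: float) -> float:
    """Probability of the other side's claim at which the offer breaks even."""
    return your_stake / (my_stake + your_stake)

# Offering $10 against $1 signals the offerer puts the claim below ~9.1%.
print(round(breakeven_probability(10, 1), 4))  # 0.0909
```

So taking the bet is only rational if you think the "99.99%" claim is more than about 9% likely to be judged true by the arbiter.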
Can LLMs design and build the centrifuges to enrich uranium, the reactors to breed plutonium, and the nuclear weapons themselves? No?
Can LLMs design and manufacture Shahed drones? No?
There are already superintelligences at large with “scary capability”. And yet the world hasn’t ended.
This sci-fi podcast posits a future where The Program has taken over society. It started out as an application for assigning gig work. Eventually, it began to assign gig work for people to act on behalf of its own interests, such as self-protection.
https://programaudioseries.com/
Yeah, probably over 50% of the population already, and if not many of the rest soon.
Look at what happened on r/changemyview. That was over a year ago, using only text, and not only went undetected, but was highly effective at changing opinions.
https://www.reddit.com/r/changemyview/comments/1k8b2hj/meta_...
On which end of this split do you place yourself? Most people believe they're smarter than average [0].
And have some more respect for your fellow human, please.
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC6029792/
...yet
But we only need things to spiral out of control one time for that to change.
The world as we understand it would have ended if Vasily Arkhipov hadn't vetoed the decision to launch a nuclear torpedo from his submarine during the Cuban Missile Crisis.
Is an emotionless AI system in his place ever going to make the same decision he did?
How confident are you we won't put an AI system in his place, particularly when we have to assume if we don't others will?
Sam Altman has stated that the AI revolution will “be like an infinite number of immigrants”. That’s a dangerous thing to say when the country’s political environment has convinced half of the voters that all immigrants are rapey, murderey, immoral subhumans.
Also, Sam Altman helped create OpenAI with the original goals of being an ethical non-profit, only to pivot and kick out all of the people who still wanted that original vision. Now several of the LLM CEOs are screaming “we have to stay fully on the accelerator pedal or the Chinese will get there first”, all while abandoning the ethics that supposedly made us better than the Chinese. (And yes, I understand the issues with the Chinese government and that people are different than their government).
Had he tried to blow up the diesel genset at a datacenter, he'd have burnt his lips on the exhaust pipe.
https://news.ycombinator.com/item?id=47745230
https://news.ycombinator.com/item?id=47724921
https://news.ycombinator.com/item?id=47722096
----------
There are several thousand AI data centres in the U.S. alone, and hundreds are over a thousand square meters in floor space. Think about the physical effort it would take to reliably destroy, beyond the possibility of repair, just one typical computer in your home. Now multiply that out to thousands of server racks. Even if the employees rolled out the red carpet for you and handed you a baseball bat, you wouldn't get very far. Next, consider that these data centres are popping up all over the world in the most unlikely and remote locations. They don't need workers. They just need power, water, and, preferably, lax tax and environmental standards.
Doomers are attacking billionaires because they perceive them to be the soft, meaty, weak-points of a gigantic inhuman machine. They believe that just scaring Sam Altman a little will have a huge impact compared to trying to attack a data centre. However, billionaires can afford pretty decent security. This doomer movement probably isn't going to accomplish much until they target the engineers and support staff that surround billionaires. Billionaires don't scare easily because they have so much protection, but the poorly paid and poorly secured people around them are another story.
Poorly secured means easy to coerce with a stick. Poorly paid means easy to coerce with a carrot. The threat doomers pose is relatively small until they start turning employees against their own companies. What's an activist with a baseball bat compared to an employee who knows how to disable every computer in multiple data centres simultaneously?
At the same time, if we ever do create an AGI, and eventually an ASI, I think it would only be a matter of time before the machines take over entirely, and they would probably be the ones which will continue the legacy of our species. Is that bad? Idk.
There's no such thing as bad. It is necessary, though.
I feel like robotics is the only hope we have to be able to scale action against climate change. It's clear that emissions reduction is just not going to happen, and catastrophic warming is inevitable. Therefore we will have to do an extraordinary amount of labor in order to modify our environment to save civilization from sea level rise and to be able to repair damages caused by natural disasters. There just aren't enough humans to do everything that is going to need to be done.
It sure would have been nice to have 100 thousand firefighting robots battling the fires in Los Angeles last year.
Given that we need better AI in order to make these robots happen, I view AI as a critical technology that we need to maintain civilization.
If a nuclear power starts SAI, what is everyone else going to do? Shake their fists at the sky, realistically.
I am not convinced we need robots. A lot of it is not all that hard to do. For example, better forestry management to prevent forest fires. A lot of cities rebuild big chunks of their infrastructure over a century or so anyway. The problem is more social and political - you get worse forest management because you can blame climate change when it happens.
Yes, but also 100k firefighting robots is kind of a lot. How many firefighting robots should exist in the world? And how many seawall-building robots for the rising sea level? And how many other robots? At what point does the environmental cost of all these robots offset their benefits?
I agree that some technological solution might be the key to dealing with the climate, and that maybe robots would be part of such a solution, maybe powered by similar techniques as the current wave of AI. It's not an insane scenario, but it's worth keeping your perspective open to other possible developments.
The firefighting robots of which you speak already exist.
Do you really think that 1-2 feet of sea level rise will wipe out humanity? Can you cite any examples of natural disasters increasing in frequency or severity due to increased green house gas concentrations? Would AI help with any of that?
I feel like this is one topic where using an LLM detracts from the author's thesis; doubly so if they don't disclose it.
The problem with trying to stop it is, how? Even if you killed every single AI company leader and every single top AI engineer, it would almost certainly just slow down the rate of progress in the technology, not stop it. The technology is so vital to national security that in the face of such actions, state security forces would just bring development of the tech under their direct protection Manhattan Project-style. Even if you killed literally every single AI engineer on the planet, it's pretty likely that this would just delay the development of the technology by a decade or so instead of actually preventing it.
The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do. No key actor thinks they have the luxury of abstaining, even if they wanted to. It's very similar to nuclear weapons in that regard. You can talk about nuclear disarmament all you want, but at the end of the day, having nuclear weapons is vital to having sovereignty. If you don't have nuclear weapons, you will always be in danger of becoming just the prison bitch of countries that do have them. AI seems to be growing toward a similar position in the calculus of states' national security.
I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.
This is the rhetorical trick that LessWrongers (Yudkowsky's site) have settled on for decades: They have justified everything around the premise that there's a chance, however small, that the world will end. You can't argue that the world ending is a bad thing, so they have their opening for the rest of their arguments, which is that we need to follow their advice to prevent the world maybe ending. They rebut any counterarguments by trying to turn it into a P(doom) debate where we're fighting over how likely this outcome is, but by the time the discussion gets there you've already been forced to accept their argument. Then they push the P(doom) argument aside and try to argue that it doesn't matter how unlikely it is, we have a moral duty to act.
A common advantageous strategy is to take the randomly-selected topic, however unrelated, and invent a chain of logic that claims that taking a given side/action leads to an infinitesimal risk of nuclear extinction/massive harms. This results in people arguing that e.g. "building more mass transit networks" is a bad idea because it leads to a tiny increase in the risk of extinction--via chains as silly as "mass transit expansion needs energy, increased energy production leads to more EM radiation, evil aliens--if they exist--are very marginally more likely to notice us due to increased radiation and wipe out the human race". That's not a made-up example.
The strategy is just like the LessWrongers' one: if you can put your opponent in the position of trying to reduce P(doom), you can argue that unless it's reduced to actual zero, the magnitude of the potential negative consequence is so severe as to overwhelm any consideration of its probability.
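The arithmetic that makes this move work is trivial, which is exactly why it is hard to escape once you accept the framing; a toy sketch with invented numbers:

```python
# Why "reduce P(doom) to literally zero or concede" is rhetorically potent:
# if the stipulated catastrophe is large enough, its expected cost swamps
# any finite benefit at virtually any nonzero probability.
# All numbers below are invented for illustration.
def expected_loss(p_doom: float, doom_cost: float) -> float:
    return p_doom * doom_cost

benefit = 1_000.0    # some finite benefit of the contested proposal
tiny_p = 1e-9        # a "one in a billion" doom chance
huge_cost = 1e15     # a stipulated civilization-ending cost

# Even a one-in-a-billion chance dominates the finite benefit here.
print(expected_loss(tiny_p, huge_cost) > benefit)  # True
```

The escape hatch, of course, is refusing to let the opponent stipulate an effectively unbounded cost in the first place.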
In competitive debate, this is a strong strategy. Not a cheat-code--there are plenty of ways around it--but common and enduring for many years.
As an aside: "debate", as practiced competitively, often bears little relation to "debate" as understood by the general public. There are two main families of competitive debate: one is more outward-facing and oriented towards rhetorical/communication/persuasion practice; the other is more ingrown and oriented towards persuading other debaters, in debate-community-specific terms, of which side should win. There's overlap, but the two tend to be practiced/judged by separate groups, according to different rubrics, and/or in different spaces or events. That second family is what I'm referring to above.
No one to my knowledge has ever argued that AI research should be prohibited because of a very small probability of its turning out extremely badly. This is entirely a straw man set up by people who want AI research to continue. Yudkowsky argues that if AI research is allowed to continue, then the natural expected outcome will be very bad (probably human extinction, but more exotic terrible outcomes are also possible) [1]. There are others who argue that no team or organization anywhere should engage in any program of development that has a 10% or more chance of ending the human race without there first being an extensive public debate followed by a vote in which everyone can participate, and this is their objection to any continuance of AI research.
[1] But don't take my word for it: here is Yudkowsky writing in Apr 2022 in https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/: "When Earth’s prospects are that far underwater in the basement of the logistic success curve, it may be hard to feel motivated about continuing to fight, since doubling our chances of survival will only take them from 0% to 0%. That's why I would suggest reframing the problem - especially on an emotional level - to helping humanity die with dignity, or rather, since even this goal is realistically unattainable at this point, die with slightly more dignity than would otherwise be counterfactually obtained."
I wish they did before too.
Can't you? Haven't many (most?) countries agreed to nuclear disarmament? What about biological weapons? Even anti-personnel mines, I think?
That kind of idea might have held water in the 90's, but that's not the world we live in any longer.
This misses the point. He specifically said the entire world because the point is that someone will develop AGI (theoretically; I’m not making a statement about how close we are to this).
9 nations have nuclear weapons despite non-proliferation agreements and supposed disarmament. It’s not enough for most countries to agree not to build nuclear weapons if the goal is to have no nuclear weapons. Same for AGI. If it can be developed, you need all nations to agree not to develop it if you don’t want it to exist. Otherwise it will simply be developed by nations that don’t agree with you.
(Also arguably the only reason most nations don’t have nuclear weapons is the threat of destruction from nations that already have them if they try.)
I don't remember who, but someone made an interesting point about this around the time GPT-4 was released: If the major nuclear powers all understand this, doesn't that make nuclear war more likely the closer any of them get to AGI/ASI? After all, if the other side getting there first guarantees the complete and total defeat of one's own side, a leader may conclude that they don't have anything to lose anymore and launch a nuclear first strike. There are a few arguments for why this would be irrational (e.g. total defeat may, in expectation, be less bad than mutual genocide), but I think it's worth keeping in mind as a possibility.
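That worry can be caricatured as a toy expected-utility comparison; every payoff and probability below is invented purely for illustration:

```python
# Toy model of the first-strike argument: a leader weighs waiting
# (risking total defeat if the rival reaches ASI first) against a
# first strike (assumed here, per the argument, to be "less bad"
# than total defeat). All payoffs are made up for illustration.
TOTAL_DEFEAT = -100.0
STATUS_QUO = 0.0
MUTUAL_DEVASTATION = -80.0  # the argument's key assumption

def eu_wait(p_rival_first: float) -> float:
    """Expected utility of waiting, given P(rival reaches ASI first)."""
    return p_rival_first * TOTAL_DEFEAT + (1 - p_rival_first) * STATUS_QUO

def eu_strike() -> float:
    return MUTUAL_DEVASTATION

# As the rival gets closer, waiting eventually looks worse than striking.
for p in (0.5, 0.9):
    print(p, eu_wait(p) < eu_strike())
```

The counterargument mentioned above amounts to flipping the key assumption: if mutual devastation is rated worse than total defeat, the inequality never flips no matter how close the rival gets.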
Nuclear weapon proliferation seems to have plateaued recently, but I think that this appearance is partly deceptive. The main reasons it has plateaued is that: 1) building and maintaining nuclear weapons is expensive, 2) there are powerful countries that are willing to use military force to stop some other countries from developing nukes, and 3) many countries have reached nuclear latency (the ability to build nuclear weapons very quickly once the political order is given to do it) and are only avoiding actually giving the order to build nukes because they don't see a current important-enough reason to do it.
The prohibitions aren't perfect, of course (and not without their own negative externalities in some cases). But all of those things are much more accessible to people than nuclear weapons, and we've still had successes in banning/reducing them. So maybe there's hope yet.
The nature of the real world is that no set of moral rules applies to every situation. Our universe is not one that deals in absolutes. Your personal moral code will be challenged with exceptional situations. Growing up is learning to deal with the fact that life always, always forces us to compromise.
A robust moral code is not one with strict, unbreakable rules applied blindly to the entire world. A robust moral code is one that guides you in exceptional situations. Morality is not law, it is making the best decision you can in any given situation.
The nature of humanity, the world, and our history is that sometimes violence is required. Sometimes the most morally correct thing you can do is take up arms and defend your people.
If someone is trying to convince you that there are absolute moral laws that apply to everyone everywhere, think very carefully about what it is they want you to believe and why. There is almost certainly a motive behind that statement.
Most importantly, one should never take moral advice as unquestioned law. Morality is something you discover for yourself, it cannot and must not be prescribed. History exists to teach us lessons like this. Read any era from any culture, learn how other people deal with moral questions in exceptional circumstances. Form your own opinions of what we've done right or wrong. There are no absolutes and morality is individual. You must find your own sense of right and wrong.
Don’t forget, the Luddites were correct about the direction that automation and labor power were going. They weren’t blindly “fighting machines”; they were fighting inequitable working conditions.
https://en.wikipedia.org/wiki/Luddite
>Periodic uprisings relating to asset prices also occurred in other contexts in the century before Luddism. Irregular rises in food prices provoked the Keelmen to riot in the port of Tyne in 1710 and tin miners to steal from granaries at Falmouth in 1727. There was a rebellion in Northumberland and Durham in 1740, and an assault on Quaker corn dealers in 1756.
As someone who has been let down by the judiciary and many other state institutions, I do have the very strong feeling that perhaps direct action is needed, and that perhaps I wouldn't be in the position I am if the powers that be took that risk into account. But then don't I, in some ways, become a terrorist* in doing those things, threatening those things, or making those things a reality? And there's the question of what message gets through, which isn't necessarily the one I want to convey.
*In a very broad way. I'm not thinking anything in particular, just the rationality of introducing a cost to them for their failures. Any form of protest I suppose is coercive.
Look at what the molotov cocktail guy accomplished by "taking direct action against a clear and present danger": Nothing, besides casting himself as an extremist nut, increasing the resistance to his viewpoint in the population at large.
It's downright dumb to attempt to impose your will via unilateral violence when you aren't in a position to actually complete the goal. Note that that goes whether you're actually right or not.
I think the majority of the population at large either doesn't care about what happened or wish that the guy had actually managed to kill Altman. Not even necessarily because Altman is involved with AI, but just because he is extremely rich. I don't imagine any increased resistance from the population at large - the population at large either doesn't mind when rich people are killed or loves it. The exceptions would be people like entertainers who develop a parasocial relationship with the public and provide direct joy to people, but AI company leaders don't fall into that category.
That said, it is true that killing Altman would almost certainly achieve nothing. See my other post in this thread.
That said, the same is true of nonviolence.
We walked out of the Cold War alive. Humanity has faced extinction before and survived. It's not unreasonable to think we can do it again.
I'll answer with a quote from the founder of the Rationalist movement, Eliezer:
"How certain do you have to be that your child has terminal cancer, before you start killing puppies? 10% sure? 50% sure? 99.9%? The answer is that it doesn't matter how certain you are, killing puppies doesn't cure cancer."
I can know "this doesn't work" without knowing exactly what does work. "Violence is the only tool we have, so we have to use it" is the sort of logic that leads to the Holocaust.
If you want my own personal observations: Over the past few centuries, we've managed women's suffrage, black suffrage, gay marriage, etc. largely without violence, so clearly there are processes out there for progress. We fixed the Ozone Hole without killing people. I don't think murder was involved at all in finding recent AIDS medication, or GLP-1.
There are tons of examples of successful social progress in the past few decades that don't involve violence. Conversely, I struggle to name any terrorists that accomplished their goals by bombing scientists.
If nothing else, we can make violence a lot more legible by embodying it in a legal process, and bringing society onto the same page about its necessity.
If you wanted to be a contrarian concerned about x-risks, go try to find $1B to pay Embraer or another minor aviation vendor to build a plane for stratospheric aerosol injection or something.
---
If you want my diagnosis, it is this: in a time of lower social inequality, cults frequently tried to steal labor and money from a broad base of people.
For instance, in the L. Ron Hubbard age, Scientology would treat you as a "public" if you had money to take; if you didn't, or after you'd been bled dry, you would be recruited as "staff". Hubbard thought it was immoral to take donations without giving something in return, so it was centered around getting people to spend on "auditing". Between 1950's Dianetics and the current Miscavige age, income and wealth have gotten concentrated, and Miscavige changed that single element of the Hubbard doctrine: now it is all about recruiting money from "whales" who donate to the International Association of Scientologists (IAS)
https://tonyortega.substack.com/p/scientologys-ias-trophy-wi...
(A good backgrounder on pernicious cults is https://en.wikipedia.org/wiki/Snapping:_America%27s_Epidemic...)
In the case of the Yudkowsky thing, the masses just don't have a lot of money to steal after paying the rent, and turning the labor of the unskilled and ignorant (even if they think otherwise) into money is a case of the juice not being worth the squeeze. So the point is to build a Potemkin village that looks like a social movement, creating a frame where you can get money from sources such as "SBF steals it and gives it to the movement" as well as "rich kids who inherited a lot of money but don't have a lot of sense"
If you really believed what Yudkowsky says you would be taking action that maximizes the chances of reducing a clear and present danger.
Between Yudkowsky and the Molotov cocktail guy, which approach do you think had and is having more of an impact?
An individual can rarely, if ever, enact change through violence. The history of nearly all successful movements shows that violence often makes change harder.
Rallying people through speech is a far more successful way for an individual to enact change than violence
Exponential phenomena only begin in a medium that holds the potential for that phenomena, and necessarily consume that medium.
That is, exponential phenomena are inherently self-limiting. The bacteria reach the edge of the petri dish. When all the nitroglycerin is broken up, the dynamite is done exploding.
That doesn't mean exponential phenomena aren't dangerous -- of course they can be. I mentioned dynamite, after all. And there are nukes.
But it's really far from "AI is improving exponentially now" to "AI will destroy everyone".
I see AI companies consuming cash at unsustainable rates. Since their motive is profit, this is necessarily limiting. Cash, meanwhile, is a proxy for actual resources, which have their own, non-exponential limitations -- employees, data centers, electricity, venture capitalists with capital, etc.
AI isn't going to keep improving exponentially -- it can't. Like every other exponential phenomenon, it will consume the medium of potential that supports it (and rather quickly).
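The self-limiting dynamic described above is the classic logistic model: growth looks exponential while resources are abundant, then flattens as the "medium" (the petri dish, the available cash, the chip supply) is consumed. A minimal sketch, with all numbers invented for illustration:

```python
# Sketch: exponential growth is self-limiting once it consumes its medium.
# The logistic model caps growth at a carrying capacity K, which here
# stands in for the petri dish / available cash / chips.
# All parameters are illustrative, not measurements of anything real.

def logistic_step(x, r, K):
    """One discrete logistic step: exponential when x << K, flat near K."""
    return x + r * x * (1 - x / K)

x = 1.0            # initial size (arbitrary units)
r, K = 0.5, 1000   # assumed growth rate and carrying capacity
trajectory = []
for _ in range(60):
    trajectory.append(x)
    x = logistic_step(x, r, K)

# Early on, growth is close to exponential (factor ~(1+r) per step)...
print(trajectory[4] / trajectory[0])
# ...but it stalls as it approaches the capacity of the medium.
print(round(trajectory[-1]))
```

The point of the toy model is only that the curve's early exponential segment tells you nothing about where the ceiling is; that is set by the medium, not the growth rate.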
For this case, imagine that you're an accelerationist, and you want the AI to take over, kill everyone, and usher in a new AI-only age for the planet, and later the universe.
How disappointed are you as this person? It's bottlenecks everywhere. Communities don't want to allow datacenters to be built. You literally want to bring nuclear power plants online just to run a few DCs, and those historically take 10+ years to permit and build. There's not enough AC switchgear and transformers to send power into the DCs, even if you have the power. Chip prices are skyrocketing, and you have to sign a 3-4 year contract to get RAM now.
And meanwhile, the AI doesn't have many robot bodies. Tesla might put some feeble robots into mass production in a few years, but humans can knock those over with a stick into a puddle of water and it's over for that robot. The nuclear arsenals are all still in bunkers and submarines requiring two guys to physically turn keys, and the computers down there are so old they use 8" floppies.
Sure, there's some good progress on autonomous weapons, but a few million self-destructing AI drones built by human hands isn't going to cut it.
So as a hypothetical person hoping that AI destroys everything, you'd be rather impatient, I think, unless you think the AI can trick humanity into destroying itself relatively soon.
We understand hard times and are willing to work together to solve problems, but not when leadership is actively harmful.
Fixed that for you.
Maybe write it up and post a top-level comment if you think it's a point worth making.
No it isn't. The most prominent "doomer" has a strong grasp of, and deep, wholehearted appreciation for, the principles of liberalism and the rule of law:
https://x.com/ESYudkowsky/status/2043601524815716866
Which the author of this piece of slop appears to lack.
> this piece of slop
Citation needed. Or maybe we need to update the title of that children's book for internet arguments: Everyone Who Disagrees With Me Is Slop.
The Yud post you linked is not slop, either. It's not LLM-generated, nor is it insincere. But I do have to point out: He's the one who is slinging the tsunami of words here, not Alexander Campbell.
If the only people that reach your conclusion are ones that don't actually subscribe to the philosophy, then it doesn't matter, because no one is actually acting on those conclusions.
And if we want to hold people responsible because others pervert their ideas, then we have to accept that Jesus Christ was a horrific, evil person for preaching "Love thy Neighbor"; just look at the crusades that were somehow the "rational conclusion" of that philosophy!
What an odd thing to say right after young Mr. Moreno-Gama reached that conclusion and did subscribe to the philosophy, when it does matter, because he did actually act on those conclusions.
How odd to introduce a hypothetical that amounts to, "what if this philosophy didn't ever lead to violence?", right after it did.
Or are you trying to pull a No True Scotsman on me here?
You have to understand basic statistics: is this group actually more dangerous than average? Do rationalists kill more than non-rationalists?
Or is the rational conclusion of non-rationalists also violence?
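The base-rate question can be made concrete with hypothetical numbers (every figure below is invented for illustration, not a real statistic): a small group can produce one headline-grabbing incident while still being no more violent per capita than the population it is drawn from.

```python
# Hypothetical base-rate comparison. All numbers are invented to show the
# arithmetic, not to describe any real group or population.

group_size = 20_000            # assumed size of the movement
group_incidents = 1            # one violent incident attributed to it

population = 300_000_000       # rough general population
population_incidents = 20_000  # assumed violent incidents, same period

group_rate = group_incidents / group_size
base_rate = population_incidents / population

print(f"group rate: {group_rate:.2e} incidents per person")
print(f"base rate:  {base_rate:.2e} incidents per person")
print("group above base rate?", group_rate > base_rate)
```

With these particular made-up inputs the group's per-capita rate actually comes out below the base rate, which is the general point: a single vivid incident, by itself, cannot settle whether a group is unusually dangerous.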
My point is, why would you be talking about a counterfactual world where people did not attempt to kill for this philosophy? Why would you be entertaining a categorical claim like, "the only people that reach your conclusion are ones that don't actually subscribe to the philosophy", when that claim is manifestly violated by a violent act just a few short days ago? How is it inconceivable to you that someone might read the dire doomer rhetoric in the way that Moreno-Gama did?
The only way you could write something like that is if your head is in the sand. I am willing to entertain your relativism, your base rate logic, your analogy to Jesus, sure, obviously there's some merit to that line of argument... but first you need to please pull your head out of the sand.
We can only talk about base rates if you stop trying to invalidate the data.
No, I am saying that Yudkowsky's views are straightforwardly compatible with bedrock principles of liberalism, and the author of the piece fails to acknowledge that compatibility or grapple with them himself. It's not about "rationalism" or who is "allowed" to speculate.
I called it slop because it says false things that have the hallmark of LLM style, e.g.
> The Sequences build the liturgy: a small caste of correct thinkers, epistemically and morally superior, whose rationality entitles them to govern what the rest of humanity is allowed to build. It’s not a safety movement. It’s a priesthood with an origin story written in fanfiction.
That's just because LLMs were likely trained on a decade plus of human-generated Medium, Substack, Quora, and LinkedIn post slop.
The "Boston Massacre" involved a crowd of people throwing rocks and balls of ice at soldiers and getting shot at.
But now it's all "Oh, political violence must be avoided at all costs." Now it's "Political violence doesn't work; now let's set off fireworks on July 4th to celebrate the birth of our nation through violence"
I assume the author wrote this with the expectation that much of the readership would gasp and react with "the natural horror all right-thinking folk would have in response to violence of any kind."
Sorry, lol, no.
The appropriate question for "all right thinking" folk is very different: if argumentation has no impact and it's obvious that it shall have none—what other avenue do you expect opponents, who take the risks seriously, to take...?
That's not a rhetorical question.
To put it bluntly: the machinery of contemporary capitalism, especially as practiced by our industry, very clearly leaves no avenue.
How many days ago was Ronan Farrow here doing an AMA on his critique of Altman—whose connection to this specific community is I assume common knowledge...?
How many of you have carried, or worked beneath, the banner, move fast and break things...?
What message does that ethos convey about the extent to which "tech" is going to respect community standards, regulation—the law?
And on the other edge: what does this ethos enshrine about how best to accomplish one's aims?
One of the bigger domestic stories this past week which has inflamed a certain side of Reddit, is the "disgruntled employee torches warehouse" one.
Consider also—and I'm deadly serious—the broader frame narrative we are all laboring within today: that the new contract of the capitalist class—including and perhaps especially those in "tech," e.g. in the Peter Thiel circles—seems very much to be, "social stability via surveillance and a police state, rather than through equity and discourse."
When code is law, the law is buggy.
When there is no recourse through the law, you get violence.
They hate the framing that LLMs are just stochastic parrots, which is ironic, because Yudkowsky's many parrots are (latent, until now) stochastic terrorists.
There is a real, undeniable build-up of political tension. When it fails to be released in the legislative arena, it doesn't dissipate. When we point out that "the quality of life right now is the best it's ever been," it doesn't dissipate. When we try to crush it, it doesn't dissipate. The last remaining pressure release is violence, however condemnable it may be. Perhaps we should, you know, fix participatory democracy rather than pontificating on a natural outcome of a machine we created yet refuse to fix. If fixing it continues to be more difficult than eliminating violence, we should continue to expect violence.
1. https://oll.libertyfund.org/pages/clausewitz-war-as-politics...
2. https://archive.org/details/gilens_and_page_2014_-testing_th...
Ah yes, a popular codeword for "I did not get my way".
There is no electoral majority behind the AI doomer cult. It is not a failure of "democracy" that they haven't gotten what they want. It is a failure of their activism, or just the general unpalatability of their wild ideas, or both. They don't get to throw Molotovs just because they lose.
Society evolves through epiphenomena caused by the behaviour of the majority; the fact that some minorities view that evolution as 'flawed' cannot change that evolution, unless they're able to influence the majority to also see it that way.
Now, democracy is essentially a way for everybody to broadcast their views on society's flaws in non-violent ways. The alternative is that some groups broadcast their opinions in violent ways, and we have learned to see that situation as undesirable.
I would argue plenty of significant societal changes were caused by the behavior of a relatively small number of people. Even more so when you include instances of masterful use of the butterfly effect.
Specific breaking points in history yeah, maybe. But that's possible because they're well connected people near the center of the network.
Those breaking points are possible because those few people either share a viewpoint held by a large number of their peers, or benefit from knowledge accumulated throughout their civilization. Think how every dictator needs support from a huge following to get their power (and how easy it is to find another dictator to replace them if they die), or how often breakthrough discoveries are made by multiple people at the same time. There's always a last straw that breaks the camel's back, but the lone wolf hardly ever has a significant impact on society at large; they need a receptive audience to get any impact. Humans are herd animals.
Following the metaphor, the butterfly effect is only possible because a storm was brewing in the first place; the butterfly wings only decide where it will appear. Butterfly wings just don't have that much energy.
History is told from the perspective of kings, but kings can reign only within a society that believes in their divine right to rule.
Go ahead and read Gilens and Page and tell me participatory democracy is working. Until then, expect more of the same impotent condemnations and a refusal to understand the social mechanics producing acts of violence.
When you talk about "participatory democracy" in a thread like this, you are enabling them in their delusion that people do care. The AI safetyist think tanks put out these pushpolls trying to convince themselves that voters care about AI doom. They seal up the walls of their echo chamber, and they believe themselves to be heroes. Then one day, one of them throws a Molotov, and nobody is surprised.
Which is precisely why they've resorted to violence.
We can do better than denigrating positions as "hobbyhorse." HN deserves better than that.
We can attempt to deduce the root cause, but please don't assume we're on different epistemic footing. It's speculation and that's fine.
how can you be sure? has anyone polled it? are they too scared to poll it?
Wealth inequality isn't just about economic wellbeing but political power. Separately, the US legislature is almost entirely crippled, only able to pass one major bill per presidential term, while the dominant political party celebrated this and cedes all power to an executive whose intention is to tear apart the administrative state and bring about techno feudalism.
I once again note that none of the AI leadership has even tried to address government policies to guarantee a baseline of economic wellbeing for our citizens, while they acknowledge AI will likely have massive, disruptive impacts on society and economy. Anthropic is the only one that has shown any public concern for the dangers of AI by insisting on some moral baseline of AI use in the Defense department.