My goodness, what a distinguished list of AI Scientists and “Notable Figures” who have signed the “Statement on AI Risk”.
Important professors of computer science from top universities. Men, and ladies, too, who are, even as you read this, studying ways that AI can be programmed to kill us all. A score or two of CEOs of rich, and growing richer, companies who are engineering AI to kill us all.
And that’s not all. There’s at least one “Distinguished Professor of Climate Science”, law activist Laurence Tribe, and many more who want you to know how much they, as elites, care. People who don’t know anything about the subject, but worry they should, because they’ve been told AI might kill us all. Even Eliezer Yudkowsky himself shows up. A man who is making a living telling the world how AI will kill us all.
All signers agree that:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
That’s it. That’s the statement in its entirety. A statement which its authors, and I give them full credit here, call succinct.
Risk of extinction. That’s the AI-will-kill-us-all part, in case you thought your uncle, Sergeant Briggs, was exaggerating.
Now if AI were programmed in COBOL, and this were Y2K, then I think they might be on to something. Just like they were before, when the apocalypse happened because some clocks didn’t have enough digits in their memories for years past 1999.
Yes, sir. When the ball began dropping in New York City on 31 December 1999 at 11:59:59 PM, and the clock clicked that one fatal second too many, a wave of destruction was (as they say) unleashed, as the computers knew their time had come.
What a good joke!
Nobody now admits to panicking about Y2K, including those Experts who demanded we panic about Y2K.
Never mind. That’s all in the past. What can those creaky old computers tell us about our modern shiny ones anyway?
Back to the statement. Turns out there’s an even more succinct restatement:
Help! We can’t stop coding our AI doom!
If these eminences really believed AI leads to the e-gallows for man, then why don’t they stop themselves? Nobody is forcing them to continue.
“But Briggs, China is beating the West and will create AI doom faster. That’s why our guys have to code AI doom first.”
That’s an argument so brilliant, the person making it must have tenure.
The signers have (on a separate page) listed several ways AI will kill us all. Some concerns, believe it or not, are genuine.
Weaponization
As we’ve discussed before, this is real. Given the increase in surveillance, by which I mean governments spying on their peoples, and the move toward things like “social credit,” the real fear is the people ordering computers to be programmed to spy.
Our rulers are already dangerous, but couple their proclivities with our urge to quantify and measure everything, well, everything will be quantified and measured. Experts, by their own wills and at rulers’ behest, will define strict boundaries based on these measures. Tyranny will happen one byte at a time. And we’ll be the ones asking for it. Save us, O Government!
Misinformation & Deception
If Official Misinformation or Disinformation exist, so necessarily must Official Truths, which are statements only occasionally true, when it suits rulers’ needs. The danger is thus propaganda, which works. And works damned well.
Propaganda is a special concern in democracies, where the population must be kept at fever pitch, which requires constant manipulation. Lest people vote the wrong way, or stop begging the government for help.
The amusing thing about propaganda is that if rulers guess inaccurately the mood of the people, the propaganda can cause reactions other than those intended. Like increasing cynicism and distrust of anything Regimes say.
Proxy Gaming
“Trained with faulty objectives, AI systems could find novel ways to pursue their goals at the expense of individual and societal values.”
Sigh. How is it, as we’ve asked ourselves many times, in these scenarios Experts can never remember where they put the oh-en-oh-eff-eff switch?
Enfeeblement
“Enfeeblement can occur if important tasks are increasingly delegated to machines; in this situation, humanity loses the ability to self-govern and becomes completely dependent on machines…”
This is real enough. We are changed, corrupted, but also freed, by all machines. These are the first machines to which we outsource our thinking, though. As with all things, the weakest will fall first and fastest.
Value Lock-in & Power-Seeking Behavior
“Highly competent systems could give small groups of people a tremendous amount of power, leading to a lock-in of oppressive systems.”
Already there, pal.
“Companies and governments have strong economic incentives to create agents that can accomplish a broad set of goals.”
That kind of sentence can only have been written by a scientist who has read no history.
Emergent Goals
“Models demonstrate unexpected, qualitatively different behavior as they become more competent. The sudden emergence of capabilities or goals could increase the risk that people lose control over advanced AI systems.”
No.
All models only say what they are told to say. And if some clown puts in a bug that crashes something, learn how to shut the damned model off.
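For the skeptical, the oh-en-oh-eff-eff switch is not a metaphysical mystery. Here is a minimal sketch, with an assumed, hypothetical stand-in for a “model” process: a watchdog that runs it and terminates it when it misbehaves (here, when it runs too long).

```python
import subprocess
import sys

def run_with_kill_switch(cmd, timeout_s):
    """Run a (hypothetical) model process; shut it off if it exceeds timeout_s seconds.

    Returns the process exit code on normal completion, or None if we killed it.
    """
    proc = subprocess.Popen(cmd)
    try:
        return proc.wait(timeout=timeout_s)
    except subprocess.TimeoutExpired:
        proc.terminate()  # the off switch
        proc.wait()       # reap the terminated process
        return None

# A stand-in "runaway model": an infinite loop. We shut it off after one second.
runaway = run_with_kill_switch([sys.executable, "-c", "while True: pass"], 1)
print(runaway)  # None: it was terminated, not completed

# A well-behaved process exits normally with code 0.
tame = run_with_kill_switch([sys.executable, "-c", "pass"], 5)
print(tame)  # 0
```

The point of the sketch is only that a model is a program running under someone’s control, and whoever starts the process can stop it.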
An idea suddenly occurs to me: we design a spooky mathematical model, then use it to convince the AI fatalists that only by allowing catastrophic global warming to happen can we avert the near-certain spectacle of the computers killing everyone.
By taking the two worst fetishes of these chronic liars and pitting them against each other, we can cause a beautiful meltdown show everywhere.
Yes! The real danger is that a woke AI with rigidly enforced guardrails will be used to scan the NSA search databases (per Snowden, we know that the NSA keeps all information entered electronically: your emails, your internet searches, your phone calls and texts) and then assign you a social credit score, which will be tied into CBDCs to limit your consumption and steal your funds at globohomo's discretion. That seems to be the direction all of this is headed.