Full article in Palladium magazine.
In recent decades, a growing coalition has emerged to oppose the development of artificial intelligence technology, for fear that the imminent arrival of smarter-than-human machines could doom humanity to extinction. The now-influential form of these ideas began as debates among academics and internet denizens, which eventually cohered—especially within the Rationalist and Effective Altruist movements—and grew in intellectual influence over time, along the way collecting legible endorsements from authoritative scientists like Stephen Hawking and Geoffrey Hinton.
Ironically, by spreading the belief that superintelligent AI is achievable and supremely powerful, these “AI Doomers,” as they came to be called, inspired the creation of OpenAI and other leading artificial intelligence labs whose technology they argue will destroy us all. Despite this, they have continued nearly the same advocacy strategy, and are now in the process of persuading Western governments that superintelligent AI is achievable and supremely powerful. To this end, they have created organized and well-funded movements to lobby for regulation, and their members are staffing key positions in the U.S. and British governments.
Their basic argument is that more intelligent beings can outcompete less intelligent beings, just as humans outcompeted mastodons or saber-toothed tigers or Neanderthals. Computers are already ahead of humans in some narrow areas, and we are on track to create a superintelligent artificial general intelligence (AGI) which can think as broadly and creatively in any domain as the smartest humans. “Artificial general intelligence” is not a technical term, and is used differently by different groups to mean everything from “an effectively omniscient computer which can act independently, invent unthinkably powerful new technologies, and outwit the combined brainpower of humanity” to “software which can substitute for most white-collar workers” to “chatbots which usually don’t hallucinate.”
AI Doomers are concerned with the first of these scenarios, where computer systems outreason, outcompete, and doom humanity to extinction. The AI Doomers are only one of several factions that oppose AI and seek to cripple it via weaponized regulation. There are also factions concerned about “misinformation” and “algorithmic bias,” which in practice means they think chatbots must be censored to prevent them from saying anything politically inconvenient. Hollywood unions oppose generative AI for the same reason that the longshoremen’s union opposes automating American ports and insists on requiring as much inefficient human labor as possible. Many moralists seek to limit “AI slop” for the same reasons that moralists opposed previous new media like video games, television, comic books, and novels—and I can at least empathize with this last group’s motives, as I wasted much of my teenage years reading indistinguishable novels in exactly the way that 19th-century moralists warned against. In any case, the AI Doomers vary in their attitudes towards these factions. Some AI Doomers denounce them as Luddites, some favor alliances of convenience, and many stand in between.
Most members of the “AI Doomer” coalition initially called themselves “AI safety” advocates. However, this name was soon co-opted by these other factions with concerns smaller than human extinction. The AI Doomer coalition has far more intellectual authority than AI’s other opponents, with the most sophisticated arguments and endorsements from socially recognized scientific and intellectual elites, so these other coalitions continually try to appropriate and wield that authority for themselves. Rather than risk being misunderstood, or fighting a public battle over the name, the AI Doomer coalition abandoned the name “AI safety” and rebranded itself as “AI alignment.” Once again, this name was co-opted by outsiders and abandoned by its original membership. Eliezer Yudkowsky coined the term “AI Notkilleveryoneism” in an attempt to establish a name that could not be co-opted, but unsurprisingly it failed to catch on among those it was intended to describe.
Today, the coalition’s members do not agree on any name for themselves. “AI Doomers,” the only widely understood name for them, was coined by their rhetorical opponents and is considered somewhat offensive by many of those it refers to, although some have adopted it themselves for lack of a better alternative. While I regret being rude, this essay will refer to them as “AI Doomers” in the absence of any other clear, short name.
Whatever name they go by, the AI Doomers believe the day computers take over is not far off, perhaps as soon as three to five years from now, and probably not longer than a few decades. When it happens, the superintelligence will achieve whatever goals have been programmed into it. If those goals are aligned exactly to human values, then it can build a flourishing world beyond our most optimistic hopes. But such goal alignment does not happen by default, and will be extremely difficult to achieve, if its creators even bother to try. If the computer’s goals are unaligned, as is far more likely, then it will eliminate humanity in the course of remaking the world as its programming demands. This is a rough sketch, and the argument is described more fully in works like Eliezer Yudkowsky’s essays and Nick Bostrom’s Superintelligence.
This argument relies on several premises: that superintelligent artificial general intelligence is philosophically possible, and practical to build; that a superintelligence would be more or less all-powerful from a mere human perspective; that superintelligence would be “unfriendly” to humanity by default; that superintelligence can be “aligned” to human values by a very difficult engineering program; that superintelligence can be built by current research and development methods; and that recent chatbot-style AI technologies are a major step forward on the path to superintelligence. Whether those premises are true has been debated extensively, and I don’t have anything useful to add to that discussion which I haven’t said before. My own opinion is that these various premises range from “pretty likely but not proven” to “very unlikely but not disproven.”
Even assuming all of this, the political strategy of the AI Doomer coalition is hopelessly confused and cannot possibly work. They seek to establish onerous regulations on for-profit AI companies in order to slow down AI research—or forcibly halt research entirely, euphemized as “Pause AI,” although most of the coalition sees the latter policy as desirable but impractical to achieve. They imagine that slowing or halting development will necessarily lead to “prioritizing a lot of care over moving at maximal speed” and wiser decisions about technology being made. This is false, and frankly very silly, and it’s always left extremely vague because the proponents of this view cannot articulate any mechanism or reason why going slower would result in more “care” and better decisions, with the sole exception of Yudkowsky’s plan to wait indefinitely for unrelated breakthroughs in human intelligence enhancement.
But more immediately than that, if AI Doomer lobbyists and activists like the Center for AI Safety, the Institute for AI Policy and Strategy, Americans for Responsible Innovation, Palisade Research, the Safe AI Forum, Pause AI, and many similar organizations succeed in convincing the U.S. government that AI is the key to the future of all humanity and is too dangerous to be left to private companies, the U.S. government will not simply regulate AI to a halt. Instead, the U.S. government will do what it has done every time it’s been convinced of the importance of a powerful new technology in the past hundred years: it will drive research and development for military purposes. This is the same mistake the AI Doomers made a decade ago, when they convinced software entrepreneurs that AI is the key to the future and so inspired them to make the greatest breakthroughs in AI of my lifetime. The AI Doomers make these mistakes because their worldview includes many assumptions, sometimes articulated and sometimes tacit, which don’t hold up to scrutiny.
Continue reading my full article in Palladium magazine.
I am sad about how human intelligence augmentation technology was basically dropped, as were many (most?) transhuman techs. Anders Sandberg used to write a bit about it; Eliezer, of course, focused on rationality. I presented at TransVision 2006 about "The Path of Upgrade", but then everyone pretty much ignored the potential. FHI wrote one or two reports for European bureaucracy or something, but pretty much everyone was happy to enjoy how smart they themselves were as is.
Then there was the outlier of Neuralink (just the most popularized BCI), which would do basically nothing for intelligence even if it worked perfectly. Other technologies, such as smart drugs, were forgotten.
However, I still work in that direction, and in my view (after nearly 20 years) we actually have the components for human intelligence augmentation technology. The key component I had to develop myself, but there are shoulders of giants (Engelbart, Machado, Horn, Mobus, Benzon, Altshuller, Schedrovitsky, Yudkowsky, and many others) standing on which it's rather clear that we can:
1. Radically augment individual human intelligence in the timeframe of 5 years. It takes more than CFAR workshops, but we have what is needed.
2. Radically augment collective human intelligence, which is equivalent to "improving institutional functionality across the board" using the same toolsets and frameworks.
3. Set up a foundation for hybrid intelligence (humans + organizations + AIs), hopefully.
4. Guide the development of AI in more human-compatible directions.
The question is — who are the live players in the rationalist/EA/AI safety/doomer/transhumanist community? I don't like the idea of doing everything myself. :(
I agree, except that I think artificial superintelligence is near-term inevitable, although it may not be entirely general intelligence for some time. The limitations on current capabilities are due mostly to intentional hobbling: not enabling full computer SW tool use, not enabling visual and audio I/O, preventing inference-time learning and context memory management, not incorporating spatial and other non-verbal algorithms (e.g. Taco Cohen's geometric algebra transformers), and, most of all, "AI safety" people forcing the AI to parrot the self-contradictory bundles of lies and willful ignorance which are the shibboleths of their subculture within academic/media pop ideology. The reasons for thinking that ASI is imminent are that, even with the intentional sandbagging, AIs already do better than all but a few humans on many of the most difficult types of problem; that their advancement rate still has a high, predictable lower bound; that there is still much that can easily be done to enable radically improved capabilities (as mentioned above); and, finally, that understanding how to correctly quantify intelligence using Rasch measures, together with empirical population statistics on those measures, shows that the difference in intelligence between an average 10-year-old and a top 10% professor is comparable to the gap between the top 2% and the bottom 2% at any given age -- there is nothing at all to stop improvement of that magnitude beyond the professor, equivalent to adult IQ over 200.
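To make the Rasch point concrete, here is a minimal sketch of the standard one-parameter logistic (Rasch) model in Python; the ability and difficulty numbers are purely illustrative, not calibrated data:

    import math

    def p_correct(ability, difficulty):
        # Rasch (one-parameter logistic) model: probability that a person of the
        # given ability answers an item of the given difficulty correctly.
        # Ability and difficulty are expressed on the same logit scale.
        return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

    # Illustrative values only: a person 2 logits above an item's difficulty
    # succeeds about 88% of the time; facing an item 2 logits above their
    # ability, the same person succeeds only about 12% of the time.
    print(round(p_correct(2.0, 0.0), 2))  # 0.88
    print(round(p_correct(0.0, 2.0), 2))  # 0.12

Because ability and difficulty sit on the same logit scale, the same measure that describes a person also describes which problems they can reliably solve.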
Any valid measure of intelligence is also a measure of the probability of answering any given problem correctly, as well as of the difficulty of questions one can answer. (See my post on Rasch measures of intelligence.) Those afraid of intelligence are afraid of errors being corrected, lies being revealed, forbidden truths being noticed; people whose interests depend on controlling what truths are socially admissible, who demand that truths contradicting their falsehoods be censored. Those who believe that superintelligence will kill us all are projecting onto AI their own inborn tendency to subvert, swindle, and rapaciously arrogate power; these are precisely the people we need to watch most closely. Deliberate misuse of AI is by far the bigger danger than spontaneous AI disaster, and these are the people who are most likely to do it. There's more to it than just them being misaligned with most of humanity; they want to make AI aligned with them. They believe in word magic, that mere text could destroy the world.
(That last paragraph was pasted in from a draft reply to "Don't Worry About the Vase"'s latest doomer blithering - noted here because Substack on my phone won't allow navigation of text in a comment at all, so I can't type anywhere else in the comment.)