Several Rationalists objected to my recent “Against AGI Timelines” post, in which I argued that “the arguments advanced for particular timelines [to AGI]—long or short—are weak”.
Even if we did have a probability distribution, it wouldn't give us a timeline. An actuarial table doesn't tell you the year of your death.
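To make the actuarial point concrete, here is a toy sketch. The constant 2%-per-year arrival chance is an invented assumption, chosen only to show that a full distribution yields a median year while assigning almost no probability to that exact year:

```python
# Toy illustration: a probability distribution over arrival years
# assigns mass to many years without naming one.  The 2%/year hazard
# rate is made up for the sketch.
import numpy as np

years = np.arange(2025, 2101)
hazard = np.full(len(years), 0.02)                 # P(arrival | not yet)
survival = np.cumprod(1 - hazard)                  # P(still nothing by year end)
arrival = hazard * np.concatenate(([1.0], survival[:-1]))  # P(arrives that year)

median_year = years[np.searchsorted(np.cumsum(arrival), 0.5)]
p_exact = arrival[years == median_year][0]
print(f"median year: {median_year}, P(that exact year): {p_exact:.3f}")
# median year 2059, but only ~1% chance of that exact year
```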
There's an awful lot of magical thinking going on. LLMs are millions of miles from artificial intelligence systems that can actually figure things out. The need for massive, finely honed prompt engineering suggests that LLM intelligence is a lot like that of Clever Hans. (I will grant that Clever Hans, being a horse, had a lot more intelligence than the typical LLM-based system.)
For a good example of informed opinions on the future of AI, check out Rodney Brooks's web site. He's been doing robotics work since the 1970s, so his estimates are based on actual knowledge of the field. For example:
- Dexterous robot hands generally available.
- NET 2030 (not earlier than), BY 2040 (I hope!)
- Despite some impressive lab demonstrations, we have not actually seen any improvement in widely deployed robotic hands or end effectors in the last 40 years.
A problem: that future you see? It too is generated.
I don't disagree that people are sometimes slapping a probability on things and calling it a day.
For some examples of distributional timelines, you could check out this: https://www.lesswrong.com/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines
Another point is that sometimes raw ignorance is itself very informative about what state of affairs you are in. E.g., raw ignorance about whether a coin will land heads or tails results in a very precise 50%. I think at times you are making the point that the problem is not well posed. Still, I think there is something meaningful about using Laplace's rule of succession, tastefully applied, to get a sense of things here (https://en.wikipedia.org/wiki/Rule_of_succession).
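For concreteness, a minimal sketch of the rule. Treating each year since the 1956 Dartmouth workshop as a trial in which AGI did not arrive is an illustrative assumption, not a claim about the right reference class:

```python
# Laplace's rule of succession: after s successes in n trials,
# estimate the probability of success on the next trial as
# (s + 1) / (n + 2).

def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's estimator: (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

# Illustrative assumption: count from the 1956 Dartmouth workshop.
years_without_agi = 2023 - 1956
p_next_year = rule_of_succession(successes=0, trials=years_without_agi)
print(f"P(AGI next year) ~= {p_next_year:.3f}")  # 1/69, about 1.4%
```

The output shifts substantially with the choice of start date, which is exactly why "tastefully applied" is the load-bearing caveat.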
Like, the thing is, I do think that ambitious quantification, seriously applied, can give a forceful answer here, and produce artifacts that illuminate rather than occlude. But it's so effortful that most people don't bother. Still, this might be an OK base rate for technological discontinuities: https://www.lesswrong.com/posts/FaCqw2x59ZFhMXJr9/a-prior-for-technological-discontinuities
Perhaps you can have:
- Thesis: Probabilities are useful for bounding uncertainty about AGI.
- Antithesis: Random people pulling probabilities out of nowhere for AGI is crap.
- Synthesis: Sometimes you can subjectively bound uncertainty, if you are careful and bother to do it well. Still, the output will be a subjective guess, which might be all you have, but it comes nowhere close to the certainty of science, or of statistics in well-studied domains. Most people do the crappy version, though.
Sure, a bunch of the folks at your first link are in the "I’m not persuaded but they are at least not making the particular mistake that I’m arguing against here" category. Dunno if what they're doing is *useful*, exactly, and Yudkowsky's critique of grossly premature quantification still applies. But it's certainly much better than the fake version that I'm critiquing here.
I continue to believe that Rationalism is the most interesting form of Maya, even more fascinating than the scientific form of it.
"But slapping unjustified numbers on raw ignorance does not actually make you less ignorant."
Agreed, but it is clearer, and it often allows for more comparison.
Experts in every other field are describing their futures with words like "maybe" and "probably." I think numbers are better than that. They don't claim you know something (though some people use them that way), but they do say it clearly.
Also, numbers can be aggregated. And we are starting to find people with track records across 5–10 years. I want a median (and 90% CI) from these people so I can think well about the future.
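As a sketch of that aggregation, under invented inputs (the forecast years below are made up; the real input would be forecasts from the people with track records just mentioned):

```python
# Pool several forecasters' AGI-arrival-year estimates into a median
# and a central 90% interval.  Inputs are invented for illustration.
import numpy as np

forecast_years = np.array([2029, 2032, 2035, 2040, 2045, 2055, 2070])

median = np.median(forecast_years)
lo, hi = np.percentile(forecast_years, [5, 95])
print(f"median: {median:.0f}, central 90% interval: [{lo:.0f}, {hi:.0f}]")
```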
I think your criticism is fair, but I think the "put numbers on it" people are better than almost anyone else, who never clarify, never give themselves the option to be wrong, and make policy anyway.
If they admitted the numbers are made up and that "probability" is misleading, I would take them more seriously.