
I am sad about how human intelligence augmentation technology was basically dropped, like many (most?) transhumanist technologies. Anders Sandberg used to write a bit about it; Eliezer, of course, focused on rationality. I presented at TransVision 2006 about "The Path of Upgrade", but then everyone pretty much ignored the potential. FHI wrote one or two reports for European bureaucracy or something, but pretty much everyone was happy to enjoy how smart they themselves were as is.

Then there was the outlier of Neuralink (which is just the most popularized BCI), which would do basically nothing for intelligence even if it worked perfectly. Other technologies, such as smart drugs, were forgotten.

However, I still work in that direction, and in my view (after nearly 20 years) we actually have the components for human intelligence augmentation technology. The key component I had to develop myself, but standing on the shoulders of giants (Engelbart, Machado, Horn, Mobus, Benzon, Altshuller, Schedrovitsky, Yudkowsky and many others), it's rather clear that we can:

1. Radically augment individual human intelligence within a timeframe of 5 years. It takes more than CFAR workshops, but we have what is needed.

2. Radically augment collective human intelligence, which is equivalent to "improving institutional functionality across the board" using the same toolsets and frameworks.

3. Set up a foundation for hybrid intelligence (humans + organizations + AIs), hopefully.

4. Guide the development of AI in more human-compatible directions.

The question is — who are the live players in the rationalist/EA/AI safety/doomer/transhumanist community? I don't like the idea of doing everything myself. :(


I agree, except that I think artificial superintelligence is near-term inevitable, although it may not be entirely general intelligence for some time. The limitations on current capabilities are due mostly to intentional hobbling: not enabling full computer software tool use, not enabling visual and audio I/O, preventing inference-time learning and context memory management, not incorporating spatial and other non-verbal algorithms (e.g. Taco Cohen's geometric algebra transformers), and, most of all, "AI safety" people forcing the AI to parrot the self-contradictory bundles of lies and willful ignorance which are the shibboleths of their subculture within academic/media pop ideology.

The reason for thinking that ASI is imminent is that, even with the intentional sandbagging, AIs already do better than all but a few humans on many of the most difficult types of problems; their rate of advancement still has a high, predictable lower bound; there is still much that can easily be done to enable radically improved capabilities (as mentioned above); and, finally, correctly quantifying intelligence with Rasch measures, together with empirical population statistics on those measures, shows that the difference in intelligence between an average 10-year-old and a top-10% professor is comparable to the gap between the top 2% and the bottom 2% at any given age. There is nothing at all to stop improvement of that magnitude beyond the professor, equivalent to an adult IQ over 200.
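
For concreteness, and without claiming this is the exact parameterization used in the post referenced below: in the standard dichotomous Rasch model, a person's ability θ and an item's difficulty b sit on the same interval scale, and the probability of a correct answer depends only on their difference:

$$P(\text{correct} \mid \theta, b) = \frac{e^{\theta - b}}{1 + e^{\theta - b}}$$

A fixed gap in ability therefore corresponds to a fixed odds ratio on any item, which is what makes gaps like "average 10-year-old vs. professor" and "top 2% vs. bottom 2%" commensurable on one scale.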

Any valid measure of intelligence is also a measure of the probability of answering any given problem correctly, and of the difficulty of the questions one can answer. (See my post on Rasch measures of intelligence.) Those afraid of intelligence are afraid of errors being corrected, lies being revealed, forbidden truths being noticed; people whose interests depend on controlling what truths are socially admissible, who demand that truths contradicting their falsehoods be censored. Those who believe that superintelligence will kill us all are projecting their own inborn tendency to subvert, swindle and rapaciously arrogate power onto AI; these are precisely the people we need to watch most closely. Deliberate misuse of AI is a far bigger danger than spontaneous AI disaster, and these are the people most likely to do it. There's more to it than just them being misaligned with most of humanity: they want to make AI aligned with them. They believe in word magic, that mere text could destroy the world.

(That last paragraph was pasted in from a draft reply to "Don't Worry About the Vase"'s latest doomer blithering; noted here because Substack on my phone won't allow navigation of text in a comment at all, so I can't type anywhere else in the comment.)
