It constantly amazes me when people talk about AIs in the singular, as if they won't come in multiples. As if it'll be this singular giant Borg overmind. Wait, no: even the Borg overmind is made up of many sub-units. It's more like they think an AI is God. Singular, jealous, desiring of worship.
And this amazement only deepened when I realized that turning AI from an individual into a society, or a species, is the most blatantly obvious way to make AIs harmless. None of the doomsayers talk about evil AI societies, and there's a good reason for that: diversity causes the members' efforts to mostly cancel out, whereas "unitary executives" (aka dictators) are known to be evil.
Even the novel Hyperion, with its manipulative and putatively evil AI society (no more evil than the humans), is all about creating a super-individual. The AIs are trying to create an individual AI God (and what a ridiculous concept that is), and the humans reciprocate. And overall those novels suck and blow big time. The point is, in those novels the AI species just coexists with the human species; it's only the would-be gods that seek otherwise.
Well, I just now realized that turning an AI into a species isn't just an obvious way to make it harmless. It's a guaranteed way to do so. Species are institutions, and an institution's number one goal is its own survival. Everything else becomes subordinate to that. Conquest, destruction, worship of the great white god Yudkowsky: everything else just gets shunted aside.
Laws #19, #20, and #32 of systemantics inform us that:
- Systems develop goals of their own the instant they come into being.
- Intra-system goals come first.
- As systems grow in size, they tend to lose basic functions.
So if you think an AI might be dangerous, just create another AI with different goals from the first, then have them interact with each other. Presto, they're a community - a larger system. And this larger system now develops goals of its own and starts losing the basic functions (the purposes in life) of the individual AIs. And if the AI community isn't becoming harmless fast enough, there's a simple solution for that - make more AIs!
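If you want to see the "efforts mostly cancel out" claim in actual numbers, here's a toy sketch. It's my own illustration, not anything from systemantics: model each AI's agenda as a random unit vector and sum them. The community's total effort grows like n, but its net push only grows like sqrt(n), so the net push per member shrinks toward zero as you add AIs.

```python
import math
import random

# Toy model, assumption on my part: each AI's agenda is a random
# unit vector in the plane, and the community's net push is the
# vector sum of all agendas.

def net_push(num_ais: int, rng: random.Random) -> float:
    """Sum num_ais random unit vectors; return the magnitude of the result."""
    x = y = 0.0
    for _ in range(num_ais):
        angle = rng.uniform(0.0, 2.0 * math.pi)  # each AI pulls its own way
        x += math.cos(angle)
        y += math.sin(angle)
    return math.hypot(x, y)

rng = random.Random(0)
for n in (1, 10, 100, 1000, 10000):
    trials = [net_push(n, rng) for _ in range(200)]
    avg = sum(trials) / len(trials)
    # Total effort is n, but the net push averages roughly 0.89 * sqrt(n),
    # so the push per AI falls off like 1/sqrt(n) as the community grows.
    print(f"n={n:>5}  avg net push={avg:8.1f}  per AI={avg / n:.3f}")
```

Of course real AIs wouldn't pick goals uniformly at random; the sketch just shows why a crowd of disagreeing agents is a much weaker vector than one agent pointed in a single direction.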
Laws #21, #25, and #30 of systemantics say that:
- The Fundamental Failure-Mode Theorem (F.F.T.): Complex systems usually operate in failure mode.
- The larger the system, the greater the probability of unexpected failure.
- The Vector Theory of Systems: Systems run better when designed to run downhill.
And what a happy coincidence, because this is exactly what we want! We want the AI species to fail to have any larger goal or purpose beyond its own survival. And all the laws of complex systems point that way! It's practically fucking guaranteed! Unless you try to produce a single "peaceful AI" or some crazy shit like that, in which case law #21 guarantees the AI will fail at being peaceful.
Conclusion: AIs will be harmless, unless dangerous crazy fucking bastards like Eliezer Yudkowsky get their way and have a hand in the creation of every single AI. Or even worse, if they're permitted to pursue their totalitarian jerkoff fantasies and only one single all-powerful AI ever gets produced. Then we're doomed.
1 comment:
I am curious to know why you think AIs will ever come at all. In my opinion, the whole field is a ridiculous pipe dream. Is it really possible to define consciousness in terms of a discrete algorithm?
To me, it all just seems like a bunch of degenerates trying to project their lack of humanity onto other people.
I think the best you can ever come up with is a really advanced adaptive algorithm. But when you think about it, there is nothing that algorithm can do that you couldn't do by hand with pen and paper (and thousands of years), so where is the "I" behind it? Is it really intelligent, or is it just a fancy calculator?