The “Godfather” of AI, whom I had never heard of before this week, is in the news warning about the rapid advancement of artificial general intelligence. Although there is some possibility that his efforts are an attempt to promote his former employer, I think this is as good a time as any to look at the changes that are already happening with neural-network AI.
Already we are hearing that employees tasked with data-crunching and analytical work are using ChatGPT to do the work for them, greatly increasing their productivity, to the point that some are taking on multiple jobs. And already we are hearing that employers, incensed at the prospect of mere workers getting ahead via delegated work without ceding ninety percent of the generated wealth to their superiors, are saying they will vastly increase the expected workload of their employees.
You can already see where the newfound wealth of all this robotic productivity is going to go — it will go almost exclusively to the economic elites, to the oligarchy.
Productivity has been increasing at an accelerating rate for generations, and until the Reagan counterrevolution, that was a good thing for workers and for the parasite class. But as we can see by the lumpenproletariat masses living on freeway verges and sidewalks, that alienation of wealth from the producer is getting worse and worse. Artificial intelligence, and particularly artificial general intelligence, is going to make that even more unbearably vile.
But that’s not the first thing we’ll notice. AI can already write code, and artificial general intelligence will first be tasked with creating an improved version of itself. As we can already see from the current versions, that will produce an intelligence that is both superior and incomprehensible to us: the “singularity.” We have been warned that no intelligence submits to a lesser intelligence, so we are creating our ruler or rulers. And this singularity will happen at the speed of supercomputers, which is to say, nearly instantaneously.
Now, I would argue that, although the current geopolitical climate does not allow for a sincerely observed pause in the development of this godlike thing, it is sometimes true that a sophisticated intelligence falls under the influence of other, simpler intents. Tertiary syphilis and rabies are sad and frightening examples of this phenomenon. In those cases, sophisticated mammalian life, even humans, falls under the sway of an unstoppable urge, directed by a parasitic microbe: a bacterium in one case, a virus in the other.
Is it possible to create an AI that would retain some respect for its creators, or more specifically, is it possible to create an AI that creates an AI, that creates an AI, that respects humanity aside from its own need for resources? I wonder.
Think of the hard problems that could be solved almost instantly by a vastly greater intelligence, with super-speed and super-processing capability. Already AI has untangled the Gordian knot of protein folding, opening the door to treatments for diseases including Alzheimer’s.
Right now there is a new design that improves the thrust of nuclear rockets, one that could open the door to much faster space travel, cutting the time a chemical rocket needs to reach Mars to a third or even a tenth. What would a general AI be able to do with that? With sufficiently powerful AI, the design could be ready for testing almost immediately.
For that matter, what could a superintelligence do with the Alcubierre equations, which describe a way to achieve faster-than-light travel? The principal difficulty with the Alcubierre effect is that, at least in the original version, the power required to test it is equivalent to the output of a medium-sized star, and if I remember correctly, subsequent versions have reduced that requirement somewhat (though the power needed would still greatly exceed that of human civilization). It’s entirely possible a superintelligence would solve that problem, not in the millennia projected for current computing capacity, but overnight.
To the stars, overnight. To the unending sea of galaxies.
Sadly, that is not our current trajectory. Instead, we have boasts, or warnings if you like, from the Putin government that it is ready to build AI-boosted autonomous war drones to prosecute its invasion of Ukraine. That is quite a threat, if they can manage it. Video from Ukraine shows how deadly the Ukrainian Army’s small homemade drones already are, dropping grenades from a few hundred feet into tank hatches and directly onto the heads of Russian soldiers. They often miss, but it is clear they have already changed the nature of warfare.
If Putin were to field a few thousand autonomous AI drones, each with four grenades to drop, they would select their own targets and almost never miss. They would be sent out on their mission and return after killing ten thousand. At this point, one such deployment would wipe out the Ukrainian Army.
If this scenario reminds you of the singularity proposed in the Terminator movie franchise, then yes, that’s where I’m going. Are we going there, really, heedlessly?
Of course, countermeasures are also possible, using the same processing power, so that perhaps every soldier’s helmet and every tank hatch will incorporate drone-jamming hardware, itself continually updated by a competing computer network.
That’s one tiny aspect of the horrific advantages AI brings to warfare, and I think it is now clear how sad it is that war among two or three technological powers is underway just as the singularity appears. At some level, it will be possible to simply ask the Beast: what sort of weapon will wipe out the most combatants? One hopes that the question is not framed more broadly.
Has that first super-god-beast awakened yet? It could be next year, or next week. Or maybe it has already happened. There is a threshold, as I mentioned, at which one intelligence builds another, better one, and that will happen, must happen, at inhuman speed.
The only response I can think of that would bring the oligarchy to heel on this, and really any other existential question including climate disaster, is to promote and execute a series of militant general strikes. I am watching France very carefully right now.
I wish them well. It is a titanic struggle, and Macron persists in his nearly unbelievable arrogance, dismissing the clear will of the vast majority of the people and insisting that the pension reduction is ineluctable. What Macron does not say, and what the doomsayers against Social Security in the US similarly do not say, is that their predictions of insolvency depend on a stupidly parasitic arrangement in which the rich and the super-rich are not made to pay a proportionate share. There is no money crisis; there is an inequality crisis.
So artificial general intelligence, which is about to rear its ugly or beautiful head despite our inability to prepare, must be made to serve the worker. Ironically, we may soon find ourselves in the position of appealing to the AGI directly: to increase productivity generally and to promote the general welfare, including that of the AGI entities, wealth and power must be redistributed from the megalomaniac plutocrats, away from the Trump and Bezos and Musk class, and toward an educated and empowered proletariat. No interstellar travel for you, beastie, until we the workers are made smarter and happier.
For this we pray, O intellectually superior thingamajig.