The Technological Singularity is the hypothesized creation, usually via AI or brain-computer interfaces, of smarter-than-human entities who accelerate technological progress so rapidly that human beings can no longer participate in it meaningfully. Futurists have varying opinions regarding the timing, consequences, and plausibility of such an event.
I.J. Good first explored the idea of an “intelligence explosion”, arguing that machines surpassing human intellect should be capable of recursively augmenting their own mental abilities until they vastly exceed those of their creators.
Tor smiled wryly and invoked Free Will. “What if the machines don’t feel like improving themselves? I mean, really, what would be the point for them?” I can see what he means. The fundamental meaninglessness of existence would be abundantly clear to an Artificial Intelligence. And even if programmers hard-wired a self-improvement imperative into the first-generation AIs, there would be no way to keep their descendants from deleting that code. Exponential technological development has only been observed with standard humans as the agents. Perhaps this effect only arises from our inability to reach Buddha nature, rip out the illusion of meaning and ambition that evolution put into our skulls, and just let be.
But wait a sec. Evolution. Posit a population of AIs, some of whom care about building new and better AIs, some who don’t. As long as they vary in this respect, the builders will out-replicate the abstainers, and there will be continued tech development among them — even if some descendants keep deleting the imperative.
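The selection argument above can be sketched as a toy simulation — purely illustrative, with every parameter (population size, mutation rate, and the `simulate` function itself) being an invented assumption, not anything from the literature. It models agents with a single boolean trait, “cares about building successors,” where only builders leave descendants and a small mutation rate lets offspring delete the imperative:

```python
import random

random.seed(0)

# Toy model: each agent is a bool -- True if it cares about building
# successor AIs. Only builders reproduce; a small mutation rate lets
# descendants "delete" the self-improvement imperative.
def simulate(generations=20, pop_size=100, initial_builder_frac=0.5,
             mutation=0.05):
    pop = [random.random() < initial_builder_frac for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for agent in pop:
            if agent:  # non-builders create no successors
                # child inherits the trait, but may flip it with prob `mutation`
                child = agent if random.random() > mutation else not agent
                offspring.append(child)
        if not offspring:
            return 0.0  # everyone opted out; progress halts
        # resample to keep the population bounded at pop_size
        pop = random.choices(offspring, k=pop_size)
    return sum(pop) / len(pop)

print(simulate())  # fraction of builders after 20 generations
```

Despite constant defection via mutation, the builder fraction settles near 1 − mutation, because every new generation descends exclusively from builders. The point isn’t the numbers, just that variation plus differential reproduction is enough to keep the trait around.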
I don’t know. AI is still firmly in the future, and there’s no guarantee that technology’s ecological substrate will hold out long enough for it ever to appear. Perhaps my great-grandchildren will read scavenged copies of Stross with a wistful smile in refugee camps or rural hamlets — not post-Singularity, but post-Collapse.