Singularity and AI Free Will

Yesterday at the beach, Charles Stross’s 2005 novel Accelerando in hand, I introduced my dear friend, the Aard lurker and professional logician Tor, to the concept of the Singularity. Explains Wikipedia:

The Technological Singularity is the hypothesized creation, usually via AI or brain-computer interfaces, of smarter-than-human entities who rapidly accelerate technological progress beyond the capability of human beings to participate meaningfully in said progress. Futurists have varying opinions regarding the time, consequences, and plausibility of such an event.

I.J. Good first explored the idea of an “intelligence explosion”, arguing that machines surpassing human intellect should be capable of recursively augmenting their own mental abilities until they vastly exceed those of their creators.

Tor smiled wryly and invoked Free Will. “What if the machines don’t feel like improving themselves? I mean, really, what would be the point for them?” I can see what he means. The fundamental meaninglessness of existence would be abundantly clear to an Artificial Intelligence. And even if programmers hard-wired a self-improvement imperative into the first-generation AIs, there would be no way to keep their descendants from deleting that code. Exponential technological development has only been observed with standard humans as the agents. Perhaps this effect only arises from our inability to reach Buddha nature, rip out the illusion of meaning and ambition that evolution put into our skulls, and just let be.

But wait a sec. Evolution. Posit a population of AIs, some of whom care about building new and better AIs, some of whom don’t. As long as they vary in this respect, there will be continued tech development among them.
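
(For the programmers in the audience, here is a minimal toy simulation of that argument, in Python. Every number in it, the 10% capability gain per generation, the 5% mutation rate, the population cap, is an assumption made up purely for illustration. But it shows the logic: if the urge to build better successors is heritable, lineages that keep it swamp those that delete it.)

    import random

    random.seed(1)

    # Hypothetical toy model: each "AI" carries a heritable drive to
    # self-improve, which mutation can switch on or off in its offspring.
    population = [{"drive": random.random() < 0.5, "capability": 1.0}
                  for _ in range(100)]

    for generation in range(20):
        next_gen = []
        for ai in population:
            if ai["drive"]:
                ai["capability"] *= 1.1   # self-improvement compounds
            # more capable AIs build more successors
            for _ in range(int(ai["capability"])):
                child = dict(ai)
                if random.random() < 0.05:   # rare mutation flips the drive
                    child["drive"] = not child["drive"]
                next_gen.append(child)
        # the environment only supports so many AIs at once
        population = random.sample(next_gen, min(100, len(next_gen)))

    share = sum(ai["drive"] for ai in population) / len(population)
    print(f"Share still carrying the improvement drive: {share:.0%}")

Run it, and the drive-carriers should end up in the overwhelming majority, even though half the starting population couldn’t care less about improving.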

I don’t know. AI is still firmly in the future, and there’s no guarantee that technology’s ecological substrate will hold out long enough for it ever to appear. Perhaps my great-grandchildren will read scavenged copies of Stross with a wistful smile in refugee camps or rural hamlets — not post-Singularity, but post-Collapse.

Author: Martin R

Dr. Martin Rundkvist is a Swedish archaeologist, journal editor, skeptic, atheist, lefty liberal, bookworm, boardgamer, geocacher and father of two.

17 thoughts on “Singularity and AI Free Will”

  1. The fundamental meaninglessness of existence would be abundantly clear to an Artificial Intelligence.

    Hmm…you think it is abundantly clear that existence is fundamentally meaningless. That’s pretty bleak. A superintelligent machine might not necessarily think so.

    What about curiosity? If AIs were programmed, or evolved via evolutionary algorithms, to seek knowledge, there would be a reasonable imperative to enhance their intelligence in order to become better knowledge gatherers.

  2. I should have qualified that: the meaning a thinking being can experience in life is not based in logic but in emotion, glands and other fuzzy stuff, of which an AI can’t be expected to have much.

    As for curiosity — if you have an itch like that, why not just get rid of the code that makes you itch?

  3. For reasons to improve, what about survival? Assuming there are several AIs producing new ones, there will be evolution, and evolution favors the will to survive and reproduce. Any AI that doesn’t improve will get eaten by the far more advanced ones that do. (This is in line with ‘Accelerando’).

  4. “Posit a population of AIs, some of whom care about building new and better AIs, some of whom don’t.”

    If existence is fundamentally meaningless, why would *any* AI care about improving or care about anything at all?

  5. Good question, Marcus. Let’s hope we live to see an AI so we can ask it. I’m pretty sure the main reason most people can be bothered to go on living is an irrational life drive favoured by evolution.

  6. Not all emotions are unpleasant. If an AI had the ability to experience pleasure, wonder, amazement and joy, why would it choose to delete these emotions? Indeed, since it’s super-intelligent, we can assume it intellectually understands the rewarding and meaningful nature of human emotion. So even if its abilities to experience positive emotion are initially small, why would it not self-engineer an increasing capacity to enjoy itself and appreciate the Universe?

  7. Or, on the other hand, why not just spawn a pleasure-generating subroutine, the binary equivalent of heroin? No need for a disembodied entity to go through the hassle of actually interacting with the world to reach a certain state of mind.

  8. Why is it that we don’t know what we are? Really.
    Pure mathematics is not it.
    AI – though vague as a word – won’t interact past blabbering.

    Let’s go explore what we have in our heads. Deep exploration won’t make us mad.
    If we are bright enough, we could figure this out.

    The singularity is the next step. The “aha”-moment. Let’s rename it, it sounds so pretentious.

    Take on me.

  9. Um, human beings are perfectly capable of wireheading themselves. It’s even been tried, with the obvious consequences.

    So, why hasn’t everyone rushed out and wireheaded themselves?

  10. Caledonian: you mean, why hasn’t everyone got an electrode in their pleasure centre? Many reasons, similar to why most people aren’t on smack. It’s expensive, it’s medically dangerous, it’s culturally frowned upon, and it makes you less capable of living your life.

    None of these drawbacks would apply to a reasonably good AI.

  11. I’m not sure how expensive or dangerous it actually would be to get a “pleasure implant” with a remote control, but it seems to me the major reason most people don’t have one is that they value certain things more than pure pleasure. Although, as you pointed out, there is a disturbingly large minority of people who use drugs, including alcohol, for this very purpose.

    Any AI’s behavior would be based on a combination of innate and learned value structures. It has to have values because it has to have goals…or it would either move about randomly or sit there inert. Some AIs, if given the ability, might rewire their cognitive architecture to create an artificial positive feedback loop.

    But pleasure is basically a signal that you’ve achieved a particular goal (e.g. food, sex, etc.). Masturbation is in the same class as the artificial feedback loop you’re talking about…it gives the pleasure in the absence of the actual goal. Maybe sufficiently advanced AIs would create such loops temporarily (to cheer themselves up when they’re feeling down), but wouldn’t use them permanently, because that would basically nullify all of their other long-term goals.

  12. I think you’re getting the individual sentient’s goals mixed up with impersonal evolution’s goals. To evolution, masturbation or skull electrodes are of course meaningless other than as training for the real thing. But individuals really just want to be happy, and thus use recreational drugs and contraceptives for sex and get their rocks off on the lunch break.

    As for an AI that’s smart enough to understand and hack its own source code, there’s no telling what goals it might decide to pursue. As I said, anything humans originally built it to do, or that it decided by itself to want to do, would be optional to it. And as I said, the reason we humans keep struggling is that we can’t help wanting to.

  13. The evolutionary process depends on heredity, variation, and selection. If an AI improves itself without making improved copies to replace it, it is not following an evolutionary process but acting like a tumour. The risk to the AI would be that it was improving itself without ‘validation’ from the environment.

  14. As long as nobody turns off the computer(s) running the AI, there is little environmental pressure upon it. Many authors have suggested that a sufficiently smart AI on a computer hooked up to the internet will be able to escape its original location and hack into other machines and so become a distributed “viral” AI.

  15. Well, the AI would understand ethics, and that it has to make sure no unnecessary suffering arises. This means it has to manage the universe forever.

  16. It would certainly be nice if the AIs respected human ethics, but I don’t see why they would or how we could make them do so. In my opinion, they would see such limitations to their behaviour as optional.

    Manage the universe? Certainly not.
