09 January 2008

Assume A Can Opener

Let's assume that computer programs can reach a level of consciousness indistinguishable from human consciousness.

Then, clearly, computers could run multiple consciousnesses.

Those consciousnesses would receive all of their "sensory" inputs from their programming.

The number of consciousnesses that could be run at any one time would be purely a function of computing power.

Any civilization that could do this would do this.

Any civilization that could do this would be able to run social science experiments in which the computed consciousnesses were subjected to various initial states, which would require many massive multi-consciousness programs.

Necessarily, the software consciousnesses in such an experiment would not know that they are software.

With each instance of a massive multi-consciousness program, the ex ante likelihood that any given consciousness is "real" rather than software decreases arithmetically.
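The arithmetic behind that last step can be sketched with a toy calculation (this is an illustration, not part of the original argument; the function name and the population figures are invented):

```python
# Toy model: one pool of "real" consciousnesses, plus some number of
# simulation runs, each hosting its own batch of software consciousnesses.
# The chance that a consciousness picked uniformly at random is real
# falls toward zero as runs accumulate.

def p_real(real: int, runs: int, per_run: int) -> float:
    """Probability that a randomly chosen consciousness is real."""
    return real / (real + runs * per_run)

# e.g. 10 billion real minds, each simulation run hosting 10 billion more
print(p_real(10**10, 1, 10**10))  # 0.5
print(p_real(10**10, 9, 10**10))  # 0.1
```

With one run you are as likely software as real; with nine runs, nine times more likely software than real.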


aog said...

Further assumptions made that may not follow:

1) Computers could run multiple consciousnesses.

Unclear. It rather depends on the mechanism for creating consciousness. If it's a quantum process, as some theorize, then this may not be true.

2) That a sentient AI can be placed into an arbitrary initial state.

It may be that we could no more understand the interrelated structure of an AI's data than we can read a phenotype from a DNA sequence.

3) That sentient AIs in a simulation would be unable to detect that fact.

4) That social simulations do not suffer from the combinatorial communication problem.

It's very easy to simulate large numbers of objects. It's enormously harder to simulate their interactions. In the worst case, the number of interactions grows as the factorial of the number of objects, which is even faster than exponential growth. This would be a serious problem for any social experiment of the nature you're discussing.
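The scaling gap can be seen even in the mildest case, where objects interact only in pairs (a small illustration, not from the comment; richer interaction structures grow faster still):

```python
from itertools import combinations

def pairwise_interactions(n: int) -> int:
    """Number of distinct pairs among n objects: n*(n-1)/2."""
    return sum(1 for _ in combinations(range(n), 2))

# Objects scale linearly; even the simplest (pairwise) interaction
# count scales quadratically.
for n in (10, 100, 1000):
    print(n, pairwise_interactions(n))
# 10 45
# 100 4950
# 1000 499500
```

A hundredfold increase in objects means roughly a ten-thousandfold increase in pairwise interactions, before considering three-way or larger groupings.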

Bogus conclusion:

That there is a difference between a "real" consciousness and a software one.

This contradicts your initial assumption, which is that the two are indistinguishable.

David said...

By "real" I meant carbon based rather than silicon based.

Bret said...

I think I'm lost.

David, it sounds like you're saying (in your last sentence) that the more consciousnesses you compute, the less likely it is that one picked at random is biologically based? Is that it?

David said...

That's it.

aog said...


Gosh, I have waited years for that. Yow!

P.S. This sounds a lot like an article I read in New Scientist a few weeks back about disembodied consciousnesses.

David said...

Right neighborhood, wrong house.

As I've said before, I'm skeptical about the possibility of artificial consciousness. On the other hand, this is an interesting thought experiment that proves the possibility of G-d's existence.

aog said...

You're a materialist because you privilege one material (carbon) over another (silicon).

Duck said...

proves the possibility of G-d's existence

How do you prove a possibility? Isn't that an oxymoron?