Saturday, March 2, 2024

Where to Seek Artificial Consciousness

[1] As I use the term, software need not be expressed using a programming language. For example, a software specification (e.g. the C++ compiler specification) would itself constitute software, as would well-defined pseudo-code. This is a more inclusive usage of software than some others, so I wanted to note the difference. There is a reason for my usage: it highlights my argument’s applicability to all such software, and it does so without introducing distinctions that are, in this context, unimportant.

[2] For example, there are different compilation settings available when compiling C++ into assembly. That assembly is then translated into machine code by an assembler, which, as I noted previously, is represented by electrical signals.

There is a flip side to the fact that different software can be executed on the same hardware.¹ Different hardware can execute the same software (e.g. the same C++ program, or the same program specification). An LLM may run on specialized AI processors, on GPUs, or on CPUs alone (often slowly, in that last case).

The physical happenings will then vary greatly for the same software. For example, for specific tasks (like training or running an LLM), GPUs can perform far more parallel processing than CPUs. These differences in capability arise from physical differences in the hardware, e.g. GPUs have many more physical processing cores. Such differences then affect the physical distribution of electricity within a computer when it executes an LLM. If physical states constitute or cause consciousness, I would expect these differences in state to be reflected in differences of conscious experience.

This point can also be extended to apply even without hardware variation. Software is an abstraction over the electrical signals executed in a computer. Oftentimes, there are semantically equivalent but physically different electrical signals within the same computer, which result from different settings being used to convert a programming language into machine code.² Something similar arises if the same software is implemented using different programming languages.

Then, if different electrical events produce different conscious experiences, the same software can produce different experiences.

This point can be made clearer and more precise. Initially, consider some of Chalmers’s specific examples. To determine whether some software has a global workspace or recurrent information processing, it suffices to examine, for example, a C++ implementation of that software’s specification, alongside the C++ language specification (assuming the specification has been correctly implemented). More generally, consider any bundle of qualities whose presence could be ascertained by examining a C++ implementation.

If said qualities jointly determine conscious experiences, and they are present in a C++ implementation, they will likewise be present in every machine-code representation of that C++ code. Then, whether or not those different representations are, in a deep metaphysical sense, the same software, they are relevantly identical. That is, with respect to consciousness-relevant qualities, they are identical. So computers that execute them must have identical conscious experiences.

However, this is contrary to reality: the same C++ code executed on different hardware, or compiled with different settings, will plausibly produce different conscious experiences. Likewise for the same software implemented in different programming languages that abstract, to some extent, over assembly.

Here’s one possible requirement for consciousness: embodied, sensory perception. Without sensory perception, an entity cannot have conscious experiences of the world. Without a body, an entity cannot act in the world. To link embodiment to consciousness, one might suppose that consciousness is necessary because it facilitates effective interaction with the world, and one might suppose that all action involves interaction with the world.

Chalmers explicitly denies believing in such reasoning. However, he also argues that even if embodiment is necessary, it need not require physical embodiment. There can be agents that exist in virtual worlds. For example, he mentions DeepMind’s MIA (Multimodal Interactive Agent).

Moreover, he argues, virtual reality is no less real than physical reality, so virtual and physical reality are analogous. He defends this second claim at length in his book Reality+, of which I admit I have read only one chapter. Thank goodness that, for the sake of argument, I can concede this second claim!

However, in this context, there is an important difference between virtual and physical embodiment. In particular, on many computers, nothing in a CPU physically separates one software program from a different software program.

What this means will become clearer after I introduce some details about the workings of many computers and their operating systems.

Such computers can execute multiple programs, or processes, which are isolated from each other. Physically, this means that they use different portions of computer memory. On a computer with a single core (and nothing like hyperthreading), there will not be any parallel execution, the computer-science term for simultaneously executing processes.

Instead, when there are multiple processes, the O/S will periodically change which process is currently executing. This is a form of what Computer Scientists call concurrency.¹ In short, when the active process should change from A to B, A’s current state is saved, and B’s stored state is loaded. Then, B can resume from where it left off. This so-called context-switching is usually done by the O/S’s kernel.²

Consequently, the electrical signals encoding the instructions of programs A, B, and the kernel will be interwoven with each other. This is what I meant by saying that nothing needs to physically distinguish different programs within a CPU.

With that clarified, I can now explain its philosophical significance. Suppose that we have a computer with a single-core CPU, on which we execute some virtually embodied software. Keep in mind that all possible software can, in principle, be executed on a single-core CPU, though it may not be fast.³ That is, if something can be computed using parallel processing, it can be computed without such processing.

For the same reason, if software can execute using a graphics card, it can be rewritten in a semantically equivalent form that does not use a GPU. Things may be slow, and you may require more general-purpose memory to replace the GPU’s specialized memory (to avoid crashing). But it can be done with enough time and money.

Next, suppose we also execute some other software using that same CPU. Then, the virtually embodied software and this other software will be physically and temporally interwoven.

Now, suppose, as Chalmers would, that the virtually embodied software is the only conscious software. This makes sense if: (1) embodiment is required for consciousness, and (2) the other software is not embodied within a virtual world. Then, the computer system must alternate between consciousness and unconsciousness according to which program is currently executing.

Since physical happenings somehow cause or constitute consciousness, this alternation is surprising. The electrical representations of the two programs are contiguous with each other. Moreover, portions of the embodied software’s electrical signals are disconnected from one another. In themselves, these may not be problems. After all, there is spatial separation between every pair of atoms, and, in a different sense, the nervous system is connected with the rest of the human body.

However, in this context, they jointly imply the following: some parts of a continuous electrical flow, really a flow of electrons, are conscious (or else cause conscious experiences), while other parts of that flow are not, and virtual embodiment makes the difference.

This seems absurd. A more intuitively plausible possibility is that, if the system ever has conscious experiences, and it remains powered, it continues having experiences.

Even if the preceding is incorrect, investigating this would require emphasizing the physical happenings within a computer, not the software, which is consistent with my broader point. For more on this, refer back to my earlier comments on the relationship between intuitions and science.

Finally, notice that this argument never claims that virtual reality is less “real” than physical reality. Indeed, this argument works even if our supposedly physical reality is, instead, virtual.

If we’re in a virtual reality, little changes in my argument. Specifically, all that must be replaced is the usage of the term “virtual embodiment” to characterize only some embodiment. After all, in a simulation, all embodiment is virtual.

To distinguish between what I originally called virtual and physical embodiment, the term “virtual embodiment” could be replaced by “virtual-virtual embodiment,” and “physical embodiment” could be replaced by “virtual embodiment.”⁴ The argument can remain otherwise intact.
