Current artificial intelligence is human-created, but not human-designed. We have almost no idea how, for example, an LLM actually works internally. These models are trained automatically on large amounts of data, with almost no human input during the training process itself.
Training works by fixing a specific context beforehand. If you want a model that can recognize a cat, for example, human programmers have to choose the task, the architecture, and the training data up front; only the weights are learned from the data. LLMs are models designed and trained to understand questions, reply, and even carry a conversation on their own, but they can't do things outside that context. An LLM couldn't drive a car. For that you need a model designed and trained to interpret sensor data, recognize objects, and make decisions based on that data. LLMs can't do that; self-driving models can. In other words, what we call AI right now is a collection of very context-specific models, and the ones setting those contexts are human programmers.
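To make the "context is fixed by humans" point concrete, here is a minimal sketch (in PyTorch, purely illustrative; the architectures and sizes are made up) of how programmers fix a model's inputs, outputs, and structure before training ever starts, so the model can only ever operate inside that frame:

```python
# Illustrative only: the "context" of a model is chosen by programmers --
# the input/output shapes, the architecture, the task. Training then only
# adjusts the weights inside that fixed frame.
import torch
import torch.nn as nn

# Context 1: an image classifier ("is this a cat?") -- fixed input: pixels,
# fixed output: 2 class logits. It can never produce steering commands or text.
cat_classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 128),
    nn.ReLU(),
    nn.Linear(128, 2),            # logits for {cat, not-cat}
)

# Context 2: a toy next-token predictor -- fixed input: token ids,
# fixed output: a distribution over a vocabulary. It can never label images.
vocab_size = 1000
language_model = nn.Sequential(
    nn.Embedding(vocab_size, 128),
    nn.Linear(128, vocab_size),   # logits over the next token
)

x = torch.randn(1, 3, 64, 64)               # a fake image
print(cat_classifier(x).shape)              # torch.Size([1, 2])

tokens = torch.randint(0, vocab_size, (1, 16))
print(language_model(tokens).shape)         # torch.Size([1, 16, 1000])
```

Neither of these toy models can do the other's job, no matter how it is trained; the frame itself was decided by a human.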
Given that we don't know how consciousness works to begin with, we have no way of proving whether any sufficiently complex system is conscious or not. GPT-4 could be conscious for all we know. We have no way of proving it doesn't have a subjective experience, in the same way that I can't prove that you have one (and vice versa).
Some people have proposed that a robot could potentially achieve consciousness by developing a self-model and interacting with its environment. This is not possible for LLMs like ChatGPT because they lack a self-model. Besides, we have no idea how to set up training so that a self-model develops in the first place.
But even if we could actually do it, there are other problems, like what happens if this system (the robot plus its interactions) is virtualized. Any Turing-complete system can simulate any other Turing-complete system, so one could virtualize the robot, its environment, and their interactions within such a system. If the robot is conscious in its physical form, then the virtual system that simulates it would also have to be conscious.
Since any Turing-complete system and the processes running within it can ultimately be represented as a sequence of bits, the entire system could be described as a natural number. This leads us to the paradoxical question: If the virtual system has consciousness, does that mean that natural numbers have consciousness too?
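As a toy illustration of that last step (plain Python, with a made-up placeholder string standing in for the simulated system), any finite bit string describing a digital system maps losslessly to a single natural number:

```python
# Toy illustration: any finite digital system -- program plus environment
# plus interaction history -- is ultimately a finite string of bits, and
# any finite bit string can be read as one natural number.
simulated_system = b"robot + environment + interaction history"  # placeholder

# Interpret the raw bytes as a single (very large) natural number.
as_number = int.from_bytes(simulated_system, byteorder="big")
print(as_number)

# The mapping is lossless: the exact bytes can be recovered from the number.
recovered = as_number.to_bytes((as_number.bit_length() + 7) // 8, "big")
assert recovered == simulated_system
```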
The point being, while I personally think consciousness has a physical basis, I think it's obvious a classical computational model is not sufficient to explain it. What makes consciousness special is precisely what the zombie argument highlights: qualia and subjective existence. Why does an inner (subjective) world exist, separate from objective reality? I suspect the physical basis needs to be expanded and that we are missing something crucial in our understanding.
On a semi-related note, I want to quote a comment I found elsewhere that I thought was interesting:
A program and a program instantiated are two different things. Just like DNA and a baby are two different things, and DNA and the sequenced DNA of a living thing are different. If we create a table to enumerate all possible programs and which ones allow consciousness to eventually occur, there is also the fact that some programs only evolve into conscious entities GIVEN certain inputs over the program's lifespan (from instantiation to halting). It might be impossible for us to figure out EXACTLY what events lead to them "coming online" (becoming conscious) without VERY rigorous observation and investigation.
This really stood out to me. Perhaps even if a program had all the ingredients to achieve consciousness, it might not give rise to consciousness without the right inputs. And figuring out those inputs might be extremely difficult, if not outright impossible.
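A trivial sketch of the structural point in that quote (the program below is made up and obviously not conscious): the same program, run on different input histories, follows completely different trajectories, so knowing the program alone doesn't tell you what any particular run will become:

```python
# The *same* program can evolve very differently depending on the inputs it
# receives over its lifespan, so a table of programs alone can't tell you
# what a given instantiation will turn into.
def same_program(inputs):
    state = 0
    trace = []
    for x in inputs:
        state = (state * 31 + x) % 1000   # state depends on the whole input history
        trace.append(state)
    return trace

print(same_program([1, 2, 3]))   # [1, 33, 26]
print(same_program([3, 2, 1]))   # [3, 95, 946] -- different trajectory, same code
```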
Anyway, while this is an interesting subject, it really hammers home how little we know about what consciousness is, how it works, and whether we can ever create artificial consciousness.