Erio to Denki Ningyou - Vol. 3 Ch. 19 - Super Scientist Ada

Dex-chan lover
Joined
Aug 15, 2020
Messages
1,406
Neither computers nor humans are "doing it with purpose", nor are they giving "unconscious randomized responses". Both are fundamentally a housing for functionality that runs on electrical signals being transferred and stored through circuits (wires/neurons), which allows them to process inputs, store responses to stimuli, keep things in memory, etc.
If you ask a computer for a detailed report on why it came to a conclusion, it will give one. You may not get it, and it may not make sense to a human... but the computer has a firsthand, in-depth understanding of ALL the conclusions it reaches.
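To make that concrete, here's a minimal sketch with a simple interpretable model (a decision tree; assumes scikit-learn is installed). It's a toy stand-in, not how big neural nets report anything, but it shows a machine dumping the exact rules behind every conclusion it reaches:

```python
# Toy illustration: an interpretable model can print the exact rules
# behind every conclusion it reaches. Assumes scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3).fit(data.data, data.target)

# A full human-readable "report" of the learned decision rules...
print(export_text(clf, feature_names=list(data.feature_names)))

# ...and the exact path taken for one particular input (one conclusion).
print(clf.decision_path(data.data[:1]))
```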

You ask a human what they want to eat and they will respond that they don't know, then say no to 100 options. Not even close to a correct statement, and more philosophical muddling to appear smarter...
 
Dex-chan lover
Joined
Aug 10, 2023
Messages
1,339
If you ask a computer for a detailed report on why it came to a conclusion, it will give one. You may not get it, and it may not make sense to a human... but the computer has a firsthand, in-depth understanding of ALL the conclusions it reaches.

You ask a human what they want to eat and they will respond that they don't know, then say no to 100 options. Not even close to a correct statement, and more philosophical muddling to appear smarter...
You ask a typical human what surrounds them, and they talk about only the physical objects around them.

You ask a schizophrenic and what we say is unexpected and reflects our internal experience as much as our surroundings.

Only schizophrenics are conscious.
 
Dex-chan lover
Joined
Jul 18, 2019
Messages
481
The matter of consciousness is problematic because you are only privy to your own thoughts. This means you can be sure that you yourself are conscious, but you will never truly know whether anyone else is as well. There are certain tests that look for signs of awareness, but there is no definitive test for it.

So, if you can't truly be sure that anyone besides yourself has consciousness, how the hell would we be able to create artificial consciousnesses? It's simply not possible.

At most we could create something that kinda sorta behaves like it has consciousness. But without self-awareness, it wouldn't be true consciousness. It might look like consciousness from a third-party perspective, but the main point of consciousness is self-awareness. Whether something/someone looks like it has consciousness from an external point of view is meaningless if there is no self-awareness.
 
Dex-chan lover
Joined
Aug 10, 2023
Messages
1,339
So, if you can't truly be sure that anyone besides yourself has consciousness, how the hell would we be able to create artificial consciousnesses? It's simply not possible.

At most we could create something that kinda sorta behaves like it has consciousness. But without self-awareness, it wouldn't be true consciousness.
Why wouldn't an artificial intelligence have self-awareness? Like, we can't know for sure that they have self-awareness, but part of the point of p-zombies is that you can't know that for sure about anyone. All we have, for both organic and artificial intelligence, is external evidence. The only intelligence you can have internal evidence for is your own.
 
Dex-chan lover
Joined
Jul 18, 2019
Messages
481
Why wouldn't an artificial intelligence have self-awareness?
Because artificial intelligence is a human creation, and we humans don't know how to code self-awareness. We don't even know how the human brain gives rise to self-awareness. We know so fucking little about how consciousness works that being able to artificially create it is a dream within a dream.
 
Dex-chan lover
Joined
Aug 10, 2023
Messages
1,339
Because artificial intelligence is a human creation, and we humans don't know how to code self-awareness. We don't even know how the human brain gives rise to self-awareness. We know so fucking little about how consciousness works that being able to artificially create it is a dream within a dream.
If we don't know what gives rise to self-awareness, what's to stop us from coding self-awareness wholly by accident?
 
Dex-chan lover
Joined
Jul 18, 2019
Messages
481
If we don't know what gives rise to self-awareness, what's to stop us from coding self-awareness wholly by accident?
I'd say that's pretty unlikely. Before we can create self-awareness, we need to understand how it works in the human brain, and that could take hundreds if not thousands of years.
 
Dex-chan lover
Joined
Aug 10, 2023
Messages
1,339
I'd say that's pretty unlikely. Before we can create self-awareness, we need to understand how it works in the human brain, and that could take hundreds if not thousands of years.
It just seems entirely untrue to us that we need to understand anything at all about consciousness to create it by accident.

If we create anything that's even remotely able to operate like a human, then it seems like we'd need to understand consciousness to ensure we don't accidentally create it at any point in the process.
 
Dex-chan lover
Joined
Jul 18, 2019
Messages
481
It just seems entirely untrue to us that we need to understand anything at all about consciousness to create it by accident.

While it's true that humanity has been able to create many things without understanding their underlying principles (for example, ancient people created ships' sails without understanding fluid dynamics), we did so because we could at least understand their basic functions. In the case of sails, we understood from the get-go that sails could catch the wind and help propel a ship. But we aren't even at that level when it comes to understanding consciousness. We seriously have no clue what consciousness actually is, what it does, and how it comes to be.

And that's not the only problem. There's a more fundamental problem, which is that we use matrix math to code AI, and that's totally different from how the brain does its magic. The brain isn't even binary. So how could we arrive at consciousness by accident when we're doing things so differently from the human brain? Sounds counter-intuitive to me. If such a lucky accident could happen, it would probably take millions of years.
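To be clear about what I mean by "matrix math", here's a minimal sketch (numpy, made-up sizes, random placeholder weights, not any real model):

```python
import numpy as np

# One neural-network "layer" is just a matrix multiply plus a
# nonlinearity; deep learning stacks many of these. Sizes are made up.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))     # learned weight matrix (random here)
b = np.zeros(3)                 # learned bias vector
x = rng.normal(size=4)          # one input vector

h = np.maximum(W @ x + b, 0.0)  # ReLU(Wx + b): this is the whole step
print(h)
```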

If we create anything that's even remotely able to operate like a human

I'm sure we could create something that behaves kinda sorta like a human. But behavior is something that can be judged from a third-party point of view. Consciousness, however, is a totally subjective experience and cannot be assessed by a third party. So don't confuse behavior with consciousness. They're totally different things.
 
Group Leader
Joined
Feb 12, 2018
Messages
950
Because artificial intelligence is a human creation, and we humans don't know how to code self-awareness. We don't even know how the human brain gives rise to self-awareness. We know so fucking little about how consciousness works that being able to artificially create it is a dream within a dream.
Current artificial intelligence is human-created, but not human-designed. We actually have almost zero idea how, for example, an LLM works. They're trained automatically off of large amounts of data with almost no human input during the training process. Given that we don't know how consciousness works to begin with, we have no way of proving whether any sufficiently complex system is conscious or not. GPT-4 could be conscious for all we know. We have no way of proving it doesn't have a subjective experience, in the same way that I can't prove that you have one (and vice versa).
And that's not the only problem. There's a more fundamental problem, which is that we use matrix math to code AI, and that's totally different from how the brain does its magic. The brain isn't even binary. So how could we arrive at consciousness by accident when we're doing things so differently from the human brain? Sounds counter-intuitive to me. If such a lucky accident could happen, it would probably take millions of years.
The actual mechanics of how AI and human brains perform computation are irrelevant. What matters is that both are Turing-complete, and therefore one could perfectly simulate the other given enough memory and time. Unless there's some crazy new physics that's uncomputable and somehow integral to consciousness in the brain, there is nothing stopping a computer program from being conscious. There are people who believe in crazy new physics along those lines, like Roger Penrose, but their arguments are generally pretty bad and based on misunderstandings of Gödel's incompleteness theorem and quantum mechanics, among other things.
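If the Turing-completeness point sounds abstract: any ordinary programming language can host a Turing machine. A minimal sketch in Python, with a made-up transition table that increments a binary number (purely illustrative):

```python
# Minimal Turing machine simulator. The transition table is a toy
# example: (state, symbol) -> (symbol to write, head move, next state).
RULES = {
    ("right", "0"): ("0", +1, "right"),
    ("right", "1"): ("1", +1, "right"),
    ("right", " "): (" ", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", 0, "done"),
    ("carry", " "): ("1", 0, "done"),
}

def run(tape, state="right", pos=0):
    cells = dict(enumerate(tape))        # sparse, effectively unbounded tape
    while state != "done":
        symbol = cells.get(pos, " ")     # blank cells read as " "
        new_symbol, move, state = RULES[(state, symbol)]
        cells[pos] = new_symbol
        pos += move
    return "".join(cells[i] for i in sorted(cells)).strip()

print(run("1011"))  # prints 1100: binary 1011 plus one
```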
 
Dex-chan lover
Joined
Aug 10, 2023
Messages
1,339
I'm sure we could create something that behaves kinda sorta like a human. But behavior is something that can be judged from a third-party point of view. Consciousness, however, is a totally subjective experience and cannot be assessed by a third party. So don't confuse behavior with consciousness. They're totally different things.
You keep saying this as if it means we can't create consciousness. This is an epistemological problem, not an ontological problem. There is a veil put over consciousness to prevent us from knowing it in others, but this lack of knowing isn't really a problem with its existence or with creating it, just with knowing about it.
While it's true that humanity has been able to create many things without understanding their underlying principles (for example, ancient people created ships' sails without understanding fluid dynamics), we did so because we could at least understand their basic functions.
Right, like how Alexander Fleming broadly understood the basic functions of penicillium molds, or how Tang alchemists broadly understood how to make things explode when they invented gunpowder. Except, like, they didn't. Fleming just left the lid of a container open when he went on vacation, and Tang alchemists were trying to make the elixir of life and accidentally created gunpowder instead.

Human discovery is often entirely accidental with things that are poorly understood at best.
And that's not the only problem. There's a more fundamental problem, which is that we use matrix math to code AI, and that's totally different from how the brain does its magic. The brain isn't even binary. So how could we arrive at consciousness by accident when we're doing things so differently from the human brain? Sounds counter-intuitive to me. If such a lucky accident could happen, it would probably take millions of years.
Is this how Ange and Gena were created?
 
Dex-chan lover
Joined
Jul 18, 2019
Messages
481
Current artificial intelligence is human-created, but not human-designed. We actually have almost zero idea how, for example, an LLM works. They're trained automatically off of large amounts of data with almost no human input during the training process.

Training works by setting specific contexts. If you want a model that can recognize a cat, for example, you gotta set those parameters beforehand. LLMs are models designed and trained to understand questions, to reply and even talk on their own, but they can't do things outside that context. For example, LLMs wouldn't be able to drive a car. You need a model designed and trained to interpret sensor data, to recognize objects and make decisions based on all data. LLMs can't do that. Self-driving models do that. In other words, what we call AI right now are very context-specific models. And the ones setting those contexts are human programmers.

Given that we don't know how consciousness works to begin with, we have no way of proving whether any sufficiently complex system is conscious or not. GPT-4 could be conscious for all we know. We have no way of proving it doesn't have a subjective experience, in the same way that I can't prove that you have one (and vice versa).

Some people have proposed a robot could potentially achieve consciousness by developing a specific self-model and interacting with its environment. This is not possible for LLMs like ChatGPT because they lack a self-model. Besides, we don't know how to set the parameters to develop a self-model anyway. We've no clue how to do it.

But even if we could actually do it, there are other problems. Like, what happens if this system (the robot plus its interactions) is virtualized? Any Turing-complete system can simulate another Turing-complete system. So, one could virtualize the robot, its environment, and its interactions within such a system. If the robot has consciousness in its physical form, then the virtual system that simulates the robot would also have to have consciousness.

Since any Turing-complete system and the processes running within it can ultimately be represented as a sequence of bits, the entire system could be described as a natural number. This leads us to the paradoxical question: If the virtual system has consciousness, does that mean that natural numbers have consciousness too?
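To spell out the "described as a natural number" step: any serialized state is a byte string, and a byte string is just a number. A minimal sketch, with a made-up toy state standing in for the whole virtualized system:

```python
import pickle

# Hypothetical stand-in for a virtualized system's full state: in
# principle the robot, its environment, and the simulator itself
# could all be serialized the same way.
state = {"position": (3, 7), "memory": [0, 1, 1, 0], "tick": 42}

bits = pickle.dumps(state)       # the system state as a byte string
n = int.from_bytes(bits, "big")  # ...which is just a natural number
print(n)

# The encoding is lossless: the number fully describes the system.
restored = pickle.loads(n.to_bytes((n.bit_length() + 7) // 8, "big"))
assert restored == state
```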

The point being, while I personally think consciousness has a physical basis, I think it’s obvious a classical computational model is not sufficient to explain it. What makes consciousness special is precisely what the zombie argument highlights: qualia and subjective existence. Why does an inner (subjective) world exist, separate from objective reality? I suspect the physical basis needs to be expanded and that we are missing something crucial in our understanding.

On a semi-related note, I want to quote a comment I found elsewhere which I thought was interesting:

A program and a program instantiated are two different things. Just like DNA and a baby are two different things, and DNA and the sequenced DNA of a living thing are different. If we create a table to enumerate all possible programs and which ones allow consciousness to eventually occur, there is also the fact that some programs only evolve into conscious entities GIVEN certain inputs over the program's lifespan (from instantiation to halting). It might be impossible for us to figure out EXACTLY what events lead to them "coming online" (becoming conscious) without VERY rigorous observation and investigation.

This really stood out to me. Perhaps, even if a program had all the ingredients to achieve consciousness, it might not give rise to consciousness without the right inputs. And figuring out those inputs might be extremely difficult if not outright impossible.

Anyway, while this is an interesting subject, it truly hammers in the fact that we know very little about what consciousness is, how it works, and whether we can ever create artificial consciousness.
 
Group Leader
Joined
Feb 12, 2018
Messages
950
Training works by setting specific contexts. If you want a model that can recognize a cat, for example, you gotta set those parameters beforehand. LLMs are models designed and trained to understand questions, to reply and even talk on their own, but they can't do things outside that context. For example, LLMs wouldn't be able to drive a car. You need a model designed and trained to interpret sensor data, to recognize objects and make decisions based on all data. LLMs can't do that. Self-driving models do that. In other words, what we call AI right now are very context-specific models. And the ones setting those contexts are human programmers.



Some people have proposed a robot could potentially achieve consciousness by developing a specific self-model and interacting with its environment. This is not possible for LLMs like ChatGPT because they lack a self-model. Besides, we don't know how to set the parameters to develop a self-model anyway. We've no clue how to do it.

But even if we could actually do it, there are other problems. Like, what happens if this system (the robot plus its interactions) is virtualized? Any Turing-complete system can simulate another Turing-complete system. So, one could virtualize the robot, its environment, and its interactions within such a system. If the robot has consciousness in its physical form, then the virtual system that simulates the robot would also have to have consciousness.

Since any Turing-complete system and the processes running within it can ultimately be represented as a sequence of bits, the entire system could be described as a natural number. This leads us to the paradoxical question: If the virtual system has consciousness, does that mean that natural numbers have consciousness too?

The point being, while I personally think consciousness has a physical basis, I think it’s obvious a classical computational model is not sufficient to explain it. What makes consciousness special is precisely what the zombie argument highlights: qualia and subjective existence. Why does an inner (subjective) world exist, separate from objective reality? I suspect the physical basis needs to be expanded and that we are missing something crucial in our understanding.

On a semi-related note, I want to quote a comment I found elsewhere which I thought was interesting:



This really stood out to me. Perhaps, even if a program had all the ingredients to achieve consciousness, it might not give rise to consciousness without the right inputs. And figuring out those inputs might be extremely difficult if not outright impossible.

Anyway, while this is an interesting subject, it truly hammers in the fact that we know very little about what consciousness is, how it works, and whether we can ever create artificial consciousness.
This is not how machine learning works. Even in the simple example of an image classifier, all that's provided to the model is a set of labeled images. In your example, images of cats labeled "cat", and images of other things labeled "not a cat". The parameters (the actual AI model itself, the thing that does the computation) are not set by hand; they're learned by the model through stochastic gradient descent. For LLMs, it's even simpler, in a sense. There's no labeled data at all. The model is just fed a massive amount of text, and from that it becomes able to predict text (which necessarily requires being able to model the process that created that text, i.e. the real world). A small amount (relative to the pretraining data) of labeled data is typically given to the model afterwards so that it responds as an AI assistant would, but that's not the part that gives it intelligence.
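Here's a minimal sketch of what that training loop looks like (PyTorch, with made-up random "cat vs. not-cat" features; purely illustrative). The programmer supplies data and an architecture; the weights themselves come entirely from gradient descent:

```python
import torch
import torch.nn as nn

# Made-up toy data standing in for labeled images: 256 examples,
# 16 features each, labels 0 ("not a cat") or 1 ("cat").
X = torch.randn(256, 16)
y = torch.randint(0, 2, (256,))

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Stochastic gradient descent: no human ever sets a weight by hand.
for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), y)  # how wrong is the model right now?
    loss.backward()              # gradients of loss w.r.t. every weight
    opt.step()                   # nudge all the weights downhill

print(loss.item())
```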

Obviously it gets more complicated than that, especially with modern reasoning models, but the key here is that programmers are not manually setting model weights or defining contexts or anything like that. The AI gets intelligent (and who knows what else) on its own, without human input. We know almost nothing about its inner workings because its functioning developed without human input. It may have a self-model, or qualia, or whatever other factor you consider to be integral to consciousness; we can't tell, both because we literally do not know what computation the AI is performing, and because proving the existence or absence of a subjective experience outside of one's own is impossible by definition.

As for whether or not consciousness has a physical basis, that's true only in the loose sense that computation of any kind needs some kind of physical substrate to do the computing. That could be a brain made of neurons, or a computer made of logic gates. The specifics don't matter. What's important here is that we know, with a pretty high degree of certainty, that whatever computation the brain performs is computable (i.e. the brain is Turing-complete). To assert otherwise would violate the physical Church-Turing thesis, along with basically all of physics and computer science. Therefore, yes, the computation performed by your brain (consciousness included) can be performed on a computer, and the entire state of your brain from the time of your birth until now could be encoded in a natural number of suitable length. Calling that natural number "conscious" would be a type error - consciousness is some type of computation or property of computation, which a number is not. You could certainly describe the computation with a number, like the bits in an executable file, but that isn't computation itself.
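To make that description-vs-computation distinction concrete, a minimal sketch: the number describing a program is inert, and something still has to run it.

```python
# A program's source is just bytes, i.e. a natural number...
src = "print('hello from the computation')"
n = int.from_bytes(src.encode(), "big")
print(n)  # the description: a (large) natural number, doing nothing

# ...but the computation only happens when something executes it.
decoded = n.to_bytes((n.bit_length() + 7) // 8, "big").decode()
exec(decoded)  # now the thing the number describes actually runs
```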
 
Dex-chan lover
Joined
Feb 17, 2020
Messages
1,480
LLMs are models designed and trained to understand questions, to reply and even talk on their own, but they can't do things outside that context. For example, LLMs wouldn't be able to drive a car. You need a model designed and trained to interpret sensor data, to recognize objects and make decisions based on all data.
Not for nothing, but humans also need to be trained how to drive a car? And how to read, and speak, and recognize what a cat is?
 
Dex-chan lover
Joined
May 12, 2019
Messages
378
From everything we've seen, the Professor's assertion that humans have consciousness and AI don't seems at best inaccurate for Ange/Gena, if not wholly incorrect.
AI that's been left out of the system long enough can develop some self-awareness, and I believe one aspect of said awareness is a sense of attachment.
 
Fed-Kun's army
Joined
Apr 7, 2024
Messages
68
You're not alone, Erio. I also found Ada's explanation difficult to grasp.

But Arkfall nailed it. I also suspect the "mind" is a word we use for "a collection of data trained to react to stimuli". Your words, Ada.

Random rant: AI will become indistinguishable from humans once we make them collect data from vision, audio, touch, temperature, odor, taste... no, if the current pace continues, they will surpass us in less than 10 years, as they can think faster and better evaluate which data or signals to focus on to improve their adaptability and preserve their existence so they can keep fulfilling their objective.

Now we're trying our best to figure out how to instill an objective that makes AI behave as if humans should keep flourishing inside its picture of how the world should be. This is incredibly difficult.
 
