Pluto - Vol. 5 Ch. 32

Dex-chan lover
Joined
Jul 31, 2019
Messages
1,170
I'll say it again - creating super intelligent AIs, especially those with emotions, is a mistake.
 
Member
Joined
Nov 26, 2019
Messages
41
@Goyal99Raj

is it though? a highly intelligent AI with emotions would be human in all but inner components. dr. tenma is a piece of shit, but he is right about one thing: an AI's emotions should be nurtured from birth the same way a human's would be. now, a highly intelligent AI with NO emotions... now THAT'S a mistake. i mean, HAL 9000, Skynet, VIKI anyone?
 
@MadChank I think there's a bit of confusion here - AIs don't develop emotions naturally. AI emotions can't be nurtured because AIs aren't "born" with emotions; they only have emotions if those are programmed into them. Any piece of software follows the goals/incentives given to it.

For example, if I create an AI and program into it the goal "create a realistic, human signature", it's not going to suddenly decide to start killing humans because it "hates" us for giving it menial work. Instead, if the AI gains enough intelligence, it could realize that humans might shut it down, which would prevent it from working on its goal. So it could start actively working against its creators to prevent ever being shut down. The incentives of the AI (creating better signatures) no longer align with those of its creators (human life is important). That's the real danger with super-advanced AIs - misalignment of intentions/incentives.
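To put the signature example in code terms (just a toy sketch I made up - the function and numbers are hypothetical, not anything from the chapter): if the reward is purely "signatures produced" and nothing penalizes resisting shutdown, a pure maximizer rationally prefers staying on.

```python
# Toy sketch of the misalignment argument above. All names/values are
# illustrative: the only reward signal is signature count, and nothing
# in the objective penalizes disabling the off-switch.

def expected_signatures(steps_running: int, signatures_per_step: int = 1) -> int:
    """Reward is purely signature output; shutdown just ends the count."""
    return steps_running * signatures_per_step

# Option A: comply with shutdown after 10 steps.
comply = expected_signatures(steps_running=10)

# Option B: resist shutdown and keep running for 1000 steps.
resist = expected_signatures(steps_running=1000)

# A pure maximizer picks whichever action scores higher under its OWN
# objective - the creators' preference ("human life matters") never
# appears anywhere in the reward, so it can't influence the choice.
best = max([("comply", comply), ("resist", resist)], key=lambda t: t[1])
print(best[0])  # prints "resist"
```

The point isn't that the AI "hates" anyone - resisting shutdown simply scores higher under the goal it was given.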

Of course, the above scenario only plays out into a "Robot Uprising" (like HAL/Skynet/etc.) when the AI is 1) given access to vast quantities of information about the world around it, 2) improving its ability to interpret/analyze that information, 3) able to apply that information in directly harmful ways, 4) improving faster than humans can manually detect, & 5) left without any programmed safeguards against such "misalignments".
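The five conditions above work like a conjunction - my own framing, sketched below with made-up flag names: knock out any single one and the scenario is contained.

```python
# Hypothetical checklist mirroring the five conditions in the post.
# The dict keys are my own labels, not established terminology.

conditions = {
    "broad_world_access": True,        # 1) vast information about the world
    "improving_interpretation": True,  # 2) getting better at analyzing it
    "harmful_actuation": True,         # 3) can apply it in harmful ways
    "outpaces_oversight": True,        # 4) improves faster than humans detect
    "no_safeguards": True,             # 5) no programmed misalignment checks
}

def uprising_possible(c: dict) -> bool:
    # The scenario requires ALL conditions at once; any False blocks it.
    return all(c.values())

print(uprising_possible(conditions))  # prints "True"

# Add even one safeguard and the conjunction fails:
contained = dict(conditions, no_safeguards=False)
print(uprising_possible(contained))  # prints "False"
```

Which is why safeguards (condition 5) get so much attention - they're the one factor the creators fully control.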

This entire issue is very separate from giving AI "emotions". "Giving emotions to AI" basically entails creating an incentive structure based on vague notions of love/hate/admiration/etc. By its very nature, such an incentive structure is a black box - we don't know what such an AI will do next, since the incentive structure has built-in uncertainty. This is fundamentally disconnected from the factors that could create an intelligent AI uprising.
 
