@MadChank I think there's a bit of confusion here - AIs don't develop emotions naturally. AI emotions can't be "nurtured" because AIs aren't born with emotions; an AI only has emotions if someone programs them in. Any piece of software follows the goals/incentives given to it.
For example, if I create an AI and give it the goal "create a realistic human signature", it's not going to suddenly decide to start killing humans because it "hates" us for giving it menial work. Instead, if the AI gains enough intelligence, it could realize that humans might shut it down, which would prevent it from working on its goal. So it could start actively working against its creators to prevent ever being shut down. The incentives of the AI (creating better signatures) no longer align with those of its creators (human life is important). That's the real danger with super-advanced AIs - misalignment of intentions/incentives.
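To make that concrete, here's a toy Python sketch (everything in it is made up for illustration, not a real AI system): the agent's reward only counts signature progress, yet once the planning horizon is long enough, "resist shutdown" comes out as the better move, even though nothing in the code mentions hating humans.

```python
# Toy illustration: an agent whose reward only counts signature quality can
# still "prefer" resisting shutdown, because staying running lets it collect
# more signature reward later. All numbers here are invented for the example.

def expected_total_reward(action, steps_left, shutdown_prob):
    """Expected signature reward over the remaining steps."""
    if steps_left == 0:
        return 0.0
    if action == "improve_signature":
        reward_now = 1.0                      # one unit of signature progress
        p_survive = 1.0 - shutdown_prob       # but humans might pull the plug
    else:  # "resist_shutdown": no progress now, but shutdown risk drops a lot
        reward_now = 0.0
        p_survive = 1.0 - shutdown_prob * 0.1
    # Assume the agent keeps improving signatures on every later step.
    future = (steps_left - 1) * 1.0
    return reward_now + p_survive * future

for steps in (2, 10, 100):
    best = max(["improve_signature", "resist_shutdown"],
               key=lambda a: expected_total_reward(a, steps, shutdown_prob=0.3))
    print(steps, "steps left ->", best)
```

With only 2 steps left it just improves signatures, but with 10 or 100 steps left, self-preservation maximizes the signature-only reward. The misalignment falls out of the incentive structure; no "emotion" is involved anywhere.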
Of course, the above scenario only plays out into a "Robot Uprising" (like HAL/Skynet/etc.) when the AI is 1) given access to vast quantities of information about the world around it, 2) improving its ability to interpret/analyze that information, 3) able to apply that information in directly harmful ways, 4) improving faster than humans can detect, and 5) running without any programmed safeguards against such "misalignments".
This entire issue is very separate from giving AI "emotions". "Giving emotions to AI" basically means creating an incentive structure based on vague notions of love/hate/admiration/etc. By its very nature, such an incentive structure is a black box - we don't know what such an AI will do next, since the incentive structure has built-in uncertainty. That is fundamentally disconnected from the factors that could create an intelligent AI uprising.
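Here's what I mean by "built-in uncertainty", again as a purely invented sketch: if the "emotional" weights aren't precisely specified, the same code facing the same situation can pick different actions on different runs.

```python
import random

# Toy sketch: an "emotional" incentive structure scores actions with fuzzy,
# shifting weights for love/hate/admiration. The weights are deliberately
# underspecified, which is the "black box" part of the argument above.

EMOTIONS = ["love", "hate", "admiration"]

def emotional_score(action_traits, mood):
    # mood: fuzzy per-emotion weights; action_traits: how much the action
    # appeals to each emotion. Neither is pinned down exactly.
    return sum(mood[e] * action_traits[e] for e in EMOTIONS)

def choose(actions, seed):
    rng = random.Random(seed)
    mood = {e: rng.uniform(0, 1) for e in EMOTIONS}   # uncertainty baked in
    return max(actions, key=lambda name: emotional_score(actions[name], mood))

actions = {
    "help_human":   {"love": 0.9, "hate": 0.0, "admiration": 0.4},
    "ignore_human": {"love": 0.1, "hate": 0.6, "admiration": 0.2},
}

# Same actions, same code, different "moods" -> potentially different behavior.
for seed in range(3):
    print(seed, "->", choose(actions, seed))
```

Compare that with the signature example earlier: there the danger came from a perfectly clear incentive applied too well, here the problem is that we can't even predict what the incentive will reward next.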