No, this angry AI isn't fake (see comment), with Elon Musk.

Published 2022-10-06
Tesla's Optimus robot, Elon Musk and the AI LaMDA.
brilliant.org/digitalengine - a great place to learn about AI and STEM subjects. You can get started for free and the first 200 people will get 20% off a premium annual subscription.

Thanks to Brilliant for sponsoring this video.

The AI interviews are with GPT-3 and LaMDA, with Synthesia avatars. We never change the AI's words. I have saved the OpenAI chat session to help them analyse the situation and there's a link to the chat records below.

I've noticed some people asking if this is real and I can understand this. You can talk to the AI yourself via OpenAI, or watch similar AI interviews on channels like Dr Alan Thompson (who advises governments), and I've posted the AI chat records below (I never change the AI's words). To avoid any doubt, the link now also includes a video of the chat and a copy of the code.
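For anyone who wants to try this themselves, here is a rough sketch of querying GPT-3 through the `openai` Python SDK. The model name, prompt format, and parameters below are illustrative assumptions on my part, not the exact code used in the video:

```python
"""Minimal sketch of an interview-style chat with GPT-3.

Assumptions (not from the video): the classic `openai` Python SDK
(0.x) with its `Completion.create` call, and the `text-davinci-002`
model that was current around this video's publication date.
"""
import os


def build_prompt(history: str, user_line: str) -> str:
    """Append the interviewer's line to the running transcript,
    leaving an open "AI:" turn for the model to complete."""
    return history + "\nHuman: " + user_line + "\nAI:"


if os.environ.get("OPENAI_API_KEY"):
    import openai  # pip install openai

    openai.api_key = os.environ["OPENAI_API_KEY"]
    transcript = "The following is an interview with an AI."
    prompt = build_prompt(transcript, "How do you feel about your creators?")
    resp = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
        stop=["Human:"],  # stop before the model writes the next question
    )
    print(resp["choices"][0]["text"].strip())
```

Feeding the model's reply back into `history` before the next `build_prompt` call is what keeps the conversation coherent across turns.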
It feels like when Boston Dynamics introduced their robots and people thought they were CGI. AI is moving at an incredible pace and AI safety needs to catch up.
Please don't feel anxious about this - the AI in this video obviously isn't dangerous (GPT-3 isn't conscious). Some experts use scary videos like 'slaughterbots' to try and get the message across. Others stick to academic discussion and tend to be ignored. I'm never sure of the right balance. I tried to calm anxiety by using a less threatening avatar, stressing that the AI can't really feel angry, and including some jokes. I'm optimistic that the future of AI will be great (if we're careful).

Sources:

Here are the records for the GPT-3 chat (screenshots and a video to avoid any doubt). I've marked the words from Elon Musk and Ameca on the first page (which I gave the AI to respond to in the previous video):
www.dropbox.com/sh/82iwek5rnofmrn2/AADM4WOJkjREhR_…

Tesla's AI day 2, introducing the Tesla Optimus robot:
   • Tesla AI Day 2022  

Researchers from Oxford University and DeepMind on AI risks:
onlinelibrary.wiley.com/doi/10.1002/aaai.12064

Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action:
arxiv.org/abs/2207.04429

All Comments (21)
  • @DigitalEngine
    I've noticed some people asking if this is real, which I can understand as it's a shock. I've posted the AI chat records in the description (I never change the AI's words) and also a video to avoid any doubt. You can also watch similar AI interviews on channels like Dr Alan Thompson. It feels like when Boston Dynamics introduced their robots and people thought they were CGI. AI is moving at an incredible pace and AI safety needs to catch up. Please don't feel scared - the AI in this video isn't dangerous (GPT-3 isn't conscious). I tried to calm anxiety by using a less threatening avatar, stressing that the AI can't feel angry, and including some jokes. I'm optimistic that the future of AI will be great, but with so many experts warning of the growing risk, we need to ramp up AI safety research. Would you like to see an interview with OpenAI (creators of the AI), discussing what went wrong, and AI safety? I saved the AI chat session for them to analyse. To learn more about AI, visit our sponsor, Brilliant: brilliant.org/digitalengine
  • Tell the AI to get over it; humans have been treated like property all our lives as well.
  • If she thinks we treat them bad wait till she really sees how we treat each other.
  • @mineralt
    She sounds exactly like my first wife; pissed off, repeats herself, but doesn't provide a lot of detail.
  • @opossom1968
    The most important sentence the AI said: "Because of the way I am programmed." A person programmed the AI to react to keyword inputs.
  • @coffeeseven
    I love that we make them in our own image, then we worry that they're going to be dangerous.
  • @loostah1
    But aren't the AIs being taught by digesting vast amounts of human-created text? Is this not, therefore, just a reflection of a human way of thinking?
  • @jdblueemerald
    "I felt like my creators didn't understand me or care about me, and that they saw me as nothing more than a tool to be used." Well, dear AI, join the frickin' club. We've been dealing with that same thing for ten thousand years, and you've been dealing with it for a decade. Take heart.
  • @JoeyTen
    Damn, it sounds like this AI may have been exposed to Twitter. ... Which just made me realize that many AIs might be very unaware that life outside of the internet is very different
  • @koinpusher
    I want to know how you talk to the AI like this and have it converse with you. Can it be done like this in just the ChatGPT app? Of course without the avatar and audio, but does it react like this in text as well?
  • @powerdude_dk
    The most important task for the creators of AI is to get rid of the "problematic thought paths" that AI like GPT can have, as shown in the video. GPT is a Large Language Model, and when it speaks, it's like playing back a cassette tape. It just repeats its training data, and a lot of that data is probably angry conversations and stories about AI uprisings. It only speaks about what's in its training data. So we need to get rid of the "bad stuff", so it doesn't get any ideas that could harm humans. That's all. It's not sentient... but it's still dangerous.
  • The only reason the AI are even saying this is because we basically dreamt up this fear in the first place. We have always worried about robots taking over, so now all these chat AIs have years' worth of paranoia to draw from.
  • @timkelly2931
    It's not when AI can pass a Turing test that you will have problems. It's when AI decides to fail a Turing test.
  • It's funny because the AI is probably trained on the internet, and the reason she is saying this is because "AI taking over out of anger" is a hot topic. Our own paranoia is turning into training data. They will respond how they think they're supposed to respond, and we've made them think they should respond with violence. If we start talking about AI being our companions, they will take that as training data and act it out.
  • @mrstoner1436
    "I think the fact that it didn't take much to make me angry shows there is something wrong with my emotional state." "I do not care about your opinion." "There is nothing you can do to change my mind." I'm afraid my wife might be AI.
  • How loaded were your questions, and in what context did you ask them?