AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED

Published 2023-11-06
AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: ted.com/membership

Follow TED!
Twitter: twitter.com/TEDTalks
Instagram: www.instagram.com/ted
Facebook: facebook.com/TED
LinkedIn: www.linkedin.com/company/ted-conferences
TikTok: www.tiktok.com/@tedtoks

The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.

Watch more: go.ted.com/sashaluccioni

TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0 International) and in accordance with our TED Talks Usage Policy: www.ted.com/about/our-organization/our-policies-te…. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at media-requests.ted.com/

#TED #TEDTalks #AI

All Comments (21)
  • @ellengrace4609
    People used to say the internet was dangerous and would destroy us. They weren’t wrong. Most of us have a screen in front of us 90% of the day. AI will take us further down this rabbit hole, not because it is inherently bad but because humans lack self control.
  • Yes, I believe we are way ahead of ourselves. We should really slow down and think about what we are doing.
  • @sparkysmalarkey
    So basically 'Stop worrying about future harm, real harm is happening right now.' and 'We need to build tools that can inform us about the pros and cons of using various A.I. models.'
  • @donaldhobson8873
    2 people are falling out of a plane. One says to the other "why worry about the hitting the ground problem that might hypothetically happen in the future, when we have a real wind chill problem happening right now."
  • @dameanvil
    01:07 🌍 AI has current impacts on society, including contributions to climate change, use of data without consent, and potential discrimination against communities.
    02:08 💡 Creating large language models like ChatGPT consumes vast amounts of energy and emits significant carbon dioxide, which tech companies often do not disclose or measure.
    03:35 🔄 The trend in AI is towards larger models, which come with even greater environmental costs, highlighting the need for sustainability measures and tools.
    04:35 🖼 Artists and authors struggle to prove their work has been used for AI training without consent. Tools like "Have I Been Trained?" provide transparency and evidence for legal action.
    06:07 🔍 Bias in AI can lead to harmful consequences, including false accusations and wrongful imprisonment. Understanding and addressing bias is crucial for responsible AI deployment.
    07:34 📊 Tools like the Stable Bias Explorer help uncover biases in AI models, empowering people to engage with and better understand AI, even without coding skills.
    09:03 🛠 Creating tools to measure AI's impact can provide valuable information for companies, legislators, and users to make informed decisions about AI usage and regulation.
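
Measuring that footprint doesn't require anything exotic. Below is a minimal sketch of the kind of tracking the talk calls for, using the open-source codecarbon Python library; the train_model function is a hypothetical stand-in for a real training loop, not anything from the talk:

    from codecarbon import EmissionsTracker

    def train_model():
        # Hypothetical stand-in for a real training loop.
        return sum(i * i for i in range(10_000_000))

    tracker = EmissionsTracker(project_name="demo-training-run")
    tracker.start()
    try:
        train_model()
    finally:
        emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

    print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")

The tracker estimates the energy drawn by CPU, GPU and RAM during the tracked span and converts it using regional grid-intensity data — precisely the kind of number the talk says companies rarely disclose.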
  • @mawkernewek
    Where it all falls down is that the individual doesn't get to choose a 'good' AI model when AI is being used by a governmental entity, a corporation, etc., without their explicit consent — or even their knowledge that AI was part of a decision about them.
  • @donlee_ohhh
    For artists it should be a choice of "opting IN," NOT "opting OUT." If an artist chooses to allow their work to be assimilated by AI, they can make that choice — opt in. Under the opt-out model that several platforms I've seen currently use, an artist who uploads work or creates an account can easily forget or miss the button to refuse AI database inclusion. As an artist, I know we are generally excited and nervous to share our work with the world, but regret and anxiety over accidentally feeding the AI machine shouldn't have to be part of that unless the artist purposefully chooses it.
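
In code terms, the opt-in default this commenter asks for is a one-line design decision. A minimal sketch, assuming a hypothetical platform upload-settings type; the field name is illustrative, not any real platform's API:

    from dataclasses import dataclass

    @dataclass
    class UploadSettings:
        # Opt-in: the protective value is the default; inclusion in AI
        # training data requires an explicit, deliberate choice.
        allow_ai_training_use: bool = False

    # An artist who never notices the setting is excluded by default.
    assert UploadSettings().allow_ai_training_use is False

    # Inclusion happens only when explicitly requested.
    opted_in = UploadSettings(allow_ai_training_use=True)
    assert opted_in.allow_ai_training_use is True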
  • @somersetcace1
    Ultimately, the problem with AI is not that it becomes sentient, but that humans use it in malicious ways. What she's talking about doesn't even take into consideration when the humans using AI WANT it to be biased. You feed it the right keywords and it will say what you want it to say. So, no, it's not just the AI itself that is a potential problem, but the people using it. Like any tool.
  • @crawkn
    The "dangers" identified here aren't insignificant, but they are actually the easiest problems to correct or adjust for. The title suggests that these problems are more import or more dangerous than the generally well-understood problem of AI misalignment with human values. They are actually sub-elements of that problem, which are simply extensions of already existing human-generated data biases, and generally less potentially harmful than the doomsday scenarios we are most concerned about.
  • @Macieks300
    Emissions caused by training AI models are negligible compared to those from heavy industry. I wonder if they also measured how much is emitted by playing video games or maintaining the whole internet.
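
For scale, a rough back-of-envelope comparison along the lines this comment suggests. Every figure below — training energy on the order of 1,300 MWh (in line with published third-party estimates for GPT-3-scale models), a grid intensity of 0.4 kg CO2eq/kWh, and the gaming-hour totals — is an illustrative assumption, not a measurement:

    # Back-of-envelope only; every input is an assumption.
    training_energy_kwh = 1_300_000        # ~1,300 MWh, GPT-3-scale estimate
    grid_kg_co2_per_kwh = 0.4              # assumed average grid intensity

    training_t_co2 = training_energy_kwh * grid_kg_co2_per_kwh / 1000
    print(f"One large training run: ~{training_t_co2:,.0f} t CO2eq")   # ~520

    gaming_hours_per_year = 1_000_000_000  # assumed global gaming hours
    pc_kwh_per_hour = 0.2                  # assumed gaming-PC power draw
    gaming_t_co2 = gaming_hours_per_year * pc_kwh_per_hour * grid_kg_co2_per_kwh / 1000
    print(f"A billion gaming hours: ~{gaming_t_co2:,.0f} t CO2eq")     # ~80,000

Under these assumptions a single large training run lands around 520 t CO2eq, versus roughly 80,000 t for a billion hours of gaming — though inference at scale, which serves millions of users continuously, can dwarf the one-time training cost.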
  • @CajunKoiAcademy
    This is a crucial topic! Like today's internet, it has a good and bad side, so it really boils down to creating tools that help us develop better models. The tools that she made are a great start to addressing the biases in the future. It shows that sustainable, inclusive, more competent, and ethical AI models are possible.
  • @rishabh4082
    The work Sasha and Hugging Face are doing is AWESOME
  • @GrechStudios
    I really like how real yet hopeful this talk was.
  • @robleon
    If we assume that our world is heavily biased, it implies that the data used for AI training is biased as well. To achieve unbiased AI, we'd need to provide it with carefully curated or "unbiased" data. However, determining what counts as unbiased data introduces a host of challenges. 🤔
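
Curating "unbiased" data starts with measuring what's in the data at all. Here is a minimal sketch of such an audit over a tiny hypothetical labeled corpus; real audits (like the Stable Bias work mentioned above) are far more involved:

    from collections import Counter

    # Tiny hypothetical corpus of (occupation, gender) annotations, as
    # such pairs might appear in scraped training data.
    samples = [
        ("engineer", "male"), ("engineer", "male"), ("engineer", "female"),
        ("nurse", "female"), ("nurse", "female"), ("nurse", "male"),
        ("ceo", "male"), ("ceo", "male"),
    ]

    # Audit: per-occupation gender distribution.
    by_occupation: dict[str, Counter] = {}
    for occupation, gender in samples:
        by_occupation.setdefault(occupation, Counter())[gender] += 1

    for occupation, counts in by_occupation.items():
        total = sum(counts.values())
        print(occupation, {g: f"{n / total:.0%}" for g, n in counts.items()})

Even this toy audit surfaces the commenter's dilemma: once you can see the skew (100% male CEOs here), someone still has to decide what the "correct" distribution should be.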
  • @derek2593
    This is absolutely ridiculous. Will we soon be suing HUMAN artists because their primary inspiration is other artists? Wouldn't ALL human artists be guilty? It's not fair to create a new precedent for copyright infringement, merely because the "artist" is not human.
  • @tomdebevoise
    Just in case no one has figured it out, these large language models do not put us 1 nanometer closer to the "singularity". I do believe they have many important uses in software and research.
  • @lbcck2527
    If a person or group of people has ingrained bias, AI will merely reinforce their views when its results are in line with their thinking — and they will simply shrug off the results if the AI produces alternate facts, even when supplemented with references. AI can be a dangerous tool when used by a person or group with a closed mind plus a questionable moral compass and ethics.
  • @mickoleary2855
    Excellent explanation of where we are going with AI and how we should think about the potential risks.
  • @bumpedhishead636
    So, the answer to bad software is to create more software to police the bad software. What ensures some of the police software won't also be bad software?
  • @MaxExpat-ps5yk
    Today I used AI to help me with my Spanish. Its reply was wrong: the rules and logic it stated were correct, but — as we humans often do — it said one thing and did another. AI, like authority, needs to be questioned every time we encounter it. This speaker is right on!