AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED

Published 2023-11-06
AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: ted.com/membership

Follow TED!
Twitter: twitter.com/TEDTalks
Instagram: www.instagram.com/ted
Facebook: facebook.com/TED
LinkedIn: www.linkedin.com/company/ted-conferences
TikTok: www.tiktok.com/@tedtoks

The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.

Watch more: go.ted.com/sashaluccioni

TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0 International) and in accordance with our TED Talks Usage Policy: www.ted.com/about/our-organization/our-policies-te…. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at media-requests.ted.com/

#TED #TEDTalks #AI

All Comments (21)
  • @ellengrace4609
    People used to say the internet was dangerous and would destroy us. They weren’t wrong. Most of us have a screen in front of us 90% of the day. AI will take us further down this rabbit hole, not because it is inherently bad but because humans lack self control.
  • @donaldhobson8873
    Two people are falling out of a plane. One says to the other, "Why worry about the hitting-the-ground problem that might hypothetically happen in the future, when we have a real wind-chill problem happening right now?"
  • @sparkysmalarkey
    So basically 'Stop worrying about future harm, real harm is happening right now.' and 'We need to build tools that can inform us about the pros and cons of using various A.I. models.'
  • @somersetcace1
    Ultimately, the problem with AI is not that it becomes sentient, but that humans use it in malicious ways. What she's talking about doesn't even take into consideration the case where the humans using AI WANT it to be biased. You feed it the right keywords and it will say what you want it to say. So, no, it's not just the AI itself that is a potential problem, but the people using it. Like any tool.
  • @mawkernewek
    Where it all falls down is that the individual won't get to choose a 'good' AI model when AI is being used by a governmental entity, a corporation, etc., without explicit consent or even knowledge that AI has been part of a decision about them.
  • @Macieks300
    Emissions caused by training AI models are negligible compared to things like heavy industry. I wonder if they also measured how much CO2 is produced by playing video games or by maintaining the whole internet.
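    A rough back-of-envelope for scale (both figures below are assumptions taken from published estimates, not from the talk): training GPT-3 reportedly used on the order of 1,287 MWh, and emissions scale as energy used times grid carbon intensity:

      # Illustrative CO2 estimate: emissions = energy used x grid carbon intensity.
      # Both input numbers are assumed from published estimates, not from the talk.
      energy_kwh = 1_287_000        # ~1,287 MWh reported for GPT-3 training
      kg_co2_per_kwh = 0.429        # assumed average grid carbon intensity
      tonnes_co2 = energy_kwh * kg_co2_per_kwh / 1000
      print(f"~{tonnes_co2:,.0f} t CO2e")  # ~552 tonnes

    That is real but, as the comment says, small next to heavy industry; roughly the annual output of a hundred-odd cars.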
  • Yes, I believe we are way ahead of ourselves. We should really slow down and think about what we are doing.
  • The bit about AI (and other techs) that concerns me the most is the free-for-all personal data harvesting by corporations without any laws to control what they do with it. Only the EU has taken some steps to control this (GDPR), but no other nation protects the privacy of our data. These corporations are free to collect, correlate and sell our profiles to anyone. AI will enable data profiles that know us better than we know ourselves... all in a lawless environment.
  • @crawkn
    The "dangers" identified here aren't insignificant, but they are actually the easiest problems to correct or adjust for. The title suggests that these problems are more import or more dangerous than the generally well-understood problem of AI misalignment with human values. They are actually sub-elements of that problem, which are simply extensions of already existing human-generated data biases, and generally less potentially harmful than the doomsday scenarios we are most concerned about.
  • @dameanvil
    01:07 🌍 AI has current impacts on society, including contributions to climate change, use of data without consent, and potential discrimination against communities.
    02:08 💡 Creating large language models like ChatGPT consumes vast amounts of energy and emits significant carbon dioxide, which tech companies often do not disclose or measure.
    03:35 🔄 The trend in AI is towards larger models, which come with even greater environmental costs, highlighting the need for sustainability measures and tools.
    04:35 🖼 Artists and authors struggle to prove their work has been used for AI training without consent. Tools like "Have I Been Trained?" provide transparency and evidence for legal action.
    06:07 🔍 Bias in AI can lead to harmful consequences, including false accusations and wrongful imprisonment. Understanding and addressing bias is crucial for responsible AI deployment.
    07:34 📊 Tools like the Stable Bias Explorer help uncover biases in AI models, empowering people to engage with and better understand AI, even without coding skills.
    09:03 🛠 Creating tools to measure AI's impact can provide valuable information for companies, legislators, and users to make informed decisions about AI usage and regulation.
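    On the measurement points at 02:08 and 09:03 above: one existing open-source option (an illustration of the idea, not necessarily the tooling from the talk) is the codecarbon package, which estimates energy use and CO2 for a block of code:

      # Minimal sketch (pip install codecarbon): wrap a training run in an
      # EmissionsTracker, which samples hardware power draw and converts it
      # to kg CO2e using regional grid-intensity data. train_model() is a
      # placeholder for an actual training loop.
      from codecarbon import EmissionsTracker

      tracker = EmissionsTracker(project_name="toy-training-run")
      tracker.start()
      try:
          train_model()
      finally:
          emissions_kg = tracker.stop()  # returns estimated kg CO2e
      print(f"Estimated emissions: {emissions_kg:.3f} kg CO2e")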
  • @donlee_ohhh
    For artists it should be a choice of "Opting IN", NOT "Opting OUT". If an artist chooses to allow their work to be assimilated by AI, they can opt in. Under the opt-out model that several platforms I've seen currently use, it's possible and even likely that when artists upload their work or create an account, they forget or miss the button that refuses AI database inclusion. As an artist, I know we are generally excited and nervous to share our work with the world, but regret and anxiety over accidentally feeding the AI machine shouldn't have to be part of that unless the artist purposefully chooses it.
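    A hypothetical sketch of what enforcing opt-in (rather than opt-out) could look like at the data-pipeline level; every name here is invented for illustration:

      # Opt-in filter: a work is excluded unless its creator has explicitly
      # granted permission, so the safe default is exclusion.
      from dataclasses import dataclass

      @dataclass
      class Artwork:
          url: str
          ai_training_opt_in: bool = False  # default OFF: no consent assumed

      def collect_training_set(artworks):
          return [a for a in artworks if a.ai_training_opt_in]

      works = [Artwork("a.png", ai_training_opt_in=True), Artwork("b.png")]
      print([a.url for a in collect_training_set(works)])  # ['a.png']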
  • @robleon
    If we assume that our world is heavily biased, it implies that the data used for AI training is biased as well. To achieve unbiased AI, we'd need to provide it with carefully curated or "unbiased" data. However, determining what counts as unbiased data introduces a host of challenges. 🤔
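    One toy way to see that curation problem (example data invented here): counting how a sensitive attribute is distributed in a dataset is the easy part; deciding what the "right" distribution should be is the value judgment the comment describes.

      # Toy audit of perceived gender per occupation in caption data.
      from collections import Counter

      samples = [
          ("doctor", "male"), ("doctor", "male"),
          ("doctor", "female"), ("nurse", "female"),
      ]

      for (role, group), n in sorted(Counter(samples).items()):
          print(f"{role:6s} {group:6s} {n}")
      # The counts are trivial; picking the reference distribution is not.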
  • @bumpedhishead636
    So, the answer to bad software is to create more software to police the bad software. What ensures some of the police software won't also be bad software?
  • @mattp422
    My wife is a portrait artist. I just searched her on SpawningAI by name, and the first 2 images were her paintings (undoubtedly obtained from her web-based portfolio).
  • @streamer77777
    Interesting. So the hypothesis here is that all the electricity used to train large language models came from non-renewable sources, unless it was her firm that was doing it. Also, AI models would rank the probability of an image being true based on a user's query. This doesn't necessarily mean that less probable choices do not represent other scientists. It sounds more like smart publicity!
  • @rishabh4082
    The work Sasha and Hugging Face are doing is AWESOME
  • @lbcck2527
    If a person or group of people has ingrained bias, AI will merely reinforce their views when the results are in line with their thinking, and they will simply shrug off the results when AI produces alternate facts, even when supplemented with references. AI can be a dangerous tool when used by a person or group with a closed mind plus a questionable moral compass and ethics.
  • @BirdRunHD
    Skip to 1:20. AI models are trained using public and personal data, yet paradoxically, restrictions are often placed on the output they generate. This raises concerns about the fair use and ownership of the data initially utilized for their development.
  • @MaxExpat-ps5yk
    Today I used AI to help me with my Spanish. Its reply was wrong: the logic and rules it stated were correct, but, like we humans often do, it said one thing and did another. AI, like authority, needs to be questioned every time we encounter it. This speaker is right on!
  • @tomdebevoise
    Just in case no one has figured it out, these large language models do not put us 1 nanometer closer to the "singularity". I do believe they have many important uses in software and research.