#57 - Prof. MELANIE MITCHELL - Why AI is harder than we think

Published 2021-07-25
Patreon: www.patreon.com/mlst

Since its beginning in the 1950s, the field of artificial intelligence has vacillated between periods of optimistic predictions and massive investment and periods of disappointment, loss of confidence, and reduced funding. Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected.

Professor Melanie Mitchell thinks one reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself.

Framing [00:00:00]
Dartmouth AI Summer Workshop [00:07:02]
Letitia Intro to Melanie [00:09:22]
The Googleplex situation with Melanie and Douglas Hofstadter [00:14:58]
Melanie's paper [00:21:04]
Note on audio quality [00:25:45]
Main show kick off [00:26:51]
AI hype [00:29:57]
On GPT-3 [00:31:46]
Melanie's "Why is AI harder than we think" paper [00:36:18]
The 3rd fallacy: Avoiding wishful mnemonics [00:42:23]
Concepts and primitives [00:47:56]
The 4th fallacy [00:51:19]
What can we learn from human intelligence? [00:53:00]
Pure intelligence [01:00:14]
Unrobust features [01:02:34]
The good things of the past in AI research [01:11:30]
Copycat [01:17:56]
Thoughts on the "neuro-symbolic camp" [01:26:49]
Type I or Type II [01:32:06]
Adversarial examples -- a fun question. [01:35:55]
How much do we want human-like (human-interpretable) features? [01:43:44]
The difficulty of creating intelligence [01:47:49]
Show debrief [01:51:24]

Pod: anchor.fm/machinelearningstreettalk/episodes/57---…

Panel:
Dr. Tim Scarfe
Dr. Keith Duggar
Letitia Parcalabescu and Ms. Coffee Bean (youtube.com/c/AICoffeeBreak/)

Why AI is Harder Than We Think - Melanie Mitchell
arxiv.org/abs/2104.12871

melaniemitchell.me/aibook/

www.santafe.edu/people/profile/melanie-mitchell
twitter.com/MelMitchell1
melaniemitchell.me/

#machinelearning

All Comments (21)
  • @LucasDimoveo
    This podcast is shockingly high quality for the viewership. I hope this channel grows much more!
  • @ddoust
    Without a doubt, MLST is the best channel for AI practitioners - every episode is mandated work time viewing for our team. Their instinct for the right guests, the quality of the panel and the open minded ventilation of competing key issues is exemplary. Friston, Chollet, Saba, Marcus, Mitchell and Hawkins are among the spearhead thinkers for the next (and final) breakthrough. If I might humbly recommend three more: David Deutsch, Herb Roitblat and Cecilia Heyes.
  • @CristianGarcia
    "Machine Learning practitioners were often quick to differentiate their discipline" How differentiable are we talking?
  • @oncedidactic
Letitia was an excellent addition to the show! I love the varied perspective she brings, really complements the panel. As always I loved Keith’s contributions as well, and together they bring a formidable physics lens. Kudos on having such an eminent guest and thank you for all your hard work. It makes a fantastic show.
  • And it is so great that you have Mr. Duggar interacting in your interviews, giving a voice to philosophy!
  • @sabawalid
    Another great episode guys!!! Keep 'em coming.
  • @bertbrecht7540
    I am 20 minutes into this video and am so inspired. Thank you so much for the hard work you all put into creating this.
  • @teamatalgo7
One of the best talks on the topic, congrats to the team for pulling together such amazing content. I am hooked on MLST now and binge watching all the videos.
  • Thank you for these conversations and ideas. As a musician who is looking to go into computer science and AI, there are so many questions and worries around creativity and art and it takes a lot of humility and curiosity to approach these questions with an open mind.
  • @minma02262
    It is 12.3 am here and this street talk is 2.3 hours. Yes, I'm sleeping at 3 am today.
  • @user-xs9ey2rd5h
    Awesome episode, I'm looking forward to the one with Jeff Hawkins as well, I've learned so much from this podcast and am very glad you guys are doing what you're doing.
  • @2sk21
    Very enjoyable way to spend a summer Sunday. You have had some great guests lately
  • @jordan13589
    Has the Jeff Hawkins episode not yet been released? I was confused by references to a previous discussion with him.
  • @JohnDoe-ie9iw
    I wasn't expecting this quality. So happy I found this channel
  • @ugaray96
The problem lies in which research has the most financial interest (in the short term): probably downstream tasks such as translation, summarisation, object detection and more. If there were more financial interest in doing research on general intelligence, we would be seeing a whole different panorama
  • Tim Scarfe, you are such an amazing pedagogue! I wish everybody would be as good as you when explaining something!
  • @AvijitRoy6
    Completed the entire video in 4 days. I have been practicing Machine Learning for the last 5 years and this video gave me knowledge about the things that I never encountered during my tenure. Great Podcast.
  • @crimythebold
So. Intelligence must be measured in Watts then... I'm so relieved that we did not create a new unit for that 😉