AI: Grappling with a New Kind of Intelligence

Published 2023-11-24
A novel intelligence has roared into the mainstream, sparking euphoric excitement as well as abject fear. Explore the landscape of possible futures in a brave new world of thinking machines, with the very leaders at the vanguard of artificial intelligence.

The Big Ideas Series is supported in part by the John Templeton Foundation.

Participants:
Sébastien Bubeck
Tristan Harris
Yann LeCun

Moderator:
Brian Greene

SHARE YOUR THOUGHTS on this program through a short survey: survey.alchemer.com/s3/7619273/AI-Grappling-with-a…

00:00 - Introduction
07:32 - Yann LeCun Introduction
13:35 - Creating the AI Brian Greene
20:55 - Should we model AI on human intelligence?
27:55 - Schrödinger's Cat is alive
37:25 - Sébastien Bubeck Introduction
44:51 - Asking ChatGPT to write a poem
52:26 - What is happening inside GPT-4?
01:02:56 - How much data is needed to train a language model?
01:11:20 - Tristan Harris Introduction
01:17:13 - Is profit motive the best way to go about creating a language model?
01:23:41 - AI and its place in social media
01:29:33 - Is new technology to blame for cultural phenomena?
01:36:34 - Can you have a synthetic version of AI vs the large data set models?
01:44:27 - Where will AI be in 5 to 10 years?
01:54:45 - Credits

WSF Landing Page Link: www.worldsciencefestival.com/programs/ai-grappling…
- SUBSCRIBE to our YouTube Channel and "ring the bell" for all the latest videos from WSF
- VISIT our Website: www.worldsciencefestival.com/
- LIKE us on Facebook: www.facebook.com/worldsciencefestival
- FOLLOW us on Twitter: twitter.com/WorldSciFest
#worldsciencefestival #ai #artificialintelligence #briangreene

All Comments (21)
  • @lukaseabra
    Can we just take a second to acknowledge how fortunate we are to get to watch such content - for free? Thanks Brian.
  • @erasmus9627
    This is the best, most balanced and most insightful conversation I have seen on AI. Thank you to everyone who made this wonderful show possible.
  • @alfatti1603
    With ultimate respect to Yann LeCun, his responses to Tristan Harris's points are good examples of why a specialist scientist should avoid also playing philosopher or public intellectual if that's not their strong suit.
  • LeCun is like a small child with fingers plugged into his ears, shouting "lalalala, can't hear you!" He discredits Tristan Harris as if his examples or cited experiments are flat-out lies. His responses are weak and shortsighted. Sadly, LeCun is the EXACT reason why I am terrified for the future. Hubris, bias, and blatant disregard are what I expect from someone in his position (Meta). If AI alignment is left to the ones who own and fund its development and the race to the bottom continues, there will be no more second chances. Those who point to our past as a predictor of what we are facing today with exponential growth either do NOT understand or do NOT WANT to understand. We would all love the bright and shiny optimism that is being promised. My belief is that it's crucial to question who is promising it and why. I put my trust in those who are working towards alignment over corporations and shareholders. It's my understanding that those working on the alignment path are far outnumbered by those working on pumping it out as quickly as possible. The "move fast and break things" mentality needed to end yesterday. Ask Eliezer Yudkowsky, Max Tegmark, Nick Bostrom, Mo Gawdat, Daniel Schmachtenberger, Connor Leahy, or Geoffrey Hinton, to name a few, and of course Tristan Harris. Check out their perspectives and their wealth of knowledge and experience. They will all say that the shiny world we want is indeed possible. They will all agree that the version LeCun predicts is absolutely false and very likely to be our downfall.
  • @Contrary225
    It’s amazing that this was only posted 3 hours ago and some of it is already obsolete.
  • @2CSST2
    This conversation is so precious; it's rare that we get quality ones like this, with different voices that have a chance to express their views with clarity. For me there's a lot of ambiguity about the right thing to do in all this in terms of regulation, slowing down, open-sourcing, etc. But one thing IS for sure: conversations like this are definitely very helpful. Thank you WSF, and I hope to see more like it in the near future!
  • @Relisys190
    30 years from now I will be 70 years old. The world I currently live in will be unrecognizable both in technology and the way humans interact. What a time to be alive... -M
  • @alan_yong
    🎯 Key Takeaways for quick navigation:
    02:27 🧠 Introduction to AI and Large Language Models
    - Exploring the landscape of artificial intelligence (AI) and large language models.
    - AI's promise of profound benefits and the potential questions it raises.
    - Large language models' versatility and capabilities in generating text, answering questions, and creating music.
    08:09 🤯 Revolution in AI and Deep Learning
    - Overview of the revolutionary changes in AI technology over the past few years.
    - Surprising results in training artificial neural networks on large datasets.
    - The resurgence of interest in deep learning techniques due to more powerful machines and larger datasets.
    14:35 🧐 Limitations of Current AI Systems
    - Acknowledging the impressive advances in technology but highlighting the limitations of current AI systems.
    - Emphasizing that language manipulation doesn't equate to true intelligence.
    - The narrow specialization of AI systems and the lack of understanding of the physical world.
    21:07 🐱 Modeling AI on Animal Intelligence and Common Sense
    - Proposing a vision for AI development starting with modeling after animals like cats.
    - Recognizing the importance of common sense and background knowledge in AI systems.
    - The need for AI to observe and interact with the world, similar to how babies learn about their environment.
    23:11 🧭 Building Blocks of Intelligent AI Systems
    - Introducing key characteristics necessary for complete AI systems.
    - Highlighting the role of a configurator as a director for organizing system actions.
    - Addressing the importance of planning and perception modules in developing advanced AI capabilities.
    24:22 🧠 World Model in Intelligence
    - Intelligence involves visual and auditory perception, followed by the ability to predict the consequences of actions.
    - The world model is crucial for predicting outcomes of actions, located in the front of the brain in humans.
    - Emotions, such as fear, arise from predictions about negative outcomes, highlighting the role of emotions in decision-making.
    27:30 🤖 Machine Learning Principles in World Model
    - The challenge is to make machines learn the world model through observation.
    - Self-supervised learning techniques, like those in large language models, are used to train systems to predict missing elements.
    - Auto-regressive language models provide a probability distribution over possible words, but they lack true planning abilities.
    35:38 🌐 Future Vision: Objective-Driven AI
    - The future vision involves developing techniques for machines to learn how to represent the world by watching videos.
    - The proposed architecture "JEPA" aims to predict abstract representations of video frames, enabling planning and understanding of the world.
    - Prediction: within five years, auto-regressive language models will be replaced by objective-driven AI with world models.
    37:55 🧩 Defining Intelligence and GPT-4 Impression
    - Intelligence involves reasoning, planning, learning, and being general across domains.
    - Assessment of ChatGPT (GPT-4) indicates it can reason effectively but lacks true planning abilities.
    - Highlighting the gap between narrow AI, like AlphaGo, and more general AI models such as ChatGPT.
    43:11 🤯 Surprise with GPT-4 Capabilities
    - Initial skepticism about Transformer-like architectures was challenged by GPT-4's surprising capabilities.
    - GPT-4 demonstrated the ability to reason effectively, overcoming initial expectations.
    - Continuous training after the initial corpus-based training is a potential but not fully explored avenue for enhancing capabilities.
    45:30 📜 GPT-4 Poem on the Infinitude of Primes
    - GPT-4 generates a poem on the proof of the infinitude of primes, showcasing its ability to create context-aware and intellectual content.
    - The poem references a clever plan, Euclid's proof, and the assumption of a finite list of primes.
    - The surprising adaptability of GPT-4 is evident as it responds creatively to a specific intellectual challenge.
    45:43 🧠 Neural Networks and Prime Numbers
    - The proof of infinitely many prime numbers involves multiplying all known primes, adding one, and revealing the necessity of undiscovered primes.
    - Neural networks like GPT-4 leverage vast training data (trillions of tokens) for clever retrieval and adaptation but can fail in entirely new situations.
    - Comparison with human reading capacity illustrates the efficiency of neural networks in processing extensive datasets.
    48:05 🎨 GPT-4's Multimodal Capability: Unicorn Drawing
    - GPT-4 demonstrates cross-modal understanding by translating a textual unicorn description into code that generates a visual representation.
    - The model's ability to draw a unicorn in an obscure programming language showcases its creativity and understanding of diverse modalities.
    - Comparison with earlier versions, like ChatGPT, highlights the rapid progress in multimodal capabilities within a few months.
    51:33 🔍 Transformer Architecture and Training Set Size
    - The Transformer architecture, especially its relative processing of word sequences, is a conceptual leap enhancing contextual understanding.
    - Scaling up model size, measured by the number of parameters, dramatically improves performance and fine-tuning capabilities.
    - The logarithmic plot illustrates the significant growth in model size over the years, leading to the remarkable patterns of language generation.
    57:18 🔄 Self-Supervised Learning: Shifting from Supervised Learning
    - Self-supervised learning, a crucial tool, eliminates the need for manually labeled datasets, making training feasible for less common or unwritten languages.
    - GPT's ability to predict missing words in a sequence demonstrates self-supervised learning, vital for training on diverse and unlabeled data.
    - The comparison between supervised and self-supervised learning highlights the flexibility and broader applicability of the latter.
    01:06:57 🧠 Understanding Neural Network Connections
    - Neural networks consist of artificial neurons with weights representing connection efficacies.
    - Current models have hundreds of billions of parameters (connections), approaching human brain complexity.
    01:08:07 🤔 Planning in AI: New Architecture or Scaling Up?
    - Debates exist on whether AI planning requires a new architecture or can emerge through continued scaling.
    - Some believe scaling up existing architectures will lead to emergent planning capabilities.
    01:09:14 🤖 AI's Creative Problem-Solving Strategies
    - Demonstrates AI's ability to interpret false information creatively.
    - AI proposes alternate bases and abstract representations to rationalize incorrect mathematical statements.
    01:11:20 🌐 Discussing AI Impact with Tristan Harris
    - Introduction of Tristan Harris, co-founder of the Center for Humane Technology.
    - Emphasis on exploring both benefits and dangers of AI in real-world scenarios.
    01:15:54 ⚖️ Impact of AI Incentives on Social Media
    - Tristan discusses the misalignment of social media incentives, optimizing for attention.
    - The talk emphasizes the importance of understanding the incentives beneath technological advancements.
    01:17:32 ⚠️ Concerns about Unchecked AI Capabilities
    - Worry expressed about the rapid race to release AI capabilities without considering wisdom and responsibility.
    - Analogies drawn to historical instances where technological advancements led to unforeseen externalities.
    01:27:52 🚨 Ethical concerns in AI development
    - Facebook's recommended-groups feature aimed to boost engagement.
    - Unintended consequences: AI led users to join extremist groups despite policy.
    01:29:42 🔄 Historical perspective on blaming technology for societal issues
    - Blaming new technology for societal issues is a recurring pattern throughout history.
    - Political polarization predates social media; historical causes need consideration.
    01:32:15 🔍 Examining AI applications and potential risks
    - Exploring an example related to large language models and generating responses.
    - Focus on making AI models smaller, understanding motivations, and preventing misuse.
    01:37:15 ⚖️ Balancing AI development and safety
    - Concerns about the rapid pace of AI development and potential consequences.
    - The analogy of 24th-century technology crashing into 21st-century governance.
    01:40:29 🚦 Regulating AI development and safety measures
    - Discussion of a proposed six-month moratorium on AI development.
    - Exploring scenarios that could warrant slowing down AI development.
    01:44:35 🌐 Individual responsibility and shaping AI's future
    - The challenge of AI's abstract and complex nature for individuals.
    - Limitations of intuition about AI's future due to its exponential growth.
    01:48:29 🧠 Future of AI Intelligence and Consciousness
    - Yann discusses the future of AI, stating that AI systems might surpass human intelligence in various domains.
    - Intelligence doesn't imply the desire to dominate; human desires for domination are linked to our social n
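The Euclid-style argument summarized in the takeaways above (45:43: multiply all known primes, add one, and conclude a prime is missing) is easy to check mechanically. A minimal Python sketch, purely illustrative and not taken from the program:

```python
# Euclid's argument: given any finite list of primes, the product of the
# list plus one is divisible by none of them, so some prime must be missing.
from math import prod

def euclid_witness(known_primes):
    """Return product(known_primes) + 1 and verify no listed prime divides it."""
    candidate = prod(known_primes) + 1
    # Dividing by any listed prime leaves remainder 1, so none of them divide it.
    assert all(candidate % p == 1 for p in known_primes)
    return candidate

# Pretend 2, 3, and 5 were all the primes: 2*3*5 + 1 = 31,
# and 31 is divisible by none of them.
print(euclid_witness([2, 3, 5]))  # 31
```

Note that the witness need not itself be prime (e.g. for [2, 3, 5, 7, 11, 13] it is 30031 = 59 × 509); it only guarantees that some prime outside the list exists.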
  • @dreejz
    I think it's very arrogant to say "this and that will never happen". How can you know? It's not like we can predict this stuff. I'm pretty sure, for example, that Yann did not foresee everybody having a phone in their pocket either. The negative influence of social media has also been proven many times. I think Tristan was more on point in this conversation. We're living in wild times, that's for sure though! Skynet is coming ;)
  • @SylvainDuford
    My opinion of Yann LeCun took a big dive with this video. He underestimates the power of AI in its current form and what's coming over the next couple of years. He naively underestimates the dangers of AI. He seems to think that an AGI must be the same form of intelligence as human intelligence (absolutely false). And, perhaps predictably, he underestimates the negative impacts of Facebook and other social networks on society.
  • @jt197
    This discussion on the evolution of AI and its limitations is truly eye-opening. Yann LeCun's insights into the challenges AI faces in achieving true understanding and common sense are thought-provoking. It's clear that we have a long way to go, but this conversation gives us valuable perspective.
  • @SciEch92
    That opening by Brian blew my mind and caught me off guard 😮
  • @Rockyzach88
    Having AI locked to a certain group of people also undemocratizes the technology and further deepens the power and wealth imbalance in society. Also, banning something is just going to motivate people to do it in an unregulated fashion if they have the means.
  • @tarunmatta5156
    I wish Tristan had been given more time and voice in this conversation. While I'm convinced there is no way you can stop or slow down this race, and we will surely see misuse as with any new invention, more conversations about it will ensure that safety is not ignored completely.
  • @1911kodi
    I was very impressed by Yann's disciplined, rational and fact-based arguing preventing the discussion from turning in a more emotional direction.
  • @mrouldug
    Great conversation. The final comments about AI code being open source as a common good so that the big companies do not end up controlling our thoughts vs. AI code being proprietary so it doesn’t fall into the hands of bad people remains an open and scary question. Though I do not have Yann’s knowledge about AI, he seems a little too optimistic to me.
  • @astrogatorjones
    The problem with the scenario Yann is advocating for is that it assumes the best of all worlds. The example about sarin... it only takes one bad person to introduce the recipe. It will happen. Then it propagates. It's always going to be that way. When Tristan said, "I know all those guys," I laughed. I've said the same thing. I'm the generation before him. We were geeks. Nerds. We thought we were inventing a utopia where free speech cures it all, because we'd been using the internet among ourselves for years. But we were wrong. We didn't know every last person would be carrying a handheld computer as powerful as, or more powerful than, the servers we were working with. We didn't know about engagement. We didn't know about the dopamine factor. We didn't know that bad travels faster than good. This is the warning Tristan is talking about. I have hope that we'll fix social media. I think AI is a possible path, but then I think, "let's fix the gun problem with more guns." I'm worried.
  • @aldogrech55
    My longstanding concerns about artificial intelligence have only been intensified by the attitudes of prominent figures like Yann LeCun. His assertive claims that AI, despite its growing intelligence, will remain under benign human control seem overly optimistic to me. This perspective reminds me of Yuval Noah Harari's cautionary words about AI's potential misuse by malevolent actors. It's worrying how AI can make decisions aligned with the harmful intentions of these actors, and yet, experts like LeCun, in his closing remarks, appear overly confident in their ability to manage these powerful tools. Having spent over 40 years in the IT industry, an industry I once passionately embraced, I now find myself grappling with a sense of fear towards the very field I've dedicated my life to.
  • This was more exciting and insightful than any 2 hour movie I could have watched. Thank you for sharing such wonderful content
  • @Laurie-eg8ct
    The most challenging thing for LLMs is planning, which involves the brain's configurator (coordinator), perception, prediction, cost as a degree of satisfaction (anxiety), and action.