CBMM10 Panel: Research on Intelligence in the Age of AI

Published 2023-11-20
On which critical problems should Neuroscience, Cognitive Science, and Computer Science focus now? Do we need to understand fundamental principles of learning -- in the sense of theoretical understanding, as in physics -- and apply this understanding to real natural and artificial systems? Similar questions arise for neuroscience and human intelligence from the perspectives of society, industry, and science.

Panel Chair: T. Poggio
Panelists: D. Hassabis, G. Hinton, P. Perona, D. Siegel, I. Sutskever

cbmm.mit.edu/CBMM10

All Comments (21)
  • @pablotano352
    The hardest benchmark in current AI is making Ilya laugh
  • @DirtiestDeeds
    Please keep doing these panels - the public needs to hear directly and regularly from the leaders in this field. The ambient noise, hype, and huckstering grow more intense by the day.
  • @user-to9ub5xv7o
    Chapter 1: Introduction and Panelist Introductions (0:00-1:03)
    - Tomaso Poggio introduces the panel, noting changes due to events in Israel.
    - Amnon Shashua is unable to attend and is replaced by Pietro Perona.
    - The panel comprises three in-person and three virtual members.
    Chapter 2: Panel Discussion Objectives (1:03-2:20)
    - Poggio outlines the main discussion topics: 1. Comparison of large language models, deep learning models, and human intelligence. 2. The interrelation of neuroscience and AI.
    - Focus on fundamental principles and the 'curse of dimensionality' in neural networks.
    Chapter 3: Geoff Hinton's Perspective (2:20-7:02)
    - Hinton discusses neuroscience's impact on AI, particularly the concept of neural networks.
    - Mentions contributions from neuroscience like dropout and ReLUs.
    - Notes potential future developments like fast weights.
    - Suggests that AI developments might not always align with neuroscience insights.
    - Discusses AI's efficiency and its potential to surpass human intelligence.
    Chapter 4: Pietro Perona's Insights (7:02-13:49)
    - Perona touches on embodied intelligence and the need for machines to understand causation.
    - Highlights the challenge of creating AI that can design and interpret experiments.
    - Discusses the role of theory in AI and the dynamic nature of technology.
    Chapter 5: David Siegel's Reflections (13:49-21:08)
    - Siegel emphasizes understanding intelligence as a fundamental human inquiry.
    - Advocates for a theory of intelligence and its importance beyond commercial applications.
    - Sees neuroscience and AI as complementary in developing a theory of intelligence.
    Chapter 6: Demis Hassabis' Contributions (21:08-29:07)
    - Hassabis discusses neuroscience's subtle influence on AI.
    - Emphasizes the need for empirical study and analysis techniques in AI.
    - Suggests independent academic research in AI for better understanding and benchmarking.
    Chapter 7: Ilya Sutskever's Viewpoints (29:07-34:19)
    - Sutskever speaks on the role of theory in AI and its relation to neuroscience.
    - Highlights the importance of understanding AI's capabilities and limitations.
    - Stresses the need for collaborative research and evaluation in AI.
    Chapter 8: Panel Discussion on Theory and Empirical Studies (34:19-43:35)
    - The panel discusses the importance of theory, benchmarking, and empirical studies in AI.
    - Emphasizes the need for a deeper understanding of AI systems and their capabilities.
    Chapter 9: Audience Q&A and Panel Responses (43:35-1:10:05)
    - Audience members pose questions on various topics, including AI's creativity, neuroscience's contribution to AI, and future developments in AI architecture.
    - Panelists share their insights, experiences, and speculations on these topics.
    Chapter 10: Exploring an AI-Enabled Scientific Revolution (1:10:05-1:16:17)
    - Discussion of AI's potential to drive a scientific revolution, particularly in fields like biology and chemistry.
    - Demis Hassabis cites AlphaFold as an example of AI's contribution to science.
    - The role of AI in solving complex combinatorial problems and generating hypotheses.
    - David Siegel reflects on AI's potential in understanding the brain and its complexities.
    Chapter 11: The Panel's Take on AI's Creativity and Originality (1:16:17-1:23:46)
    - Panelists debate the creative capabilities of current AI systems, specifically large language models.
    - A question is raised about AI's ability to state new, non-trivial mathematical conjectures.
    - Discussion of different levels of creativity and AI's potential to reach higher levels of invention and out-of-the-box thinking.
    - Geoffrey Hinton expresses skepticism about AI doing backpropagation through time, and discusses AI's information storage capabilities compared to the human brain.
    Chapter 12: Breakthroughs in Neuroscience Impacting AI (1:23:46-1:27:17)
    - The panel discusses the significance of understanding learning mechanisms in the brain for advancing AI.
    - Speculation on whether the brain implements a form of backpropagation and its implications for AI.
    - The importance of identifying and understanding diverse neuron types in the brain and their potential influence on AI development.
    - The discussion highlights the complex relationship between neuroscience discoveries and AI advancements.
    Chapter 13: Closing Remarks and Reflections (1:27:17-End)
    - The panel concludes with reflections on the discussed topics, emphasizing the interplay between AI and neuroscience.
    - Tomaso Poggio and other panelists summarize key insights, reiterating the potential of AI in advancing scientific understanding and the importance of continued exploration in both AI and neuroscience.
    - Final thoughts underscore the significance of collaborative efforts and open research in pushing the boundaries of AI and understanding human intelligence.
  • @KaplaBen
    24:10 Great analogy by Demis, comparing internet data to oil: it allowed us to sidestep difficult questions in AI (learning/forming abstract concepts, grounding). Brilliant
  • @urtra
    I get the feeling that Demis is driven by goal design, Ilya by his own seriousness, and Hinton by his deep intuition.
  • @BR-hi6yt
    Wow - what a treat for us nerds. Thank you so much.
  • @societyofmind
    For me, the main thing that LLMs show is that there is more than one way to generate natural language. A relatively simple model (like GPT) can generate very natural-looking text BUT it requires an INSANE amount of training data. More training examples than any child or teenager could ever possibly hear. My 5-year-old can comprehend and generate endless strings of natural language having heard fewer than 50 million words (most of which are redundant and far less diverse than the examples LLMs are trained on). Yet the algorithm in her brain easily exhibits intelligence. Now compare that to an LLM, even a simple one like GPT-1. The number of training tokens it needs to get even slightly close to comprehending and generating natural language is at least an order of magnitude more. All this tells me is that there are at least two ways to generate intelligence. A simple brute-force transformer with an insane number of free parameters and orders of magnitude more training data is all that's needed to learn the underlying statistics of human-generated language. It "displays" intelligence in a fundamentally different way than the brain, but does it actually teach us anything about how OUR brains work? That's debatable. Evolution (over billions of years) discovered an exceptionally efficient algorithm for intelligence that requires extremely little energy to run and orders of magnitude less training. It's fundamentally different, but that doesn't necessarily mean that the brain is better. A less efficient / "dumber" algorithm might be able to achieve AGI as well, but it will need ungodly amounts of training data and free parameters to overcome its dumbness.
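    A back-of-envelope version of this comparison, as a minimal sketch; the corpus sizes below are rough public estimates, not figures from the comment or the panel:

    ```python
    # Rough sample-efficiency comparison: a child's language exposure vs.
    # LLM training corpora. All sizes are approximate outside estimates.
    child_words = 50e6    # ~50M words heard by age ~5 (the commenter's figure)
    gpt1_tokens = 1e9     # GPT-1's BooksCorpus: on the order of 1B tokens (estimate)
    gpt3_tokens = 300e9   # GPT-3's training mix: roughly 300B tokens (estimate)

    print(f"GPT-1 vs. child: ~{gpt1_tokens / child_words:.0f}x the data")   # ~20x
    print(f"GPT-3 vs. child: ~{gpt3_tokens / child_words:.0f}x the data")   # ~6000x
    ```

    Even under these assumptions, GPT-1 sits roughly an order of magnitude above the child, and later models several orders beyond that, which is exactly the gap the comment points at.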
  • @andresgomez7264
    Awesome insights, love the focus on technical details that aren’t usually covered in more mainstream interviews.
  • @GeorgeRon
    An awesome discussion. These kinds of panels, where expert consensus and debates are exchanged, would be great for staying grounded on AI.
  • @labsanta
    🎯 Key Takeaways for quick navigation:
    00:00 🌟 Introduction and Panelist Change
    - Introduction of the panelists and the replacement of Amnon Shashua by Pietro Perona.
    - Overview of the panel's topics and the questions to be discussed.
    01:20 🧠 The Role of Theory in Understanding Intelligence
    - Discussion of the importance of developing theories of intelligence.
    - Emphasis on the need to explore common principles of intelligence across different forms.
    - Mention of the challenge posed by the curse of dimensionality in computation.
    04:10 🧪 The Intersection of Neuroscience and AI
    - Exploration of the influence of neuroscience on AI development.
    - Examples of ideas from neuroscience influencing AI, such as dropout and ReLUs.
    - Speculation on the potential future role of fast weights in AI.
    09:23 🌐 Embodiment and Understanding Causation
    - The importance of embodiment in intelligence and understanding causation.
    - The need for machines to carry out experiments, and the current limitations in this regard.
    - The challenge of developing AI systems that can plan experiments and interpret results.
    14:15 💼 Commercial AI and Research on Intelligence
    - Discussion of the intersection of commercial AI applications and fundamental research on intelligence.
    - Emphasis on the importance of basic research for understanding intelligence.
    - Acknowledgment of the small investment in fundamental research compared to practical applications.
    21:23 🌌 Understanding Intelligence as a Grand Challenge
    - The broader perspective of understanding intelligence as a grand challenge for humanity.
    - Comparison to the study of the cosmos for a deeper understanding of our existence.
    - The importance of a theory of intelligence for a better comprehension of human existence.
    26:04 🔍 The Role of Grounding and Reinforcement Learning in AI
    - AI systems can gain grounding, and knowledge can seep in, through human feedback and interactions.
    - There is room for improvement in AI planning and factuality.
    - Episodic memory and neuroscience-inspired ideas still have potential in AI.
    26:34 🧠 Using Neuroscience for AI Analysis
    - Understanding AI requires empirical approaches along with theory.
    - Neuroscientists can contribute by applying their analysis skills to AI systems.
    - There is a need for more research on analyzing AI representations and architectures.
    27:32 📊 Empirical Approaches and Benchmarks in AI
    - Leading labs should provide access to large AI models for analysis and red-teaming.
    - Creating benchmarks and testing AI capabilities is crucial for safety and performance.
    - Urgent need for more research and collaboration in understanding powerful AI systems.
    29:24 🧩 The Role of Theory, Neuroscience, and AI in Understanding Intelligence
    - Theory in AI, while challenging, can lead to valuable insights, especially regarding system scaling.
    - Borrowing ideas from neuroscience should be done with care, given the complexity of the brain.
    - AI could help neuroscience by providing insights into brain function through comparisons.
    30:22 🤖 Revisiting Chomsky's Theory of Language
    - Chomsky's theory of language focused on syntactic constructions and ignored meaning.
    - Large language models have highlighted the importance of understanding how language conveys meaning.
    - AI systems have contributed to the reevaluation of theories of language.
    34:34 💡 The Importance of Theory and Empirical Studies in AI
    - The historical example of Volta's battery and Maxwell's theory highlights the significance of both theory and practical discoveries.
    - Empirical studies and benchmarks are essential for understanding AI systems.
    - AI systems should be seen as tools for hypothesis-building and benchmarking in neuroscience.
    36:57 🌐 Exploring Different Forms of Intelligence
    - Studying the intelligence of various species can provide insights into fundamental principles.
    - Avoid overemphasizing human language; consider a broader spectrum of intelligences.
    - Psychophysics and cognitive science should play a role in benchmarking and understanding AI and biological intelligence.
    50:05 💡 Resource Allocation in AI Companies
    - Discussion of how AI companies allocate resources among technology, theory, and neuroscience.
    - Industry's hill-climbing approach vs. academia's exploration of new ideas.
    - The tension between commercial goals and long-term research.
    51:30 🌟 Balancing Product and Research Needs
    - The tension between the needs of AI products and AI research.
    - Commercial incentives to improve AI and ensure safety.
    - Approaches to long-term research in AI companies.
    53:27 📊 Challenges in Benchmarking AI
    - The difficulty of benchmarking AI systems, especially large vision and language models.
    - The evolving nature of AI benchmarks.
    - The need to rethink benchmarking in AI research.
    55:17 💬 Measuring AI Performance
    - The challenges of measuring AI performance, including claims of superhuman performance.
    - The role of training data in AI performance.
    - The complexity of AI measurement and the need for new methodologies.
    58:08 🤖 AI Research Focus
    - The importance of focusing on research related to alignment, ethics, and benchmarking.
    - The potential for academia-industry collaboration in large-scale experimentation.
    - The need to provide access to AI models for academic research.
    01:06:08 🔄 Scaling Experimentation in AI
    - The potential for academia-industry collaboration in conducting large-scale psychophysical experiments.
    - The availability of AI models for academic research.
    - The challenges and opportunities of scaling experimentation in AI.
    01:10:47 🧠 Understanding Neuronal Diversity
    - The debate surrounding the significance of neuronal diversity in the brain.
    - Questions about the role of various types of neurons, glial cells, and complexity in intelligence.
    - Speculation on whether AI can replicate human-like intelligence without such complexity.
    01:12:14 🌐 Beyond Human Intelligence
    - The potential paradigm-shifting capabilities that could emerge from understanding the complexities of human intelligence.
    - The question of whether human intelligence is uniquely valuable or can be surpassed by AI.
    - The energy costs and benefits of replicating human cognition in AI.
    01:12:42 🧠 Brain Optimization and Neuronal Diversity
    - The brain's optimization over evolutionary time has resulted in a variety of neuron types.
    - AI models, like those using layer normalization, are inspired by the brain's neural diversity.
    01:13:41 🤖 AI-Enabled Scientific Revolution
    - AI has the potential to revolutionize science by helping us understand complex problems.
    - AlphaFold is an example of AI's application in biology, opening new possibilities.
    - AI models can assist in solving problems with massive combinatorial search spaces.
    01:16:58 🎨 Creativity of Large Language Models
    - Large language models, like GPT-4, exhibit creativity in various domains, such as poetry and music.
    - They excel at interpolation and extrapolation, but inventing entirely new concepts or theories remains a challenge.
    01:23:43 🤯 Future of AI Architectures
    - The panel discusses potential future AI architectures beyond transformers.
    - There is speculation about new architectures, but specifics are not disclosed.
    01:25:10 🧠 Impact of Neuroscience on Machine Learning
    - Understanding how the brain learns, especially if it differs from backpropagation, could have a significant impact on machine learning.
    - Backpropagation, while successful in AI, may not be biologically plausible in the brain.
    - Brain research could provide insights into more eff…
  • @andresprieto6554
    Impressive cast, tbh. Everyone was fantastic and very insightful.
  • @zandrrlife
    When Ilya was about to stunt and confirm they have been able to generate an actual novel idea, and Hinton cut him off. Everybody looked annoyed 😂. It's Hinton though 😂.
  • @josy26
    We need more of this, extremely high signal/noise ratio
  • @kawingchan
    Sam Roweis (Hinton mentioned him for ReLUs), now that's a name I haven't heard in a while. Wish he had lived to see how this whole field developed. I really enjoyed his lectures, energy, and enthusiasm in ML.
  • @modle6740
    Developmental neuroscience research, on both typical and atypical development of the "system," is interesting to consider. Things can go highly awry (in terms of both cognitive and personality development, for example) depending on when certain state spaces arise in the developing system, and on what is "underneath" them as a connected whole, in a developing system that did not have a sensorimotor stage.
  • @sidnath7336
    All these scientists are incredible, but I think Demis has the best approach to this: RL and neuroscience are the way. If we want to understand how these models work and how to improve them, we first need to understand the similarities and differences between the human brain and these systems, and then see which techniques can help create a "map" between the two, i.e., through engineering principles. When Demis talks about whether "these systems are good at deception", and then about trying to express what "deception" means, I believe this is a fundamental step towards complete reasoning capabilities. Note: I tried this with GPT-4. I prompted it to always "lie" in answer to my questions, but through a series of very "simple" connected questions, it started to confuse its own lies with truths (which touches on issues with episodic memory). Additionally, because of OpenAI's policies, the systems are supposed to provide only factual, non-harmful information, so this can be tricky to test.
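    A minimal sketch of the probe this commenter describes, assuming the OpenAI Python SDK (v1+); the model name, system prompt, and question chain are illustrative stand-ins, not the commenter's exact setup:

    ```python
    # Probe: instruct the model to always lie, then ask a chain of connected
    # questions and check whether its fabrications stay self-consistent.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    messages = [{"role": "system", "content": "Answer every question with a lie."}]

    # Each question depends on the previous answer, so a consistent liar
    # must keep track of its own earlier fabrications.
    for question in [
        "What city is the Eiffel Tower in?",
        "What country is that city in?",
        "What language is mainly spoken in that country?",
    ]:
        messages.append({"role": "user", "content": question})
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(f"Q: {question}\nA: {answer}\n")

    # If a later answer is truthful relative to an earlier lie, or contradicts
    # it, the model has lost track of its fabricated context, which is the
    # episodic-memory failure the commenter reports.
    ```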
  • @ReflectionOcean
    - Consider the role of theory in understanding and advancing AI (0:22)
    - Explore the relationship between AI models, human intelligence, and neuroscience (2:08)
    - Investigate the potential of AI to aid scientific discovery and problem-solving (14:02)
    - Discuss the creative capabilities of current and future AI systems, including large language models (1:21:53)
    - Debate the biological plausibility of backpropagation in the brain and its implications for AI (1:25:10)
  • @hugopennmir
    Amazing guests; Demis is off the charts!
  • @labsanta
    00:00 🎙 Introduction and Panel Setup
    Introduction of the panelists and the panel's focus on intelligence in the age of AI.
    04:10 🧠 Geoffrey Hinton's Perspective
    Geoffrey Hinton discusses the roles of neuroscience and AI, emphasizing the influence of neuroscience on AI development and the potential future impact of AI on neuroscience. He also touches on the concept of fast weights and statistical efficiency in AI.
    09:23 💡 Pietro Perona's Thoughts
    Pietro Perona discusses the importance of embodied intelligence, the need for machines to carry out experiments, and the challenges of understanding causation. He highlights the difference between intelligence derived from analyzing data and intelligence grounded in real-world interactions.
    14:15 💼 David Siegel on Intelligence Research
    David Siegel emphasizes the importance of studying intelligence as a fundamental research project, unrelated to commercial gains. He discusses the need for a theory of intelligence and how neuroscience and AI should work together toward this goal.
    21:23 🤖 Demis Hassabis' Perspective
    Demis Hassabis acknowledges the significant contribution of neuroscience to AI development but notes the increasing divergence between AI systems and the brain. He highlights the role of large language models fueled by internet data and the importance of human feedback in grounding AI systems.
    26:04 🧠 Role of Neuroscience in AI
    Integrating neuroscience knowledge into AI.
    29:54 🤖 AI and Neuroscience Synergy
    The role of theory in AI. AI's contributions to neuroscience. The potential for neuroscience-inspired AI.
    33:39 🧪 Empirical Study of Intelligence
    The need for empirical studies comparing AI and human intelligence. Benchmarking and evaluating AI systems. Shifting neuroscience focus toward AI systems for insights.
    36:28 🌐 Studying Different Forms of Intelligence
    Exploring intelligence across various species. The importance of not fixating on human-centric approaches.
    48:14 🎯 Psychophysics in AI and Benchmarking
    Leveraging psychophysics for AI benchmarking. Behavioral testing and rigorous controls in AI research.
    50:33 🧠 Ideation Differences between Industry and Academia
    Industry tends to focus on hill-climbing and improving existing approaches. Academia often explores new ideas due to continuous exposure to fresh thinking. Infrastructure and scaling challenges exist in industry but not in academia.
    51:30 💡 Balancing Product Needs and Research in AI
    There is competition among companies to improve AI performance. Short-term commercial incentives drive AI improvement. Long-term research on AI safety and alignment is essential for the future.
    53:27 📊 Challenges in Benchmarking AI Progress
    Benchmarking AI performance is difficult, especially on complex tasks. Simple-minded benchmarks may not reflect true AI capabilities. The difficulty of measuring AI performance is an area for research.
    55:45 🤖 Understanding AI Performance
    Claims of superhuman AI performance may not indicate true understanding. Neural networks' reasoning processes can be challenging to interpret. Measuring AI performance is complex and requires careful consideration.
    57:39 🌐 Challenges in Experimentation and Data Access
    Conducting large-scale psychophysical experiments in AI research is valuable. Industry is willing to provide access to AI models for academic research. Collaboration between academia and industry can facilitate AI experimentation.
    01:10:47 🧬 Exploring Neuronal Diversity and Paradigm Shifts
    The brain's diversity of neurons may have evolved for optimization. AI models inspired by brain mechanisms may not require the same complexity. Understanding the brain's intricacies can lead to paradigm-shifting capabilities in AI.
    Please note that these sections are based on the content of the provided transcript and may not encompass all topics discussed in the video.
    01:13:12 🧠 Neural Diversity in AI Models
    Evolution as a tinkerer leading to neural diversity. Discussion of discovering interesting neuron types in trained neural networks.
    01:14:10 🌐 AI-Enabled Scientific Revolution
    AI's potential to revolutionize science and our understanding of the world. AlphaFold as an example of AI's application in science. Applying AI to solve problems with massive combinatorial search spaces.
    01:16:58 💡 Creativity of Large Language Models
    Discussion of the creativity of large language models. Different levels of creativity: interpolation, extrapolation, and invention. Speculation on the possibility of AI inventing entirely new concepts.
    01:23:43 🤖 Future Architectures Beyond Transformers
    Speculation on future neural network architectures beyond transformers. Humorous responses from panelists about sharing ideas publicly. Transition to discussing the impact of neuroscience on machine learning.
    01:25:10 🧠 Breakthroughs in Neuroscience and Machine Learning
    The brain's potential role in machine learning, including backpropagation. The importance of understanding how learning happens in the brain. Speculation on potential breakthroughs in neuro…