#74 Dr. ANDREW LAMPINEN - Symbolic behaviour in AI [UNPLUGGED]

Published 2022-04-14
Please note that in this interview Dr. Lampinen was expressing his personal opinions and they do not necessarily represent those of DeepMind.

Patreon: www.patreon.com/mlst
Discord: discord.gg/ESrGqhf5CB
Pod version: anchor.fm/machinelearningstreettalk/episodes/74-Dr…

Dr. Andrew Lampinen is a Senior Research Scientist at DeepMind, and he thinks that symbols are subjective in the relativistic sense. Dr. Lampinen completed his PhD in Cognitive Psychology at Stanford University. His background is in mathematics, physics, and machine learning. Andrew has said that his research interests are in cognitive flexibility and generalization, and how these abilities are enabled by factors like language, memory, and embodiment. Andrew and his coauthors have just released a paper called Symbolic Behaviour in Artificial Intelligence. Andrew leads the paper by saying that the human ability to use symbols has yet to be replicated in machines. He thinks that one of the key areas to bridge the gap is considering how symbol meaning is established: he strongly believes it is the symbol users themselves who agree upon a symbol's meaning, and that the use of symbols entails behaviours which coalesce into agreements about their meaning. In plain English, this means that symbols are defined by behaviours rather than by their content.

[00:00:00] Intro to Andrew and Symbolic Behaviour paper
[00:07:01] Semantics underpins the unreasonable effectiveness of symbols
[00:12:56] The Depth of Subjectivity
[00:21:03] Walid Saba - universal cognitive templates
[00:27:47] Insufficiently Darwinian
[00:30:52] Discovered vs invented
[00:34:19] Does language have primacy
[00:35:59] Research directions
[00:39:43] Comparison to Ben Goertzel's OpenCog and human-compatible AI
[00:42:53] Aligning AI with our culture
[00:47:55] Do we need to model the worst aspects of human behaviour?
[00:50:57] Fairness
[00:54:24] Memorisation in LLMs
[01:00:38] Wason selection task
[01:03:45] Would an Andrew hashtable robot be intelligent?

Dr. Andrew Lampinen
lampinen.github.io/
twitter.com/AndrewLampinen

Symbolic Behaviour in Artificial Intelligence
arxiv.org/abs/2102.03406

Imitating Interactive Intelligence
arxiv.org/abs/2012.05672
www.deepmind.com/publications/imitating-interactiv…

Impact of Pretraining Term Frequencies on Few-Shot Reasoning [Yasaman Razeghi]
arxiv.org/abs/2202.07206

BIG-bench dataset
github.com/google/BIG-bench

Teaching Autoregressive Language Models Complex Tasks By Demonstration [Recchia]
arxiv.org/pdf/2109.02102.pdf

Wason selection task
en.wikipedia.org/wiki/Wason_selection_task
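
For context on the discussion at [01:00:38]: the classic four-card version of the task can be sketched in a few lines of Python (the card faces and the small helper below are illustrative, not from the episode). The rule "if a card has a vowel on one side, it has an even number on the other" can only be falsified by turning the vowel card and the odd-number card.

# Classic Wason selection task: four cards show "A", "K", "4", "7".
# Rule to test: "if a card has a vowel on one side, it has an even number on the other side."
# Only cards whose hidden side could violate the rule need to be turned over.

visible_faces = ["A", "K", "4", "7"]

def must_turn(face):
    if face.isalpha():
        return face in "AEIOU"      # a visible vowel could hide an odd number
    return int(face) % 2 == 1       # a visible odd number could hide a vowel

print([face for face in visible_faces if must_turn(face)])  # -> ['A', '7']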

Gary Lupyan
psych.wisc.edu/staff/lupyan-gary/

All Comments (15)
  • @BoySiddy
    I am so happy to see Andrew Lampinen here! Amongst the first things I implemented from scratch was his paper on one shot word embeddings.
  • @bradhatch8302
    This channel is my favorite thing to have in my ears. Great episode, and thanks for all the pre-show prep you all obviously do.
  • @olivercroft5263
    Behaviourism was a massive movement in psychology. The influence it has had on cognitivism and machine learning research is acknowledged here!
  • @citizizen
    Learning syntactic and semantic reasoning is pretty interesting; if artificial programs learn to do one or the other, perhaps interesting lessons can be learned from this. Perhaps we only need to follow certain programs being applied and choose from the results, then choose what to 'dive into'. Note: there is the Turing machine, but also the machine with a human hand.
  • edit: looks like they discussed everything I said here a bit further in.
    A problem I have with discussion of symbolic systems is the fact that so-called sub-symbolic systems like vision neural networks are still ultimately operating on numeric symbols that represent the magnitudes of pixel values; there's no way to escape the fact that ultimately a computer is built out of discrete steps. Perhaps we can hand-wave and say that the continuous mapping of the mathematical operations between the numeric symbols means it's not really symbolic, but the semantics of a vision system are bound ultimately to the pseudo-continuous behaviour of image sensors and the ways that relates to image labels. (Of course, when someone says symbolic system, they don't typically mean a vision neural network.) This is true in humans as well, although of course human neurons are noisier and we don't yet completely understand every nonlinear behaviour they have.
    All of our abstract mathematical symbols derived from the behaviours of large networks of neurons and people over history, all of whom learned from embodied knowledge. So now when, for example, I talk to a text-only transformer and it claims to understand its body, what it's doing is echoing the knowledge of bodies that was encoded in symbol relationships. I don't see any way that abstract mathematics could be seen as anything but derived from grounded behaviour, and the fact that the corpus of mathematics we've written down could potentially be learned independent of embodiment doesn't change that ultimately it's all derived from the universe we find ourselves in.
  • @ishantshanu9918
    I think accepting humility is part of intelligent behavior.
  • @maloxi1472
    Behaviorism is an epistemological dead end, and nowhere is it more apparent than in AI research. Philosophers learned this fact decades ago, but it looks like AI researchers won't learn from their mistakes. Very well then. AI research shall stagnate for a few more decades until this field rediscovers the Gettier problem, learns to appreciate Popper's contributions to epistemology, and notices the connection between his insights and their work 🤷🏻‍♂️
  • @PaulTopping1
    At around 29:00, Dr. Duggar suggests that ants failing to follow pheromone trails like robots is analogous to humans not following logic. First, we have no idea if these ants have misfiring neurons causing them to randomly explore. We have no idea if it is even random. That some ants are in a state where they don't follow the pheromone trail is about all we really know. Second, and much more important, it seems clear that humans are not at all logic machines. I know that when I consider a syllogistic problem, I'm using my mind to analyze a completely artificial problem. I'm using it in a mode that evolution didn't design it to do easily. Humans existed for a few million years without even considering such problems. When a person fails to understand or solve a syllogism, it's not because of some random neuron misfiring, but a fundamental inability to think about such artificial problems.
  • @dr.mikeybee
    Everyone has a definition of intelligence. Here it's semantic mapping. I think we need to throw away the term intelligence altogether. Just say semantic mapping is this or that, and it is a component of this or that, or it facilitates this and that. Intelligence can be just as validly defined as an answer book, or a graph database. What agents do is something else. What agents do needs a different name, like reasoning, model building, or mapping actions to various states. We can refer to these as lookup operations or probability functions. We shouldn't call these intelligence either. It's too confusing.
  • @oncedidactic
    If you have a perfect lookup table, it could generate more lookup tables. Eh?
  • @alfcnz
    Was that really an explanation for a five-year-old kid? I doubt that he or she would know what a symbolic system is to begin with. Perhaps starting with definitions would help get the message across.
  • @Syllogist
    SYLLOGISM «SOCRATES» + ALGEBRAIC CALCULATION / Correct syllogisms for children and academics – 3: https://youtu.be/w1Lm4OCoMdU