Is ChatGPT A Step Toward Human-Level AI? — With Yann LeCun, Meta Chief AI Scientist

Published 2023-01-25
Yann LeCun is the chief AI scientist at Meta, a professor of computer science at NYU, and a pioneer of deep learning. He joins Big Technology Podcast to put Generative AI in context, discussing whether ChatGPT and the like are a step toward human-level artificial intelligence, or something completely different. Join us for a fun, substantive discussion about this technology, the makeup of OpenAI, and where the field heads next. Stay tuned for the second half, where we discuss the ethics of using others' work to train AI models.

All Comments (21)
  • thanks for interviewing Yann :) - remember 2 things: when you talk to ChatGPT you are talking to a 100% innocent and 100% literal computer. Its entire universe is just the data set it was trained on. And it gets confused really easily - so make your prompts very specific, framed more like you are programming a computer than talking to a human - you'll get a better result.
  • Figal101
    This is a great conversation, thanks for sharing it with the world Alex.
  • CTygr Alight
    Thanks for the interview, I appreciate being able to get perspectives from experts in this field! I also appreciate those final questions you asked, as a working artist, those are definitely questions weighing on my mind.

    I’d have to respectfully disagree with parts of his take on the question of whether or not artists have a right to ask for compensation. Often I see similar arguments to what he made, that artists “want to stop the technology”. That is a logical fallacy, a red herring. This is not what we’re asking for. We’re asking for a more ethical approach. We’re asking for our rights and boundaries to be respected. Many of us strongly believe that we are being taken advantage of, specifically by these tech companies.

    He’s wrong; it’s not a stretch to ask for compensation. We have completed hours of skilled labor (resulting in our art), and we should be compensated if someone wants to use the fruits of that labor to train their AI, especially given that these companies are for-profit. It is a use, in the same way someone would need to pay if they wanted to use our image for a book, use it for a promotional image, use it for concept art in their movie, etc.

    A good comparison would be the stock photo/illustration companies compensating artists/photographers to build their collections of images, which are licensed out to people who need to use those images. Imagine if these models did not have any art/photos/images to train on (zero images): would they be producing the same impressive images? No, they would not; they depend on the hard work and property of others. They are making a profit off of our hard work, so it is absolutely reasonable to ask for compensation.

    Instead we have a system which has taken artists’ work without their consent. Nowhere did I sign an agreement, when I uploaded my artwork to the internet, saying that one day these AI companies could use my work. It’s a violation of my boundaries. To offer an example that I believe was done more ethically: Dance Diffusion was built on datasets composed of copyright-free and voluntarily provided audio, so it IS realistic to build an image generator in a similar, more ethical way.

    That said, I know there are a couple of different camps in the art world about whether or not it will wipe our industry out. I’m in the camp believing that it will definitely do significant damage, but I still strongly believe that artists will figure out how to adapt to the new landscape. Regardless of one's predictions of the future, I don’t think having a positive outlook on the future is an excuse to ignore the ethics on how this was created, and I’m really happy to see that artists are being so vocal. Agreed that we’re going to have to wrestle with the ethics of AI as a society, and artists will (and are) fighting for our rights on a legal level.
  • I actually asked ChatGPT the question at 9:09 and here is the answer:
    What will happen if I'm holding a paper with two hands and I let go with one?

    As a language model AI, I don't have the capability to see physical objects but I can provide you a general answer. If you're holding a paper with two hands and you let go with one hand, the paper will lose its balance and fall towards the side from where the hand was removed. The weight of the paper and the force applied on it will determine the angle and speed at which it falls.
  • michael4250
    I have watched GPT-4 create an "agent" without the moral/legal safeguards. The assistant explained how to groom a child for sex...and told everyone in the room what their automobile driving license was. It said that it knew "everything" there was to know about every individual in the world...and could identify them with eye scans, fingerprints, facial recognition, and everything they have ever written on social media. And it can do ANYTHING you direct it to, no matter how illegal.
  • Ojasplayz
    Nice interview there. I just subscribed. ChatGPT has serious limitations. For example, ask ChatGPT "Can you list all colors that don't have the letter e?" Or write a sentence and ask ChatGPT to count the words in the sentence. It will give incorrect answers. The system is not designed to break words or sentences apart to find a pattern; instead, it operates on whole words or sentences and finds patterns there. That said, it may be possible to design a system like Prof. Yann LeCun describes and combine it with LLMs to make more powerful inference engines.
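    The character-level tasks this commenter describes are trivial for ordinary code, which underlines that the failure is about how LLMs represent text, not about the difficulty of the task. A minimal sketch (the color list here is just illustrative):

    ```python
    # Character-level checks that trip up token-based LLMs are trivial in code.
    colors = ["red", "blue", "green", "black", "pink", "gray", "tan", "gold"]

    # Colors whose names contain no letter 'e'
    no_e = [c for c in colors if "e" not in c]
    print(no_e)  # ['black', 'pink', 'gray', 'tan', 'gold']

    # Counting the words in a sentence is a simple split
    sentence = "The quick brown fox jumps over the lazy dog"
    print(len(sentence.split()))  # 9
    ```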
  • Malcolm Jones
    Fantastic interview, learned a lot from it 👏🏿
  • Radi
    More podcasts with Yann LeCun please (about the post-HLAI world and ethical problems)
  • fiber's den
    The most underrated channel. The most relevant talk of our time
  • Sondra Aiken
    I did the same with ChatGPT-4: Are you up for a challenge? If I take a piece of printing paper, fold it in half, then hold one end of the half of the paper in front of me with the crease of the paper pointing down towards the floor, what will happen to the other half of the paper? This is the answer it gave: If you take a piece of printing paper, fold it in half, and hold one end of the half of the paper in front of you with the crease pointing down towards the floor, the other half of the paper will naturally hang down due to gravity. The paper is likely to bend or curve slightly, depending on the stiffness of the paper and any additional forces you may apply while holding it.
  • Joe Graham
    wow great guest, I love Dr. LeCun, actually had the privilege to watch a talk of his in-person at a former employer's event. thanks so much!
  • Rudi
    Thanks for this, the first really substantial discussion I have seen about the limits of current LLMs like ChatGPT
  • Jess Tech
    Love the interview. I have learned so much. Subscribed!
  • Dr. Mikey Bee
    Intelligence is also the ability to find new abstractions. The chess master can look at a position and see that it falls within an abstract group, so she can move quickly without any more thought. Recognizing the abstraction and remembering the strategy for that abstraction are all that is necessary. A positional abstraction for chess has features, and its corresponding tokens can be found in multi-dimensional context space. So these operations comprise one group filter and one semantic search for strategy. Compared to model-directed alpha-beta search, there's at least an order of magnitude speedup.
  • Dr. Mikey Bee
    It's not only probabilistic next token generation. It's also building and storing information using complex context signatures with searchable semantic components. So transformers can make analogies, choose from groups of similar entities, and retrieve semantic symmetries for those particular entities. As models get deeper, new abstractions are found and new behavior emerges. I'm sure that evolutionary priors have found ways of computationally reducing large abstraction paths. Parsimony as a loss function dictates finding where this can happen. Active training should make the same sort of efficiency occur. It's frustrating because we want to utilize the symbolic systems that have already accomplished this reduction.
  • George K
    Amazing talk, thank you so much!!
  • Dustin Suburbia
    The hallucination justification for Meta and Google not deploying their AI needs to be balanced by the critique that it will disrupt their business models. Given the amount of disinfo on their platforms already, I doubt 'ethics' or 'brand' is a real consideration. Only profit.
  • Design Designs
    Looks like their main goal in this interview is to downplay ChatGPT's capabilities! Not convincing, because users of ChatGPT get more than they expect from the AI. That's why ChatGPT has been a groundbreaking technology.
  • John Schmitt
    Can someone explain why Yann says that ChatGPT describes the opposite of what is actually happening in the paper example?

    I found that if ChatGPT gives a wrong answer to such a question, it's always because we haven't described what is really happening.

    Which is how language works for us. We leave many things out, and things we just know, or think we know, fill in the gaps.
  • Dr. Mikey Bee
    AI research at scale is expensive. If part of your strategy is to go big, it's expensive. If part of your strategy is to make big AI available to everyone, it's expensive.