Anon Leaks NEW Details About Q* | "This is AGI"

Published 2024-03-21
A new anonymous drop has been released about Q*. Let's review!


Yann Interview - • AI Godfather's STUNNING Predictions f...
Q* Vid 1 - • Sam Altman's Q* Reveal, OpenAI Update...
Q* Vid 2 - • What Is Q*? The Leaked AGI BREAKTHROU...

All Comments (21)
  • Glad self play is finally getting the attention it deserves. I’ve been doing it for years.
  • We are so fascinated with AGI but no one has an agreed definition of AGI.
  • What's in this leak is basically the same thing that Yann LeCun said in the interview with Lex Fridman that is clipped here, about how he thinks new models will overcome the current limitations of LLMs. The section in that video is labeled 'Reasoning in AI'.
  • @markh7484
    Also @Matthew, I may be mistaken, but I think you misunderstand Yann's (badly worded) question. The question only makes sense if you take the starting point to be the place AFTER you have walked 1 km from the North Pole. The question is teasing out the mental picture that when you walk the initial 1 km, you are doing so on a curved ball, so the radius of the circle around which you then walk is actually slightly less than 1 km, and the answer is "less than 2 x pi km".
  • @swooshdutch4335
    this "leak" comes from someone using Claude 3 to summarize a part of the Lex Fridman interview with Yann LeCun...
  • It sounds like Q* is an upgrade from the greedy approach of LLMs, where they only find the highest-probability next token of the answer, to finding the highest probability of all the tokens put together. With my limited understanding of this, it sounds like they're accomplishing it by adding a second latent space. So we basically go from a normal LLM: input text -> latent space -> output text, to Q*: input text -> input latent space 1 -> latent space 2 (i.e. the EBM) -> output latent space 1 -> output text. We might finally get an LLM that can answer the age-old question of "How many tokens are there in your response?" :)
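The greedy-versus-whole-sequence distinction this comment describes can be made concrete with a toy example. The probability tables below are invented purely for illustration, and are chosen so that picking the most likely token at each step yields a less likely sequence overall:

```python
import itertools

# Toy two-step "language model" (hypothetical numbers for illustration).
p_first = {"A": 0.6, "B": 0.4}            # P(first token)
p_second = {                              # P(second token | first token)
    "A": {"x": 0.55, "y": 0.45},
    "B": {"x": 0.95, "y": 0.05},
}

def joint(t1, t2):
    """Probability of the whole two-token sequence."""
    return p_first[t1] * p_second[t1][t2]

# Greedy decoding: pick the most probable token at each step in isolation.
g1 = max(p_first, key=p_first.get)                 # "A"
g2 = max(p_second[g1], key=p_second[g1].get)       # "x"
greedy_seq, greedy_p = (g1, g2), joint(g1, g2)     # 0.6 * 0.55 = 0.33

# Global decoding: maximize the probability of the sequence as a whole.
best_seq = max(itertools.product(p_first, ["x", "y"]),
               key=lambda s: joint(*s))            # ("B", "x")
best_p = joint(*best_seq)                          # 0.4 * 0.95 = 0.38
```

Greedy commits to "A" because it wins the first step, but "B" leads to a better sequence overall; this is the gap that beam search and the search/planning ideas speculated about for Q* try to close.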
  • @ThomasHauck
    "Hopfield networks" are considered energy-based models (EBMs). Here's why:
    1. Energy function: Hopfield networks define an energy function that describes the overall state of the network. The network tends to settle into states that minimize this energy function.
    2. Equilibrium = low energy: stable patterns or memories within a Hopfield network correspond to states of low energy in the defined energy function.
    3. Learning through energy minimization: storing patterns in a Hopfield network involves adjusting the weights (connections) of the network to create energy minima that align with the desired patterns.
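The three points in that comment can be sketched in a few lines. This is a minimal, illustrative binary Hopfield network (the pattern and its size are arbitrary): one ±1 pattern is stored with the Hebbian rule, the stored pattern sits at an energy minimum, and a corrupted state settles back into it:

```python
# One stored +/-1 pattern (arbitrary, for illustration).
pattern = [1, -1, 1, -1, 1, -1]
n = len(pattern)

# Hebbian weights: w[i][j] = p_i * p_j, with no self-connections.
w = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

def energy(s):
    # E(s) = -1/2 * sum_ij w_ij * s_i * s_j; stored patterns are minima.
    return -0.5 * sum(w[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))

def recall(s, sweeps=5):
    # Repeatedly align each unit with its input field; each flip that
    # happens can only lower (never raise) the energy.
    s = list(s)
    for _ in range(sweeps):
        for i in range(n):
            field = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if field >= 0 else -1
    return s

noisy = [1, 1, 1, -1, 1, -1]   # the stored pattern with one unit flipped
```

Running `recall(noisy)` descends the energy landscape and lands back on `pattern`, which is exactly the "equilibrium = low energy" behavior described above.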
  • @ledgentai1227
    Essentially what Yann was explaining in the Lex interview, word for word, without the French accent
  • @SEPR2
    The letter Q is for quantum. If you want to understand why that matters, look at the original Star paper and look for every instance where it is doing an iterative process. A quantum computer could perform all iterations without the need for sequential computation. In other words, you would be able to determine which process is most efficient in a single computation without the need to test each in sequence.
  • @thanos879
    These models need all this compute, all this electricity. And are still no match for our brain running on junk food 😂. Really puts into perspective how special we are.
  • @rbdvs67
    Elon may be right, OpenAI has AGI already and is trying to figure out how to keep it contained.
  • I said less than 2 x pi km and I still stick with that, because an observer watching you, at the moment when you, the pole, and the observer are in a straight line, will see you move past the pole with your first step. Any amount of movement at all will cause the observer to perceive you getting ahead of the pole.
  • @ryanfranz6715
    The fact that GPT-4, in the original GPT-4 paper, was able to create code representing an SVG graphic of a unicorn (and edit it in ways showing it had an understanding of what it had created)… that's what convinced me that language is enough to form a world model. It blew my mind… I mean, it's literally a blind/deaf system that has never experienced anything except text in its entire existence, yet it understood what a unicorn *looks like*… clearly text carries the info necessary to build an approximate world model of the world that created that text. Yann LeCun is stuck on whether our human thinking about the world requires text; he would argue: "think of a unicorn… now what part of your thought process involved language? None. Therefore LLMs cannot know what a unicorn looks like." But they do apparently know what unicorns look like… and if we're being so nitpicky that we're saying "apparently knowing what a unicorn looks like isn't the same as knowing"… ok, well, let's not worry when AI is only "apparently" superintelligent. Anyways. It was very clear to me from the beginning that something like Q* would be next, very clear to me that OpenAI already has it, and that it was the reason for last Thanksgiving's drama.
  • @Zelousfear
    Thanks for the update, keep them coming!
  • @Anton_Sh.
    13:40 The "representation space" here is basically a space of language constructs with added energy evaluation, so it's still a language.
  • @Charvak-Atheist
    Yeah, a large language model is not enough for AGI. But a language model is necessary for it. Both language and visual models are required. Second, LLMs do have some internal model of the world (it's just that those models are not accurate or complete). It may seem like it's just next-token prediction, but in order to do next-token prediction you need to build some internal model.
  • @rdcline
    The answer is 3: "less than 2 x pi". ChatGPT got it right, sort of making the point that LLMs are more powerful than some believe. From the North Pole, if you walk 1 km, you're going south. When you turn 90 degrees left, you will be going east. Next, the answer depends on the nuance between walking in a straight line and walking east. If you continue east, you will go in a very small circle, 2 km in diameter, around the pole. The earth's surface within this circle is not a plane, due to the curvature of the earth, meaning you went slightly less than 2 x pi km. If, however, you add the original 1 km traveled south, then the answer is "1: more than 2 x pi".
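The commenter's geometry checks out with one line of trigonometry. Assuming a perfectly spherical Earth of radius roughly 6371 km (an approximation), walking 1 km south from the pole puts you at angular distance 1/R from it, so the circle of latitude you then walk has circumference 2·pi·R·sin(1/R), just barely less than 2·pi·1 km:

```python
import math

R = 6371.0   # mean Earth radius in km (approximate, spherical model)
d = 1.0      # distance walked south from the pole, in km

# After walking d km along the surface, your angular distance from the
# pole is d/R, so the (straight-line) radius of your circle of latitude
# is R * sin(d/R).
circumference = 2 * math.pi * R * math.sin(d / R)

flat_answer = 2 * math.pi * d   # what you'd get on a flat plane
assert circumference < flat_answer
```

The deficit is tiny (on the order of tens of nanometers, since sin(x) ≈ x − x³/6 for small x), but it is strictly "less than 2 x pi km", matching answer 3.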
  • @FireFox64000000
    You know those energy-based models are sounding an awful lot like the A* algorithm and GOAP.
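For readers unfamiliar with the analogy: A* also works by repeatedly expanding whichever candidate minimizes a single scalar score f(n) = g(n) + h(n), loosely similar to an energy-based model preferring low-energy candidates. A minimal sketch on a toy 4-connected grid (the grid layout is invented for illustration):

```python
import heapq

def astar(grid, start, goal):
    """Return the length of the shortest path on a 0/1 grid, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start)]   # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c)))
    return None

grid = [
    [0, 0, 0],
    [1, 1, 0],   # 1 = wall, forcing a detour around the right side
    [0, 0, 0],
]
```

Here `astar(grid, (0, 0), (2, 0))` must route around the wall, giving a path of length 6; the heuristic plays the role of the learned scoring function in the EBM/GOAP analogy.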
  • @n0red33m
    Thanks for all your coverage of these topics, you're really saving people a lot of man-hours keeping up to date on this stuff