#58 Dr. Ben Goertzel - Artificial General Intelligence

Published 2021-08-11
Patreon: www.patreon.com/mlst
Discord: discord.gg/ESrGqhf5CB

The field of Artificial Intelligence was founded in the mid 1950s with the aim of constructing “thinking machines” - that is to say, computer systems with human-like general intelligence. Think of humanoid robots that not only look but act and think with intelligence equal to and ultimately greater than that of human beings. But in the intervening years, the field has drifted far from its ambitious old-fashioned roots.

Dr. Ben Goertzel is an artificial intelligence researcher and the CEO and founder of SingularityNET, a project combining artificial intelligence and blockchain to democratize access to AI. Ben seeks to fulfil the original ambitions of the field. He graduated with a PhD in Mathematics from Temple University in 1990. His approach to AGI over many decades has been inspired by many disciplines, in particular human cognitive psychology and computer science. To date his work has been mostly theoretically driven. Ben thinks that most deep learning approaches to AGI today try to model the brain: they may bear a loose analogy to human neuroscience, but they have not tried to derive the details of an AGI architecture from an overall conception of what a mind is. Ben thinks that what matters for creating human-level (or greater) intelligence is having the right information-processing architecture, not the underlying mechanics via which the architecture is implemented.

Ben thinks that there is a certain set of key cognitive processes and interactions that AGI systems must implement explicitly, such as working and long-term memory, deliberative and reactive processing, and perception. Biological systems tend to be messy, complex, and integrative; searching for a single “algorithm of general intelligence” is an inappropriate attempt to project the aesthetics of physics or theoretical computer science into a qualitatively different domain.

Panel: Dr. Tim Scarfe, Dr. Yannic Kilcher, Dr. Keith Duggar

Pod version: anchor.fm/machinelearningstreettalk/episodes/58-Dr…

Artificial General Intelligence: Concept, State of the Art, and Future Prospects
sciendo.com/abstract/journals/jagi/5/1/article-p1.…

The General Theory of General Intelligence: A Pragmatic Patternist Perspective
arxiv.org/abs/2103.15100

[00:00:00] Lex Skit
[00:03:00] Intro to Ben
[00:10:42] Concept paper
[00:20:50] Minsky
[00:21:42] OpenCog
[00:25:50] SingularityNET
[00:27:19] Patternist Paper
[00:30:13] Short Intro
[00:35:43] Cognitive Synergy
[00:41:29] Hypergraphs vs vectors: focus operations and algebra, not representations
[00:47:46] Does brain structure form a hypergraph?
[00:51:21] What's missing from neural networks today?
[00:56:52] Sensory knowledge, bottom-up and top-down reasoning
[01:02:02] If the brain is a continuous computer, then why graphs?
[01:08:54] Forgetting is as important as learning
[01:11:55] Should we resurrect analog computing?
[01:18:18] AIXI - limitations
[01:25:20] AIXI - the reductio ad absurdum of reinforcement learning
[01:27:56] Defining intelligence
[01:33:34] Pure Intelligence
[01:40:08] SingularityNET - a decentralized path to practical AGI
[01:47:18] SingularityNET - can we automate API discovery and understanding?
[01:53:36] Wrap up
[01:56:36] A true polymath
[01:59:58] SingularityNET and the API problem
[02:04:45] Dynamic AGI vs reliable engineering
[02:10:42] Can intelligence emerge in SingularityNET?
[02:19:10] How is AIXI a useful mental exercise?


opencog.org/
singularitynet.io/
Yannic's video on SingularityNET: "SingularityNET - A Decentralized, Ope..."

Music credit: soundcloud.com/vskymusic

All Comments (21)
  • @lexfridman
    Yannic doing an impression of me was the only unfinished item left on my bucket list. I can now die a happy man. Thank you gentlemen. I'm a big fan, keep up the great work!
  • @KenSilverman1
    I enjoyed building the core engine of the first version of OpenCog, back when our representation of knowledge was already implemented as a hypergraph (considering, for example, that a single node linking to multiple other nodes is a set, and each set might have flavors and subsets depending on the flavor of edge creating the set: symmetric/asymmetric/hierarchical etc.). What Ben and I had managed to construct was a highly non-linear model of mind where a far more elaborate and flexible set of structures could emerge (than from a simple layered neural network) from what we called "activation spreading", where the outcome was a non-linear structure called a "halo": our meta-structure representation of a 'thought' or focused region of data, represented by weighted links and nodes, which could also be viewed as a set within a hypergraph model. The age of AGI had already begun without the buzz acronym yet made commonplace. This was a highly distributed, asynchronous, semantic network meant to handle multiple-domain, asynchronous, algorithmic 'thought' processes and therefore to be a general AI solution. It is, in my view, important that the core architecture of OpenCog and a portion of this video (at least conceptually) is historically noted as exactly what we built in 1997-2001, when I took 40 lines of array code from Ben and we picked up where we left off when we were 15 years old and first started talking about it. Ben has remained steadfastly dedicated to building out this architecture, and now with vast improvements in speed/memory and narrow AI components (trained nets) for visual and other low-level processes to integrate with, which we did not have before, it is time to bear the fruit!
  • @lenyabloko
    Your channel is the definition of a "meeting of minds". It is a modern version of Socratic dialogue. I always had to simulate each of you in my head and drove myself crazy doing it. Now I just watch and relax because you have it covered. Thank you and please keep it going.
  • @AICoffeeBreak
    Yeas! I'm so happy for the new episode, since I didn't know what to do with the 2 spare hours I do not have. Kidding, the show is so cool, I am making the time. This show deserves it. 🤘
  • @citiblocsMaster
    Ben's room is exactly what I would imagine an AI researcher's room to look like
  • @nauman.mustafa
    A lot of researchers in the field of AGI can't distinguish between AGI and consciousness. It is good to see him distinguish the two.
  • @DavenH
    I am ever thankful for this podcast.
  • @omkarchandra
    Now you guys have dropped a big one! Awesome! One of the most anticipated guests.
  • @ulf1
    I found him immediately sympathetic when he mentioned nonlinear dynamic systems
  • @qasimwani4889
    i gotta ask, what tool do u use to edit your videos Tim? This is God-level production quality!!!
  • @dr.mikeybee
    Please pardon all my comments. Your shows are just so exciting they bring out aspie behavior.
  • @arvisz1871
    Excellent discussion at the end 👍 it definitely adds value to the conversation. Very good!
  • @stretch8390
    That intro was an epic overview and I have many questions from that alone. Have a subscription!
  • One of the most brilliant thinkers on artificial intelligence
  • MORE LEX FRIDMAN IMPRESSIONS PLEASE! 😂
    I watched that part 10 times already 😁
  • @halneufmille
    1:20 Glasses so reflective I can almost read the text for myself in the teleprompter.
  • @Georgesbarsukov
    Lex Fridman and Tim Scarfe make me want to get a PhD. I thought a master's from Berkeley would get me deep into AI but the more of these podcasts I watch the more I see myself only on the surface. Ironically, I'll probably feel the same after a PhD.
  • @citizizen
    I always like to think about how our brains did/do it. We can learn from that.