Joscha Bach and Connor Leahy [HQ VERSION]

Published 2023-06-19
Support us! www.patreon.com/mlst
MLST Discord: discord.gg/aNPkGUQtc5
Twitter: twitter.com/MLStreetTalk

Sorry about the quality on the live one guys, this should be a big improvement!
Transcript and longer summary: docs.google.com/document/d/1TUJhlSVbrHf2vWoe6p7xL5…
Pod: podcasters.spotify.com/pod/show/machinelearningstr…

Dr. Joscha Bach argued that general intelligence emerges from civilization, not individuals. Given our biological constraints, humans cannot achieve a high level of general intelligence on our own. Bach believes AGI may become integrated into all parts of the world, including human minds and bodies. He thinks a future where humans and AGI harmoniously coexist is possible if we develop a shared purpose and incentive to align. However, Bach is uncertain about how AI progress will unfold or which scenarios are most likely.

Bach argued that global control and regulation of AI is unrealistic. While regulation may address some concerns, it cannot stop continued progress in AI. He believes individuals determine their own values, so "human values" cannot be formally specified and aligned across humanity. For Bach, the possibility of building beneficial AGI is exciting but much work is still needed to ensure a positive outcome.

Connor Leahy believes we have more control over the future than the default outcome might suggest. With sufficient time and effort, humanity could develop the technology and coordination to build a beneficial AGI. However, the default outcome likely leads to an undesirable scenario if we do not actively work to build a better future. Leahy thinks finding values and priorities most humans endorse could help align AI, even if individuals disagree on some values.

Leahy argued a future where humans and AGI harmoniously coexist is ideal but will require substantial work to achieve. While regulation faces challenges, it remains worth exploring. Leahy believes limits to progress in AI exist but we are unlikely to reach them before humanity is at risk. He worries even modestly superhuman intelligence could disrupt the status quo if misaligned with human values and priorities.

Overall, Bach and Leahy expressed optimism about the possibility of building beneficial AGI but believe we must address risks and challenges proactively. They agreed substantial uncertainty remains around how AI will progress and what scenarios are most plausible. But developing a shared purpose between humans and AI, improving coordination and control, and finding human values to help guide progress could all improve the odds of a beneficial outcome. With openness to new ideas and willingness to consider multiple perspectives, continued discussions like this one could help ensure the future of AI is one that benefits and inspires humanity.

TOC:
00:00:00 - Introduction and Background
00:02:54 - Different Perspectives on AGI
00:13:59 - The Importance of AGI
00:23:24 - Existential Risks and the Future of Humanity
00:36:21 - Coherence and Coordination in Society
00:40:53 - Possibilities and Future of AGI
00:44:08 - Coherence and Alignment
01:08:32 - The Role of Values in AI Alignment
01:18:33 - The Future of AGI and Merging with AI
01:22:14 - The Limits of AI Alignment
01:23:06 - The Scalability of Intelligence
01:26:15 - Closing Statements and Future Prospects

All Comments (21)
  • Top quotations:
    [00:05:42] Joscha Bach, "I expect that AGI is going to happen. And it's very likely going to happen in our lifetimes, and quite possibly very soon."
    [00:06:08] Joscha Bach, "And to me, this is one of the most exciting developments in the history of philosophy and in the history of science."
    [00:14:16] Connor Leahy, "I don't want my mom to die. I don't want to die. I don't want Tim to die. I don't want Joscha to die."
    [00:16:42] Connor Leahy, "I don't care about, you know, the unfolding of thermodynamic and efficient processes. I care about my friends and my family and us having a great time and having fun and, like, doing cool things."
    [00:20:03] Connor Leahy, "I think AGI is the latest step in this chain. Like, a lot of these arguments about AGI could have been, and were, I think, applied to stuff like nuclear war."
    [00:37:45] Connor Leahy, "By default, humanity ends in ruin. This is the default outcome. This is the default outcome for any intelligent species that can't coordinate, that can't work together, that can't become coherent and can't coherently maximize their values, whatever those values might be."
    [00:56:16] Connor Leahy, "If your model is that people do not have control, that it is not possible to steer away from the global minimum, then yes, we are fucked and you should go spend time with your family until it's over."
    [00:09:10] Joscha Bach, "I feel that there is a number of groups that form opinions, and they often form these opinions based on the group dynamics and the incentives."
    [00:10:16] Joscha Bach, "If the AI says bad things about other people or about the world, if the AI, for instance, says things that are racist or sexist, then this is going to have an extremely bad influence on society."
    [00:24:24] Joscha Bach, "Our civilization is not very coherent. Right? As a civilization, we are pretty much like an irresponsible child that is explorative and playful, but it does not have a species-level regard for, like, the duty to our own survival or to life on earth."
    [00:38:23] Connor Leahy, "I think most people, I feel like, really underestimate how much of intelligence is not in the brain. It's in social networks. It's in the environment. It's in tools. It's in, you know, culture, memetics, etcetera. Like, so much of what we consider human is not the brain."
    [00:59:08] Connor Leahy, "If the default outcome is you lose, the default outcome is entropy wins, some random AGI with some random values that does not care about cosmopolitan life on earth, you know, wins. Or maybe it's a bunch of them, and then they all, you know, coordinate, because they can actually coordinate, because they're actually coherent, because they are actually superintelligent. So they can coordinate against humanity. And then that's just it, and it's just game over forever."
    [01:14:42] Joscha Bach, "That's a philosophically extremely deep and important question, and I'm afraid we cannot do it justice. But very briefly, I think that free will is the representation of an agent that makes a decision for the first time under conditions of uncertainty."
    [01:26:21] Joscha Bach, "I think that we are at an absolutely fascinating point in history, and I'm very grateful for having been born at this point, so I can experience this, which to me is one of the most fascinating things that humanity can experience during its run."
    [01:29:01] Connor Leahy, "I think we can get to the outcomes that Joscha likes because those are outcomes I like too. You know, like, living side by side with a beautiful, nice AGI. Wow. That would be awesome. Unfortunately, I just think you don't get there by default. This is not the default trajectory. This is a very narrow trajectory for the universe to go down."
  • @jamespercy8506
    That's a fascinating assertion Joscha, that corporations and institutions are actually more than mere, arbitrary 'constructs', that they actually embody something analogous to a physical organism but working off of a substrate that could be considered derivative of 'real' organisms, that artificial intelligence is a natural, possibly inevitable outgrowth of the whole process. That idea all by itself warrants a deep dive. It has the potential to at least challenge some of the more corrosive and puerile assertions of post-modernism, that our historical cognitive grammar is somehow ultimately arbitrary rather than deeply rooted in affordance-yielding, dynamic, evolving, integrated ecologies. Thank you for that.
  • @joshuasmiley2833
    Joscha Bach is such an amazing person, intellect, and soul. It is so easy to hear his point of view because he speaks with such respect for life and people, and without the selfishness and disrespect some other people have when they are trying to sway minds. It's amazing how, when somebody speaks with empathy and respect, the ears are open, and when the opposite happens, the mind and ears just close.
  • @MattMacPherson
    Framing these as discussions rather than debates would be better.
  • @erickmagana353
    I happen to agree with both at different points, disagree with both at different points, and like both styles of rhetoric and tone. To me this was a very intellectually productive and stimulating conversation.
  • @sethhavens1574
    fascinating conversation, seen a few vids with these guys who are both always enjoyable and impressive but what a perfect combo! more please 🙏
  • @hannes7218
    would love to see/hear Schmidhuber on your podcast
  • @Darhan62
    Great conversation. And yeah, there are parts where it's a debate, but they seem to agree on quite a lot.
  • @truthlivingetc88
    it's very good that we can watch the facial reactions happening in real time; there are many important clues in them [excellent chat, thanks]
  • @NoMoWarplz
    TY to the host. Great to have such diff perspectives. Taking me 2 hours to get thru the first 15 mins. Epic. Joscha Bach: The AI is a mirror. What you see in it is ... Connor: We fight for the outcome we want... Extraordinary how the battle lines are being sized up... if you didn't know before, you will now. We are at the last test of Humanity 1.0 ... we know we got to H 2.0 b/c ... Added: Yup: Life is asking us to get our truths aligned ourselves... this epic advent of AI is forcing discussions of everything, as in: Oh, you are going to code that??? What definition of let's go b are we using? ... seriously... we are going to have to have lots of discussions, and don't make assumptions!!! < I think this is what Connor is saying. "In the context of AGI alignment, epistemic autonomy is important because it means that we can have a say in the values that are embedded in these systems. We cannot simply trust that these systems will be aligned with our values by default."
  • @kirktown2046
    Man Connor was so annoying and standoffish here, what a missed opportunity to talk about something interesting in detail instead of brushing it aside, assuming how the conversation would go, and not just taking the time to dig in. If you don't have time to talk about this, wtf are you doing here? Have enough respect for yourself and Bach to really take your time to lay out exactly what you think here. This IS the place to have all these conversations instead of hand waving your own perspective.
  • @RilkeForum
    @Joscha Bach: at 42:47 you assume that our reality is an attractor; do you have an argument for that? Quantum mechanically, all possible worlds per se seem equally possible to me, and I don't see where the idea of an attractor solution would come from. This is not a criticism, just curiosity.