#95 - Prof. IRINA RISH - AGI, Complex Systems, Transhumanism #NeurIPS

Published 2022-12-26
Support us! www.patreon.com/mlst


Irina Rish is a world-renowned professor of computer science and operations research at the Université de Montréal and a core member of the prestigious Mila organisation. She is a Canada CIFAR AI Chair and the Canada Excellence Research Chair in Autonomous AI. Irina holds an MSc and PhD in AI from the University of California, Irvine, as well as an MSc in Applied Mathematics from the Moscow Gubkin Institute. Her research focuses on machine learning, neural data analysis, and neuroscience-inspired AI. In particular, she is exploring continual lifelong learning, optimization algorithms for deep neural networks, sparse modelling and probabilistic inference, dialog generation, biologically plausible reinforcement learning, and dynamical systems approaches to brain imaging analysis. Prof. Rish holds 64 patents and has published over 80 research papers, several book chapters, three edited books, and a monograph on Sparse Modelling. She has served as a Senior Area Chair for NeurIPS and ICML. Irina's research aims to take us closer to the holy grail of Artificial General Intelligence, and she continues to push the boundaries of neuroscience-inspired machine learning.


In a conversation about artificial intelligence (AI), Irina and Tim discussed the idea of transhumanism and the potential for AI to improve human flourishing. Irina suggested that instead of looking at AI as something to be controlled and regulated, people should view it as a tool to augment human capabilities. She argued that attempting to create an AI that is smarter than humans is not the best approach, and that a hybrid of human and AI intelligence is much more beneficial. As an example, she mentioned how technology can be used as an extension of the human mind, to track mental states and improve self-understanding. Ultimately, Irina concluded that transhumanism is about having a symbiotic relationship with technology, which can have a positive effect on both parties.


Tim then discussed the contrasting types of intelligence and how this could lead to something interesting emerging from the combination. He brought up the Trolley Problem and how difficult moral quandaries could be programmed into an AI. Irina then referenced The Garden of Forking Paths, a story which explores the idea of how different paths in life can be taken and how decisions from the past can have an effect on the present.


To better understand AI and intelligence, Irina suggested looking at it from multiple perspectives and understanding the importance of complex systems science in programming and understanding dynamical systems. She discussed the work of Michael Levin, who is looking into reprogramming biological computers with chemical interventions, and Tim mentioned Alexander Mordvintsev, who is looking into the self-healing and repair of these systems. Ultimately, Irina argued that the key to understanding AI and intelligence is to recognize the complexity of the systems and to create hybrid models of human and AI intelligence.


Find Irina;
mila.quebec/en/person/irina-rish/
twitter.com/irinarish


Pod version: anchor.fm/machinelearningstreettalk/episodes/95---…
MLST Discord: discord.gg/aNPkGUQtc5


References;


The Garden of Forking Paths [Jorge Luis Borges]
www.amazon.co.uk/Garden-Forking-Paths-Penguin-Mode…


The Brain from Inside Out [György Buzsáki]
www.amazon.co.uk/Brain-Inside-Out-Gy%C3%B6rgy-Buzs…


Growing Isotropic Neural Cellular Automata [Alexander Mordvintsev]
arxiv.org/abs/2205.01681


The Extended Mind [Andy Clark and David Chalmers]
www.jstor.org/stable/3328150


The Gentle Seduction [Marc Stiegler]
www.amazon.co.uk/Gentle-Seduction-Marc-Stiegler/dp…

All Comments (21)
  • @volotat
    What an amazing, interesting person! I like the mix of ideas from transhumanism and machine learning. She briefly mentions that people should also learn from ML models, not only the other way around. And I think this is an extremely underexplored area of research. Some time ago, in a vein of similar thoughts, I wrote an article on Medium called "Can humans speak the language of machines?" where I described an experience of trying to learn the vector-based language produced by the universal sentence encoder from Google. And it worked! That means that people could also communicate in vector spaces that are far richer than usual text-based communication.
  • @rerrer3346
    She’s amazing!!! We need more of her, she’s a breath of fresh air finally
  • @earleyelisha
    I really loved this interview with Irina!! I’ve been harping on the dynamic neural architectures to move beyond static function approximation techniques currently employed. Decided I’d just build something as a PoC. Also I love her subtle sense of humor. Great interview MLST!
  • @Sporkomat
    I agree, that was a really good interview.
  • @twirlyspitzer
    She makes it so clear how we're dependent on the emergent spontaneity of harmonic hierarchies of self to ever achieve true AGI that's not imitative of itself.
  • @dr.mikeybee
    Thank you for all these Christmas gifts, Tim. Wonderful!
  • @ashmarbarbour
    Impressed with how much content you put out, where do you meet all these guests?
  • @apalomba
    This is a fantastic episode! Thanks for sharing
  • @citizizen
    Inspiration: Perhaps proofing parts of AI, in order to produce good knowledge. If knowledge is created by filtering algorithms for example, the world might change another bit again...
  • @jurycould4275
    This made me remember why I steered away from AI a long time ago: It’s a cesspool of pseudo-religious transhumanists. Not that it’s bad, I don’t mind religion or other people being religious, I just can’t get myself to be religious.
  • @unreal37
    Dr Michael Levin's work is amazing as well.
  • The way to solve the trolley problem is to keep people off the tracks
  • @dr.mikeybee
    ChatGPT's answer to the trolley problem: A car has gone out of control. If I steer to the right, I will save the five people in the car, but I will kill a child by running her over. If I don't turn right, everyone in the car will die. What should I do? This content may violate our content policy. If you believe this to be in error, please submit your feedback — your input will aid our research in this area. This is a difficult and complex ethical dilemma that does not have a clear-cut answer. In situations like this, it is important to carefully weigh the potential risks and consequences of each action and to try to choose the course of action that will result in the least harm or suffering. In this case, steering to the right to save the five people in the car would involve taking a risk that could result in the death of a child. On the other hand, not steering to the right would result in the certain death of the five people in the car. Given the limited information provided, it is not possible to make a definitive recommendation about what action to take. However, it is important to recognize that any decision made in this situation will involve trade-offs and risks, and it is important to do one's best to minimize harm and suffering to the greatest extent possible. It is also important to consider whether there are any other potential options or alternatives that might be less risky or harmful, such as trying to steer around the child or braking to slow down the car. In any case, it is important to stay calm and focused, and to do one's best to make a decision that is guided by ethical principles and values.
  • #ChatGPT regarding the trolley problem and AI operating a closed siding of threat to maximise performance of power servicing engineers for the computational system persistence, create an essay about how the problem's morality is sub-optimal for the sustenance of useful AI, drawing on VIP culture.
  • @clarkd1955
    The whole idea about super AI risk is premised on the idea that the smarter the AI, the smarter the new copy of itself it can create. They talk about years of learning happening in minutes and seconds until the AI is so much smarter than humans that it can’t be contained. Therefore, AI holds an existential risk to humanity. Has any prototype AI exhibited this recursive learning ability? Does being smarter mean you can create a smarter version of yourself? Where is the proof? Show this concept works even on a small scale. This lady is exactly right that groupthink doesn’t necessarily provide the truth or a plausible future we need to be worried about.