Rebecca Gorman | This House Believes Artificial Intelligence Is An Existential Threat | CUS

Published 2023-10-22
Rebecca Gorman speaks as the Second Proposition speaker on the motion in the Debating Chamber on Thursday 12th October 2023.

The rapid growth in the capabilities of AI has struck fear into the hearts of many, while others herald it as mankind's greatest innovation. From autonomous weapons to cancer-curing algorithms to a malicious superintelligence, we aim to discover whether AI will be the end of us or the beginning of a new era.
............................................................................................................................

Rebecca Gorman
Rebecca Gorman is the Founder and CEO of Aligned AI, an IT consulting group working to make sure AI functions in alignment with human ethics and values. She was named in REWork's Top 100 Women Advancing AI in 2023, nominated for VentureBeat's Women in AI Award for Responsibility and Ethics in AI, and is a member of Fortune's Founders Forum.


Thumbnail Photographer: Nordin Catic

............................................................................................................................

SUBSCRIBE for more speakers:
youtube.com/@cambridgeunionsoc1815
............................................................................................................................

Connect with us on:

Facebook: www.facebook.com/TheCambridgeUnion

Instagram: www.instagram.com/cambridgeunion

Twitter: twitter.com/cambridgeunion

LinkedIn: www.linkedin.com/company/cambridge-union-society

All Comments (8)
  • @singingway
    Her points: 1. AI does not currently always do what we intend it to do; her example is that AI applications meant to add entertainment to social media instead cause deaths among teenagers. 2. AI systems have been deployed at scale for 20 years; some people have benefitted, some have been harmed. 3. Machine learning doesn't work in edge cases. 4. Example: what happens if we allow AI to decide when to fire nuclear missiles. 5. It is not built for the purposes to which it is being deployed. 6. The key is to build it such that it follows our instructions, and then give it good instructions.
  • @The7dioses
    Can someone please explain how these systems kill teenagers? Hello?
  • @AntonioVergine
    No, the point is not that AI is safe if it does exactly what we ask it to do. The point is that we do not know what AI has understood about our values and intentions. So AI, if instructed to do so, could solve world hunger, but at the cost of something else we did not expect.
    The problem is that we can't know "the reasoning" behind the AI's choices, so we can't know whether that reasoning is flawed when the AI is more intelligent than we are.
    An example? In chess, you will see AI making moves that look very bad. But you consider them bad only because you're not smart enough to see the full picture, while the AI is.
    In a similar way, we will give autonomous powers to AI, but we will not be able to be sure that the final results of what we ask for will not carry a threat to humanity.
  • @MrMick560
    Can't say she put my mind at ease.
  • @richmacinnes4173
    9 billion people on the planet, and it only takes one person to make a mistake. At best, it's guaranteed someone will use it for their own goals, within hours of its release.