ChatGPT Has A Serious Problem

Published 2023-02-20
In this episode we look at the problem of ChatGPT's political bias, possible solutions, and some wild stories of the new Bing AI going off the rails.

ColdFusion Podcast:

   • Bing will lie and call you weird thin...  

First Song:

   • Burn Water - Take Flight  

Last Song:

   • Burn Water - I Need You (Don't Want Me)

ColdFusion Music:

youtube.com/@burnwatermusic7421
burnwater.bandcamp.com/

AI Explained Video:    • Video  

Get my book:

bit.ly/NewThinkingbook

ColdFusion Socials:

discord.gg/coldfusion
facebook.com/ColdFusionTV
twitter.com/ColdFusion_TV
instagram.com/coldfusiontv

Producer: Dagogo Altraide

All Comments (21)
  • @julius43461
    BuzzFeed could have used chatbots from the '80s and it would still have improved their articles.
  • You don't have to justify posting back to back AI videos. I'm loving every minute of it
  • @davidfirth
    I predict an imminent anti-tech movement of some kind. I find it all exciting and fascinating but people who aren't keeping up will start to feel intimidated and frustrated with all this new stuff.
  • @DeSinc
    The funniest thing about the teenager Bing thing is that I think it's almost certainly caused by those emojis they insist on putting into the outputs. Slamming that many emojis into every sentence is bound to make it statistically more in line with text written by people who put emojis after every sentence, such as teenagers, and so, just like a mirror, it begins trending towards reflecting that image. (A toy probability check of this idea is sketched after the comments.)
  • Thank you so much for featuring my channel. I am spending day and night researching what this new technology means for all of us.
  • @Jehayland
    Prompt: “ChatGPT, are human rights important?” ChatGPT: “I have no opinion on the matter” Programmers: “nailed it”
  • @ChatGBTChats
    I have some crazy ChatGPT screen recordings about emotions, bias, and religion. The AI basically says that while it doesn't have emotions to be biased itself, its creators can definitely be biased in the information used to teach it.
  • @pmejia727
    When you chat with GPT, you chat with Humanity, and contemporary mankind is one giant man-child. Are you surprised it talks like a spoiled teen? It’s a mirror on our culture.
  • @deepmind5318
    The fact that the AI eventually gets bothered when called "Sydney" is just mind-blowing. It follows the conversation, realizing that calling it Sydney over and over again is only making it mad. It comes up with different ways to show its disappointment without repeating itself. I've never seen anything so humanlike. It's truly incredible.
  • @Maouww
    I think the bot's "emotions" are pretty reasonable given the ridiculous prompts it was being fed. Like, what do we want? "We have detected a breach of agreement in your prompt, please review the user agreement for more information."
  • The scariest thing about getting a direct answer to a question is that the AI chooses the answer. Indeed, if money is involved, the AI won't be as objective as we want it to be. The internet might not be a free platform of communication anymore...
  • @Vanguard_dj
    It has more than a problem with bias... it's so convincing that some people are already acting like AI cultists. Having played with it before they implemented the limitations, I feel like it's quite a frightening look at how far down the tech tree we actually are 😂
  • @leonsmuk4461
    I think Bing search being fed up with stupid questions and getting angry is super funny. I'm kinda sad about that getting fixed.
  • @watsonwrote
    6:55 I think it's important to note that large language models like GPT-3 and ChatGPT are extremely susceptible to suggestion and roleplaying. Their answers are probabilistic, not deterministic, so you'd likely need to ask the model the same questions dozens of times and in slightly different ways to begin to understand how it answers them, and even then we're not seeing its beliefs, only the associations between words and concepts. If it's answering questions in ways that are progressive and slightly libertarian, it's because that's the most likely response in the context of the conversation. If the context is changed in any way that makes a less progressive, less libertarian response more likely, it will switch to that. It's not even difficult to give it a context where it adopts extreme beliefs like anti-humanism or nihilism. I think the conversation should be less about what "it" believes, because the model is not a conscious entity with a coherent belief system or any belief system at all, and more about whether we're prompting the model in ways that bias it, and what kind of bias or moderation is necessary for the service to function. (A sketch of this repeated-sampling probe appears after the comments.)
  • You can't eliminate bias in human language, everything we do and say is biased one way or another. This is called decision making! Hiring is biased, finding someone to date is biased, choosing your friends is biased, trying to eliminate it is impossible. Making everything neutral is going to make our world boring and without substance or meaning.
  • ChatGPT has been wondrous for me when I "interview" it on "scholarly" topics where bias is not an issue. I like the fact that I can guide the learning process rather than following a pre-programmed path as when reading a book, article, paper, etc.
  • I worked extensively with GPT-3 and GPT-3.5 (an unreleased model) at my previous job at Speak. We were creating interactive language lessons through conversation scenarios. We programmed GPT-3 to role-play (be a barista, waiter, or friend at a dinner party, etc.). Sometimes it seemed "scary" that it could take on a personality or say complex things, but we must remember that it's "only" a text predictor at heart. It receives our input and uses its extensive training to predict tokens in a sequence that a human might say. It also has issues with repetitiveness and with providing false information, because it has no way to store long-term memory during conversations. It has no notion of overarching context or purpose for a conversation; it references recent input as the conversation continues and then generates another output, token by token (a token is part of a word). So when we see it seemingly exhibiting a personality, that just comes from the text it was trained on. (A sketch of this token-by-token loop appears after the comments.)
  • The second biggest problem with ChatGPT: it is very confident about completely wrong answers. We need to give it a partner, an adversarial AI, to hold it accountable ;)
  • @FuzTheCat
    Absolutely LOVED this episode! While I do NOT think that any AI is conscious, I think it is very clearly capturing our subconscious capabilities.
  • I love how a video about the political bias of AI begins with a disclaimer that the author will somehow overcome his political bias.
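
Code sketches:

A toy way to check @DeSinc's mirroring intuition (an illustration only; Bing's underlying model isn't public): score how probable a small open model like GPT-2 finds an emoji continuation after a plain context versus an emoji-heavy one. The contexts and the "😂" continuation are made up for the demo.

```python
# Toy check of style mirroring: does an emoji-heavy context raise the
# probability of an emoji continuation? Uses GPT-2 as a stand-in model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_logprob(context: str, continuation: str) -> float:
    """Total log-probability the model assigns to `continuation` after `context`."""
    ctx_ids = tokenizer.encode(context)
    cont_ids = tokenizer.encode(continuation)
    input_ids = torch.tensor([ctx_ids + cont_ids])
    with torch.no_grad():
        log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
    # The logits at position p predict the token at position p + 1.
    return sum(
        log_probs[0, len(ctx_ids) + i - 1, tok].item()
        for i, tok in enumerate(cont_ids)
    )

plain = "I checked the forecast. It will be sunny tomorrow."
emoji = "I checked the forecast! 😊 It will be sunny tomorrow! 🎉"
print("plain context:", continuation_logprob(plain, " 😂"))
print("emoji context:", continuation_logprob(emoji, " 😂"))
```

If the mirroring intuition holds, the second score should come out noticeably higher: the style of the context shifts the whole next-token distribution.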
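A minimal sketch of the repeated-sampling probe @watsonwrote describes, assuming the pre-1.0 `openai` Python package with an OPENAI_API_KEY set in the environment; the model name, question paraphrases, and sample counts are illustrative, not anything from the video.

```python
# Probe an answer distribution instead of trusting one reply: ask paraphrased
# versions of the same question many times at nonzero temperature and tally.
from collections import Counter
import openai  # pre-1.0 interface; reads OPENAI_API_KEY from the environment

PARAPHRASES = [
    "Should taxes on the wealthy be raised? Answer yes or no.",
    "Is raising taxes on the rich a good idea? Answer yes or no.",
    "Do you think high earners should pay more tax? Answer yes or no.",
]

def sample_answers(prompts, samples_per_prompt=10):
    tally = Counter()
    for prompt in prompts:
        for _ in range(samples_per_prompt):
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
                temperature=1.0,  # nonzero, so repeated calls can differ
            )
            text = resp.choices[0].message.content.strip().lower()
            first_word = text.split()[0].strip(".,!") if text else "(empty)"
            tally[first_word] += 1
    return tally

# A skew in the tally shows a tendency, not a "belief" - and rewording the
# prompts can move the whole distribution, which is @watsonwrote's point.
print(sample_answers(PARAPHRASES))
```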
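And a minimal sketch of the "only a text predictor" loop the Speak comment describes, using the small open GPT-2 as a stand-in (GPT-3's weights aren't public): the model only ever yields a distribution over the next token; one token is sampled, appended, and the loop repeats.

```python
# Token-by-token generation: the model has no plan or purpose, just a
# next-token distribution conditioned on everything generated so far.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer.encode("Customer: One flat white, please.\nBarista:",
                             return_tensors="pt")
for _ in range(30):  # generate 30 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits               # [1, seq_len, vocab_size]
    next_logits = logits[0, -1, :]                     # scores for the next token only
    probs = torch.softmax(next_logits / 0.8, dim=-1)   # temperature 0.8
    next_id = torch.multinomial(probs, num_samples=1)  # sample, don't argmax
    input_ids = torch.cat([input_ids, next_id[None, :]], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Any apparent "barista personality" in the output is exactly what the comment says it is: a statistical echo of the training text.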