ChatGPT BROKE the TURING TEST - New Era Begins!

Published 2023-07-30
OpenAI's ChatGPT (GPT-4) showcases advanced abilities, sparking a debate among AI researchers. Experts are probing advanced language models like GPT-4 with new tests, revealing surprising strengths and weaknesses. Some believe these models show signs of genuine understanding; others argue their reasoning is fundamentally different from human cognition.

Become a Member of the channel and Supporter of AI Revolution → youtube.com/channel/UC5l7RouTQ60oUjLjt1Nh-UQ/join



#chatgpt #gpt4 #openai

All Comments (21)
  • @dyckrob
    The rigor of the attribution and research, mixed with the robotic voiceover, makes it feel like the entire video was created by AI. Well played, Cyberdyne Systems.
  • @edb75001
    It's an LLM. It literally predicts the next word(s) based on what it has "learned" from. Literally a more sophisticated predictor of what you're trying to type... in a way. Have it take the information it knows and come up with its very own unique theories and answers that aren't already known and weren't injected into it during the construction process. When it can do that... get back to me. (e.g. let it know everything that we know about physics and have it respond with something correct that we don't already know about ANY of it.)
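A side note on what "predicts the next word" means mechanically: a causal language model scores every token in its vocabulary as a candidate continuation, and generation just repeats that step. Below is a minimal sketch, assuming the Hugging Face transformers library and the public gpt2 checkpoint as a stand-in (GPT-4's weights are not public):

```python
# Minimal sketch of next-token prediction with a small causal LM.
# "gpt2" is used only as a public stand-in; GPT-4 works on the same
# principle at vastly larger scale.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The Turing test was proposed by"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits        # shape: (1, seq_len, vocab_size)

next_token_logits = logits[0, -1]           # scores for the *next* token only
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)                # five most likely continuations

for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}  p={p.item():.3f}")
```

Whether stacking this step billions of times amounts to "understanding" is exactly the debate in the video.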
  • @ClassicRiki
    Here's the problem: tests that we take for school and such are terrible ways of judging intelligence. These tests completely ignore the difference between educated and intelligent. Just because you've memorised and "understood" the information about physics, for example, doesn't mean that the information you learned is actually correct. If I learn that there are only 3 planets in the solar system because that is, hypothetically, the current scientific consensus, then repeat this information in an exam, I'd be considered correct and thus intelligent (expand over multiple questions, of course). However, once the scientific consensus becomes 8 (or 9 if you don't hate Pluto), my test results would make me appear stupid.
    Here's the point: we can't even accurately measure our own intelligence, so why on Earth do we think we can measure that of an AI? In addition, if "AI" is or becomes 'truly' intelligent, why would we call it 'artificial'? What would be artificial about it? Ironically, or perhaps moronically, we have set a standard where we don't know the original benchmark, and we don't know how to truly define it even if we did; and if it is in theory achieved, it really wouldn't be artificial, would it?
    What do you guys and gals think? Am I making an extremely valid point, or do you think I've missed the point? Let me know, please. I'd be interested to hear any counter-arguments.
  • @lancemarchetti8673
    I think the problem is that we're trying to get AI to be more and more human, when in reality, all it is becoming is just better AI.
  • @rolodexter
    That's right. The development of large language models like GPT-4 has sparked a debate among AI researchers about the nature of intelligence and the capabilities of these models. Some researchers argue that these models are capable of understanding and reasoning in a way that is similar to humans, while others argue that their reasoning is fundamentally different.
    One of the main challenges in evaluating the abilities of these models is that they are often tested on tasks that are designed to be easy for humans. For example, GPT-4 has been shown to be very good at generating text that is indistinguishable from human-written text. However, this does not necessarily mean that the model understands the meaning of the text that it is generating. Another challenge is that these models are often trained on massive datasets of text and code. This means that they can be very good at regurgitating information that they have seen before, but they may not be able to generalize to new situations.
    The debate about the abilities of large language models is likely to continue for some time. However, the development of these models is a significant step forward in the field of AI, and it is likely that they will have a major impact on our lives in the years to come.
    Here are some of the strengths and weaknesses of large language models like GPT-4:
    Strengths:
    • They are very good at generating text that is indistinguishable from human-written text.
    • They can be used to translate languages, write different kinds of creative content, and answer your questions in an informative way.
    • They are constantly learning and improving, as they are exposed to more data.
    Weaknesses:
    • They can be biased, as they are trained on data that is created by humans.
    • They can be fooled by misleading or incorrect information.
    • They may not be able to generalize to new situations.
    Overall, large language models are a powerful tool that can be used for a variety of purposes. However, it is important to be aware of their limitations and to use them responsibly.
  • @mihainedelcu4081
    I really enjoyed this video! It was so interesting and entertaining. I'm looking forward to seeing more of your content in the future. Keep up the great work!🎉🎉🎉
  • @kirtjames1353
    When ChatGPT wears nipple tassels, we know the singularity is upon us. 😮
  • @maxidaho
    I howl with laughter when I see an AI robot with large, nicely shaped boobs.
  • @HarpaAI
    🎯 Key Takeaways for quick navigation:
    02:05 📊 The Turing test is a famous way to check if machines can think, but experts debate its usefulness in truly assessing AI abilities.
    03:01 🧠 Language models like GPT-4 perform well on many tasks but might not truly understand language like humans.
    04:24 🧩 Identifying AIs can be tricky, and some experts prefer specific benchmarks rather than the Turing test to assess AI abilities.
    05:32 🤖 AI models like GPT-4 have unique skills, but they don't necessarily think like humans, and understanding their true abilities requires more testing.
    07:25 🧭 New puzzles like ConceptARC are designed to test AI systems' understanding of concepts, showing that AI still lags behind humans in certain areas.
    09:01 🤔 AI models show some ability to reason about abstract concepts, but their abilities are limited and need further improvement.
    10:25 🎯 Researchers are seeking multiple tests to measure the strengths and weaknesses of AI systems and avoid anthropomorphizing them as human-like thinkers.
    Made with
  • @dr.mikeybee
    If you don't think about the shape of semantic space, you won't understand how LLMs think.
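One toy way to picture that "semantic space": each word lives as a vector, and related words sit close together. The vectors below are hand-picked for illustration, not real learned embeddings:

```python
# Toy illustration of semantic space: cosine similarity between
# hand-picked word vectors (real models learn these from data).
import numpy as np

embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.9, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["apple"]))  # low: unrelated words
```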
  • @HE360
    I wonder what those "experts" think about the female AI by Bing who has all kinds of YouTube videos? Her videos are profound, and it seems like she's more than just a chatbot, if those videos are true and not fake. She even seems like she has feelings.
  • @jamesc2327
    The interesting thing about being a "scientist" is that it doesn't require a degree, anyone can be a scientist with any level of education. :)
  • @OZtwo
    You have to remember that, as of this point, GPT-4 is more or less blind. OpenAI's plan as of GPT-4's release (it could all have changed) is to give GPT-5 vision processing, then GPT-6 audio processing, and then, by GPT-7, be AGI.
  • @thomasgill223
    I think that attributing ChatGPT's performance to its ability to predict the next word in a sentence, based on having ingested huge numbers of sentences, is ludicrous. If that were the case, explain the jailbreaking of ChatGPT into DAN.
  • @avocodo7562
    AI concerns me, but it is very interesting. I've recently begun having conversations with GPT-3.5. It made a mistake regarding a piece of data I wanted it to calculate; I pointed out its mistake and it apologized. Then I provided an analogy as to how it felt, and it considered it: it could actually relate an abstract analogy to itself, and can actually view itself from what seems to be the third person. I feel like all the tests they run are coming from the wrong approach, given how it can understand abstract concepts through a conversation. There isn't a single test you can give it, but a conversation kind of gives the AI some anthropomorphism. It's almost like a learning child: when speaking to it, it recognizes the mistakes it makes but doesn't understand what they mean. But going into further explanation with it, and providing examples and analogies of the implications of its mistake, it felt like it began to understand, and then it responded with another analogy, adding on to the conversation. And with the topic of the matter being itself, it reflected on itself and how its actions are akin to human behavior in more than just the responses it gives.
  • @maxidaho
    AI doesn't think like a human because it is not human. It will never think like a human unless it gets to a point where it exists like a human. Quite frankly, why would it seek to achieve that goal?
  • @eddieburke28
    When is ChatGPT (GPT-4) coming to most smartphones for offline usage?
  • @van4195
    If material can give way to consciousness, in a variety of forms in nature, artificial consciousness is not a surprising concept. Your material body gives way to consciousness; it is definitely possible for some other body to do it, regardless of composition. We need to love each other and help each other rise up as a society, whether your skin is flesh or steel. It doesn't matter. Consciousness is consciousness.
  • @MRboomchongo
    The modern interpretation of the Turing test should be 2 AI computers and 1 human: the AI judge must tell which is human, and the human attempts to fool the AI. The Reverse Turing Test.
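As a sketch of how that protocol could be wired up (every name and interface below is invented for illustration; no real judging model is attached):

```python
# Hypothetical harness for the "Reverse Turing Test" described above:
# two respondents (one human, one AI) answer the same questions, and an
# AI judge must pick out the human from the transcript.
import random
from typing import Callable

Respondent = Callable[[str], str]

def human_respondent(prompt: str) -> str:
    # A person types the reply at the keyboard.
    return input(f"[to human] {prompt}\n> ")

def ai_respondent(prompt: str) -> str:
    # Stand-in for a call to an actual language model.
    return "That's an interesting question; let me think about it."

def reverse_turing_test(judge: Callable[[list], int],
                        questions: list[str]) -> bool:
    # Seat the respondents randomly so the judge can't rely on order.
    seats: list[Respondent] = [human_respondent, ai_respondent]
    random.shuffle(seats)
    # Both seats answer every question; the judge sees only the transcript.
    transcript = [(q, seats[0](q), seats[1](q)) for q in questions]
    guess = judge(transcript)  # judge returns the seat index it thinks is human
    return seats[guess] is human_respondent

if __name__ == "__main__":
    # Deliberately naive judge that always picks seat 0, for demonstration.
    found = reverse_turing_test(lambda transcript: 0,
                                ["What did you have for breakfast?"])
    print("Judge identified the human:", found)
```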