
Beyond the Hype: A Realistic Take on AI ‘Hallucinations’ and Human Responsibility

Image: Human and AI hallucinations (generated by DALL-E 3)

AI tools, such as ChatGPT, are playing an increasingly significant role in the everyday lives of many people and organizations. We’re becoming very reliant on AI in ways we could not have imagined before 2021.

For all the amazing ways that AI is being used to empower us to do and be more, it is widely understood that current AI models can generate false or misleading information — known as “hallucinations” — in some scenarios.

It is true that, in some cases, a hallucination presented by an AI agent, if believed and acted upon by the human user, could have serious consequences. I wouldn’t dare dispute this fact. And I understand that we should continue to pursue ways to limit AI technology’s tendency to “make stuff up”.

Having said all this, I also feel strongly that, in many cases, the seriousness of the “hallucination” problem has been overhyped, leading some to mistrust the technology to the point of concluding that it is “useless” or even declaring it “dangerous” in an almost alarmist tone.

I think it’s time we put this issue in context.

Human “Experts” Provide Bad Advice All the Time

We all know that even the most knowledgeable experts in any field often get their facts wrong or offer biased advice. It’s not uncommon at all, and yet we continue to seek their advice.

Whether it’s your doctor, your financial advisor, even just your best friend who “knows a guy who knows about these things”, you often seek counsel from those who you believe know more about a subject than you. We all do it. I do it.

Often, when we ask for guidance, we get a mix of truth and falsehoods, fact and opinion, some of it good and useful, some of it absolute garbage. This is life and the way of the universe!

The Internet, Social Media, and Even the News Lie to Us

The Internet is loaded with misinformation. People post incorrect information and really bad advice on social media all day, every day. Even many “news” outlets give us “editorials” disguised as “facts”.

This has been going on for many, many years. And yet, the world continues to turn. Because most of us know better than to blindly accept it.

Awareness is Key

The reason this constant barrage of good and bad, true and false information and advice doesn’t usually lead us to our demise is that we’re aware it’s happening. We maintain a level of vigilance that keeps us from blindly believing every word we hear or following every bit of advice we receive.

Trust, But Verify

Whether we’re interacting with a human or a human “proxy”, such as an AI chatbot, we are never absolved of the responsibility to consider the information we’re given and evaluate it before acting.

I’m not sure we’ll ever see a time in which this will not be true. I, for one, would not desire such a world, as it would be a world where humans are automatons, mindlessly walking about the Earth, letting someone or some “thing” direct us like puppets. No… thank… you!

Reducing AI Hallucinations

Although there are many techniques you can use to reduce the frequency of AI hallucinations, I won’t go too deep into them here. The main point of this article is to stress that, while AI tools will almost certainly continue to make up facts from time to time for the foreseeable future…

…the best remedy is for each of us to stay aware and vigilant, and to take the same responsibility we have always taken when receiving information, whether from humans or machines.

However, I will offer a few simple tips that can, in some cases, reduce the chances that an AI chatbot will hallucinate when responding to you (a short code sketch illustrating them follows the list):

  • Be very clear and specific in your questions or directions.
  • Be direct in telling the chatbot not to “create” facts. For example, consider adding something like this to your instructions: “When responding, if you do not know the answer, say ‘I do not know the answer’”. Or “Do not make up any facts”.
  • Direct the chatbot to provide links to sources of the information provided and cross-reference them yourself if something seems questionable.
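
To make the second and third tips concrete, here is a minimal sketch of how you might bake them into a system prompt when calling a chatbot programmatically. It assumes the OpenAI Python SDK; the model name, prompt wording, and sample question are purely illustrative, and the same idea applies to any chatbot that lets you supply custom instructions.

```python
# Minimal sketch of tips 2 and 3 expressed as a system prompt.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY
# environment variable; model name and wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a careful assistant. "
    "If you do not know the answer, say 'I do not know the answer'. "
    "Do not make up any facts. "
    "Where possible, list the sources you are drawing on so the user "
    "can cross-reference them."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative model name
    temperature=0,         # lower temperature tends to discourage free invention
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Your question, stated clearly and specifically."},
    ],
)

print(response.choices[0].message.content)
```

None of this guarantees an accurate answer; it simply makes it easier for the model to admit uncertainty, which is exactly why the “verify” half of “trust, but verify” still rests with you.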

Conclusion

AI hallucinations are a natural side effect of current technology limitations and communication gaps. It remains the responsibility of the consumer to actively evaluate and question information, whether from AI or human sources.

If we remain aware and treat information we receive from AI-generated sources with the same mindset we’ve always treated similar information from other sources, we can remain safe, while also enjoying the many tremendous benefits this wave of technology offers!
