I have been experimenting with ChatGPT lately, trying to find a useful application for it. As part of that process, I run some of my searches both through a search engine and through the bot, and compare the results.
In one of my recent tests, I asked ChatGPT to cite its sources, and it did, giving me a few research papers along with their authors. When I couldn’t find them, I asked where to search for them. Here is what I discovered: ChatGPT was wrong. No such articles existed in the referenced publication.
This made me think: what happens when we trust our sources to the point where we don’t fact-check them anymore?
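For what it’s worth, that kind of fact-check can even be partly automated. Here is a minimal sketch, assuming you want to verify that a cited paper actually exists by querying the public Crossref index; the function name and the crude title-matching heuristic are my own illustrations, not anything ChatGPT or Crossref prescribes.

```python
import requests

def citation_exists(title: str, author: str) -> bool:
    """Ask the public Crossref API whether a paper matching this
    title and author is indexed anywhere. A rough existence check,
    not proof that the citation is accurate."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={
            "query.bibliographic": title,
            "query.author": author,
            "rows": 3,  # only look at the top few matches
        },
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        found_title = (item.get("title") or [""])[0]
        # Crude check: the claimed title should appear, case-insensitively,
        # in an indexed record. Real matching would need fuzzy comparison.
        if title.lower() in found_title.lower():
            return True
    return False

# Example with a real, well-known paper; a hallucinated citation
# would typically return False here.
print(citation_exists("Attention Is All You Need", "Vaswani"))
```

Had I run something like this on the references ChatGPT gave me, the fabrication would have surfaced in seconds. But that is exactly the point: most people won’t.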
Deepfakes have already given us a glimpse of the risks of AI fabricating images, audio, and even video. But those were made on purpose and properly flagged as fakes.
What if fabrications simply happen by accident and we take them at face value? What if important decisions are made based on the information one of these systems gives us? Will we need something like “AI malpractice” insurance? Do we even know who would be responsible for it?