A new Law of AI is emerging - Gell-Mann Amnesia!
I noticed this throwaway comment recently: "Whenever I use AI to probe a topic I know something about, it seems to make a number of errors; by contrast, whenever I use it to explore topics I know little about, it knows so much more!" I must admit I have noticed something similar. When I ask about something I know well, such as software, AI-generated responses tend to be flawed at best and plain wrong at worst. Yet when I ask about something I genuinely know nothing about, the results are compelling and I have to rein myself in.

It turns out this is a named effect, observed by Michael Crichton (yes, that Michael Crichton!): "Gell-Mann Amnesia". He noticed that experts reading articles in their own field tend to find them full of errors, yet they accept articles on topics outside their expertise, even in the same publication, as credible.

That is exactly what is happening with AI. When the results are critiqued by an exp...