A long read, but fascinating.
Thanks for posting this. I’ve learned things about cold reading and the Forer effect that, regardless of whether they can be applied to LLMs, are fascinating information about our own minds.
I will try experimenting some more with ChatGPT and Bard and see if I can spot these effects the author describes.
Weird, I was pondering exactly this analogy earlier. Specifically astrology, but cold reading is a better fit for a conversation with an ‘AI’; astrology works well for one-off articles. Both dependent on the Forer Effect, of course.
Important piece, I think. Because this part is 100% true:
There are many examples of this easily found once you start doing the research. The mechanism is simple enough, and already baked into people’s preconceptions of how readings work, that many psychics accidentally develop the knack for it, meaning that they’re not just conning the person being read; they are also conning themselves.
This is Sam “I’m a stochastic parrot and so are you” Altman. He thinks his high tech magic 8-ball really does think just like human beings do. He’s not so much trying to persuade us that ‘AI’ has achieved our level so much as persuade us that we have always been on its level.
Maybe he is just 100% grifter and absolutely knows he is bullshitting. But I think he’s at least 50% conning himself.
Which is probably worse. True believers are so tiring.
It’s an incredibly powerful illusion. And no matter how often someone draws back the curtain, usually with a spectacularly nonsensical example of its complete inability to think, there will always be mini-Sams out there. Desperate to believe. Unwilling to think.
Tldr bot!
Sorry, but that article is complete nonsense.
LLMs are pretty clear about what they do. It’s true that they are often superficial, but most of this superficiality is due to the creators trying to escape liability. ChatGPT is often evasive or superficial on purpose, because OpenAI is trying to find a balance between usefulness and the risk of being sued.
LLMs do not try to be smart. They don’t do tricks. They are built to give the best possible answer they are capable of (given how they are trained and built). Sometimes these answers are good, sometimes not, sometimes mixed.
Why write a whole article trying to demonstrate fraud in a tool? Is a washing machine a fraud because it tries to convince me my clothes are clean? I am satisfied with the results, given it is a machine; my aunt complains that “washing by hand” is better.
Same situation here, some people are happy, some would like more…
This is about how people perceive LLMs, not the LLMs themselves.
The article implies fraud in the LLMs… compares them to psychics.
Nonsense. They are just tools. One can like them or not.
No it doesn’t. It doesn’t claim they’re frauds. It claims that people are seeing things in them that aren’t there and giving them abilities they don’t have. I don’t think you read it all the way through.
I did, and I believe the author is the one using psychic tricks. He tries to persuade readers to see things that are not there. Much like what he claims the LLMs are doing, he prepares the scene by creating comparisons worded in a way that makes them credible. But in practice none of those comparisons are true. They appear true because the author is good with words, and they leave the reader with the message that LLMs are frauds. It is a well-executed rhetorical exercise, but it is still nonsensical. An LLM is just a model that doesn’t try to be intelligent; it tries to answer questions or, better, to complete text.
Again, it’s not about what the model tries to be. It’s about what people perceive it to be. I don’t know how you can say you read the whole article when you keep claiming it says something it doesn’t.
If you read it all the way through, you didn’t comprehend any of it. Maybe work on that and try again.
You didn’t understand the assignment.
Read it again.