Eerke Boiten, Professor of Cyber Security at De Montfort University Leicester, explains his belief that current AI should not be used for serious applications.
An LLM cannot think like you and me. It has no motivation.
It is just a system that learns the rules of something via reinforcement learning, tuning the coefficients of its heap of linear equations. In that area it is better than a human, and I guess it can be good for tedious, repetitive tasks.
But it can only reproduce what is in its training data. (That is why LLMs don't give good answers to questions about specialized niche topics: when there are only one or two studies, there just isn't enough training data for the LLM.)
And it cannot think about, let alone solve, entirely new problems.
This was already disproven a year ago.
They replaced the training data with an evaluator (which rates the LLM's output during training?). Interesting, thanks.
Edit: this reminds me of the self-evolving (virtual) robot problem, where a robot is rated by an external moderator and improves over time. E.g.: https://www.sciencedirect.com/science/article/pii/S0925231221003982
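For what it's worth, here is a minimal toy sketch of that idea (my own illustration, not the method from the paper or the article): there is no labelled training data at all, only an external evaluator scoring the model's outputs, with simple hill climbing standing in for the reinforcement-learning machinery used in practice. All names here (model, evaluator, train) are hypothetical.

```python
import random

def model(params, x):
    # Toy "model": a linear function with two parameters.
    return params[0] * x + params[1]

def evaluator(output, x):
    # External evaluator: rates an output, higher is better.
    # It secretly prefers outputs close to 3*x + 1; the training
    # loop never sees this rule, only the scores.
    return -abs(output - (3 * x + 1))

def score(params, xs):
    # Total evaluator rating over a small batch of inputs.
    return sum(evaluator(model(params, x), x) for x in xs)

def train(steps=2000, noise=0.1):
    xs = [i / 10 for i in range(-10, 11)]
    params = [0.0, 0.0]
    for _ in range(steps):
        # Propose a small random tweak; keep it only if the
        # evaluator rates the resulting outputs higher.
        # No labelled dataset is involved anywhere.
        candidate = [p + random.gauss(0, noise) for p in params]
        if score(candidate, xs) > score(params, xs):
            params = candidate
    return params

if __name__ == "__main__":
    print(train())  # ends up near [3, 1]
```

The point of the toy: the "knowledge" the model converges to comes entirely from the evaluator's ratings, not from reproducing stored examples, which is the loophole in the "it can only reproduce its training data" argument.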