• hsdkfr734r@feddit.nl
    12 days ago

    An LLM cannot think like you and me. It has no motivation of its own.

    It is just a system which learns the patterns of something by gradient descent (plus some reinforcement learning for fine-tuning), tuning the weights of its huge stack of matrix operations. It is better than a human in that area. I guess it can be good for tedious, repetitive tasks.
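    The "tuning weights" part can be shown in miniature. This is my own toy sketch (a one-variable linear fit, nothing like a real transformer): training just nudges numbers until the error shrinks.

    ```python
    # Toy illustration: "learning" = repeatedly nudging weights w, b
    # to reduce the error on training samples of y = 2x + 1.
    def train(samples, lr=0.05, steps=2000):
        w, b = 0.0, 0.0
        for _ in range(steps):
            for x, y in samples:
                err = (w * x + b) - y
                # gradient descent step on the squared error
                w -= lr * err * x
                b -= lr * err
        return w, b

    w, b = train([(0, 1), (1, 3), (2, 5)])
    print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
    ```

    A real LLM does the same kind of nudging, just over billions of weights in a non-linear network instead of two numbers.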

    But it can only reproduce what is in the training data (that's the reason why LLMs don't give good answers to questions about specialized niche topics: when there are just one or two studies, there simply isn't enough training data for the LLM).
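    The "only reproduces training data" point can be seen with a deliberately crude sketch of mine, a bigram counter (far simpler than a transformer, but it makes the failure mode concrete): it can only continue sequences whose pieces appeared in training.

    ```python
    from collections import defaultdict, Counter

    # Minimal "language model": count which word follows which in the corpus.
    def train_bigrams(corpus):
        table = defaultdict(Counter)
        for sentence in corpus:
            words = sentence.split()
            for a, b in zip(words, words[1:]):
                table[a][b] += 1
        return table

    def predict(table, word):
        if word not in table:
            return None  # never seen in training: nothing to reproduce
        return table[word].most_common(1)[0][0]

    model = train_bigrams(["the cat sat", "the cat ran"])
    print(predict(model, "cat"))      # a continuation seen in training
    print(predict(model, "quantum"))  # None – no training data for it
    ```

    Real LLMs generalize far more smoothly than this, but the niche-topic problem is the same in spirit: too few examples, and there is nothing reliable to draw on.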

    But it cannot solve entirely new problems, let alone reason about them.