• designatedhacker@lemm.ee
    8 days ago

    LLMs without some sort of symbolic reasoning layer aren’t actually able to hold a model of their context and the relationships within it. They predict the next token, but fall apart when you change the numbers in a problem or add some negation to the prompt.
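
    Easy to poke at yourself. A minimal sketch, assuming the Hugging Face transformers library with GPT-2 as a stand-in model (the specific model and prompts are my choice for illustration, not something named in the thread):

    ```python
    # Greedy next-token decoding with a small causal LM. At each step the
    # model emits whichever single token it scores as most probable; no
    # symbolic model of the arithmetic is built anywhere in the loop.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def complete(prompt: str, max_new_tokens: int = 8) -> str:
        inputs = tokenizer(prompt, return_tensors="pt")
        output = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # greedy: always take the top-scoring token
            pad_token_id=tokenizer.eos_token_id,
        )
        return tokenizer.decode(output[0], skip_special_tokens=True)

    # Perturb the prompt the way described above and compare completions:
    print(complete("Two plus two equals"))
    print(complete("Two plus seven equals"))        # changed a number
    print(complete("Two plus two does not equal"))  # added a negation
    ```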

    Awesome for protein research, summarization, speech recognition, speech generation, deepfakes, spam creation, RAG document summarization, brainstorming, content classification, etc. I don’t think we’ve even found all the patterns they’d be great at predicting.

    There are tons of great uses, but just throwing more data, memory, compute, and power at transformers is likely to hit a wall without new models. All the AGI hype is a bit overblown. That’s not from me; that’s Noam Chomsky: https://youtu.be/axuGfh4UR9Q?t=9271.

    • NABDad@lemmy.world
      8 days ago

      I’ve often thought LLMs could replace the entire C-suite plus upper and middle management.

      Funny how no companies push that as a possibility.

      • Zink@programming.dev
        8 days ago

        I almost expect we’ll see some company reveal that it has been letting an AI control top-level decision-making for the business itself, including if and when to reveal the AI.

        But the funny thing will be that all the executives and board members will still have their jobs and huge stock awards. They’ll all pat each other on the back for getting paid more money to do less work, having been bold enough to take a risk and let the computer do half their jobs for them.