• jsomae@lemmy.ml · 31 points · 12 hours ago

    ChatGPT is a tool. Use it for tasks where the cost of verifying that the output is correct is less than the cost of doing the task by hand.

    • qarbone@lemmy.world · 17 points · 11 hours ago

      Honestly, I’ve found it best for quickly reformatting text and other content. It should live and die as a clerical tool.

      • ArchRecord@lemm.ee · 1 point · 4 hours ago

        Which is exactly why every time I see big tech companies making another stupid implementation of it, it pisses me off.

        LLMs like ChatGPT are fundamentally word-probability machines: they predict the likelihood of each next word from the surrounding context (or, with no context, from general frequency). When given notes, for instance, they already have all the context and knowledge they need; all they have to do is predict the most statistically probable way of reformatting the existing data into a better structure. Literally the perfect use case for the technology.
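        A toy sketch of that idea (pure bigram counts over a made-up corpus, nothing like a real transformer, just to show "predict the most probable next word from context"):

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction from context.
# (A real LLM uses a neural network over tokens, not raw bigram counts.)
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each context word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(context_word):
    """Return the statistically most probable next word (context word must
    have appeared in the corpus, or most_common(1) is empty)."""
    return following[context_word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```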

        Even in similar contexts that don’t immediately seem like “text reformatting,” it’s extremely handy. For instance, Linkwarden can auto-tag your bookmarks based on a predetermined list you set, using the context of each page fed into a model running via Ollama. Great feature, very useful.

        Yet somehow, every tech company manages to use it in every way except that when developing products with it. It’s so discouraging to see.

    • tacobellhop@midwest.social · 4 points (1 downvote) · 11 hours ago

      You’re still doing it by hand to verify in any scientific capacity. I only use ChatGPT for philosophical hypotheticals involving the far future. We’re both wrong, but it’s fun for the back and forth.

      • jsomae@lemmy.ml · 3 points · edited · 10 hours ago

        It is not true in general that verifying output for a science-related prompt requires doing it by hand, where “doing it by hand” means putting in the effort to answer the prompt manually without using AI.

        • tacobellhop@midwest.social · 1 point · 6 hours ago

          You can get pretty far into the weeds with conversions on ChatGPT in the chemistry world, or even in basic lab work, where a small miscalculation at scale can cost thousands of dollars or invite lawsuits.

          I check against actual calibrated equipment as a final verification step.
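          For what it’s worth, a round-trip sanity check is one cheap way to catch a conversion slip before it scales. This sketch uses the exact gram-per-pound definition; the batch scenario is hypothetical:

```python
# Sketch: double-check a unit conversion by round-tripping it,
# rather than trusting a single generated number.
GRAMS_PER_POUND = 453.59237  # exact avoirdupois definition

def pounds_to_grams(lb):
    return lb * GRAMS_PER_POUND

def grams_to_pounds(g):
    return g / GRAMS_PER_POUND

batch_lb = 250.0  # hypothetical batch size
batch_g = pounds_to_grams(batch_lb)

# The round trip must recover the input to within float tolerance;
# a miscalculation fails loudly here instead of at scale in the lab.
assert abs(grams_to_pounds(batch_g) - batch_lb) < 1e-9
print(f"{batch_lb} lb = {batch_g:.2f} g")
```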

          • jsomae@lemmy.ml · 1 point · 6 hours ago

            I said not true in general. I don’t know much about chemistry. It may be more true in chemistry.

            Coding is different: in many situations it is cheap to test or eyeball the output.
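            As a sketch of that point: suppose an LLM hands you this helper (hypothetical model output). Verifying it with a few quick checks is far cheaper than writing and reasoning through it by hand:

```python
# Hypothetical LLM-generated helper: deduplicate while preserving order.
def dedupe(items):
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

# Verifying the output is cheap: a handful of quick assertions,
# far less effort than deriving the function yourself.
assert dedupe([]) == []
assert dedupe([1, 2, 1, 3, 2]) == [1, 2, 3]
assert dedupe(["a", "a", "a"]) == ["a"]
print("all checks passed")
```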

            Crucially, in nearly any subject, it can give you leads. Nobody expects every lead to pan out. But leads are hard to find.

            • tacobellhop@midwest.social · 1 point · 6 hours ago

              I imagine ChatGPT and code are a lot like air and water.

              Each part is dissolved in the other. Meaning an LLM is probably more native at learning to read and write code than it is at interpreting engineering standards worldwide and picking the exact thread pitch for a bolt you need to order thousands of. Go thread one to verify.

              • jsomae@lemmy.ml · 1 point · 6 hours ago

                This is possibly true, given the biases of the people who made it. But I reject the notion that because ChatGPT is itself made of code, it must understand code better than other subjects. Are humans good at biology for that reason?

                • tacobellhop@midwest.social · 1 point · 6 hours ago

                  You might know better than me. If you ask ChatGPT to write the code for itself I have no way to verify it. You would.