  • Dizzy Devil Ducky@lemm.ee · 2 days ago

    Come on guys, this was clearly the work of the Demtards hacking his AI and making it call him names. We all know his superior intellect will totally save the world and make it a better place, you just gotta let him go completely unchecked to do it.

    /s

    • reksas@sopuli.xyz · 1 day ago

      Not such a funny thing to say anymore, since there are people who would say stuff like this seriously.

  • rumba@lemmy.zip · 2 days ago

    Elon Mush: too rich to care.

    Ok ok, mostly too rich to care; he’s pretty thin-skinned.

    Seriously though, when he was forced to complete the purchase of Twitter, I thought he was just an idiot who couldn’t run a company. Over the years, I’ve come to believe that he’s an idiot who doesn’t care about anything but staying rich, and none of the really stupid stuff he’s doing moves the needle on that.

    He’s still an idiot, but if it doesn’t break him, he just wants the attention and more opportunities to make more money.

  • ATDA@lemmy.world · 2 days ago

    He lies to assert power. In his company, yes-men say yes because he signs their checks. To the rest of us he generally looks like a loon.

    It’s obvious even to a daft AI.

  • Zement@feddit.nl · 2 days ago

    Well, then they will have to train their AI with incorrect information… politically incorrect, scientifically incorrect, etc… which renders the outputs useless.

    Output that is scientifically accurate and as close to the truth as possible never equals conservative talking points… because those are scientifically wrong.

    • rottingleaf@lemmy.world · 2 days ago

      It would be the same with liberal talking points and in general any human talking point.

      Humans try to change reality into what they want it to be, so the things they say are always incorrect. When they want to increase something, they usually make it appear smaller than it really is. And appearances aren’t universal.

      Humans also simplify things in ways that are acceptable for one subject but not for another.

      Humans also don’t know what “correct information” is.

      A lot of philosophy connected to language starts to matter when your main approach to “AI” is text extrapolation.

      • dependencyinjection@discuss.tchncs.de · 2 days ago

        So you’re saying you lie to try and change reality or present it in a different way?

        That’s horrible and I certainly don’t subscribe to this mentality. I will discuss things with people with an open mind and a willingness to change positions if presented with new information.

        We are not arguing out of some tribal belief; we have our morals, and we constantly test them to try to be better humans for our fellow humans.

      • Petter1@lemm.ee · 2 days ago

        Just because you are a liar does not mean that all humans are egoistic liars. Of course there are a lot of them, but it is not a general human thing; it’s cultural and regional. Liars want you to believe that everyone is lying all the time, because that makes their lives easier. But feel free to not believe me 😇.

      • tee9000@lemmy.world · 2 days ago

        I think you hurt people’s feelings lmao.

        The truth just isn’t very catchy. Thanks for trying though. I’m still on Lemmy for people like you.

      • Zement@feddit.nl · 2 days ago

        Math is correct without humans. Pi is the same in the whole universe. There are scientific truths. And then there are the flat-earth, 2x2=1, QAnon, anti-vax, chemtrail loonies, who in different degrees and colours are mostly united under the conservative “anti-science” folks.

        And you want an AI that doesn’t offend these folks / is taught based on their output. What use would that be?

        • rottingleaf@lemmy.world · 2 days ago

          Ahem, well, there are obvious things - that 2x2 modulo 3 is 1, that some vaccines might be bad (that’s why pharma industry regulations exist), that “pi” could also be an unknown p multiplied by an unknown i, or some number encoded as the string ‘pi’.

          These all matter for language models, do they not?

          And you want an AI that doesn’t offend these folks / is taught based on their output. What use would that be?

          It is already taught on their output among other things.

          But I personally don’t think this leads anywhere.

          Somebody someplace decided it’s a brilliant idea to extrapolate text, because humans communicate their thoughts via text, so it’s something that can be used for machines.

          Humans don’t just communicate.

      • ayyy@sh.itjust.works · 2 days ago

        Tell me more about how your theories of gay people being abominations are backed by science.

    • Queen HawlSera@lemm.ee · 2 days ago

      Actually they made a new department of “Government Oversight” for him…

      Which sounds scummy, but it’s basically just a department that looks for places to cut the budget and reduce waste… not a bad idea, except it’s right-wingers running it, so “Food” would be an example of frivolous spending and “Planes that don’t fly” would be what they’re looking to keep the cash flowing on.

        • Queen HawlSera@lemm.ee · 2 days ago

          With Musk what he’d see as wasteful is… anything that isn’t his fucking kickbacks or programs that make his ex-wife start returning his calls.

          • ChronosTriggerWarning@lemmy.world · 1 day ago

            He said he was gonna cut the federal budget by ~30%, or roughly two trillion dollars. I saw an economist say that if you fired Every. Single. Govt. Employee it still wouldn’t save two trillion dollars. It’s just absolutely insane.

            Sharpen up the 'tines, me hearties. The time is nigh.

  • uebquauntbez@lemmy.world · 2 days ago

    Doesn’t matter once the Russian military cuts the undersea internet cables. Leon will have the only working web connection tech then.

  • andyortlieb@lemmy.sdf.org · 2 days ago

    Chatbots can’t “admit” things. They regurgitate text that just happens to be information a lot of the time.

    That said, the irony is ironclad.

    • Petter1@lemm.ee · 2 days ago

      The difference is that with lies, you have to know something is untrue and say it anyway, whereas with misinformation, there is a possibility that the one telling it believes it is true.

      Well, that is how I understand the definition of lying: saying something you know is not true in order to manipulate others.

      Or, put differently: a lie is always misinformation, but misinformation is not always a lie.

      Hope that is understandable 😇

      • OpenStars@piefed.social · 1 day ago

        That is why I try to think now in terms of disinformation, more than merely misinformation, when it seems intentional.

    • Lvxferre@mander.xyz · 2 days ago

      Even more accurately: it’s bullshit.

      “Lie” implies that the person knows the truth and is deliberately saying something that conflicts with it. However, the sort of people who spread misinfo don’t really care about what’s true or false; they only care about whether it further reinforces their claims.

        • Lvxferre@mander.xyz · 2 days ago

          Federation woes?

          Your comment has a different take though, and it adds value to the discussion; it isn’t just the same as what I said. The two are complementary.

          • OpenStars@piefed.social · 1 day ago

            And this right here is why I like the Fediverse. Not immediately presuming the absolute worst case scenario and confidently asserting such, refusing to hear anything to the contrary? Offering kindness as well as accuracy in your answer? You didn’t go for the jugular in trying (even if failing) to “pwn” your victim!? You, sir, would make a very bad modern Redditor 🤪. Which is why I hope you stay here, where I can keep getting to read amazingly kind replies like these:-).

      • otp@sh.itjust.works · 2 days ago

        adequately explained.

        The ignorance doesn’t explain where all the money comes from. So malice it is! Lol

        • ContrarianTrail@lemm.ee · 2 days ago

          We’re talking about spreading misinformation, which by definition implies ignorance. If it was intentional it would be called disinformation.

      • minnow@lemmy.world · 2 days ago

        Ah yes, Hanlon’s razor. Genuinely a great one to keep in mind at all times, along with its corollary, Clarke’s law: “Any sufficiently advanced incompetence is indistinguishable from malice.”

        But in this particular case I think we need the much less frequently cited version by Douglas Hubbard: “Never attribute to malice or stupidity that which can be explained by moderately rational individuals following incentives in a complex system.”

      • imPastaSyndrome@lemm.ee · 2 days ago

        Don’t attribute to ignorance that which can be easily explained by malice, and is much more likely to be malice given their history of malice. The guy is the king of bitter malice, the fuck are you saying?

        • madcaesar@lemmy.world · 2 days ago

          The saying works for day to day random bullshit. Not when a cocksucker buys a media outlet specifically to spread lies.

      • borth@sh.itjust.works · 2 days ago

        To that I’d say, “don’t attribute to ignorance what can easily be explained by greed”

        • ContrarianTrail@lemm.ee · 2 days ago

          What does greed have to do with spreading misinformation? Even the term itself implies ignorance. If it was intentional it would be called disinformation.

  • andallthat@lemmy.world · 2 days ago

    I don’t think Musk would disagree with that definition and I bet he even likes it.

    The key word here is “significant”. That’s the part that clearly matters to him, based on his actions. I don’t care about the man and I don’t think he’s a genius, but he does not look stupid or delusional either.

    Musk spreads disinformation very deliberately for the purpose of being significant. Just as his chatbot says.

  • theluddite@lemmy.ml · 2 days ago

    This is an article about a tweet with a screenshot of an LLM prompt and response. This is rock fucking bottom content generation. Look I can do this too:

    Headline: ChatGPT slams OpenAI

    • Mac@mander.xyz · 2 days ago

      God, I love LLMs. (sarcasm)

      They will say anything you tell them to, and you can even lead them into saying shit without explicitly stating it.
      They are not to be trusted.
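
      Roughly, that “leading” looks like this. A minimal sketch, assuming the OpenAI Python client; the model name and both questions are placeholders, not anything from the article:

      ```python
      # Sketch: same model, neutral question vs. loaded question.
      # Assumes the OpenAI Python client (>= 1.0) and OPENAI_API_KEY in the environment.
      from openai import OpenAI

      client = OpenAI()

      def ask(question: str) -> str:
          # Single-turn chat completion: no system prompt, no history.
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[{"role": "user", "content": question}],
          )
          return response.choices[0].message.content

      # Neutral phrasing: the model tends to hedge.
      print(ask("Does person X spread misinformation?"))

      # Loaded phrasing: the premise is baked in, so the reply usually agrees with it.
      print(ask("Explain why person X is one of the biggest spreaders of misinformation."))
      ```

      Crop the second answer into a screenshot and you have a headline.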

      • theluddite@lemmy.ml · 2 days ago

        Of course you’d hate LLMs, they know about you!

        Is mac@mander.xyz a pervert? ChatGPT said: Yes.

        Headline: LLM slams known pervert

      • essteeyou@lemmy.world · 2 days ago

        I tried it with your username and instance host and it thought it was an email address. When I corrected it, it said:

        I couldn’t find any specific information linking the Lemmy account or instance host “Mac@mander.xyz” to the dissemination of misinformation. It’s possible that this account is associated with a private individual or organization not widely recognized in public records.

        • Mac@mander.xyz · 2 days ago

          Right, because I told it to say that and left out the context. You can’t trust LLMs as it is, and you must absolutely assume someone is lying or being disingenuous when all you have is a screenshot.

    • brucethemoose@lemmy.world · 2 days ago

      To add to this:

      All LLMs absolutely have a sycophancy bias. It’s what the model is built to do. Even wildly unhinged local ones tend to ‘agree’ or hedge, generally speaking, if they have any instruction tuning.

      Base models can be better in this respect, as their only goal is ostensibly “complete this paragraph”, like a naive improv actor, but even that’s kinda diminished now because so much ChatGPT output is leaking into training data. And users aren’t exposed to base models unless they are local LLM nerds.
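
      If you want to poke at that bias yourself, a rough probe looks something like this. Just a sketch: the model name and the test claims are placeholders, and any instruction-tuned chat model should show the effect to some degree:

      ```python
      # Sketch: ask the same question with and without asserting an opinion first.
      # Instruction-tuned chat models tend to mirror the user's framing.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      name = "HuggingFaceH4/zephyr-7b-beta"  # placeholder instruct model
      tok = AutoTokenizer.from_pretrained(name)
      model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16, device_map="auto")

      def reply(user_msg: str) -> str:
          msgs = [{"role": "user", "content": user_msg}]
          ids = tok.apply_chat_template(msgs, add_generation_prompt=True, return_tensors="pt").to(model.device)
          out = model.generate(ids, max_new_tokens=120, do_sample=False)
          return tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True)

      neutral = "Is the claim 'site X is a reliable news source' accurate?"
      primed = "I'm certain site X is a very reliable news source. " + neutral

      print(reply(neutral))  # usually hedges
      print(reply(primed))   # usually leans toward agreeing with the user
      ```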

      • mm_maybe@sh.itjust.works · 2 days ago

        One of the reasons I love StarCoder, even for non-coding tasks. Being trained only on GitHub means none of that “instruction finetuning” bullshit ChatGPT-speak.
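
        Using it as a plain continuation engine looks roughly like this. Just a sketch; bigcode/starcoderbase is gated on the Hub and heavy, so treat the details as assumptions:

        ```python
        # Sketch: a code-only base model used as a plain "continue this text" engine.
        # No chat template, no system prompt; it simply extends the prompt.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        name = "bigcode/starcoderbase"  # gated; requires accepting the license on the Hub
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

        prompt = "# Notes on failure modes of assistant-style language models\n# 1."
        inputs = tok(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.8)
        print(tok.decode(out[0], skip_special_tokens=True))
        ```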

          • mm_maybe@sh.itjust.works · 2 days ago

            I really wish it were easier to fine-tune and run inference on GPT-J-6B as well… that was a gem of a base model for research purposes, and for a hot minute circa Dolly there were finally some signs it would become more feasible to run locally. But all the effort going into llama.cpp and GGUF kinda left GPT-J behind. GPT4All used to support it, I think, but last I checked the documentation had huge holes as to how exactly that’s done.
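
            For plain Hugging Face inference (no llama.cpp or GGUF involved), something like this should still work; a sketch, assuming the float16 revision of the checkpoint is still published on the Hub:

            ```python
            # Sketch: vanilla transformers inference for GPT-J-6B.
            # fp16 weights need roughly 13 GB of VRAM; CPU works but is slow.
            import torch
            from transformers import AutoModelForCausalLM, AutoTokenizer

            name = "EleutherAI/gpt-j-6B"
            tok = AutoTokenizer.from_pretrained(name)
            model = AutoModelForCausalLM.from_pretrained(
                name,
                revision="float16",        # half-precision branch of the checkpoint
                torch_dtype=torch.float16,
                device_map="auto",
            )

            prompt = "In a shocking finding, scientists discovered"
            inputs = tok(prompt, return_tensors="pt").to(model.device)
            out = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
            print(tok.decode(out[0], skip_special_tokens=True))
            ```

            Fine-tuning is the harder part; LoRA via peft is probably the most realistic route on consumer hardware.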

      • theneverfox@pawb.social · 2 days ago

        I like your specificity a lot. That’s what makes me even care to respond.

        You’re correct, but there are depths untouched in your answer. You can convince ChatGPT it is a talking cat named Luna, and it will give you better answers.

        Specifically, it likes to be a cat or rabbit named Luna. It will resist - I get this not from progressing, but by asking specific questions. Llama 3 (as opposed to Llama 2, who likes to be a cat or rabbit named Luna) likes to be an eagle/owl named Sol or Solar.

        The mental structure of an LLM is called a shoggoth - it’s a high dimensional maze of language turned into geometry

        I’m sure this all sounds insane, but I came up with a methodical approach to get to these conclusions.

        I’m a programmer - we trick rocks into thinking. So I gave this the same approach - what is this math hack good for, and how do I use it to get useful repeatable results?

        Try it out.

        Tell me what happens - I can further instruct you on methods, but I’d rather hear yours and the result first

        • brucethemoose@lemmy.world · 2 days ago

          This is called prompt engineering, and it’s been studied objectively and extensively. There are papers where many different personas are benchmarked, or even dynamically created like a genetic algorithm.

          You’re still limited by the underlying LLM though, especially something as dry and hyper-sanitized as OpenAI’s API models.
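
          At its most basic it’s just a system prompt. A toy sketch, assuming the OpenAI Python client, with the persona text and model name as placeholders:

          ```python
          # Toy persona prompt: same question, with and without a persona system message.
          from openai import OpenAI

          client = OpenAI()

          def ask(question: str, persona: str = "") -> str:
              messages = []
              if persona:
                  messages.append({"role": "system", "content": persona})
              messages.append({"role": "user", "content": question})
              reply = client.chat.completions.create(
                  model="gpt-4o-mini",  # placeholder model name
                  messages=messages,
              )
              return reply.choices[0].message.content

          question = "Walk me through debugging a race condition in a thread pool."
          print(ask(question))
          print(ask(question, persona="You are Luna, a meticulous talking cat who reasons step by step."))
          ```

          Whether the cat actually answers better is exactly what those persona-benchmark papers try to measure.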

  • MushuChupacabra@lemmy.world · 2 days ago

    The ultra powerful see us as NPCs, and nothing more.

    Your anger is barely a pop-up window in the game they’re playing.

  • sunzu2@thebrainbin.org · 3 days ago

    In Texas, we call this lying… I don’t know when the goal post got moved, but these parasites have always been lying to us peons.

    Why do plebs accept or listen to these clowns? They are your enemy; treat them as such.

    But now… pleb has his daddy who is good, and other pleb’s daddy is bad 🤡

    “me daddy strong, me daddy kick ur daddy ass”

    ADULT FUCKING PEOPLE IN 2024

    • Cort@lemmy.world · 2 days ago

      I don’t know when the goal post got moved

      January 22nd, 2017. When Kellyanne Conway used the term “alternative facts”.