• Corroded@leminal.space · 2 months ago

    “We want to ensure that people have maximum control to the extent that it doesn’t violate the law or other peoples’ rights,” Joanne Jang, a member of the product team at OpenAI, told NPR. “There are creative cases in which content involving sexuality or nudity is important to our users.”

    The other problem in my mind is the fallibility of current safeguards. OpenAI and rivals have been refining their filtering and moderation tools for years. But users constantly discover workarounds that enable them to abuse the companies’ AI models, apps and platforms.

    Some highlights from the article.


    It seems like AI porn is inevitable, and since OpenAI has safeguards in mind for exploitative content, this doesn't seem like a horrendous idea.

    • DarkThoughts@fedia.io · 2 months ago

      to the extent that it doesn’t violate the law or other peoples’ rights

      Am I the only one who finds this so weird when we talk about LLMs? If someone makes a bot that resembles some specific person, that person's rights aren't really violated, and since it's all fictional content, it's very hard to actually break any laws with it. At that point we'd also have to ban people's weird fan fiction, no?

      I'm not arguing about what they do or don't want on their platform, but the legal and alleged moral arguments always weird me out a bit, because no one is actually getting hurt in any way by weirdos having weird chats with computers.

      The bigger issue is enforcement. Either you monitor an absurd amount of content, which is worse for privacy, or you outright censor the models, which makes them overly restrictive even in perfectly valid scenarios (other platforms went through this, with a consequent loss of users). The sketch below shows why the second option is so blunt.
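
      The trade-off is easy to see in code. Here's a minimal sketch of the "censor at the gate" approach, assuming OpenAI's moderation endpoint via their official Python SDK; the check_prompt function and the 0.5 threshold are made up for illustration, not anything OpenAI actually ships:

      ```python
      # Minimal sketch of platform-side pre-filtering, assuming OpenAI's
      # moderation endpoint via the official Python SDK. check_prompt and
      # the threshold value are illustrative assumptions.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def check_prompt(prompt: str, threshold: float = 0.5) -> bool:
          """Return True if the prompt should be forwarded to the model."""
          result = client.moderations.create(
              model="omni-moderation-latest",
              input=prompt,
          ).results[0]

          # `flagged` is the classifier's own verdict; category_scores lets
          # a platform pick its own per-category cut-off. Lowering the
          # threshold blocks more borderline-but-legitimate prompts (the
          # over-restriction problem above); raising it lets more
          # workarounds slip through.
          scores = result.category_scores.model_dump()
          worst = max(v for v in scores.values() if isinstance(v, float))
          return not result.flagged and worst < threshold

      if check_prompt("Write a romantic scene between two consenting adults."):
          print("forwarded to the model")
      else:
          print("rejected by the pre-filter")
      ```

      Whatever cut-off you pick, it's a single scalar standing in for a fuzzy legal and moral judgment, which is why both failure modes, over-blocking valid use and missing workarounds, are baked in.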