• 0 Posts
  • 12 Comments
Joined 1 year ago
Cake day: July 9th, 2023

  • That’s not what algorithms researchers mean when we talk about “understanding”. Obviously we know the mechanism by which it operates; it’s not an unknown alien technology that dropped into our laps.

    Understanding an algorithm means being able to predict the characteristics of its outputs based on the characteristics of its inputs. E.g. will it give an optimal solution to a problem that we pose? Will its response satisfy certain constraints or fall within certain bounds?
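
    To make that concrete, here’s a toy sketch (my own illustration, not from any particular paper): for a classical algorithm like sorting, we can state the output guarantees up front and check them mechanically for any input.

    ```python
    # For a classical algorithm we can predict and verify output properties.
    # Toy example: sorting. For any input xs, the output is ordered and is a
    # permutation of xs, guarantees that hold for every possible input.
    from collections import Counter
    import random

    xs = [random.randint(-100, 100) for _ in range(50)]
    ys = sorted(xs)

    assert all(a <= b for a, b in zip(ys, ys[1:]))  # output is ordered
    assert Counter(xs) == Counter(ys)               # output is a permutation
    # There is no comparably strong input-to-output guarantee for the text
    # a foundation model produces.
    ```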

    Figuring this stuff out for foundation models is an active area of research, and the absence of this predictability is an enormous safety concern for any use cases where the output can be consequential.

    “It cannot possibly develop agency.”

    I don’t believe I’ve suggested anywhere that I think it will, but I’ll play around with this concern anyway… There’s a lot of discussion going on about having models feed back on themselves to learn from their own output. I don’t find it all that hard to imagine that something we could reasonably consider self-awareness could be formed by a very complex neural network that is able to consume and process its own outputs. And once self-awareness starts to form, it’s not that hard for me to imagine a sense of agency following. I have no idea what the model might use that agency for, but I don’t think it’s all that far-fetched to consider the possibility of it happening.


  • Sure, but this outcome is not at all surprising. There are plenty of smart AI people that have nuanced views of what kind of threat could be posed by recklessly unleashing tools that we don’t fully understand into the hands of people who are likely to do harmful things with them.

    It’s not surprising that those valid, nuanced concerns get translated into overly simplistic misrepresentations, entangled with pop-sci-fi panic about rogue AI, as they move into public discourse.


  • AI person reporting in. Without saying whether or not I personally believe that the current tools will lead to the end of humanity, I’ll point out a few things about what’s going on that I find concerning:

    • The hype around AI is being used to justify mass layoffs, where humans are being replaced by tools that do a questionable job and can’t really understand the things those humans could understand. Whether or not the AI can do as good a job according to some statistical measure is less relevant than the fact that a human is less likely to make an extremely grave mistake, and more likely to recognize it when one happens. I’m concerned this will lead to cross-industry enshittification on an unprecedented scale.

    • The foundation models consume a huge amount of energy. The more impressive you want them to be, the more energy they need. As long as the data centers that run them depend on fossil fuels, they’ll be pumping a huge amount of carbon into the air just to replace jobs that didn’t need replacing.

    • As these tools are used more and more, they’re going to end up “learning” from content they created themselves instead of something closer to a ground truth. It’s hard to predict what kind of degradation of service will come from this (see the toy sketch after this list), but the more systems we build that rely on these tools, the more harm that degradation will do to us.

    • Given the cost and nature of these tools, they’re likely to yield the most benefit to moneyed interests that want to automate the systems that maintain their power and wealth. E.g. generating large amounts of convincing disinformation to manipulate the public into supporting politicians or policies that benefit a small number of wealthy people in the short term while locking humanity into a path towards destruction.
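
    A toy illustration of that feedback-loop concern (my own hedged sketch; the “model” here is nothing but a fitted Gaussian, not any real system): fit a distribution to data, sample synthetic data from the fit, refit on the synthetic data, and repeat.

    ```python
    # Toy "model trained on its own outputs" loop. Purely illustrative:
    # the "model" is just a Gaussian fit, not a real foundation model.
    import random
    import statistics

    random.seed(0)

    # Ground truth: samples from a standard normal distribution.
    data = [random.gauss(0.0, 1.0) for _ in range(20)]
    mu, sigma = statistics.mean(data), statistics.stdev(data)

    for generation in range(1, 101):
        # Each generation trains only on the previous generation's outputs.
        data = [random.gauss(mu, sigma) for _ in range(20)]
        mu, sigma = statistics.mean(data), statistics.stdev(data)
        if generation % 25 == 0:
            print(f"gen {generation:3d}: mean={mu:+.3f}, stdev={sigma:.3f}")
    ```

    Individual runs vary, but with small samples the fitted spread tends to drift and often collapses toward zero, i.e. the “model” gradually loses the diversity of the original data.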

    And none of this accounts for possible future iterations of AI tools that may be far more capable than what exists today. That future technology will most likely be controlled by powerful people who are primarily interested in using it to bolster the systems that keep them in power, to the detriment of humanity as a whole.

    Personally I’m far less concerned about a malicious AI intentionally doing harm to humanity than about AI being used as a weapon by unscrupulous people.




  • 5C5C5C@programming.dev to Lemmy Shitpost@lemmy.world · Another mystery solved. · edited 12 days ago

    Ordinary biomatter is very close to the density of water to begin with. That’s why having a little air in your lungs is enough to be the difference between sinking and floating.

    If Godzilla’s biomatter under 1 atm of pressure has a density close to water, then being able to compress or expand an empty chamber inside his body by even just a tiny percentage of his ordinary overall volume could be the difference between floating at sea level and sinking to extreme depths.
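
    Back-of-the-envelope, if you want numbers (everything below is a made-up assumption for illustration, not anything from canon):

    ```python
    # Rough buoyancy sketch. Every figure here is an assumption, not canon.
    SEAWATER_DENSITY = 1025.0  # kg/m^3
    TISSUE_DENSITY = 1060.0    # kg/m^3, assumed slightly denser than seawater

    body_volume = 90_000.0     # m^3, assumed solid-tissue volume (no cavity)
    body_mass = TISSUE_DENSITY * body_volume

    # A vacuum cavity adds volume but no mass. Find the cavity size where
    # overall density matches seawater (neutral buoyancy):
    #   body_mass / (body_volume + cavity) = SEAWATER_DENSITY
    cavity = body_mass / SEAWATER_DENSITY - body_volume
    print(f"neutral buoyancy at a cavity of {cavity:,.0f} m^3 "
          f"({cavity / body_volume:.1%} of body volume)")
    ```

    With those assumed densities the cavity only needs to be a few percent of his body volume, so a modest squeeze or expansion is enough to swing him between sinking and floating.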

    Or if you prefer we can imagine that Godzilla gives himself a big ole booty when he needs to come up to the surface and make a mess of things.



  • You’d be right if the cavity were only compressing other organs inside the body without changing the overall volume, but I don’t know why you seem to insist on making that assumption.

    I thought it would be clear from my original description, via the analogy with lungs, that the cavity would not squish the internal organs but rather expand the overall volume of the body.


  • My head canon for sea-based Kaiju is that they have a sack of muscles somewhere inside their body that can expand a cavity, kind of like the diaphragm expands the lungs, except instead of taking in air or water it just creates a volume of vacuum inside of them. This makes them extremely buoyant relative to the surrounding sea pressure, so they rapidly ascend and can casually float like a boat near the surface.

    But if they ever want to dive again, they just let that cavity collapse and all their buoyancy goes away.


  • 5C5C5C@programming.dev to Lemmy Shitpost@lemmy.world · 🤠 Yee-haw! · 26 days ago

    Might be interesting if we could compare the percentage of pitbulls that have killed or maimed a child versus the percentage of IDF soldiers that have killed or maimed a child.

    Maybe even more informative would be the average number of children killed or maimed per pitbull versus the average number of children killed or maimed per IDF soldier 🤔