• 54 Posts
  • 2.78K Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • All right. Wanted to sanity-check that 3D acceleration was working at all.

    I haven’t played the game, so I don’t know how it might perform off the top of my head.

    goes to look at game specs

    According to the data sheet, you have an Intel HD 5500, an integrated GPU.

    The game’s system requirements list an Intel GMA X4500 as the minimum, which is an even older integrated GPU, though the recommended GPUs are dedicated. It’s hard to know what “minimum” translates to in practice; some games have wildly impractical “minimum” requirements, though I’d hope it’s not that far off here.

    https://old.reddit.com/r/thesims/comments/5edr3x/the_sims_4_performance_on_intel_hd_620_graphics/

    I myself am using an Intel HD 3000, i5 laptop, and the game more or less runs a tad short of 60 fps on High settings in 720p mode

    That’s an older integrated GPU than yours, with about half the benchmark score, though I’d guess that you’re running at a higher resolution.

    Okay. I guess I’d first confirm that the game is using the 3D accelerator, since if it isn’t, that’s probably going to be the problem. If it is being used, maybe see what happens with reducing graphics settings. The simplest way to check that through the whole Proton stack is probably to use a HUD that shows which GPU is being used.

    Let me install the game myself.

    installs

    Oh, you have to create an EA account to use the game. The hell with that.

    Okay, I won’t do a step-by-step, but in broad terms:

    I’d do whatever it was that got it launching, if you had to do something.

    Then try mangohud and/or DXVK_HUD.

    In the Steam Properties for the game, under General->Launch Options, you’re going to want:

    DXVK_HUD=1 %command%
    

    in the field. If you’re adding any other environment variables besides DXVK_HUD to get it to start successfully (I see some people discussing using them on the game’s ProtonDB page), include those too.
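
    For example, if the game also needed some extra variable to launch, the combined field would look something like this (PROTON_LOG=1 is just a stand-in here for illustration; swap in whatever variables you actually needed):

    PROTON_LOG=1 DXVK_HUD=1 %command%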

    When it launches, if DXVK works with it, you’ll see text in the upper left corner of the game window. The first line will be the video card being used, if it’s hardware-accelerated. For my system, on a game where DXVK_HUD shows up, I see “Radeon RX 7900 (RADV NAVI31)”.

    That works with some WINE/Proton 3D games, depending upon which 3D subsystem they use.
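
    As a separate sanity check outside the game, if you have the vulkan-tools package installed, you can also ask Vulkan directly which devices it sees (this assumes your Mesa build exposes a Vulkan driver for the HD 5500 at all; newer vulkaninfo builds support --summary, and plain vulkaninfo works too, just more verbosely):

    vulkaninfo --summary | grep deviceName

    If the hardware driver is in play, that should print something Intel-ish rather than llvmpipe.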

    For mangohud, the launch options field in the Steam Properties will instead be:

    MANGOHUD_CONFIG=full mangohud %command%
    

    That injects itself at various Linux 3D layers. If that works, then in the game window you’ll see lines reading “RAM”, then “VRAM”, then either “VULKAN” or “OPENGL”, then some version, then the name of your video card. For me, that’s “Radeon RX 7900 XTX”. If I disable hardware acceleration and run glxgears, a small 3D test program, with software rendering forced, I’ll see “llvmpipe” there instead; the exact commands are spelled out below.
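
    Spelled out as terminal commands, that test looks like this (glxgears ships in the mesa-utils or mesa-demos package on most distros):

    # Normal run; the HUD should name your GPU:
    MANGOHUD_CONFIG=full mangohud glxgears

    # Force software rendering; the HUD should show "llvmpipe" instead:
    LIBGL_ALWAYS_SOFTWARE=1 MANGOHUD_CONFIG=full mangohud glxgears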

    Hopefully, one of DXVK_HUD or mangohud will work for you and get that text up.

    If it does show something that looks like the name of your integrated GPU (probably “Intel something-or-other”), then it’s using your 3D hardware but still running slowly. In that case, I’d probably:

    • Try flipping all the graphical settings and/or resolution down, see if it runs reasonably, and start flipping them up one at a time to see what’s particularly problematic.

    • Try a different Proton version. It’s also possible to download Proton-GE (Proton GloriousEggroll), a third-party build of Proton that includes some fixes that Valve doesn’t ship in its Steam builds of Proton. This wouldn’t be my first move for “a game runs slowly”, more for “a game doesn’t work correctly”, but it’s a lever that one can at least try pulling. Normally, if this were the solution, I’d expect it to show up on ProtonDB.



  • tal@lemmy.today to xkcd@lemmy.world · xkcd #3117: Replication Crisis

    Ah, yeah, this sounds like it’s making a similar point, though whatever article I read long post-dated Feynman.

    https://en.wikipedia.org/wiki/Oil_drop_experiment

    In a commencement address given at the California Institute of Technology (Caltech) in 1974 (and reprinted in Surely You’re Joking, Mr. Feynman! in 1985 as well as in The Pleasure of Finding Things Out in 1999), physicist Richard Feynman noted:[21][22]

    We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.

    Why didn’t they discover the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that …

    I’ve read Surely You’re Joking, Mr. Feynman!, but that won’t be what I’m recalling, as I’m pretty sure it didn’t have graphs. I’m thinking of an article, I believe on the Web, that had graphs showing measured values walking toward the correct value over time. I do think that it dealt with the hard sciences, not the social sciences, so it might have included the oil drop experiment, and I think that it covered several different experiments.



  • I don’t think that Trump comes out ahead in any legal battle. Even if he’s in the legal right, which I seriously doubt — now you’ve got a huge, high-profile feud going on with the guy that owns a large chunk of the right wing media out there. I mean, you’re gonna lose in the court of public opinion.

    And the standard for defamation in US courts is especially high for public figures. Like, let’s even say that the report was in error (which I also kinda doubt). To win a defamation case, a public figure has to show “actual malice”: that the statement was false and that the publisher either knew it was false or acted with reckless disregard for whether it was true. That’s going to be hard to win on.

    And if you lose, now the news headline is “Trump loses/abandons case against News Media X over his involvement in sexy stuff with Epstein”.


  • I liked an article I once read showing that, for some major findings from important scientists that were later shown to have the wrong value, it wasn’t that a second study promptly “snapped” to the correct value. Instead, subsequent studies moved incrementally toward the right value over time.

    On one hand, this is good in that the process does ultimately work, and we got to the right value, though it could take quite some years.

    On the other hand, this is embarrassing, because it suggests that people doing follow-up studies to a prestigious person’s work second-guess their own results (“Doctor So-and-So can’t possibly be wrong…it must be me in error”) and aren’t willing to report the full deviation, so they’ll bang on an experiment until they get a value that isn’t that far off and report that.

    I can’t seem to find a reference to it in the explainxkcd or Wikipedia articles, but I remember being intrigued.





  • There’s room in the market for a huge number of regular games, but with live-service games, only a handful of winners can ever really succeed, creating an eye-watering risk profile for any new entrant into the market.

    Ehhh. I mean, I agree with the general idea that there have been far too many live-service games chasing too few players, but I think that it’s probably possible to create lower-budget, niche-oriented live service games that appeal very strongly to a particular group rather than trying to get the whole world onboard.

    That’s true of non-live-service games. I like some milsims, like Rule the Waves 3, that are just never going to become a mass-market phenomenon. That’s fine, because that’s not what the publisher is aiming to do with the game, and they’ve budgeted accordingly. They’re going after a particular group with specific interests.

    But if you want to do that, player interest in your niche has to be strong enough to make up for the fact that you won’t have the playerbase, and thus the budget, that a game with more general appeal would.


  • tal@lemmy.today to Ask Lemmy@lemmy.world · *Permanently Deleted*

    I’d also add that the Threadiverse brought some really new and interesting things to the table.

    • By default with all current Threadiverse software packages, instances are public, and there are many public instances. This means that while an instance might have downtime, it is very, very likely that I can continue to browse content, and if I’m willing to set up an account on a second home instance, even post. Early Reddit had a lot of downtime issues, and when it went down, it was down.

    • There’s a lot more technical advancement on the Threadiverse than was happening on late Reddit.

    • The third-party software ecosystem is very strong. It’s not just the PieFed, Lemmy, and Mbin guys writing all the software. There are a ton of clients, monitoring systems, status dashboards, you name it. Reddit had third party software too, but I feel like people are a lot more willing to commit effort to an open system.

    • I think that having competing instance policies is important. I don’t know yet whether, in the long run, this winds up with largely- or entirely-decoupled Threadiverse “networks” of federated hosts split along defederation fissures, kind of like what happened with IRC. I hope that it can remain mostly-connected. But I don’t want some party somewhere deciding content policy for all of the Threadiverse.

      With Twitter, Reddit, Facebook, whatever, there’s a single central authority with monopoly control over the entire system. That doesn’t exist on the Threadiverse, and I am a lot happier for that. There will probably be people out there saying things that I don’t agree with or like, but that’s okay; I don’t have to look at it. The same is true of the Web. I really take issue with someone whose positions I don’t agree with acting as a systemwide censor. (I’d also add that while I’m not really enthusiastic about the Lemmy devs’ admin decisions on lemmy.ml, I have not seen them attempt to do this even Lemmy-wide, much less Threadiverse-wide.)

      That’s a real difference from Reddit. If your instance admin says that tomorrow all content needs to be posted in all caps, you can migrate your community, home instance, or community usage to another instance, and other users who feel the same way can do the same. With any disagreement with Reddit site-wide policy, your only option is to leave Reddit entirely. It’s Spez’s way or the highway. I don’t think that’s reasonable for a system that aspires to serve the whole world.


  • tal@lemmy.today to Ask Lemmy@lemmy.world · *Permanently Deleted*

    Reddit ended support for their API which killed off 3rd party apps and the official one sucked.

    Same, though with the modification that I wasn’t going to run the official app regardless of whether it sucked or not.

    There were also some longer-run issues that weren’t enough to make me leave the site, but that made it less-preferable than it had been at one point; they just hadn’t broken the camel’s back. I didn’t like the shift to the new Web UI, and there were some minor compatibility breakages between the new and old Web UIs. I wasn’t enthusiastic about some of the policy changes that had happened over the years. I thought that the change to how blocking worked was a really bad idea; it caused people to severely abuse it in conversation threads to prevent people from responding to their points. I was more-interested in the stuff that the earlier userbase had been interested in, though I’ll concede that one could mitigate that by limiting which subreddits one subscribed to.

    I’d also always preferred the federated structure of Usenet to Reddit — but Usenet had crashed into crippling spam problems and hadn’t resolved them. I also think that some decisions that Reddit made were the right ones, like permitting editing of comments. There are some problems with editable comments, and someone could always have grabbed an earlier copy — but people correcting errors and cooling down flamewars, where they fired off a kneejerk insult and then went back and toned it down, wound up being a net positive for Reddit relative to Usenet, Slashdot, and so forth. On the Threadiverse, I could enjoy Usenet-like federation and still have Reddit-like editable comments.

    So when Reddit killed off the third-party API stuff, it was really a “straw that broke the camel’s back” moment. It wasn’t that killing the third-party API stuff was my sole concern, though I certainly was unhappy about it. I’d expected some eventual changes for monetization, but hadn’t expected them to include trying to mass-shovel users onto the official app. It was that the sum total of the changes, combined with the Threadiverse becoming available, meant that I’d rather be on the Threadiverse.



  • “AI’s natural limit is electricity, not chips,” Schmidt said, cutting through the industry’s semiconductor obsession with characteristic bluntness.

    I mean, maybe in the very long term that’s a fundamental limit, and you face things like Dyson spheres.

    But right now, I’m personally running one human-level AGI on roughly 100W of power, so I’m just gonna say that as things stand, the prominent limitation is software not being good enough. You’re, like, a software guy.

    Ultimately AI is an optimization problem, and if we don’t know how to solve the software problems fully yet, then, yeah, we can be inefficient and dump some of the heavy lifting on the hardware guys to get a small edge.

    But I’m pretty sure that the real breakthrough that needs to happen isn’t on the hardware side. Like, my existing PC and GPU already are more capable than my brain from a hardware standpoint. The hardware guys have already done their side and then some compared to human biology. It’s that we haven’t figured out the software to run on them to make them do what we want.

    The military or whoever needs AI applications can ask for more hardware money to get an edge relative to competitors. But if you’re the (well, ex-) head of Google, you’re sitting where a lot of the software and computer science people who need to make the requisite breakthroughs probably are, or could be. You’re probably among the last people who should be saying “the hardware guys need to solve this”.

    It’s going to take some more profound changes to what we’re doing in software today than just tweaking the parameters on some LLM, too. There’s probably some hard research work that has to be done. It’s not “we need immense resources dumped into manufacturing more datacenters, power plants, and chips”. It’s translating money into having some nerdy-looking humans bang away in some office somewhere and figure out what needs to change in software to get us there. Once that happens, then okay, sure, one needs hardware to make use of that software. But in July 2025, we don’t have the software to run on that hardware, not yet.







  • https://thehill.com/homenews/administration/5403496-dana-white-donald-trump-ufc-fight-south-lawn/

    President Trump, who attended a UFC fight earlier this year with allies and members of his Cabinet, floated the idea of a fight on the South Lawn weeks ago as part of the nation’s 250th birthday celebration. The proposal comes as the Trump administration grows the corporate presence on White House grounds.

    Trump held an event for Tesla on the South Lawn earlier this year — when he purchased an electric vehicle alongside then-ally Elon Musk in a statement to those protesting the automaker — in addition to the first corporate-sponsored White House Easter Egg roll.

    “We’re going to be on the South Lawn, and literally, when you’re watching the fight, this will all be the White House here, and then behind me will be, you know, the Washington Monument,” White said during the news conference.

    White confirmed the fight will be pay-per-view but didn’t outline any match-ups ahead of the South Lawn show.

    https://en.wikipedia.org/wiki/Bread_and_circuses

    “Bread and circuses” (or “bread and games”; from Latin: panem et circenses) is a metonymic phrase referring to superficial appeasement. It is attributed to Juvenal (Satires, Satire X), a Roman poet active in the late first and early second century AD, and is used commonly in cultural, particularly political, contexts.

    In a political context, the phrase means to generate public approval, not by excellence in public service or public policy, but by diversion, distraction, or by satisfying the most immediate or base requirements of a populace,[1] by offering a palliative: for example food (bread) or entertainment (circuses). Juvenal originally used it to decry the “selfishness” of common people and their neglect of wider concerns.[2][3][4] The phrase implies a population’s erosion or ignorance of civic duty as a priority.