• interdimensionalmeme@lemmy.ml · 2 months ago

      Same but Western Digital: a 13GB drive that failed and lost all my data 3 times, and the 3rd time was outside the warranty! I had paid $500, the most expensive thing I had ever bought until that day.

    • sugar_in_your_tea@sh.itjust.works · 2 months ago

      That’s how most technology is:

      • combustion engines - early 1900s, earlier if you count steam engines
      • missiles - 13th century China, gunpowder was much earlier
      • wind energy - windmills appeared in the 9th century, potentially as early as the 4th

      Almost everything we have today is due to incremental improvements from something much older.

      • PM_Your_Nudes_Please@lemmy.world · 2 months ago

        More like microscopic fidget bubble poppers.

        When the computer wants a bit to be a 1, it pops it down. When it wants it to be a 0, it pops it up.

        If it were like a punch card, it couldn’t be rewritten, as writing to it would permanently alter the disc. A write-once CD-R is closer to a microscopic punch card, though, because the laser permanently darkens the dye layer to write the data to the disc.

        • Semi-Hemi-Lemmygod@lemmy.world · 2 months ago

          They work by electron tunneling through a semiconductor, so something does pass through them, like an old punch card reader.

          • dual_sport_dork 🐧🗡️@lemmy.world · 2 months ago

            Current ones also store multiple charge levels per cell, so they’re no longer one bit each. They have multiple levels of “punch” for what used to just be one bit.
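
            A quick sketch of that arithmetic (the cell-type names are the usual industry shorthand, not something from this thread): storing n bits per cell means the controller has to tell apart 2^n charge levels.

            ```python
            # n bits per cell -> 2**n charge levels the controller must distinguish.
            for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
                print(f"{name}: {bits} bit(s)/cell -> {2 ** bits} charge levels")
            ```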

    • john89@lemmy.ca · 2 months ago

      This isn’t unique to computing.

      Just about all of the products and technology we see are the results of generations of innovations and improvements.

      Look at the automobile, for example. It’s really shaped my view of the significance of new industries; we could be stuck with them for the rest of human history.

    • pressanykeynow@lemmy.world · 2 months ago

      Speaking of steam, steam-powered devices are at least two thousand years old, and we still use the technology when we split atoms to make energy.

      • superkret@feddit.org · 2 months ago

        What the Romans had wasn’t comparable with an industrial steam engine. The working principle of steam pushing against a cylinder was similar, but they lacked the tools and metallurgy to build a steam cauldron that could be pressurized, so their steam engine could only do parlor tricks like opening a temple door once, and not perform real continuous work.

  • veee@lemmy.ca · 2 months ago

    Just one would be a great backup, but I’m not ready to run a server with 30TB drives.

    • mosiacmango@lemm.ee · 2 months ago

      I’m here for it. The 8-disk server is normally a great form factor for size, data density, and redundancy with RAID6/raidz2.

      This would net around 180TB in that form factor. That would go a long way for a long while.
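
      Back-of-the-envelope for that 180TB figure (a sketch assuming 8 × 30TB with raidz2’s two parity disks, ignoring ZFS overhead and TB/TiB differences):

      ```python
      # raidz2 usable space: (total disks - 2 parity disks) * capacity per disk.
      disks, parity, tb_per_disk = 8, 2, 30
      print(f"usable ≈ {(disks - parity) * tb_per_disk} TB")  # -> 180 TB
      ```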

      • Badabinski@kbin.earth · 2 months ago

        I dunno if you would want to run raidz2 with disks this large. The resilver times would be absolutely bazonkers, I think. I have 24 TB drives in my server and run mirrored vdevs because the chances of one of those drives failing during a raidz2 resilver are just too high. I can’t imagine what it’d be like with 30 TB disks.
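
        Rough numbers behind that worry (a sketch; the 200 MB/s sustained rebuild rate is an assumed, optimistic figure, and resilvers on a busy pool run slower):

        ```python
        # Best case: the resilver streams the whole disk at a sustained rate.
        def resilver_hours(capacity_tb: float, rate_mb_s: float) -> float:
            return capacity_tb * 1_000_000 / rate_mb_s / 3600

        for tb in (24, 30):
            print(f"{tb} TB at 200 MB/s ≈ {resilver_hours(tb, 200):.0f} hours")
        # -> roughly 33 and 42 hours, before any real-world slowdown
        ```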

        • sugar_in_your_tea@sh.itjust.works · 2 months ago

          Is RAID2 ever the right choice? Honestly, I don’t touch anything outside of 0, 1, 5, 6, and 10.

          Edit: missed the z, my bad. Raidz2 is fine.

        • killabeezio@lemm.ee · 2 months ago

          Yeah, I agree. I just put 20TB drives in mine. I decided to just do z2, which in my case should be fine, but I was contemplating the same thing. Going to have to start doing z2 with 3 drives in each vdev lol.

        • taladar@sh.itjust.works · 2 months ago

          A few years ago I had a 12-disk RAID6 array, and the power distributor (the bit between the redundant PSUs and the rest of the system) went and took 5 drives with it; I lost everything on there. Backup is absolutely essential, but if you can’t do that for some reason, at least use RAID1, where you only lose part of your data if you lose more than 2 drives.

  • Avieshek@lemmy.world · 2 months ago

    How can someone without programming skills make a cloud server at home for cheap?

    Lemmy’s Spoiler Doesn’t Make Sense

    (Like connected to WiFi and that’s it)

    • sugar_in_your_tea@sh.itjust.works · 2 months ago

      Cheapest is probably a Raspberry Pi with a USB external drive. Look up “Raspberry Pi NAS,” there are a bunch of guides.

      Or you can repurpose an old PC, install some NAS distro, and then configure it.

      There are a ton of options, very few of which require any programming.

    • bruhduh@lemmy.world · 2 months ago

      Debian, Virtualmin, and Podman with Cockpit: install these on any cheap used PC you find. After the initial setup, everything else is GUI-managed.

    • frezik@midwest.social · 2 months ago

      Raspberry Pi or an old office PC are the usual methods. It’s not so much programming as Linux sysadmin skills.

      Beyond that, you might consider OwnCloud for an app-like experience, or just Samba if all you want is local network files.

    • WolfLink@sh.itjust.works · 2 months ago

      Not programming skills, but sysadmin skills.

      Buy a used server on eBay (companies often sell their old servers for cheap when they upgrade). Buy a bunch of HDDs. Install Linux and set up the HDDs in a ZFS pool.

    • ricecake@sh.itjust.works · 2 months ago

      Yes. You’ll have to learn some new things regardless, but you don’t need to know how to program.

      What are you hoping to make happen?

  • Zacryon@feddit.org · 2 months ago

    Seagate. The company that sold me an HDD which broke down two days after the warranty expired.

    No thanks.
    *laughing in Western Digital HDD running for about 10 years now*

    • turmacar@lemmy.world · 2 months ago

      Had the same experience and opinion for years; they do fine in Backblaze’s drive stats, but I don’t know that I’ll ever fully trust them, just ’cus.

      That said, the current home server has a mix of drives from different manufacturers, including Seagate, to hopefully mitigate the chances that more than one fails at a time.

    • zarkanian@sh.itjust.works · 2 months ago

      I had the opposite experience. My Seagates have been running for over a decade now. The one time I went with Western Digital, both drives crapped out in a few years.

      • Manifish_Destiny@lemmy.world · 2 months ago

        I have 10 year old WDs and 8 year old Seagates still kicking. Depends on the year. Some years one is better than others.

    • satans_methpipe@lemmy.world · 2 months ago

      Funny because I have a box of Seagate consumer drives recovered from systems going to recycling that just won’t quit. And my experience with WD drives is the same as your experience with Seagate.

      Edit: now that I think about it, my WD experience is from many years ago. But the Seagate drives I have are not new either.

      • Zorque@lemmy.world · 2 months ago

        Survivorship bias. Obviously the ones that survived their users long enough to go to recycling would last longer than those that crap out right away and need to be replaced before the end of the life of the whole system.

        I mean, obviously the whole thing is biased. If objective stats show that neither is particularly more prone to failure than the other, then it’s just people who used a different brand once and had it fail. Which happens sometimes.

        • satans_methpipe@lemmy.world · 2 months ago

          Ah I wasn’t thinking about that. I got the scrappy spinny bois.

          I’m fairly sure me and my friends had a bad batch of Western Digitals too.

  • SpaceScotsman@startrek.website · 2 months ago

    “The two models, the 30TB … and the 32TB …, each offer a minimum of 3TB per disk”. Well, yes, I would hope something advertised as being 30TB would offer at least 3TB. Am I misreading this sentence somehow?

    • BoxOfFeet@lemmy.world · 2 months ago

      I have one Seagate drive. It’s a 500 GB that came in my 2006 Dell Dimension E510 running XP Media Center. When that died in 2011, I put it in my custom build. It ran until probably 2014, when suddenly I was having issues booting and I got a fresh WD 1 TB. Put it in a box, and kept it for some reason. Fast forward to 2022, I got another Dell E510 with only an 80 GB. Dusted off the old 500 GB and popped it in. Back with XP Media Center. The cycle is complete. That drive is still noisy as fuck.

    • Steak@lemmy.ca · 2 months ago

      Not worth the risk for me to find out lol. My granddaddy stored his data on WD drives and his daddy before him, and my daddy after him. Now I store my data on WD drives and my son will too one day. Such is life.

      • kalpol@lemmy.world · 2 months ago

        And here I am with HGST drives hitting 50k hours

        Edit: no one ever discusses the Backblaze reliability statistics. It’s interesting to see how they stack up against the anecdotes.

  • corroded@lemmy.world · 2 months ago

    I can’t wait for datacenters to decommission these so I can actually afford an array of them on the second-hand market.

    • quixotic120@lemmy.world · 2 months ago

      Exactly. My NAS is currently made up of decommissioned 18TB Exos drives. Great deal, and I can usually still get them RMA’d the handful of times they fail.

        • quixotic120@lemmy.world · 2 months ago

          Serverpartdeals has done me well; drives often come new enough that they still have a decent amount of manufacturer’s warranty remaining (Exos is 5yr), and depending on the drive you buy from them, SPD will RMA a drive for 5 years from purchase (but not always; it depends on the listing, so read the fine print).

          I have gotten 2 bad drives from them out of 18 over 5 years or so. Both bad drives were found almost immediately with basic maintenance steps prior to adding them to the array (zeroing out the drives, badblocks), and both were RMA’d by Seagate within 3-5 days because they were still within the mfr warranty.

          If you’re running a gigantic RAID array like me (288TB and counting!) it would be wise to recognize that rotational hard drives are doomed and you need a robust backup solution that can handle gigantic amounts of data long term. I have a tape drive for that, because I got it cheap at an electronics recycler, sold as not working (thankfully it was an easy fix), but this is typically a super expensive route. If you only have like 20TB, then you can look into stuff like cloud services, Blu-ray, redundant hard drives, etc., or do like I did in the beginning and just accept that your pirated anime collection might go poof one day lol

          • corroded@lemmy.world · 2 months ago

            What kind of tape drive are you using? My array isn’t as large as yours (120tb physical), but it’s big enough that my only real options for backup are tape or a whole secondary array for just backup.

            Based on what I’ve seen, my options are a prohibitively large number of tapes with an older LTO standard or prohibitively expensive tapes with a newer LTO standard.

            My current backup strategy consists of automated backups to Backblaze B2 for the really important stuff like personal documents or projects and hoping my ZFS array doesn’t fail for everything else.

            • quixotic120@lemmy.world · 2 months ago

              I have an IBM Qualstar LTO-8 drive. I got it because I gambled: it was cheap because it was throwing an error (I forget what the number was), but one that indicates an issue in the tape path. I was able to get the price to $150 because I was buying some other stuff, and because ultimately, if the head was toast, it was basically useless. But I got lucky, and cleaning the head and tape path brought it back to life. Dunno how long it will last. I’ll live with it though, because buying one that’s confirmed working can be thousands.

              You’re right that LTO-8 tapes are pricey, but they’re quite a bit cheaper than building an equivalent backup array, and significantly more reliable long term. A tape is about 12TB and $40-50, although sometimes they pop up cheaper. I generally don’t back up stuff continually with this method; I back up newer files that haven’t been synced to tape once every six weeks or so. It’s also something you can buy a bit at a time to soften the financial blow, of course. Maybe if you got a fancy carousel drive you’d want to fill it up, but frankly that just seems like it would break more easily.

              More modern tapes support LTFS, so I can basically use one like an external hard drive. So it’s pretty much: I pop a tape in, once a week or so I sync new files to said tape, then as it gets full I swap it for a new tape. Towards the end I print a directory of what’s on it, because admittedly doing it this way is messy. But my intention with this is to back up my “medium critical” files. Stuff that I’d be frustrated to lose, but not heartbroken over. Movies and TV shows that I did custom muxes of to have my ideal subtitles, audio tracks, etc., all my Docker containers so stuff like my Jellyfin watch status and Komga library stays intact, stuff like that. That takes up the bulk of my NAS, and my primary concerns are either the array fully failing or significant bit rot; if either of those occurs, I would rebuild from scratch and just copy all the tapes back over anyway, so the messy filing isn’t really a huge issue.

              I also do sometimes make it a point to copy harder to find files onto at least 2 tapes on the outside chance a tape goes bad. It’s unlikely given I only buy new tapes and store them properly (I even go to the effort to store them offsite just in case my house burns down) but you never know I suppose

              The advertised tape capacities are crap for this use. You’ll see that LTO-8 has a native capacity of 12TB but a compressed capacity of 30TB per tape! And the tapes will frequently just say 30TB on them. That’s nonsense here. Maybe for a more typical server environment where they’re storing databases and text files and shit, but compressed movies and music? Not so much. I get some advantage because I keep most of my stuff in archival quality (remux/FLAC/etc), but even then I still usually don’t get anywhere near 30TB.

              It’s pretty slow. Not the end of the world, but just something to keep in mind. LTO-8 is supposed to be 360MBps for uncompressed and 750MBps for compressed data, but I don’t seem to hit those speeds at all. I’m not really in a rush though, and everything verifies fine and works after copying back over, so I’m not too worried. But it can take like 10-14 hours to fill a tape. If I ever do have to rebuild the array it will take AGES.
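
              Sanity-checking those numbers against LTO-8’s rated native figures (12TB per tape at 360MBps; no compression assumed):

              ```python
              import math

              tape_tb, rate_mb_s = 12, 360  # LTO-8 native capacity and rated write speed
              fill_hours = tape_tb * 1_000_000 / rate_mb_s / 3600
              print(f"fill time at rated speed ≈ {fill_hours:.1f} h")  # ~9.3 h, so 10-14 h real-world checks out
              print(f"tapes for a 288 TB array: {math.ceil(288 / tape_tb)}")  # 24 tapes
              ```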

              For my “absolutely priceless” data I have other more robust backup solutions that are basically the same as yours (literally down to using backblaze, ha).

              • corroded@lemmy.world · 2 months ago

                You got an incredible deal on your tape drive. For LTO-8 drives, I’m seeing “for parts only” drives sold for around $500. I’d be willing to throw away $100 or $200 on the possibility that I could repair a drive; $500 is a bit too much. It looks like LTO-6 is more around what my budget would be; it would require a much larger number of tapes, but not excessively so.

                I remember when BD-R was a reasonable solution for backup. There’s no way that’s true now. It really seems like hard drive capacity has far outpaced removable media. If most people are streaming everything, those of us who actually want to save our data locally are really the minority these days. There’s just not as much of a compelling reason for companies to develop cheap high-capacity removable discs.

                I’m sure I’ll invest in a tape backup solution eventually, but for now, at least I have ZFS with paranoid RAIDZ.

        • shalafi@lemmy.world · 2 months ago

          eBay sellers that have tons of sales and specialize. You can learn to read between the lines and see that decom goods are what they do.

          SaveMyServer is a perfect example. Don’t know if they sell drives though.

        • jeansburger@lemmy.world · 2 months ago

          Way ahead of you… I have a Brocade ICX6650 waiting to be racked up once I’m not limited to just the single 15A circuit my rack runs off of currently 😅

          Hopefully 40G interconnect between it and the main switch everything is using now will be enough for the storage nodes and the storage network/VLAN.

  • NuXCOM_90Percent@lemmy.zip · 2 months ago

    Just a reminder: These massive drives are really more a “budget” version of a proper tape backup system. The fundamental physics of a spinning disc mean that these aren’t a good solution for rapid seeking of specific sectors to read and write and so forth.

    So a decent choice for the big machine you back up all your VMs to in a corporate environment. Not a great solution for all the anime you totally legally obtained on Yahoo.

    Not sure if the general advice has changed, but you are still looking for a sweet spot in the 8-12 TB range for a home NAS where you expect to regularly access and update a large number of small files rather than a few massive ones.

    • Blue_Morpho@lemmy.world · 2 months ago

      > The fundamental physics of a spinning disc mean that these aren’t a good solution for rapid seeking of specific sectors to read and write and so forth.

      It’s no SSD, but it’s no slower than any other 12TB drive. It’s not shingled but HAMR. The sectors are closer together, so it has even better seek speed than a regular 12TB drive.

      > Not a great solution for all the anime you totally legally obtained on Yahoo.

      ???

      It’s absolutely perfect for that. Even if it were shingled tech, that only slows write speeds. Unless you are editing your own video, write seek times are irrelevant. For media playback, only consistent read speed matters. Not even read seek matters, except in extreme conditions like comparing tape seek to drive seek. You cannot measure a 10 ms difference between clicking a video and it starting to play, because of all the other delays caused by media streaming over a network.

      But that’s not even relevant, because these have faster read seeking than older drives, since their sectors are closer together.

    • barkingspiders@infosec.pub · 2 months ago

      honestly curious, why the hell was this downvoted? I work in this space and I thought this was still the generally accepted advice?

      • NuXCOM_90Percent@lemmy.zip · 2 months ago

        Because people are thinking through specific niche use cases coupled with “Well it works for me and I never do anything ‘wrong’”.

        I’ll definitely admit that I made the mistake of trying to have a bit of fun when talking about something that triggers the Dunning-Kruger effect. But people SHOULD be aware of how different use patterns impact performance, how that performance impacts users, and generally how different use patterns impact wear and tear on the drive.

        • Blue_Morpho@lemmy.world · 2 months ago

          Come on man, everything, and I mean everything you said is wrong.

          Budget tape backup?

          No, you can’t even begin to compare drives to tape. They’re completely different use cases. A hard drive can contain a backup, but it’s not physically robust enough to be unplugged, rotated off-site, and put into long-term storage like tape is. You might as well say a Honda Accord is a budget semi tractor-trailer.

          Then you specifically called out personal downloads of anime as a bad use case. That’s absolutely wrong in all cases.

          It is absurd to imply that everyone except you is less knowledgeable and using a niche case.

      • CarbonatedPastaSauce@lemmy.world · 2 months ago

        > Not a great solution for all the anime you totally legally obtained on Yahoo.

        Mainly because of that. Spinning rust drives are perfect for large media libraries.

        There isn’t a hard drive made in the last 15 years that couldn’t handle watching media files. Even the SMR crap the manufacturers introduced a while back could do that without issue. For 4K video you’re going to see average transfer speeds of 50MB/s and peaks in the low-100MB/s range, and that’s for high-quality videos. Write speed is irrelevant for media consumption, and unless your hard drive is ridiculously fragmented, seek speed is also irrelevant. Even an old 5400 RPM SATA drive is going to be able to handle that load 99.99% of the time. And anything lower than 4K video is a slam dunk.

        Everything I just said goes right out the window for a multi-user system that’s streaming multiple media files concurrently, but the vast majority of people never need to worry about that.
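
        Taking those figures at face value (a sketch; the 120 MB/s sequential rate for an old 5400 RPM SATA drive is an assumed round number):

        ```python
        # Headroom between a slow drive's sequential reads and a demanding stream.
        drive_mb_s = 120   # assumed old 5400 RPM SATA sequential read rate
        stream_mb_s = 50   # high-quality 4K average, per the comment above
        print(f"headroom: {drive_mb_s / stream_mb_s:.1f}x")           # 2.4x
        print(f"concurrent 4K streams: {drive_mb_s // stream_mb_s}")  # 2
        ```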

    • mosiacmango@lemm.ee · 2 months ago

      Not sure what you’re going on about here. Even these disks have plenty of read/write performance for rarely written data like media. They have the same ability to be used with error-checking filesystems like ZFS or Btrfs, and can be used in RAID arrays, which add redundancy against disk failure.

      The only negatives of large drives in home media arrays are the cost, slightly higher idle power usage, and the resilvering time when replacing a bad disk in an array. Your 8-12TB recommendation already has most of these negatives. Adding more space per disk just scales them linearly.

      • ricecake@sh.itjust.works · 2 months ago

        Additionally, most media is read in a contiguous scan. Streaming media is very much not random access.

        Your typical access pattern is going to be seeking to a chunk, reading a few megabytes of data in a row for the streaming application to buffer, and then moving on. The ~10ms of access time at the start is next to irrelevant. Particularly when you consider that the OS has likely observed that you have unutilized RAM and loaded the entire file into the memory cache, bypassing the hard drive entirely.
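
        A toy model of that pattern (illustrative only; real players buffer adaptively, and the page cache usually short-circuits the disk anyway):

        ```python
        def stream(path: str, chunk_mb: int = 4) -> None:
            """One seek to the start of the file, then purely sequential reads."""
            with open(path, "rb") as f:
                while chunk := f.read(chunk_mb * 1024 * 1024):
                    pass  # hand each chunk to the player's decode buffer
        ```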

    • CarbonatedPastaSauce@lemmy.world · 2 months ago

      I’m real curious why you say that. I’ve been designing systems with high-IOPS data center application requirements for decades, so I know enterprise storage pretty well. These drives would cause zero issues for anyone storing and watching their media collection with them.

    • IrateAnteater@sh.itjust.works · 2 months ago

      HDD read rates are way faster than media playback rates, and seek times are just about irrelevant in that use case. Spinning rust is fine for media storage. It’s boot drives, VM/container storage, etc, that you would want to have on an SSD instead of the big HDD.

    • barsoap@lemm.ee · 2 months ago

      Not sure whether we’ll arrive there; the tech is definitely entering the taper-out phase of the sigmoid. Capacity might very well still become cheaper, even 3x cheaper, but don’t, in any way, expect it to simultaneously keep up in write performance; that ship has long since sailed. The more bits they try to squeeze into a single cell, the slower it gets, and the price per cell isn’t going to change much anymore, as silicon has hit a price wall: it’s been a while since the newest, smallest node was also the cheapest.

      OTOH, how often do you write a terabyte in one go at full tilt?

      • dual_sport_dork 🐧🗡️@lemmy.world · 2 months ago

        I don’t think anyone has much issue with our current write speeds, even at dinky old SATA 6Gb/s levels. At least for bulk media storage. Your OS boot or game loading, whatever, maybe not. I’d be just fine with exactly what we have now, but just pack more chips in there.

        Even if you take apart one of the biggest, meanest, most expensive 8TB 2.5" SSDs, the casing is mostly empty inside. There’s no reason they couldn’t just add more chips even at the current density levels, other than artificial market segmentation, planned obsolescence, and pigheadedness. It seems the major consumer manufacturers refuse to allow their 2.5" SSDs to get out of parity with the capacities on offer in the M.2 form factor drives that everyone is hyperfixated on for some reason, and the pricing structure between 8TB and what few greater-than-8TB models are actually on offer is nowhere near linear, even though the manufacturing cost roughly should be.

        If people are still willing to use a “full size” 3.5" form factor with ordinary hard drives for bulk storage, can you imagine how much solid state storage you could cram into a casing that size, even with current low-cost commodity chips? It’d be tons. But the only options available are “enterprise solutions” which are apparently priced with the expectation you’ll have a Fortune 500 or government expense account.

        It’s bullshit all the way down; there’s nothing new under the sun in that regard.

        • barsoap@lemm.ee · 2 months ago

          > the M.2 form factor drives that everyone is hyperfixated on for some reason

          The reason is transfer speed: SATA is slow, while M.2 is a direct PCIe link, and SSDs can saturate it, at least in bursts. Doubling the capacity of a 2.5" SSD roughly doubles its price, since you need twice as many chips, and there’s not really a market for 500-buck SATA SSDs; you’re looking at U.2/U.3 ones. Yes, they’re quite a bit more expensive per TB, but look at the difference in TBW compared to consumer SSDs.
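
          The interface gap in rough, well-known rated numbers (a sketch; sustained real-world speeds vary):

          ```python
          # SATA III: 6 Gb/s on the wire, 8b/10b encoding -> ~600 MB/s usable.
          # PCIe 4.0 x4 (a typical M.2 slot): ~2 GB/s per lane -> ~8 GB/s.
          sata_mb_s = 6_000 / 10        # 8b/10b: 10 wire bits per data byte
          pcie_x4_mb_s = 4 * 2_000
          print(f"SATA III ≈ {sata_mb_s:.0f} MB/s, PCIe 4.0 x4 ≈ {pcie_x4_mb_s} MB/s "
                f"-> about {pcie_x4_mb_s / sata_mb_s:.0f}x")
          ```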

          If you’re a consumer and want a data grave, buy spinning platters. Or even a tape drive. You neither want, nor need, a high-capacity SSD.

          Also you can always RAID them up.

          • dual_sport_dork 🐧🗡️@lemmy.world · 2 months ago

            For the context of bulk consumer storage (or even a SOHO NAS) that’s irrelevant, though, because people are already happily using spinning mechanical 3.5" hard drives for this purpose, and they’re all already SATA. Therefore there’s no logical reason to worry about the physical size or slower write speeds of packing a bunch of flash chips into the same-sized enclosure for those particular use cases.

            There are reasons a big old SSD would be suitable for this: silence, reliability, no spin-up delay, resistance to outside mechanical forces, etc.

  • dragonlobster@programming.dev · 2 months ago

    These things are unreliable; I had 3 Seagate HDDs in a row fail on me. Never had an issue with SSDs and never looked back.

    • WhyJiffie@sh.itjust.works · 2 months ago

      Well, until you need the capacity, why not use an SSD? It’s basically mandatory for the operating system drive, too.

        • WhyJiffie@sh.itjust.works · 2 months ago

          I would rather not buy such large SSDs. For most stuff the performance advantage is useless while the price is much higher, and my impression is still that such large SSDs have a shorter lifespan (in terms of how many writes it takes for them to break down). Recovering data from a failing HDD is also easier: SSDs just turn read-only or completely fail at some point, and in the latter case even data recovery companies are often unable to recover anything, while HDDs will often give signs that good monitoring software can detect weeks or months in advance, so that you know to be more cautious with them.

          • prosp3kt@lemmy.dbzer0.com · 2 months ago

            How is it easier? Do you open your HDDs and take the info from there? Do you have specialized equipment and knowledge? Second, if you detect via SMART that you are getting close to the TBW limit, change the SSD, duh… SMART is a lot more effective on SSDs; depending on the model, it even gives you an estimated time to live…

    • vithigar@lemmy.ca · 2 months ago

      Seagate drives in general are unreliable in my own anecdotal experience. Every Seagate I’ve owned has died in less than five years. I couldn’t give you an estimate of the average failure age of my WD drives, because none ever failed before being retired due to obsolescence. They regularly lasted over a decade, though.

    • SupraMario@lemmy.world · 2 months ago

      I stopped buying Seagates when I had 4 of their 2TB Barracuda drives die within 6 months… I was constantly RMAing them. I finally got pissed, sold them, and bought WD Reds; I’ve still got 2 of the Reds in my NAS playing hot backups with nearly 8 years of power-on time.

      • Blackmist@feddit.uk · 2 months ago

        I recently had to send back a Barracuda drive as well. I’m seeing if the IronWolf drive fares any better.

        • SupraMario@lemmy.world · 2 months ago

          I have heard good things about their IronWolf drives, but that’s an enterprise-solution drive, so hopefully it’s worth it.

      • kungen@feddit.nu · 2 months ago

        I have several WDs with almost 15 years of power-on time, not a single failure. Whereas my work bought a bunch of Seagates, and our cluster was basically halved after less than 2 years. I have no idea how Seagate can suck so much.

        • SupraMario@lemmy.world · 2 months ago

          About 10 years ago now, at a past employer, we had a NAS setup that housed a bunch of medical data… all Seagate drives. During my Xmas PTO… I was lead on DR… yea, the fuckers all started failing one after another. It took out 14 drives before the storage team said fuck this, pulled it offline, and had a new NAS brought in from EMC. It was a fun Xmas restoring all that shit. Seagate used to be my go-to, but it seems like every single interaction I have with them ends in disaster.

          • kalleboo@lemmy.world · 2 months ago

            Seagate was my go-to after I had bought those original IBM DeathStars and had to RMA the RMA replacement drive after a few months. But brand loyalty is for suckers. It seems Seagate had a really bad run after they acquired Maxtor, who always had a bad reputation.

      • Cornelius_Wangenheim@lemmy.world · 2 months ago

        They seem to be real hit-or-miss. I also have 2 6TB Barracudas with 70,000 power-on hours (8 yrs) that are still going fine.

        • john89@lemmy.ca · 2 months ago

          “Hit or miss” is unfortunately not good enough for consumer electronics.

          It means you’re essentially gambling with bad odds so the business you’re giving money to can get away with cutting corners.