TBW readings for VSAN disks (update!)
As I just removed two of my VSAN capacity disks from my cluster nodes, I took the chance to look at the TBW data of the disks.
Funnily enough, I removed the two newest disks (unintentionally). One of them I bought just when I built the VSAN cluster (#2), and the other came as a replacement for a failed disk (#1). This means both disks have only been in use within the VSAN environment and not in the RAID5 environment I wrote about before.
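For anyone who wants to pull the same numbers: on Samsung consumer SSDs, the total written data can be derived from SMART attribute 241 (`Total_LBAs_Written`), e.g. via `smartctl -A /dev/sdX`. A minimal sketch of the conversion, assuming the raw value counts 512-byte logical sectors (which holds for these Samsung drives, but is vendor-specific), with a made-up raw value for illustration:

```python
# Convert the raw SMART attribute 241 (Total_LBAs_Written) into TB written.
# Assumption: the raw value counts 512-byte logical sectors; this is true for
# Samsung consumer SATA SSDs but not guaranteed for other vendors.

SECTOR_SIZE = 512  # bytes per logical sector

def lbas_to_tb(total_lbas_written: int) -> float:
    """Return terabytes (10^12 bytes) written, as SSD vendors rate TBW."""
    return total_lbas_written * SECTOR_SIZE / 1e12

# Hypothetical raw value as it might appear in `smartctl -A` output:
raw_lbas = 10_566_406_250
print(f"{lbas_to_tb(raw_lbas):.2f} TB written")  # → 5.41 TB written
```

Note that this uses decimal terabytes (10^12 bytes), since that is how vendors state their TBW ratings.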
Disk | Power On Time | Total TB Written | Average writes/day |
---|---|---|---|
Samsung Evo 870 4TB #1 (…87W) | 79 days | 5.41 TB | 70 GB |
Samsung Evo 870 4TB #2 (…63H) | 113 days | 6.86 TB | 62 GB |
These 70 GB/day are less than half of the 150 GB/day I saw in my previous RAID5 setup with the same workload.
Extrapolating this to 5 years of usage, I would be fine with about 125 TBW of endurance on each of the capacity SSDs. With the Samsung EVO 870 having a 2.4-petabyte TBW endurance rating for the 4TB model, the disk could theoretically live for about 94 years at this rate. Well, I hope I’ll see that happening 🙂
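For reference, a quick sketch of that arithmetic (the 70 GB/day is my rounded measurement from above; the 2,400 TB rating is Samsung's published endurance for the 4TB 870 EVO):

```python
# Rough endurance projection for a capacity SSD, based on the measured
# average daily write rate from the table above.

GB_PER_DAY = 70          # measured average writes per day (rounded)
TBW_RATING_TB = 2400     # Samsung 870 EVO 4TB endurance rating (2.4 PB)

# Writes accumulated over 5 years of this workload
five_year_tb = GB_PER_DAY * 365 * 5 / 1000
print(f"5-year total: {five_year_tb:.0f} TB")               # → 128 TB

# Theoretical lifetime until the TBW rating is reached
lifetime_years = TBW_RATING_TB * 1000 / GB_PER_DAY / 365
print(f"theoretical lifetime: {lifetime_years:.0f} years")  # → 94 years
```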
While this is specific to my workload, it shows that TBW should not be overrated for home lab usage. I might not have the heaviest of all workloads, but I do run a decent lab and even store some (lab) backups on the VSAN datastore.
Update 05.08.2022
I exchanged some hardware in my node #3 on 22.07.2022, so I took the chance and looked at the written TB again after the VSAN migration. The values in brackets are the differences compared to the readings taken before I added the node to the VSAN cluster. I also added the boot device (which was active in another system before) and the cache disk (brand new, with all values at 0 before).
Disk | Power On Time | Total TB Written | Average writes/day |
---|---|---|---|
Samsung Evo 870 4TB #1 (…87W) | 82 days (+3) | 8.85 TB (+3.44) | 111 GB (+41) |
Samsung Evo 870 4TB #2 (…63H) | 116 days (+3) | 7.23 TB (+0.37) | 64 GB (+2) |
PNY XLR8 CS3030 M.2 250GB (ESXi Boot) | 93 days | 1,34 TB | 15 GB |
Gigabyte AORUS Gen 4 M.2 1TB (Cache) | 3 days | 7 TB | 2389 GB |
As you can see from the cache disk, a lot of data was moved when I added the system to the VSAN cluster. I was a little shocked by the difference between the two capacity disks, though: roughly a factor of 10 in written data.
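That factor between the two capacity disks over those 3 days can be checked quickly with the deltas from the table above:

```python
# Compare the 3-day write deltas of the two capacity disks after the
# node joined the VSAN cluster (values in TB, from the table above).

delta_disk1_tb = 3.44   # Samsung Evo 870 4TB #1
delta_disk2_tb = 0.37   # Samsung Evo 870 4TB #2
days = 3

ratio = delta_disk1_tb / delta_disk2_tb
per_day_disk1_gb = delta_disk1_tb * 1000 / days
per_day_disk2_gb = delta_disk2_tb * 1000 / days

print(f"disk #1: {per_day_disk1_gb:.0f} GB/day")  # → 1147 GB/day
print(f"disk #2: {per_day_disk2_gb:.0f} GB/day")  # → 123 GB/day
print(f"ratio: {ratio:.1f}x")                     # → 9.3x
```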
I don’t put too much weight on these values though, as they only cover the first 3 days, when all the VSAN migration work happened. I’ll try to collect more TBW readings from this system in the future to paint a more realistic picture.