[09:00:38] Morning. It looks like the disks are full on 10 of 24 backup hosts, see https://phabricator.wikimedia.org/T413853. I know j.ynus is out until tomorrow but giving you a heads up.
[09:12:41] thanks, I worried about this before the break, but I think the answer is there's not much to be done until j.ynus is back (and maybe there is more hardware)
[14:32:42] Emperor: disks fail fairly regularly on Swift, how are we these days wrt spares? what is turnaround on replacement like?
[14:37:35] urandom: we should typically have spares in (complicated by the variety of disks we have deployed), turnaround usually less than a week.
[14:37:46] we've not had to swap one in a bit though (I say, tempting fate)
[14:38:46] and is a week a tolerable period?
[14:39:15] is that within bounds for safety/risk?
[14:39:31] s/is that/is that comfortably/
[14:40:21] safety-wise, it's fine. It can be a problem if we're in the middle of a load/drain process, because the ring manager won't make changes with a failed disk
[14:43:53] How does this work wrt warranty? For other hosts, it seems we keep no spares, and have to request a replacement under warranty for replacements. And, once a host is out of warranty, we get devices that were removed from decommissioned hosts (subject to availability). Did we work out some sort of exception here?
[14:44:16] s/for replacements//g
[14:47:31] Willy would know for sure; AIUI there's not a warranty issue with disk swaps for in-warranty systems (out-of-warranty ones still tend to end up with scavenged spares). At my previous place we got our h/w vendor to send us a bunch of warranty-replacement disks in advance, but I don't know how it works here.
[14:55:17] Thanks, I’ll follow up with Willy and kwakuofor.i