[00:00:47] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 7.19, 6.21, 4.79
[00:02:28] PROBLEM - graylog2 Puppet on graylog2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[00:02:47] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 6.21, 6.12, 4.93
[00:06:48] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 5.00, 6.84, 5.61
[00:08:48] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 4.29, 5.96, 5.43
[00:30:26] RECOVERY - graylog2 Puppet on graylog2 is OK: OK: Puppet is currently enabled, last run 40 seconds ago with 0 failures
[00:39:11] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 136s
[00:41:10] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s
[03:13:11] PROBLEM - services3 APT on services3 is CRITICAL: APT CRITICAL: 31 packages available for upgrade (3 critical updates).
[03:41:42] PROBLEM - services4 APT on services4 is CRITICAL: APT CRITICAL: 31 packages available for upgrade (3 critical updates).
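The dbbackup2 replication alerts above flip between WARNING, CRITICAL, and OK as Seconds_Behind_Master crosses thresholds. A minimal sketch of that classification, with hypothetical warn/crit thresholds (the real Icinga check's values are not shown in this log, and a real check would read the lag from `SHOW SLAVE STATUS` on the replica rather than hard-coding it):

```shell
# Sketch of the lag thresholding behind the "Check MariaDB Replication" alerts.
# WARN/CRIT values are assumptions for illustration only.
WARN=100
CRIT=200
lag=136   # stand-in value; e.g. the 136s that triggered the 00:39:11 WARNING

if [ "$lag" -ge "$CRIT" ]; then
  state=CRITICAL
elif [ "$lag" -ge "$WARN" ]; then
  state=WARNING
else
  state=OK
fi
echo "MariaDB replication - $state - Seconds_Behind_Master : ${lag}s"
```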
[03:50:38] PROBLEM - techwiki.techboyg5blog.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for techwiki.techboyg5blog.com could not be found
[05:45:15] !log change privacy@ to redirect to trustandsafety@
[05:45:18] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[05:51:23] !log reception@jobrunner3:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/moveBatch.php --wiki tuscriaturaswiki -r "[[phab:T7225|Requested]]" --noredirects /home/reception/tuscriaturias.txt
[05:51:25] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[05:57:40] !log reception@jobrunner4:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/deleteBatch.php --wiki wintergatancommunitywiki --r "[[phab:T7231|Requested]]" /home/reception/templatepages.txt
[05:57:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[06:09:03] !log reception@jobrunner3:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki uo2wikiwiki --username-prefix="ultimate-obby-20-roblox" /home/reception/ultimateobby20roblox_pages_full.xml
[06:09:05] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[06:13:41] RECOVERY - services4 APT on services4 is OK: APT OK: 28 packages available for upgrade (0 critical updates).
[06:55:12] RECOVERY - services3 APT on services3 is OK: APT OK: 28 packages available for upgrade (0 critical updates).
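The sslhost rDNS alerts in this log fire when a custom domain's IP has no PTR record, and recover once it resolves to one of the cp* cache proxies. A rough sketch of that verification, with the lookup stubbed out so it runs offline (a live check would do a real PTR query, e.g. `dig +short -x <ip>`; the stub and the documentation IP below are assumptions for the demo):

```shell
# Sketch of what the sslhost reverse-DNS check asserts.
lookup_ptr() {
  # Stub standing in for `dig +short -x "$1"`; pretends every IP maps to cp11.
  echo "cp11.miraheze.org"
}

check_rdns() {
  domain=$1 ip=$2
  ptr=$(lookup_ptr "$ip")
  case "$ptr" in
    cp*.miraheze.org) echo "rDNS OK - $domain reverse DNS resolves to $ptr" ;;
    "")               echo "rDNS WARNING - reverse DNS entry for $domain could not be found" ;;
    *)                echo "rDNS WARNING - $domain reverse DNS resolves to unexpected host $ptr" ;;
  esac
}

check_rdns techwiki.techboyg5blog.com 192.0.2.1   # 192.0.2.1 is a documentation IP, not the real one
```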
[08:34:08] PROBLEM - ping6 on cp3 is CRITICAL: PING CRITICAL - Packet loss = 0%, RTA = 499.38 ms
[08:36:06] PROBLEM - ping6 on cp3 is WARNING: PING WARNING - Packet loss = 0%, RTA = 334.94 ms
[08:38:05] PROBLEM - ping6 on cp3 is CRITICAL: PING CRITICAL - Packet loss = 0%, RTA = 357.23 ms
[08:40:05] PROBLEM - ping6 on cp3 is WARNING: PING WARNING - Packet loss = 0%, RTA = 347.34 ms
[08:46:05] PROBLEM - ping6 on cp3 is CRITICAL: PING CRITICAL - Packet loss = 0%, RTA = 386.07 ms
[08:48:05] PROBLEM - ping6 on cp3 is WARNING: PING WARNING - Packet loss = 0%, RTA = 324.96 ms
[08:52:04] RECOVERY - ping6 on cp3 is OK: PING OK - Packet loss = 0%, RTA = 243.30 ms
[13:20:45] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.03, 5.14, 3.78
[13:21:01] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 6.84, 6.25, 4.88
[13:21:05] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2001:41d0:800:178a::5/cpweb, 2001:41d0:800:1bbd::4/cpweb, 51.222.25.132/cpweb
[13:21:12] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 25.16, 19.79, 15.33
[13:21:14] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 3 datacenters are down: 2400:6180:0:d0::403:f001/cpweb, 51.195.236.219/cpweb, 51.195.236.250/cpweb
[13:21:52] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 7.56, 7.12, 5.02
[13:22:42] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 4.06, 4.80, 3.82
[13:22:53] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 5.71, 6.97, 5.19
[13:22:56] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 4.23, 5.54, 4.79
[13:23:03] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[13:23:06] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 12.26, 17.07, 14.88
[13:23:09] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[13:23:48] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 4.74, 6.22, 4.94
[13:24:56] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 3.49, 5.69, 4.94
[15:17:11] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 243s
[15:23:11] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 122s
[15:25:12] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s
[15:25:45] PROBLEM - mw11 Current Load on mw11 is CRITICAL: CRITICAL - load average: 8.87, 6.61, 5.49
[15:27:41] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 7.27, 6.72, 5.65
[15:33:39] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 5.59, 6.73, 6.08
[15:34:53] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 6.84, 6.78, 5.97
[15:36:52] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 5.38, 6.29, 5.89
[16:07:11] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 204s
[16:08:14] !log sudo puppet agent -tv && sudo -u www-data php /srv/mediawiki/w/maintenance/mergeMessageFileList.php --output /srv/mediawiki/config/ExtensionMessageFiles.php --wiki loginwiki && sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildLocalisationCache.php --wiki loginwiki on mw*
[16:08:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[16:11:11] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 4.61, 3.59, 2.38
[16:13:11] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 2.95, 3.26, 2.41
[16:19:06] Was that Doug's request Reception123
[16:21:11] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.40, 3.72, 2.95
[16:23:10] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 2.83, 3.38, 2.92
[16:24:48] RhinosF1: yeah, for the T&S i18n
[16:24:52] I thought I did it before but guess not
[16:30:03] Reception123: ack, he asked last night but I was already half asleep and about to close my eyes
[16:31:10] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.07, 3.92, 3.38
[16:31:49] yeah I know why it didn't work before, because as I told you before the && doesn't work after puppet runs
[16:32:00] Oh ye
[16:33:11] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 2.10, 3.24, 3.19
[16:33:52] Reception123: try echo $? after running puppet
[16:34:15] Hang on
[16:34:20] Type RC=$?
[16:34:32] Then echo $RC
[16:37:12] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s
[16:38:31] Reception123: `puppet agent -t` has `--detailed-exit-codes` on by default, which means that it will return exit code 0 (success) only when there were no changes made
[16:38:49] afaik you have some execs like git pulls on every puppet run, which means that it will never return 0
[16:38:49] Majavah: oh, that would make sense then yeah.
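Majavah's point about the exit codes can be sketched like this: with `--detailed-exitcodes`, Puppet exits 2 when changes were applied successfully, so a bare `&&` (which only continues on 0) never fires on hosts that change something every run. A minimal illustration, with a stub function standing in for the real agent so it runs anywhere:

```shell
# `puppet agent -t` implies --detailed-exitcodes: 0 = no changes,
# 2 = changes applied successfully, 4/6 = failures. A bare `&&` only
# continues on exit 0, so it never fires on hosts that change every run.
# fake_puppet is a stub standing in for the real agent (assumption for the demo).
fake_puppet() { return 2; }

fake_puppet
rc=$?                      # capture immediately; the next command overwrites $?
if [ "$rc" -eq 0 ] || [ "$rc" -eq 2 ]; then
  echo "puppet run ok (rc=$rc), safe to run the follow-up command"
else
  echo "puppet run failed (rc=$rc)" >&2
fi
```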
Thanks :)
[16:39:04] I haven't really been trying to use && after puppet until recently so that's why I didn't know
[16:39:42] and yeah, we do have quite a few things on every run
[16:41:56] Majavah: how do you make it safe for whacking with &&
[16:44:10] RhinosF1: on wmf systems there is a wrapper script `run-puppet-agent` which does some magic related to that (and other things), but at least it uses `puppet agent --onetime --no-daemonize` instead of `puppet agent --test` and so does not include the detailed exit codes flag
[16:44:52] Majavah: that's fancy
[16:46:53] Reception123: do we want to consider that
[16:47:13] we also don't like doing things every puppet run and prefer to use systemd timers or crons for that
[16:47:24] I think prod has alerts for that kind of stuff even
[16:48:52] I think git module should know when there's a change or not
[16:48:57] topic fixed
[16:49:00] Because the refresh stuff don't always go off
[16:50:08] RhinosF1: we could consider it yeah
[16:50:20] https://github.com/wikimedia/puppet/search?q=git%3A%3Aclone&type=
[16:50:21] [ Search · git::clone · GitHub ] - github.com
[16:50:47] Reception123: I think using simply puppet agent -v should be fine as it's the -t that's an issue
[16:51:24] PROBLEM - monarchists.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for monarchists.wiki could not be found
[16:51:26] PROBLEM - tep.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for tep.wiki could not be found
[16:51:28] PROBLEM - wiki.redeemer.live - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.redeemer.live could not be found
[16:51:28] PROBLEM - heavyironmodding.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for heavyironmodding.org could not be found
[16:51:29] PROBLEM - www.burnout.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.burnout.wiki could not be found
[16:51:29] PROBLEM - www.trollpasta.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.trollpasta.com could not be found
[16:51:30] PROBLEM - wiki.ct777.cf - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.ct777.cf could not be found
[16:51:31] PROBLEM - archive.a2b2.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for archive.a2b2.org could not be found
[16:51:49] yeah, probably, though it is a bit minor anyway
[16:58:11] RECOVERY - monarchists.wiki - reverse DNS on sslhost is OK: rDNS OK - monarchists.wiki reverse DNS resolves to cp10.miraheze.org
[16:58:16] RECOVERY - tep.wiki - reverse DNS on sslhost is OK: rDNS OK - tep.wiki reverse DNS resolves to cp10.miraheze.org
[16:58:16] RECOVERY - wiki.redeemer.live - reverse DNS on sslhost is OK: rDNS OK - wiki.redeemer.live reverse DNS resolves to cp11.miraheze.org
[16:58:20] RECOVERY - heavyironmodding.org - reverse DNS on sslhost is OK: rDNS OK - heavyironmodding.org reverse DNS resolves to cp11.miraheze.org
[16:58:20] RECOVERY - www.burnout.wiki - reverse DNS on sslhost is OK: rDNS OK - www.burnout.wiki reverse DNS resolves to cp11.miraheze.org
[16:58:21] RECOVERY - www.trollpasta.com - reverse DNS on sslhost is OK: rDNS OK - www.trollpasta.com reverse DNS resolves to cp11.miraheze.org
[16:58:27] RECOVERY - archive.a2b2.org - reverse DNS on sslhost is OK: rDNS OK - archive.a2b2.org reverse DNS resolves to cp11.miraheze.org
[16:58:28] RECOVERY - wiki.ct777.cf - reverse DNS on sslhost is OK: rDNS OK - wiki.ct777.cf reverse DNS resolves to cp11.miraheze.org
[17:02:21] Reception123, ty for running rebuildLC :)
[17:14:38] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 2.78, 3.78, 2.20
[17:16:38] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.05, 3.01, 2.12
[17:21:52] no problem (I sent this before but my internet crashed so I don't think it got sent)
[17:25:34] oh
[17:26:17] is your router okay?
Hope you don't have to buy a new router. I'm on my third router, mind you third router in like almost 20 years, I think, so that's not bad
[17:34:52] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 6.89, 5.75, 4.90
[17:36:53] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 4.96, 5.59, 4.96
[17:41:23] dmehus: don't you get ISP ones
[17:48:51] RhinosF1, yeah I have an ISP one, but I still like my D-Link router, so I have maybe a bit different network setup. PCs and mobile/streaming devices connect through D-Link router that, in turn, connects to the Telus router/modem. Television boxes from ISP/telecom provider connect directly to the ISP-provided router/modem
[17:49:49] dmehus: we just use the ISP one. I don't think I've ever had to replace that but then again we probably never have them that long.
[17:51:00] RhinosF1, yeah, ISP ones are pretty good actually. We only had ours replaced recently when we renewed our contract and had an unrelated technical issue.
Service tech said Telus has an order for them to retrieve outmoded models on sight
[17:51:35] dmehus: yeah only time we change is upgrades at contract renewal time
[17:51:49] yeah
[17:52:06] weird it's branded with the ISP's logo, there's not even a manufacturer name on it
[17:52:12] model number is T3200-M
[17:52:18] * dmehus is googling that
[17:53:12] ah, they still use Actiontec it seems, https://www.actiontec.com/products/wifi-routers-gateways/vdsl/t3200m/
[17:53:12] [ T3200M - MoCA 2.0 Bonded VDSL2 802.11ac vectoring G.fast/PON 4 Port GigE - Actiontec.com ] - www.actiontec.com
[17:57:11] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 230s
[17:58:03] PROBLEM - wiki.velaan.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.velaan.org could not be found
[17:58:03] PROBLEM - hr.petrawiki.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for hr.petrawiki.org could not be found
[17:58:04] PROBLEM - www.lab612.at - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.lab612.at could not be found
[17:58:06] PROBLEM - kk.uncyc.tk - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for kk.uncyc.tk could not be found
[17:58:07] PROBLEM - wikilukas.tk - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wikilukas.tk could not be found
[17:58:09] PROBLEM - www.hoolehistoryheritagesociety.org.uk - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.hoolehistoryheritagesociety.org.uk could not be found
[17:58:09] PROBLEM - wiki.patriam.cc - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.patriam.cc could not be found
[17:58:09] PROBLEM - opendatascot.org - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for opendatascot.org could not be found
[17:58:14] PROBLEM - wiki.thehall.xyz - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.thehall.xyz could not be found
[17:59:30] dmehus: ah
[18:01:12] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.43, 3.34, 2.29
[18:02:44] as long as you don't pay rent for the router... ridiculous things that are happening.
[18:03:12] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 1.98, 2.87, 2.25
[18:03:13] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s
[18:04:48] RECOVERY - wikilukas.tk - reverse DNS on sslhost is OK: rDNS OK - wikilukas.tk reverse DNS resolves to cp10.miraheze.org
[18:04:50] RECOVERY - www.hoolehistoryheritagesociety.org.uk - reverse DNS on sslhost is OK: rDNS OK - www.hoolehistoryheritagesociety.org.uk reverse DNS resolves to cp11.miraheze.org
[18:04:51] RECOVERY - wiki.patriam.cc - reverse DNS on sslhost is OK: rDNS OK - wiki.patriam.cc reverse DNS resolves to cp10.miraheze.org
[18:04:52] RECOVERY - opendatascot.org - reverse DNS on sslhost is OK: rDNS OK - opendatascot.org reverse DNS resolves to cp11.miraheze.org
[18:05:00] RECOVERY - wiki.thehall.xyz - reverse DNS on sslhost is OK: rDNS OK - wiki.thehall.xyz reverse DNS resolves to cp11.miraheze.org
[18:05:01] RECOVERY - wiki.velaan.org - reverse DNS on sslhost is OK: rDNS OK - wiki.velaan.org reverse DNS resolves to cp10.miraheze.org
[18:05:01] RECOVERY - hr.petrawiki.org - reverse DNS on sslhost is OK: rDNS OK - hr.petrawiki.org reverse DNS resolves to cp11.miraheze.org
[18:05:03] RECOVERY - www.lab612.at - reverse DNS on sslhost is OK: rDNS OK - www.lab612.at reverse DNS resolves to cp11.miraheze.org
[18:05:05] RECOVERY - kk.uncyc.tk - reverse DNS on sslhost is OK: rDNS OK - kk.uncyc.tk reverse DNS resolves to cp10.miraheze.org
[18:12:41] @Kozd, yeah
[18:13:24] !log removed mrjarsolavik from CVT mail list
[18:13:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[18:31:12] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 296s
[18:33:12] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 179s
[18:35:11] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s
[18:44:49] paladox, do you know what the webmail session timeout is set to? It seems to be, like, less than one hour? Wondering if we could change that.
[18:45:07] Oh wait, there's probably a mail session length setting I can change in my mail
[18:46:24] oh, nope I don't see a setting
[18:47:11] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 226s
[18:51:11] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 136s
[18:53:11] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s
[18:55:03] @owen: can you update https://staff.miraheze.org/wiki/Threat_Response_Protocol
[18:55:04] [ Permission error - Miraheze Staff Wiki ] - staff.miraheze.org
[18:55:21] I assume T&S rather than you is the contact now
[18:55:36] Not sure if dmehus wants his contact info on there but that's up to him
[19:00:28] RhinosF1: the page can be deleted, it's of no use now
[19:01:10] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 229s
[19:01:26] RhinosF1: linking to [[Help]] in https://meta.miraheze.org/wiki/Responding_to_threats is also probably not great
[19:01:27] [ Responding to threats - Miraheze Meta ] - meta.miraheze.org
[19:01:47] I'd propose to change that to SN, as "other concerns" would likely be a Steward/GS concern
[19:03:10] @Reception123 Wouldn't that be made on the Requests for Comments page?
[19:03:32] Hm? I'm very confused
[19:03:45] How is RfC related to this in any way?
[19:04:07] Oh, I thought it could've been brought up on the RfC page. Guess not.
[19:05:11] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 1s
[19:05:11] What could be brought up there?
[19:05:24] I simply meant instead of directing users to a redlink Help page we should rather direct them to SN
[19:05:48] Hmmmm, that ain't a bad idea.
[19:06:09] * Reception123 is still confused but okay
[19:17:12] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 193s
[19:19:12] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 259s
[19:19:51] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.49, 3.23, 2.60
[19:21:11] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 68s
[19:21:48] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 3.27, 3.23, 2.68
[19:25:11] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 261s
[19:27:40] PROBLEM - dbbackup2 Current Load on dbbackup2 is CRITICAL: CRITICAL - load average: 5.08, 4.34, 3.29
[19:28:31] Reception123, I think we can just remove the [[Help]] redlink on [[Responding to threats]].
It's like a holdover from a Wikimedia page import
[19:29:11] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 120s
[19:29:38] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 2.72, 3.65, 3.15
[19:29:40] dmehus: well yeah, but I do want to suggest a page for less serious concerns
[19:29:43] and my preference would be SN
[19:29:58] Reception123, oh true, yeah, I'd have no objection to that if @Owen doesn't
[19:30:36] "For [[Code of Conduct]] or other community-related concerns, please see [[Stewards' noticeboard]]" or something similar
[19:30:49] yeah, we don't want people to think T&S deals with every single minor complaint. Of course in doubt it's always best to contact T&S, but we do want to provide a link to community venues too for matters that don't require ToU action, which realistically are > 95% of them
[19:30:53] yeah
[19:33:12] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 285s
[19:33:37] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 3.08, 3.34, 3.14
[19:37:33] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.30, 3.82, 3.42
[19:39:12] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s
[19:39:32] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 1.45, 2.98, 3.16
[21:29:10] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 222s
[21:31:10] PROBLEM - dbbackup2 Current Load on dbbackup2 is WARNING: WARNING - load average: 3.91, 3.27, 2.16
[21:33:10] RECOVERY - dbbackup2 Current Load on dbbackup2 is OK: OK - load average: 2.84, 3.14, 2.25
[21:41:10] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 156s
[21:43:10] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s
[23:39:10] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 187s
[23:41:11] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 253s
[23:43:12] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 54s