[01:00:10] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/Je230
[01:00:12] [miraheze/services] MirahezeSSLBot a0fbd8f - BOT: Updating services config for wikis
[01:22:31] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.34, 3.50, 2.16
[01:24:30] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 3.16, 3.32, 2.26
[01:27:30] PROBLEM - bacula1 Bacula Databases db4 on bacula1 is WARNING: WARNING: Diff, 46119 files, 36.42GB, 2019-10-20 01:25:00 (2.1 weeks ago)
[01:28:26] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 3.71, 4.14, 2.87
[01:30:24] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.92, 3.15, 2.65
[02:08:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.97, 3.33, 2.11
[02:12:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.23, 2.82, 2.25
[02:20:46] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.79, 3.64, 2.22
[02:22:48] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.52, 3.39, 2.29
[02:24:49] PROBLEM - bacula1 Bacula Databases db5 on bacula1 is WARNING: WARNING: Diff, 306 files, 50.50GB, 2019-10-20 02:22:00 (2.1 weeks ago)
[02:26:53] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.10, 4.59, 3.02
[02:28:54] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.33, 3.90, 2.97
[02:30:52] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.18, 3.01, 2.76
[03:12:36] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.19, 3.50, 2.28
[03:18:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.29, 3.06, 2.63
[03:37:00] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.90, 4.02, 2.51
[03:39:00] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.08, 3.37, 2.46
[04:12:36] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.40, 4.02, 2.34
[04:14:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.58, 3.18, 2.25
[04:54:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.99, 3.69, 2.33
[04:56:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.24, 2.87, 2.21
[06:05:31] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.46, 3.43, 2.18
[06:07:31] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.48, 3.75, 2.47
[06:09:37] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.13, 2.70, 2.24
[06:25:54] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2818 MB (11% inode=94%);
[06:48:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.98, 5.00, 3.00
[06:51:11] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 10.11, 4.75, 2.58
[06:59:12] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.10, 3.15, 2.89
[07:00:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.26, 3.06, 3.67
[07:02:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.10, 2.32, 3.32
[07:38:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.61, 4.18, 2.76
[07:45:08] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/Je2cv
[07:45:10] [miraheze/services] MirahezeSSLBot 91a60b8 - BOT: Updating services config for wikis
[07:46:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.10, 3.48, 3.17
[07:48:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.51, 2.76, 2.94
[09:21:07] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.63, 3.14, 2.17
[09:23:01] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.90, 2.58, 2.08
[09:43:27] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.40, 3.60, 2.72
[09:45:24] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.87, 3.68, 2.86
[09:49:14] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.87, 4.61, 3.38
[09:54:44] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.55, 3.65, 2.89
[09:58:41] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 3.37, 3.36, 2.94
[10:04:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.82, 3.27, 3.82
[10:08:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.75, 1.92, 3.13
[10:12:30] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.17, 3.50, 2.84
[10:14:28] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.00, 3.49, 2.93
[10:16:27] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.28, 2.72, 2.72
[10:22:20] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2649 MB (10% inode=94%);
[10:40:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.60, 3.48, 2.75
[10:46:36] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.21, 3.85, 3.25
[10:48:38] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.19, 2.86, 2.96
[10:54:20] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.26, 3.36, 2.24
[10:56:18] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.88, 2.90, 2.22
[11:42:24] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 5.06, 2.98, 1.95
[11:44:22] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 3.25, 3.05, 2.10
[11:48:24] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.65, 4.19, 2.80
[11:56:22] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.16, 3.55, 3.33
[11:58:21] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.65, 2.62, 3.01
[12:04:57] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.58, 3.73, 2.78
[12:06:23] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 3.99, 4.15, 3.51
[12:06:51] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.89, 3.22, 2.72
[12:08:22] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.57, 3.54, 3.36
[12:10:21] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.43, 2.75, 3.09
[12:20:42] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 8.36, 4.65, 3.18
[12:24:44] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.57, 3.75, 3.23
[12:26:38] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.00, 2.85, 2.96
[13:24:26] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.97, 6.63, 5.36
[13:30:30] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 6.56, 6.77, 5.84
[13:32:18] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.14, 3.53, 2.47
[13:34:17] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.72, 2.78, 2.31
[13:35:21] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Je2B3
[13:35:23] [miraheze/mw-config] paladox ddac2e0 - Update LocalSettings.php
[13:38:40] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.46, 3.60, 2.40
[13:42:37] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 6.99, 6.81, 6.30
[13:42:40] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.48, 3.29, 2.64
[13:44:36] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 6.10, 6.71, 6.33
[13:45:29] .time Zppix
[13:45:30] 2019-11-04 - 07:45:29CST
[13:45:33] .time paladox
[13:45:34] 2019-11-04 - 13:45:34GMT
[13:45:43] .t paladox
[13:45:43] 2019-11-04 - 13:45:43GMT
[13:48:14] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.93, 3.42, 2.55
[13:52:12] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.73, 3.33, 2.79
[14:05:44] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 7.26, 6.60, 5.97
[14:07:45] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 5.56, 6.32, 5.95
[14:08:54] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3
[14:09:58] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[14:10:29] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[14:10:32] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 2 backends are down. mw1 mw2
[14:10:43] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[14:12:26] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[14:12:31] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[14:12:42] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[14:12:53] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[14:13:57] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[14:15:00] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Je2R8
[14:15:02] [miraheze/puppet] paladox b5fc862 - varnish: Send parsoid to mw2 when calling api.php
[14:20:14] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw2 mw3
[14:20:35] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2
[14:20:37] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[14:20:50] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.41, 4.53, 2.89
[14:20:51] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[14:21:14] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw1 mw3
[14:23:12] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Je2Ry
[14:23:13] [miraheze/puppet] paladox ce5fca4 - Revert "varnish: Send parsoid to mw2 when calling api.php" This reverts commit b5fc862451d7d5d3d3017f551d1ce08fe51ef212.
[14:23:34] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.75, 3.33, 2.46
[14:25:36] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.44, 2.99, 2.44
[14:27:45] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:29:42] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.389 second response time
[14:31:23] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[14:32:37] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[14:34:15] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 7.13, 4.65, 3.26
[14:35:44] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw1 mw3
[14:36:14] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.05, 3.51, 3.01
[14:36:55] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[14:37:46] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[14:38:21] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.08, 4.24, 3.37
[14:39:48] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 6.834 second response time
[14:41:54] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[14:45:49] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 2 backends are down. mw2 mw3
[14:46:36] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.09, 3.75, 3.69
[14:47:28] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[14:48:34] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.42, 2.86, 3.37
[14:49:03] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[14:49:08] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[14:49:34] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[14:49:44] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[14:51:41] [miraheze/mediawiki] paladox pushed 1 commit to REL1_33 [+0/-0/±1] https://git.io/Je20P
[14:51:43] [miraheze/mediawiki] paladox 27a3bbc - Update ResourceLoader.php
[14:56:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.07, 2.27, 3.83
[14:59:22] [miraheze/ManageWiki] translatewiki pushed 1 commit to master [+1/-0/±0] https://git.io/Je20N
[14:59:24] [miraheze/ManageWiki] translatewiki eeaedf7 - Localisation updates from https://translatewiki.net.
[14:59:24] [ Main page - translatewiki.net ] - translatewiki.net.
[15:00:40] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.52, 3.03, 3.75
[15:04:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.68, 2.79, 3.51
[15:06:37] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.46, 2.32, 3.25
[15:15:18] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Je2E3
[15:15:19] [miraheze/puppet] paladox ccb2a20 - Update config.yaml
[15:16:54] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[15:17:18] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 2 backends are down. mw1 mw3
[15:17:27] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[15:17:40] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw2
[15:17:50] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw2
[15:17:52] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[15:18:03] !log restart php-fpm on mw[123]
[15:19:09] !log restart php-fpm on mw[123]
[15:19:14] !log restart nginx on mw[123]
[15:19:42] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.75, 3.04, 2.69
[15:19:51] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 4.551 second response time
[15:19:52] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:20:12] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:20:41] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 3.67, 4.11, 3.41
[15:20:59] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:21:17] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[15:21:35] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[15:21:41] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.95, 3.10, 2.76
[15:21:51] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[15:22:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.04, 3.59, 3.32
[15:22:44] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[15:22:48] paladox: pls send a message in here
[15:22:54] ?
[15:22:58] My client has gone weird
[15:23:06] i doin't understand
[15:23:07] ?
[15:23:11] And your messages earlier are missing
[15:23:23] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:23:34] Or LogBot is very slow
[15:23:56] !log [15:18:03] <+paladox> !log restart php-fpm on mw[123]
[15:24:01] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:24:01] !log [15:19:14] <+paladox> !log restart nginx on mw[123]
[15:24:09] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:24:39] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.19, 3.18, 3.20
[15:33:57] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Je2ES
[15:33:58] [miraheze/puppet] paladox 9ec8dfd - varnish: Increase cache ttl to 30mins
[15:36:04] .status mhtest testing
[15:36:09] RhinosF1 updating User:RhinosF1/Status!
[15:36:21] RhinosF1: Done!
[15:42:12] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 7.84, 6.04, 4.05
[15:42:51] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:6180:0:d0::403:f001/cpweb
[15:44:47] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[15:48:09] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.41, 3.45, 3.50
[15:49:49] !log restart php-fpm on mw1
[15:52:07] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 2.56, 3.21, 3.40
[15:52:18] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 107.191.126.23/cpweb
[15:52:21] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw1
[15:52:27] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[15:53:21] PROBLEM - mw1 Puppet on mw1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 10 minutes ago with 0 failures
[15:56:21] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1
[15:56:33] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[15:56:33] paladox: let me know if any ZppixBot testing is failing things w/ the api
[15:57:04] I wouldn't know :)
[15:57:08] i doin't maintain the bot
[15:58:18] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:58:19] paladox: I mean causing us to have issues!
[15:58:29] i have access to the logs chanel
[16:00:02] !log restart php-fpm on mw1
[16:00:30] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[16:00:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[16:03:29] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw2
[16:04:34] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[16:04:34] !log restart php-fpm on mw2
[16:04:39] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[16:05:29] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[16:06:21] !log restart php-fpm on mw3
[16:06:54] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[16:07:33] PROBLEM - mw2 Puppet on mw2 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 4 minutes ago with 0 failures
[16:09:17] PROBLEM - mw3 Puppet on mw3 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 6 minutes ago with 0 failures
[16:13:03] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 107.191.126.23/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[16:13:26] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw3
[16:13:35] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw3
[16:15:00] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[16:15:27] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 3 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb
[16:15:35] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[16:17:09] !log restart php-fpm on mw[123]
[16:17:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[16:17:23] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:17:28] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[16:20:58] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 5.61, 4.15, 3.03
[16:22:53] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 2.37, 3.57, 2.96
[16:22:58] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 2400:6180:0:d0::403:f001/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[16:24:56] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[16:26:42] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.13, 3.01, 2.89
[16:27:08] !log restart php-fpm on mw3
[16:28:02] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[17:03:30] !log restart php-fpm on mw[123]
[17:03:35] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[17:40:26] PROBLEM - bacula1 Bacula Databases db4 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[17:41:08] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is CRITICAL: CRITICAL: Timeout or unknown client: misc4-fd
[17:42:41] PROBLEM - bacula1 Bacula Databases db4 on bacula1 is WARNING: WARNING: Diff, 46119 files, 36.42GB, 2019-10-20 01:25:00 (2.2 weeks ago)
[17:43:06] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is WARNING: WARNING: Full, 81004 files, 2.632GB, 2019-10-11 03:03:00 (3.5 weeks ago)
[17:43:20] PROBLEM - bacula1 Puppet on bacula1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[17:48:06] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 3.66, 2.78, 1.53
[17:50:06] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.84, 1.96, 1.38
[17:53:03] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:12:34] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 5 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[18:13:21] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 1 backends are down. mw1
[18:13:47] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:14:01] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:14:31] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[18:15:05] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:15:20] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[18:27:37] !log reboot test1
[18:27:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[18:30:18] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/var/lib/glusterd/secure-access]
[18:36:43] mutante: around?
[18:39:37] https://err.no/personal/blog/tech/2006-10-10-12-05_contentless_pings/ :)
[18:39:38] [ Contentless ping annoying ] - err.no
[20:02:37] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.98, 2.73, 1.43
[20:04:39] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.37, 2.27, 1.43
[20:22:57] paladox: https://phabricator.miraheze.org/T4857#92568
[20:22:58] [ ⚓ T4857 Thumbnails of recently uploaded images not showing up. Actual image opens up when navigated to. ] - phabricator.miraheze.org
[20:45:36] PROBLEM - lizardfs6 GlusterFS port 49152 on lizardfs6 is CRITICAL: connect to address 54.36.165.161 and port 49152: Connection refused
[21:07:06] Uploaded file: https://uploads.kiwiirc.com/files/eae3533ea565025f48772c8b29989135/Anotaci%C3%B3n%202019-11-04%20150545.png
[21:07:17] paladox see file
[21:07:42] Because they still show up if it's been months? curiosity.
[21:07:55] hispano76: it’s still likely somewhere in our system
[21:08:29] hispano76 for CentralAuth?
[21:08:41] yes
[21:08:54] i think all we do is remove it from CA>
[21:08:56] *.
[21:11:55] hum
[21:12:41] that'll be us hispano76
[21:12:52] needs doing in the db
[21:13:10] ok
[21:21:46] RhinosF1 ping
[21:23:48] Have you tracked Penarc1 files? I don't see an administrative response on the corresponding page. RhinosF1 I'm asking to see if I can explain in detail about licenses and authorship.
[21:27:08] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw1
[21:28:14] hispano76: I deleted some of them I think but I haven’t kept an eye
[21:29:06] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[21:31:44] oh, ok
[21:43:32] can you turn me into Consul? AlvaroMolina is inactive for months and the wiki has no one to monitor and remove permissions from inactive users. https://es.publictestwiki.com/wiki/ JohnLewis paladox PuppyKun RhinosF1 Voidwalker Zppix
[21:43:33] [ PruebaWiki ] - es.publictestwiki.com
[21:46:05] hispano76: that sounds like a steward issue
[21:46:39] hispano76 i carn't
[21:46:43] hispano76: has there been a request in line with local policy?
[21:46:44] it' a stewards issue
[21:46:50] is that I'm out of date on who Stewards is. XD RhinosF1
[21:46:51] (or whatever local policy asks/says/dictates)
[21:48:04] https://es.publictestwiki.com/wiki/PruebaWiki:C%C3%B3nsules There is no inactivity policy for Consules according to that page and https://es.publictestwiki.com/wiki/PruebaWiki:Inactividad/Excepciones
[21:48:05] [ PruebaWiki:Cónsules - PruebaWiki ] - es.publictestwiki.com
[21:48:06] [ PruebaWiki:Inactividad/Excepciones - PruebaWiki ] - es.publictestwiki.com
[21:48:38] I haven't applied locally for permission for obvious reasons.
[21:50:52] hispano76: if there's no policy, stick a request up, link me and I'll take a look later/decide how long to wait :P
[22:09:11] .status mhtest offline
[22:09:13] RhinosF1: Updating User:RhinosF1/Status to offline!
[22:09:22] RhinosF1: Updated!
[22:10:09] Voidwalker, Zppix: could one of you close the task on ZppixBot phab that talks about public test wiki test as it should be complete now but I don’t want to sign off my own code
[22:10:20] Apart from minor cosmetic stuff
[22:11:19] I'll take a look later
[22:13:11] Voidwalker: ok
[22:20:49] hispano76: you have 3 edits on es.testwiki and you're not a crat. Although it seems remarkably dead. I'm not really opposed, but maybe you'd want crat or somethin for a bit?
[22:21:13] Although I'm not sure I should even comment :p just because es.testwiki adopted the same policies I'm not sure if en.testwiki consuls have any jurisdiction
[22:22:44] PuppyKun 3 + 43 https://es.publictestwiki.com/wiki/Especial:Contribuciones/Wiki1776
[22:22:45] [ Contribuciones del usuario Wiki1776 - PruebaWiki ] - es.publictestwiki.com
[22:24:14] And I don't have any more editions because I already have in theory my test Wiki: PrivadoWiki Anyway, I'm interested in taking care of pruebawiki unless you want to merge it with testwiki. PuppyKun
[22:29:01] I didn't see any mention of Jurisdictions, but it seems only the Consuls, Stewards and Staff have "jurisdiction". https://es.publictestwiki.com/wiki/PruebaWiki:Inactividad/Excepciones
[22:29:02] [ PruebaWiki:Inactividad/Excepciones - PruebaWiki ] - es.publictestwiki.com
[22:43:19] JohnLewis https://es.publictestwiki.com/w/index.php?title=PruebaWiki:Portal_de_la_comunidad&diff=857&oldid=653
[22:43:21] [ Diferencia entre revisiones de «PruebaWiki:Portal de la comunidad» - PruebaWiki ] - es.publictestwiki.com
[22:43:40] You'll say
[22:50:10] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/Je26K
[22:50:11] [miraheze/services] MirahezeSSLBot a7bfdbc - BOT: Updating services config for wikis
[23:00:38] hi
[23:05:58] hispano76: what is privadowiki? And I'm not sure how much of a need there is for the ES branch but if you want to help take care of it since alvaro's less active I won't complain. I just don't know what there is to take care of :P I forgot it existed
[23:12:02] PROBLEM - cp3 Disk Space on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[23:13:00] PuppyKun privado.miraheze.org is a private personal wiki where I add content to have, remember and use in the future. I could also use it for testing although I don't use it for that at the moment.
[23:13:57] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2202 MB (9% inode=94%);
[23:34:39] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.83, 3.11, 1.63
[23:36:40] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.24, 2.40, 1.56