[00:45:07] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JeV95
[00:45:09] [miraheze/services] MirahezeSSLBot 5fd71ec - BOT: Updating services config for wikis
[03:04:21] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is CRITICAL: CRITICAL: Full, 81004 files, 2.632GB, 2019-10-11 03:03:00 (4.3 weeks ago)
[06:26:26] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2848 MB (11% inode=94%);
[08:05:01] Hello chris87! If you have any questions, feel free to ask and someone should answer soon.
[08:12:31] Hi all. I'd like to allow upload of additional file types (such as MS Office .xlsx) but the page that shows that (https://xxxx.miraheze.org/wiki/Special:ManageWiki/settings#mw-section-restricted) is greyed out. Does this need a request via Steward's Noticeboard?
[08:12:32] [ Wiki not Found ] - xxxx.miraheze.org
[10:06:25] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2648 MB (10% inode=94%);
[10:10:25] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2669 MB (11% inode=94%);
[10:35:18] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2649 MB (10% inode=94%);
[11:19:29] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-6 [+0/-0/±1] https://git.io/JeVbT
[11:19:30] [miraheze/puppet] paladox 8bc5d92 - varnish: Fix regex
[11:19:32] [puppet] paladox created branch paladox-patch-6 - https://git.io/vbiAS
[11:19:33] [puppet] paladox opened pull request #1141: varnish: Fix regex - https://git.io/JeVbI
[11:20:11] [puppet] paladox edited pull request #1141: varnish: Fix regex - https://git.io/JeVbI
[11:21:58] [puppet] paladox closed pull request #1141: varnish: Fix regex - https://git.io/JeVbI
[11:21:59] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JeVbq
[11:22:01] [miraheze/puppet] paladox 18fc57b - varnish: Fix regex (#1141)
[11:22:02] [miraheze/puppet] paladox deleted branch paladox-patch-6
[11:22:04] [puppet] paladox deleted branch paladox-patch-6 - https://git.io/vbiAS
[11:28:08] paladox: Can you check the new file extensions PR on mw-config
[11:30:09] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-6 [+0/-0/±1] https://git.io/JeVb8
[11:30:10] [miraheze/puppet] paladox a4af9c9 - Varnish: Fix regex part 2 Bug: T4880
[11:30:12] [puppet] paladox created branch paladox-patch-6 - https://git.io/vbiAS
[11:30:15] [puppet] paladox opened pull request #1142: Varnish: Fix regex part 2 - https://git.io/JeVb4
[11:32:02] [mw-config] RhinosF1 commented on pull request #2796: Propose more extensions - https://git.io/JeVbB
[11:32:17] [puppet] paladox synchronize pull request #1142: Varnish: Fix regex part 2 - https://git.io/JeVb4
[11:32:18] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-6 [+0/-0/±1] https://git.io/JeVbR
[11:32:20] [miraheze/puppet] paladox 7bc740b - Update default.vcl
[11:38:03] [puppet] paladox closed pull request #1142: Varnish: Fix regex part 2 - https://git.io/JeVb4
[11:38:05] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JeVbu
[11:38:06] [miraheze/puppet] paladox 5def801 - Varnish: Fix regex part 2 (#1142) * Varnish: Fix regex part 2 Bug: T4880 * Update default.vcl
[11:38:08] [miraheze/puppet] paladox deleted branch paladox-patch-6
[11:38:10] [puppet] paladox deleted branch paladox-patch-6 - https://git.io/vbiAS
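For context, the "varnish: Fix regex" commits above edit default.vcl on the cache proxies. A VCL change like that can be syntax-checked and hot-loaded with standard Varnish tooling; a minimal sketch, with the file path and VCL label assumed rather than taken from this log:

    # Compile the VCL without starting a cache; a regex or syntax error
    # makes this exit non-zero and print the offending line.
    varnishd -C -f /etc/varnish/default.vcl > /dev/null

    # On a running proxy, load and activate the new VCL without a restart.
    # "regexfix" is just an arbitrary label for this VCL version.
    varnishadm vcl.load regexfix /etc/varnish/default.vcl
    varnishadm vcl.use regexfix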
[11:47:55] paladox: Can you check the PR I commented on
[11:50:14] Yup, saw. Need to discuss that with John.
[11:50:42] paladox: ok
[11:51:04] I wonder whether they are all *needed*
[11:52:54] Yup
[11:53:37] paladox: I'm guessing you have access so can you update ManageWiki as there have been i18n changes
[11:54:25] [mw-config] RhinosF1 commented on pull request #2796: Propose more extensions - https://git.io/JeVbM
[12:01:59] RhinosF1: I'm mobile :(
[12:02:00] I did that change on mobile seeing as it was simple :)
[12:04:22] paladox: heh, I'm the same. I don't mind mobile for a lot but don't have server access or git to do anything
[12:11:14] Heh
[15:22:28] PROBLEM - bacula1 Puppet on bacula1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:22:30] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:22:45] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:22:56] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:22:56] PROBLEM - cp4 Puppet on cp4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:22:57] PROBLEM - lizardfs6 Puppet on lizardfs6 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:23:00] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:23:02] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:23:05] PROBLEM - misc4 Puppet on misc4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:23:09] PROBLEM - misc3 Puppet on misc3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:23:09] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:23:10] PROBLEM - lizardfs5 Puppet on lizardfs5 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:23:17] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:23:18] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:23:56] PROBLEM - db4 Puppet on db4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:24:05] PROBLEM - db5 Puppet on db5 is CRITICAL: CRITICAL: Puppet has 17 failures. Last run 3 minutes ago with 17 failures. Failed resources (up to 3 shown)
[15:24:23] PROBLEM - puppet1 Puppet on puppet1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:24:24] PROBLEM - misc2 Puppet on misc2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:24:48] paladox: ^ gc?
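The burst of "Failed to apply catalog" alerts above cleared on its own (the "gc?" exchange suggests the puppetserver was briefly busy). A hedged sketch of how one might confirm a host has recovered, assuming a standard Puppet agent setup rather than anything Miraheze-specific (the state path varies by Puppet packaging):

    # Trigger an agent run by hand and watch whether a catalog is applied.
    sudo puppet agent --test

    # Inspect the summary the monitoring check reads (path differs between
    # Debian and AIO packages; this is the classic Debian location).
    sudo cat /var/lib/puppet/state/last_run_summary.yaml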
[15:26:54] Yup
[15:27:39] paladox: thought so, just wanted to make sure the world wasn't ending
[15:28:48] Ok :)
[15:33:54] RECOVERY - db4 Puppet on db4 is OK: OK: Puppet is currently enabled, last run 34 seconds ago with 0 failures
[15:34:03] RECOVERY - db5 Puppet on db5 is OK: OK: Puppet is currently enabled, last run 48 seconds ago with 0 failures
[15:34:29] RECOVERY - misc2 Puppet on misc2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[15:34:30] RECOVERY - puppet1 Puppet on puppet1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[15:34:31] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[15:34:31] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 52 seconds ago with 0 failures
[15:34:50] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 41 seconds ago with 0 failures
[15:34:57] RECOVERY - cp4 Puppet on cp4 is OK: OK: Puppet is currently enabled, last run 52 seconds ago with 0 failures
[15:34:59] RECOVERY - lizardfs6 Puppet on lizardfs6 is OK: OK: Puppet is currently enabled, last run 48 seconds ago with 0 failures
[15:35:05] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 43 seconds ago with 0 failures
[15:35:08] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 12 seconds ago with 0 failures
[15:35:13] RECOVERY - misc4 Puppet on misc4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[15:35:14] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[15:35:15] RECOVERY - lizardfs5 Puppet on lizardfs5 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[15:35:15] RECOVERY - misc3 Puppet on misc3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[15:35:24] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 38 seconds ago with 0 failures
[15:35:32] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[15:37:25] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[16:01:14] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 24.62, 15.50, 6.69
[16:01:20] 503 :/
[16:01:35] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[16:01:56] PROBLEM - mw2 MediaWiki Rendering on mw2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:02:03] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 19.63, 11.39, 5.19
[16:02:20] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[16:02:20] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[16:02:46] PROBLEM - mw3 MediaWiki Rendering on mw3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4108 bytes in 0.024 second response time
[16:02:57] PROBLEM - cp4 Stunnel Http for mw1 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
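The "Varnish Backends ... backends are down" alerts above are reported by the cache proxies themselves. A hedged sketch of seeing the same state by hand on a cp host, using only stock Varnish tooling (nothing Miraheze-specific assumed):

    # List each backend and its current health as seen by varnishd.
    varnishadm backend.list

    # Watch health-probe events in real time.
    varnishlog -g raw -i Backend_health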
[16:02:59] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 309 bytes in 0.295 second response time
[16:03:00] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 309 bytes in 0.002 second response time
[16:03:08] JohnLewis: ^
[16:03:09] Oh
[16:03:10] Right nvm
[16:03:11] Reception123: ^
[16:03:14] PROBLEM - test1 MediaWiki Rendering on test1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4110 bytes in 0.024 second response time
[16:03:22] PROBLEM - cp2 Stunnel Http for mw1 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:03:35] PROBLEM - cp4 Stunnel Http for mw3 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:03:40] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 4 backends are down. lizardfs6 mw1 mw2 mw3
[16:03:44] PROBLEM - cp3 Stunnel Http for mw3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:03:48] PROBLEM - mw1 MediaWiki Rendering on mw1 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4110 bytes in 0.021 second response time
[16:03:55] uh
[16:04:04] PROBLEM - cp3 Stunnel Http for mw1 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:04:05] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 4 backends are down. lizardfs6 mw1 mw2 mw3
[16:04:05] PROBLEM - lizardfs6 MediaWiki Rendering on lizardfs6 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 4110 bytes in 0.043 second response time
[16:04:05] PROBLEM - cp2 Stunnel Http for mw3 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:04:45] paladox: what's happening
[16:05:01] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 6.805 second response time
[16:05:01] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 4.552 second response time
[16:05:02] RECOVERY - cp4 Stunnel Http for mw1 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.006 second response time
[16:05:43] RECOVERY - cp4 Stunnel Http for mw3 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.004 second response time
[16:05:50] RECOVERY - cp3 Stunnel Http for mw3 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.644 second response time
[16:06:06] Seems this may be caused by lizard...
[16:06:18] RECOVERY - lizardfs6 MediaWiki Rendering on lizardfs6 is OK: HTTP OK: HTTP/1.1 200 OK - 19025 bytes in 0.311 second response time
[16:06:19] RECOVERY - cp2 Stunnel Http for mw3 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.406 second response time
[16:06:27] paladox: first hit was lfs5 load
[16:06:40] 16:01:15 PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 24.62, 15.50, 6.69
[16:06:54] ^ then datacentres went down
[16:06:55] RECOVERY - mw3 MediaWiki Rendering on mw3 is OK: HTTP OK: HTTP/1.1 200 OK - 19007 bytes in 0.830 second response time
[16:07:10] And everything fell over
[16:07:54] RECOVERY - cp2 Stunnel Http for mw1 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24655 bytes in 0.390 second response time
[16:08:07] PROBLEM - lizardfs5 Puppet on lizardfs5 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:08:31] Yup
[16:08:48] paladox: it's fallen again
[16:08:52] Stunnep
[16:08:59] s/p/l
[16:08:59] RhinosF1 meant to say: Stunnel
[16:09:10] We're down
[16:09:44] Data centre went down due to all mw going down
[16:10:00] RECOVERY - test1 MediaWiki Rendering on test1 is OK: HTTP OK: HTTP/1.1 200 OK - 19014 bytes in 1.077 second response time
[16:10:17] RECOVERY - lizardfs5 Puppet on lizardfs5 is OK: OK: Puppet is currently enabled, last run 23 seconds ago with 0 failures
[16:10:18] PROBLEM - cp4 Stunnel Http for mw2 on cp4 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:10:28] PROBLEM - cp2 Stunnel Http for mw2 on cp2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[16:10:29] hmm, ucronias.miraheze.org works
[16:10:33] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[16:10:35] paladox: that was alert 2 though. Then it continued to moan about everything. We need to get it back up and keep it up
[16:10:45] RECOVERY - mw1 MediaWiki Rendering on mw1 is OK: HTTP OK: HTTP/1.1 200 OK - 19020 bytes in 0.699 second response time
[16:10:46] RECOVERY - mw2 MediaWiki Rendering on mw2 is OK: HTTP OK: HTTP/1.1 200 OK - 19014 bytes in 1.044 second response time
[16:10:57] Yup, it's lizard
[16:10:57] Hispano76: meta is slower but recovering now
[16:10:58] At least matches up
[16:10:59] But nothing I can do :(
[16:11:00] https://grafana.miraheze.org/d/W9MIkA7iz/miraheze-cluster?orgId=1&var-job=node&var-node=lizardfs4.miraheze.org&var-port=9100
[16:11:03] [ Grafana ] - grafana.miraheze.org
[16:11:08] RECOVERY - cp3 Stunnel Http for mw1 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 24639 bytes in 0.683 second response time
[16:11:17] ok
[16:11:17] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[16:11:19] paladox: shit that load is high
[16:11:25] :)
[16:11:27] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 6 backends are healthy
[16:12:14] RECOVERY - cp4 Stunnel Http for mw2 on cp4 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 1.683 second response time
[16:12:24] RECOVERY - cp2 Stunnel Http for mw2 on cp2 is OK: HTTP OK: HTTP/1.1 200 OK - 24661 bytes in 0.535 second response time
[16:12:26] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 6 backends are healthy
[16:12:46] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 6 backends are healthy
[16:13:25] Indeed
[16:13:26] That's why we have lizardfs6 :P
[16:13:27] Handled the load better
[16:13:42] paladox: it's been better until now
[16:14:52] Yup, this was lizardfs5...
[16:14:53] Some of the data is still on that
[16:14:54] And lizardfs4
[16:19:02] paladox: planning to move?
[16:19:07] ???
[16:22:02] RhinosF1: huh??
[16:23:14] paladox: is that data coming off eventually?
[16:25:28] You mean lizardfs 4 and 5?
[16:25:29] Yes
[16:25:52] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 0.08, 0.87, 3.79
[16:26:04] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 0.30, 0.98, 4.00
[16:26:55] paladox: what's the timescale?
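The Grafana dashboard linked above shows the same load data the "Current Load" checks alert on. On the box itself, a hedged sketch of the usual first look when load spikes like this (generic Linux tools; iostat comes from the sysstat package and may need installing):

    uptime                  # the same 1/5/15-minute load averages Icinga alerts on
    iostat -x 5 3           # per-disk utilisation; high %util points at I/O rather than CPU
    top -b -n 1 | head -20  # which processes are actually busy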
[16:27:49] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.26, 0.69, 3.39
[16:29:54] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.32, 0.61, 3.18
[16:36:49] RhinosF1: there's none, just watching as it does its thing
[16:37:21] paladox: so it's just a slow-running thing
[16:43:45] Yup
[16:45:16] paladox: not MySQL but slightly relevant https://tools.wmflabs.org/bash/quip/AVi2Z1eBQMK9DA-FJpXK
[16:45:17] [ quip - Quips ] - tools.wmflabs.org
[16:46:24] Heh
[16:47:19] Quips has quite a bank of 'funny' and random quotes
[16:49:44] Yup
[17:12:01] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.44, 4.24, 2.36
[17:12:29] PROBLEM - mw1 MediaWiki Rendering on mw1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:13:12] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 4.33, 3.88, 2.20
[17:13:38] Hello apoop! If you have any questions, feel free to ask and someone should answer soon.
[17:14:01] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 1.78, 3.38, 2.28
[17:14:27] RECOVERY - mw1 MediaWiki Rendering on mw1 is OK: HTTP OK: HTTP/1.1 200 OK - 19014 bytes in 1.201 second response time
[17:15:09] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.77, 3.46, 2.22
[17:17:04] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.51, 3.33, 2.33
[17:18:09] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 1.25, 3.55, 2.72
[17:20:11] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 6.63, 3.98, 2.93
[17:22:07] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.92, 3.53, 2.90
[17:24:03] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.59, 2.47, 2.58
[17:27:22] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 2.90, 4.19, 3.18
[17:29:17] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.13, 3.11, 2.89
[17:30:31] paladox: why is it taking forever to save edits and load pages all of a sudden
[17:33:09] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 1.69, 3.87, 3.41
[17:33:12] Zppix: I bet it's because of lizard
[17:35:04] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 3.14, 3.35, 3.25
[19:50:06] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/Jewfu
[19:50:08] [miraheze/services] MirahezeSSLBot 1586aca - BOT: Updating services config for wikis
[20:06:50] PROBLEM - lizardfs4 Current Load on lizardfs4 is WARNING: WARNING - load average: 3.81, 3.42, 2.56
[20:08:25] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JewfP
[20:08:27] [miraheze/puppet] paladox ea5c2a4 - Update mfsmaster.cfg.erb
[20:08:45] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 1.00, 2.49, 2.32
[20:08:58] !log restart lizardfs-master on lizardfs6
[20:09:36] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[20:40:07] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JewJs
[20:40:09] [miraheze/services] MirahezeSSLBot 288f270 - BOT: Updating services config for wikis
[20:43:12] [mw-config] Reception123 commented on pull request #2796: Propose more extensions - https://git.io/JewJC
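The "!log restart lizardfs-master on lizardfs6" entry above corresponds to a service restart on that host. A hedged sketch, assuming the stock LizardFS systemd unit name (not confirmed from this log):

    sudo systemctl restart lizardfs-master
    sudo systemctl status lizardfs-master      # confirm it came back up
    sudo journalctl -u lizardfs-master -n 50   # recent master log lines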
[20:58:14] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 3.30, 4.26, 2.90
[20:59:03] PROBLEM - lizardfs5 Current Load on lizardfs5 is WARNING: WARNING - load average: 2.10, 3.77, 3.16
[21:00:12] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 2.33, 3.38, 2.73
[21:01:17] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 3.08, 3.35, 3.06
[21:22:17] PROBLEM - lizardfs5 Current Load on lizardfs5 is CRITICAL: CRITICAL - load average: 4.25, 3.91, 2.93
[21:22:20] PROBLEM - lizardfs4 Current Load on lizardfs4 is CRITICAL: CRITICAL - load average: 6.62, 5.82, 3.73
[21:24:15] RECOVERY - lizardfs5 Current Load on lizardfs5 is OK: OK - load average: 0.84, 2.72, 2.60
[21:26:10] RECOVERY - lizardfs4 Current Load on lizardfs4 is OK: OK - load average: 0.62, 2.91, 2.99
[21:29:36] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-6 [+0/-0/±1] https://git.io/JewUC
[21:29:38] [miraheze/puppet] paladox 8186180 - Add void and owen to staff@
[21:29:39] [puppet] paladox created branch paladox-patch-6 - https://git.io/vbiAS
[21:29:41] [puppet] paladox opened pull request #1143: Add void and owen to staff@ - https://git.io/JewUW
[22:28:52] PROBLEM - lizardfs6 Puppet on lizardfs6 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 2 minutes ago with 0 failures
[22:33:11] [puppet] JohnFLewis commented on pull request #1143: Add void and owen to staff@ - https://git.io/JewTu
[22:33:12] [puppet] JohnFLewis closed pull request #1143: Add void and owen to staff@ - https://git.io/JewUW
[22:33:25] [puppet] JohnFLewis edited a comment on pull request #1143: Add void and owen to staff@ - https://git.io/JewTu
[22:36:32] [puppet] paladox deleted branch paladox-patch-6 - https://git.io/vbiAS
[22:36:33] [miraheze/puppet] paladox deleted branch paladox-patch-6
[22:40:52] RECOVERY - lizardfs6 Puppet on lizardfs6 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[22:45:59] RECOVERY - lizardfs6 GlusterFS port 49152 on lizardfs6 is OK: TCP OK - 0.013 second response time on 54.36.165.161 port 49152
[22:54:29] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/var/lib/glusterd/secure-access]
[22:58:06] !log reboot test1
[22:58:11] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[23:00:52] paladox: hi
[23:00:56] hi
[23:01:55] have u seen my IRC bot?
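Two things above can be reproduced by hand: the "GlusterFS port 49152" TCP check and the "Puppet is currently disabled, message: paladox" state. A hedged sketch using standard tools (the address and port are the ones from the alert):

    # Roughly the same probe the Icinga TCP check performs against the brick port.
    nc -zv 54.36.165.161 49152

    # Disabling/enabling the agent produces exactly this kind of message in the Puppet check.
    sudo puppet agent --disable "working on gluster"
    sudo puppet agent --enable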
[23:02:06] nope
[23:02:13] you should come see it
[23:02:18] ##ExamBot
[23:05:49] no thanks
[23:07:42] !log userdel macfan on test1
[23:07:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[23:23:18] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 51 seconds ago with 0 failures
[23:27:58] PROBLEM - lizardfs6 GlusterFS port 49152 on lizardfs6 is CRITICAL: connect to address 54.36.165.161 and port 49152: Connection refused
[23:37:51] RECOVERY - lizardfs6 GlusterFS port 49152 on lizardfs6 is OK: TCP OK - 0.013 second response time on 54.36.165.161 port 49152
[23:39:02] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JewkN
[23:39:03] [miraheze/puppet] paladox aa420b5 - Update mount.pp
[23:39:31] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/Jewkh
[23:39:32] [miraheze/puppet] paladox f94f1a1 - Update init.pp
[23:54:43] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-6 [+0/-0/±1] https://git.io/JewIc
[23:54:44] [miraheze/puppet] paladox a01ff3e - gluster: Add secure-access file What this does is: * Sets ssl certificate, CA and private key instead of using a default file. * Sets ssl-cert-depth to 2.
[23:54:46] [puppet] paladox created branch paladox-patch-6 - https://git.io/vbiAS
[23:54:47] [puppet] paladox opened pull request #1144: gluster: Add secure-access file - https://git.io/JewIC
[23:55:08] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-6 [+1/-0/±0] https://git.io/JewIW
[23:55:10] [miraheze/puppet] paladox 147e8a1 - Create secure-access
[23:55:11] [puppet] paladox synchronize pull request #1144: gluster: Add secure-access file - https://git.io/JewIC
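Pull request #1144 above manages the gluster secure-access file through Puppet. In stock GlusterFS, management-plane TLS is switched on by creating /var/lib/glusterd/secure-access, and options such as the certificate chain depth can be placed inside it. A hedged sketch of what that usually looks like done by hand (the certificate paths are GlusterFS defaults, not taken from the PR):

    # Enable TLS for glusterd's management connections and allow a 2-deep CA chain,
    # matching the "ssl-cert-depth to 2" note in the commit message.
    echo "option transport.socket.ssl-cert-depth 2" | sudo tee /var/lib/glusterd/secure-access

    # GlusterFS looks for its certificate, key and CA bundle here by default.
    ls /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.key /etc/ssl/glusterfs.ca

    sudo systemctl restart glusterd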