[00:03:35] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 21.66, 27.07, 20.97
[00:05:34] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 14.11, 22.63, 20.10
[00:07:30] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 14.40, 20.34, 19.59
[02:42:33] @k6ka May I private message you? I would like to know why you blocked me on Discord.
[02:43:22] No, and if I blocked you on one platform, please do not evade the block by pestering me on another. Thank you.
[08:07:21] PROBLEM - cp7 Current Load on cp7 is CRITICAL: CRITICAL - load average: 8.38, 6.90, 3.58
[08:08:02] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.47, 21.97, 17.90
[08:09:21] RECOVERY - cp7 Current Load on cp7 is OK: OK - load average: 1.63, 4.82, 3.22
[08:10:00] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 16.78, 19.57, 17.47
[11:53:17] !log reception@jobrunner1:/srv/mediawiki/w/maintenance$ sudo -u www-data php deleteBatch.php --wiki crappygameswiki --r "Requested - T6028" /home/reception/cgwdel2.txt
[11:53:23] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[12:41:57] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 34.20, 26.86, 20.28
[12:43:42] Hello, anyone here?
[12:43:53] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 16.62, 22.95, 19.64
[12:44:17] good morning (:
[12:46:59] For my wiki, is there any way to restore a template that got deleted, so that the page information that used it is restored as well?
[12:49:02] ?????
[12:49:49] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 11.46, 17.51, 18.56
[12:50:42] I would like to restore information that came from a template, but I deleted the template. Is that possible?
[13:03:11] Special:Undelete
[13:13:59] Hello. I have read this: "To change the logo and favicon of your wiki, you need to add a specific URL to the textbox below the option", but I can't find the "textbox below the option". The text assumes you already know where that is. (Other admins of my wiki are the ones who managed logos until now.)
[13:14:52] OK, it is in https://intercriaturas.miraheze.org/wiki/Special:ManageWiki/settings#mw-section-styling
[13:14:54] [ Manage this wiki's additional settings - InterCriaturas ] - intercriaturas.miraheze.org
[13:15:07] I think it would be a nice touch to specify that in the documentation. Thanks!
[13:26:00] A real question now:
[13:26:13] is there a way to download a sitemap.xml to upload it to Google?
[13:42:28] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 8 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.77.107.210/cpweb, 51.89.160.142/cpweb, 2001:41d0:800:1056::2/cpweb, 2001:41d0:800:105a::10/cpweb, 51.222.27.129/cpweb, 2607:5300:205:200::2ac4/cpweb
[13:42:49] PROBLEM - mw7 Current Load on mw7 is CRITICAL: CRITICAL - load average: 12.67, 7.44, 4.59
[13:43:18] PROBLEM - mw6 Current Load on mw6 is CRITICAL: CRITICAL - load average: 12.11, 7.80, 4.81
[13:44:08] PROBLEM - mw5 Current Load on mw5 is CRITICAL: CRITICAL - load average: 8.36, 6.52, 3.97
[13:44:20] If I restore a template, will the page information for that template be restored as well?
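For context on the deleteBatch.php run logged at 11:53 above: the maintenance script reads a plain list of page titles, one per line, and deletes each of them on the wiki selected with --wiki, recording the reason passed on the command line. A minimal sketch of such a run, with an invented file name, wiki and titles (the contents of the real cgwdel2.txt are not shown in the log):

    $ cat titles-to-delete.txt
    Some Unwanted Page
    Another Unwanted Page
    Template:Old infobox
    $ sudo -u www-data php deleteBatch.php --wiki examplewiki --r "Requested - T0000" titles-to-delete.txt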
[13:44:28] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[13:44:45] PROBLEM - mw7 Current Load on mw7 is WARNING: WARNING - load average: 6.18, 7.18, 4.86
[13:45:16] PROBLEM - mw6 Current Load on mw6 is WARNING: WARNING - load average: 4.99, 7.11, 4.95
[13:46:08] RECOVERY - mw5 Current Load on mw5 is OK: OK - load average: 3.09, 5.14, 3.77
[13:46:44] RECOVERY - mw7 Current Load on mw7 is OK: OK - load average: 2.54, 5.59, 4.56
[13:47:16] RECOVERY - mw6 Current Load on mw6 is OK: OK - load average: 2.66, 5.60, 4.66
[13:49:12] Yes. Restoring the template will restore it just as it was prior to its deletion.
[13:51:58] There is a way to download a sitemap.xml file. Go to yoursite/sitemap.xml, or just paste that link into the sitemap field in Google Search Console.
[14:15:05] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 28.15, 21.77, 17.74
[14:17:02] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 14.07, 18.45, 16.99
[14:18:05] Hello?
[14:48:15] Hey, we're allowed to have two wikis, right?
[14:49:34] Yep
[14:49:56] Awesome, thanks!
[15:32:08] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 22.80, 20.84, 18.35
[15:34:05] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 19.00, 20.15, 18.39
[16:14:54] PROBLEM - bh.gyaanipedia.co.in - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'bh.gyaanipedia.co.in' expires in 15 day(s) (Tue 25 Aug 2020 16:06:45 GMT +0000).
[16:20:43] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JJMkZ
[16:20:44] [miraheze/ssl] MirahezeSSLBot c9500ee - Bot: Update SSL cert for bh.gyaanipedia.co.in
[16:21:37] Hello! Two questions.
[16:22:00] The last one from yesterday (I don't know if somebody replied, I had to leave the computer): is there a way to download a sitemap.xml to upload it to Google?
[16:22:42] Second: is there a way to check the parameters of a namespace? I want to create a new namespace and assign it the same values as other namespaces of our own wiki.
[16:23:46] Jakeukalane: we automatically upload sitemaps to Google for all public wikis
[16:23:59] And what do you mean by parameters?
[16:24:48] there are many options in Especial:ManageWiki/namespaces/3002
[16:24:57] I don't know which options they chose
[16:25:14] so I want to copy the previous options of another namespace
[16:25:25] maybe if I know the namespace number it will appear there
[16:25:32] how can I see the namespace numbers?
[16:26:33] Google still hasn't found most content of our wiki :/ even when quoting several exact words between ""
[16:26:42] Jakeukalane: the namespace number is shown in the URL of Special:AllPages or in the ManageWiki page for it
[16:26:58] ok, I will found
[16:26:59] So for that namespace, 3002 is the number
[16:27:01] *search
[16:27:09] Jakeukalane: also, they are not fast at indexing
[16:27:40] I think there is discrimination towards Miraheze
[16:28:00] :(
[16:28:01] when I post something on my DeviantArt it gets indexed in a matter of minutes
[16:29:38] the new namespace got solved, thank you again RhinosF1
[16:31:05] Jakeukalane: DeviantArt are bigger and will put a lot into SEO
[16:31:50] that makes sense too
[16:32:04] I was not complaining btw
[16:34:09] SPF|Cloud: we need to review the 5xxs Google sees
[16:35:28] RECOVERY - bh.gyaanipedia.co.in - LetsEncrypt on sslhost is OK: OK - Certificate 'bh.gyaanipedia.co.in' will expire on Sat 07 Nov 2020 15:20:36 GMT +0000.
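On the two answers above: the namespace number is the numeric ID MediaWiki assigns to every namespace, and it is visible both as the namespace parameter in the Special:AllPages URL and as the last path segment of the ManageWiki namespaces page, for example (wiki name illustrative):

    https://example.miraheze.org/wiki/Special:AllPages?namespace=3002
    https://example.miraheze.org/wiki/Special:ManageWiki/namespaces/3002

For the sitemap, assuming the wiki publishes one at the conventional root path, it can be fetched with curl and the same URL submitted under Sitemaps in Google Search Console:

    curl -O https://example.miraheze.org/sitemap.xml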
[16:38:54] PROBLEM - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is WARNING: WARNING - NGINX Error Rate is 40%
[16:40:54] RECOVERY - cp6 HTTP 4xx/5xx ERROR Rate on cp6 is OK: OK - NGINX Error Rate is 8%
[16:54:14] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 26.49, 23.41, 20.27
[16:54:39] a load average of 26.49 is very high
[16:56:11] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 17.21, 20.78, 19.67
[16:58:07] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 17.67, 19.62, 19.38
[17:00:22] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/JJMLW
[17:00:24] [miraheze/services] MirahezeSSLBot a77d91a - BOT: Updating services config for wikis
[17:04:05] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.47, 20.57, 19.92
[17:06:01] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 16.21, 19.21, 19.50
[17:11:55] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 41.40, 25.84, 21.63
[17:15:50] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 16.29, 21.89, 21.16
[17:17:46] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 26.43, 22.76, 21.49
[17:19:43] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 21.63, 21.50, 21.15
[17:21:41] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 25.85, 22.85, 21.67
[17:25:36] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 17.92, 21.02, 21.24
[17:29:31] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 11.82, 17.14, 19.72
[18:04:54] PROBLEM - ns1 Current Load on ns1 is CRITICAL: CRITICAL - load average: 1.20, 3.70, 2.29
[18:06:51] PROBLEM - ns1 Current Load on ns1 is WARNING: WARNING - load average: 0.03, 1.66, 1.77
[18:08:50] RECOVERY - ns1 Current Load on ns1 is OK: OK - load average: 0.00, 0.75, 1.37
[19:36:13] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JJMs0
[19:36:14] [miraheze/dns] paladox aed887f - Depool cp6
[19:39:00] !log depool cp6
[19:39:04] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:43:48] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JJMGJ
[19:43:50] [miraheze/dns] paladox 484e9ae - Fix depooling cp6. Also depools cp7
[19:44:17] !log depool cp7
[19:44:20] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:46:43] !log upgrade cp[67] to debian 10.5
[19:46:48] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:47:36] !log reboot cp[67]
[19:47:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:48:17] paladox: do we have to use the country code now?
[19:48:28] It appears so
[19:48:37] I was monitoring the traffic to make sure it was depooled
[19:49:02] https://github.com/miraheze/dns/blob/master/config#L54
[19:49:02] [ dns/config at master · miraheze/dns · GitHub ] - github.com
[19:49:05] paladox: I'm trying to think back to the last time we did it. I'm sure we just depooled one.
[19:49:17] RhinosF1: John changed the dc names
[19:49:27] Yeah, that was after
[19:49:33] so like sg is cp3
[19:49:50] Ye I know
[19:50:19] paladox: https://github.com/miraheze/dns/commit/7dfb08ce65c27e1fe73c9a23d542390f541d744f#diff-1a2e67902aece2986aff8f3d8f7b8cd8 was after though
[19:50:20] [ depool cp6 · miraheze/dns@7dfb08c · GitHub ] - github.com
[19:50:37] that wouldn't have worked
[19:50:37] I think
[19:50:58] * RhinosF1 confused, as I'm sure it did
[19:51:00] I was monitoring the traffic, and I only saw a drop after I used "gb" rather than "cp6"
[19:51:07] Weird
[19:52:19] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JJMGl
[19:52:21] [miraheze/dns] paladox c81092d - Repool cp[67]
[19:53:57] !log repool cp[67]
[19:54:04] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:54:26] !log upgrade ns[12] to debian 10.5
[19:54:35] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:55:00] PROBLEM - cp7 Puppet on cp7 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown)
[19:55:42] I don't believe I have to restart ns1, as I don't really see any packages that require restarts
[19:56:52] Paladox: there should be a file you can cat to tell you
[19:57:03] !log reboot ns2
[19:57:07] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:57:23] @RhinosF1 which file?
[19:57:32] paladox: it's on the task
[19:58:10] paladox: https://askubuntu.com/a/28537
[19:58:10] [ How can I tell what package requires a reboot of my system? - Ask Ubuntu ] - askubuntu.com
[19:58:11] root@ns1:/etc/gdnsd# cat /var/run/reboot-required.pkgs
[19:58:11] cat: /var/run/reboot-required.pkgs: No such file or directory
[20:00:58] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JJMGy
[20:00:59] [miraheze/dns] paladox 8a68f21 - Depool ca
[20:01:05] !log depool ca (cp9)
[20:01:29] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:02:58] RECOVERY - cp7 Puppet on cp7 is OK: OK: Puppet is currently enabled, last run 59 seconds ago with 0 failures
[20:04:30] !log upgrade cp9 to debian 10.5 & reboot
[20:04:37] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:06:35] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JJMZs
[20:06:36] [miraheze/dns] paladox c36a257 - Repool ca
[20:06:49] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JJMZW
[20:06:50] [miraheze/dns] paladox a8f40a8 - Depool sg
[20:08:03] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 51.222.27.129/cpweb, 2607:5300:205:200::2ac4/cpweb
[20:09:08] PROBLEM - cp9 HTTPS on cp9 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 344 bytes in 0.316 second response time
[20:09:13] PROBLEM - cp9 HTTP 4xx/5xx ERROR Rate on cp9 is CRITICAL: CRITICAL - NGINX Error Rate is 99%
[20:09:26] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 3 datacenters are down: 2001:41d0:800:1056::2/cpweb, 51.222.27.129/cpweb, 2607:5300:205:200::2ac4/cpweb
[20:09:27] PROBLEM - cp9 Varnish Backends on cp9 is WARNING: No backends detected. If this is an error, see readme.txt
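The exchange above (19:48-19:51) turns on the GeoDNS entries being keyed by datacenter code (gb, sg, ca) rather than by cache-proxy host name (cp6), which is also why depooling gb took out both cp6 and cp7. A rough, purely illustrative sketch of the kind of gdnsd GeoIP map this implies; it is not the actual miraheze/dns config file:

    # hypothetical gdnsd geoip map; removing "gb" from the datacenter lists
    # is what stops the gb caches (cp6/cp7 in this log) being handed out
    plugins => {
      geoip => {
        maps => {
          country_map => {
            datacenters => [gb, sg, ca]
            # per-continent/per-country preference lists would follow here
          }
        }
      }
    }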
[20:11:09] RECOVERY - cp9 HTTPS on cp9 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 1936 bytes in 0.408 second response time
[20:11:13] PROBLEM - cp9 HTTP 4xx/5xx ERROR Rate on cp9 is WARNING: WARNING - NGINX Error Rate is 52%
[20:11:25] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[20:11:27] RECOVERY - cp9 Varnish Backends on cp9 is OK: All 7 backends are healthy
[20:11:43] !log upgrade cp3 to debian 10.5 & reboot
[20:11:53] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:12:03] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[20:12:12] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 25.24, 21.38, 18.30
[20:13:13] RECOVERY - cp9 HTTP 4xx/5xx ERROR Rate on cp9 is OK: OK - NGINX Error Rate is 7%
[20:14:10] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.94, 20.80, 18.48
[20:15:40] paladox: strange, anything in /var/run/
[20:16:12] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb
[20:16:14] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 17.67, 19.52, 18.28
[20:16:47] PROBLEM - cp3 Stunnel Http for mw5 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[20:16:49] PROBLEM - cp3 SSH on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:17:08] RhinosF1: it makes sense, there is no kernel needing updating on ns1, thus it wouldn't need to be restarted, thus the file wouldn't exist
[20:17:31] paladox: ok
[20:17:52] [miraheze/dns] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JJMZX
[20:17:54] [miraheze/dns] paladox c46538d - Repool cp3
[20:18:08] !log repool sg (cp3)
[20:18:09] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[20:18:12] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:18:44] RECOVERY - cp3 SSH on cp3 is OK: SSH OK - OpenSSH_7.9p1 Debian-10+deb10u2 (protocol 2.0)
[20:18:48] RECOVERY - cp3 Stunnel Http for mw5 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15667 bytes in 1.140 second response time
[20:23:15] !log reboot jobrunner1
[20:23:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:25:10] !log upgrade debian to 10.5 on services[12] & reboot
[20:25:16] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:25:39] PROBLEM - jobrunner1 JobRunner Service on jobrunner1 is CRITICAL: PROCS CRITICAL: 0 processes with args 'redisJobRunnerService'
[20:25:50] PROBLEM - jobrunner1 JobChron Service on jobrunner1 is CRITICAL: PROCS CRITICAL: 0 processes with args 'redisJobChronService'
[20:26:21] PROBLEM - jobrunner1 MirahezeRenewSsl on jobrunner1 is CRITICAL: connect to address 51.89.160.135 and port 5000: Connection refused
[20:27:38] RECOVERY - jobrunner1 JobRunner Service on jobrunner1 is OK: PROCS OK: 1 process with args 'redisJobRunnerService'
[20:27:50] RECOVERY - jobrunner1 JobChron Service on jobrunner1 is OK: PROCS OK: 1 process with args 'redisJobChronService'
[20:28:10] !log upgrade debian to 10.5 on test2 & reboot
[20:28:13] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:28:19] RECOVERY - jobrunner1 MirahezeRenewSsl on jobrunner1 is OK: TCP OK - 0.000 second response time on 51.89.160.135 port 5000
[20:28:59] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 31.54, 23.85, 20.45
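On the reboot question paladox and RhinosF1 chase above (19:55-20:17): on Debian-family systems the reboot marker files only exist once an upgraded package (typically a new kernel or libc) has explicitly requested a reboot, which is why the cat on ns1 found nothing. A small shell sketch, assuming the host's packages write these files at all:

    # /var/run/reboot-required(.pkgs) are created by package postinst hooks;
    # if no upgraded package asked for a reboot, neither file exists
    if [ -f /var/run/reboot-required ]; then
        echo "reboot requested by:"
        cat /var/run/reboot-required.pkgs    # one package name per line
    else
        echo "no package has requested a reboot"
    fi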
[20:31:20] PROBLEM - services2 proton on services2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:31:57] PROBLEM - services2 restbase on services2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:31:57] PROBLEM - services2 Current Load on services2 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[20:33:23] RECOVERY - services2 proton on services2 is OK: TCP OK - 0.000 second response time on 51.89.160.141 port 3030
[20:33:35] PROBLEM - cp3 Stunnel Http for test2 on cp3 is CRITICAL: HTTP CRITICAL - No data received from host
[20:33:35] PROBLEM - test2 php-fpm on test2 is CRITICAL: connect to address 51.77.107.211 port 5666: Connection refused; connect to host 51.77.107.211 port 5666: Connection refused
[20:33:50] PROBLEM - test2 NTP time on test2 is CRITICAL: connect to address 51.77.107.211 port 5666: Connection refused; connect to host 51.77.107.211 port 5666: Connection refused
[20:33:52] RECOVERY - services2 restbase on services2 is OK: TCP OK - 0.004 second response time on 51.89.160.141 port 7231
[20:33:52] RECOVERY - services2 Current Load on services2 is OK: OK - load average: 0.94, 0.39, 0.15
[20:34:47] PROBLEM - test2 Puppet on test2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[20:35:31] RECOVERY - test2 php-fpm on test2 is OK: PROCS OK: 27 processes with command name 'php-fpm7.3'
[20:35:33] RECOVERY - cp3 Stunnel Http for test2 on cp3 is OK: HTTP OK: HTTP/1.1 200 OK - 15663 bytes in 0.998 second response time
[20:35:49] RECOVERY - test2 NTP time on test2 is OK: NTP OK: Offset -0.001660108566 secs
[20:36:26] !log upgrade debian to 10.5 on ldap1 & reboot
[20:36:31] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:36:53] RECOVERY - test2 Puppet on test2 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[20:43:10] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 19.49, 22.90, 23.22
[20:43:52] !log upgrade debian to 10.5 on mon1 & reboot
[20:43:57] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:44:00] !log upgrade icinga2 to 2.12 on mon1
[20:44:05] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:45:42] !log upgrade grafana to 7.1 on mon1
[20:45:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[20:47:23] Alert to Miraheze Staff: It looks like the icinga-miraheze bot has stopped! Ping !sre.
[20:47:24] https://meta.miraheze.org is UP
[20:47:25] [ Meta ] - meta.miraheze.org
[20:47:25] Alert to Miraheze Staff: It looks like the MirahezeRC bot has stopped! Recent Changes are no longer available from IRC.
[20:47:37] * RhinosF1 here
[20:47:54] * RhinosF1 looks up
[20:49:06] PROBLEM - mw5 MediaWiki Rendering on mw5 is CRITICAL: connect to file socket /wiki/Main_Page: No such file or directory; HTTP CRITICAL - Unable to open TCP socket
[20:49:18] PROBLEM - jobrunner2 MediaWiki Rendering on jobrunner2 is CRITICAL: connect to file socket /wiki/Main_Page: No such file or directory; HTTP CRITICAL - Unable to open TCP socket
[20:49:37] PROBLEM - mw7 MediaWiki Rendering on mw7 is CRITICAL: connect to file socket /wiki/Main_Page: No such file or directory; HTTP CRITICAL - Unable to open TCP socket
[20:49:50] PROBLEM - jobrunner1 MediaWiki Rendering on jobrunner1 is CRITICAL: connect to file socket /wiki/Main_Page: No such file or directory; HTTP CRITICAL - Unable to open TCP socket
[20:50:01] PROBLEM - mw4 MediaWiki Rendering on mw4 is CRITICAL: connect to file socket /wiki/Main_Page: No such file or directory; HTTP CRITICAL - Unable to open TCP socket
[20:50:17] PROBLEM - mw6 MediaWiki Rendering on mw6 is CRITICAL: connect to file socket /wiki/Main_Page: No such file or directory; HTTP CRITICAL - Unable to open TCP socket
[20:50:31] PROBLEM - ldap1 Puppet on ldap1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[20:50:42] PROBLEM - test2 MediaWiki Rendering on test2 is CRITICAL: connect to file socket /wiki/Main_Page: No such file or directory; HTTP CRITICAL - Unable to open TCP socket
[20:50:56] Ok, mon1
[20:52:11] RECOVERY - ldap1 Puppet on ldap1 is OK: OK: Puppet is currently enabled, last run 42 seconds ago with 0 failures
[20:52:57] Alert to Miraheze Staff: It looks like the icinga-miraheze bot has stopped! Ping !sre.
[20:52:58] https://meta.miraheze.org is UP
[20:52:59] [ Meta ] - meta.miraheze.org
[20:52:59] Alert to Miraheze Staff: It looks like the MirahezeRC bot has stopped! Recent Changes are no longer available from IRC.
[20:53:21] *shutuo
[20:53:27] *shutup
[20:53:41] *speak
[20:54:15] Feed is back as well, paladox
[20:54:29] *rehash
[20:54:30] Rehashing......
[20:56:14] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 27.29, 23.53, 22.99
[20:57:30] ok
[20:57:49] PROBLEM - guia.esporo.net - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for guia.esporo.net could not be found
[20:58:07] PROBLEM - gluster1 Current Load on gluster1 is WARNING: WARNING - load average: 7.18, 6.11, 4.85
[21:00:02] RECOVERY - gluster1 Current Load on gluster1 is OK: OK - load average: 6.09, 6.35, 5.10
[21:00:15] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 20.08, 21.82, 22.46
[21:04:48] RECOVERY - guia.esporo.net - reverse DNS on sslhost is OK: rDNS OK - guia.esporo.net reverse DNS resolves to cp6.miraheze.org
[21:20:11] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 17.41, 18.50, 19.81
[21:22:21] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/JJMCV
[21:22:23] [miraheze/puppet] paladox a434eb6 - monitoring: Fix check_mediawiki command. We want to use -I so that the check checks the individual servers; -s is a string match, which we don't want.
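The commit above moves the check over to -I. With the standard monitoring-plugins check_http, -I is the address the plugin actually connects to (so each MediaWiki app server can be probed directly while -H still supplies the shared Host header), whereas -s only asserts that a string appears in the response body, which is not what a per-server rendering check needs. A hedged sketch of such a check with placeholder values, not the exact Miraheze command definition:

    /usr/lib/nagios/plugins/check_http -H meta.miraheze.org -I <mw-server-ip> -u /wiki/Main_Page --ssl -e "HTTP/1.1 200"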
[21:22:24] [puppet] paladox created branch paladox-patch-1 - https://git.io/vbiAS
[21:22:26] [puppet] paladox opened pull request #1479: monitoring: Fix check_mediawiki command - https://git.io/JJMCw
[21:23:04] [puppet] paladox edited pull request #1479: monitoring: Fix check_mediawiki command - https://git.io/JJMCw
[21:23:51] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-1 [+0/-0/±1] https://git.io/JJMC6
[21:23:52] [miraheze/puppet] paladox 3376846 - Update monitoring.pp
[21:23:54] [puppet] paladox synchronize pull request #1479: monitoring: Fix check_mediawiki command - https://git.io/JJMCw
[21:24:29] [puppet] paladox closed pull request #1479: monitoring: Fix check_mediawiki command - https://git.io/JJMCw
[21:24:31] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±2] https://git.io/JJMCi
[21:24:32] [miraheze/puppet] paladox 2d2d5d7 - monitoring: Fix check_mediawiki command (#1479) * monitoring: Fix check_mediawiki command. We want to use -I so that the check checks the individual servers; -s is a string match, which we don't want. * Update monitoring.pp
[21:24:34] [puppet] paladox deleted branch paladox-patch-1 - https://git.io/vbiAS
[21:24:35] [miraheze/puppet] paladox deleted branch paladox-patch-1
[21:27:03] RECOVERY - test2 MediaWiki Rendering on test2 is OK: HTTP OK: HTTP/1.1 200 OK - 17197 bytes in 0.260 second response time
[21:27:32] actually I should use the IP
[21:28:35] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JJMCH
[21:28:37] [miraheze/puppet] paladox 04dec93 - Use ip address
[21:34:32] !log increase puppet2 cores to 6 (as an experiment)
[21:34:37] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[21:39:56] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.36, 20.20, 19.26
[21:41:13] RECOVERY - mw4 MediaWiki Rendering on mw4 is OK: HTTP OK: HTTP/1.1 200 OK - 17160 bytes in 0.843 second response time
[21:42:06] RECOVERY - jobrunner1 MediaWiki Rendering on jobrunner1 is OK: HTTP OK: HTTP/1.1 200 OK - 17207 bytes in 0.931 second response time
[21:42:10] RECOVERY - mw7 MediaWiki Rendering on mw7 is OK: HTTP OK: HTTP/1.1 200 OK - 17160 bytes in 0.153 second response time
[21:42:30] RECOVERY - jobrunner2 MediaWiki Rendering on jobrunner2 is OK: HTTP OK: HTTP/1.1 200 OK - 17221 bytes in 6.187 second response time
[21:42:32] RECOVERY - mw5 MediaWiki Rendering on mw5 is OK: HTTP OK: HTTP/1.1 200 OK - 17160 bytes in 0.230 second response time
[21:42:33] RECOVERY - mw6 MediaWiki Rendering on mw6 is OK: HTTP OK: HTTP/1.1 200 OK - 17160 bytes in 0.520 second response time
[21:43:19] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.47, 20.50, 19.65
[21:45:14] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 12.51, 17.77, 18.76
[21:47:38] PROBLEM - db12 Puppet on db12 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:47:39] PROBLEM - db11 Puppet on db11 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:47:40] PROBLEM - db7 Puppet on db7 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:47:40] PROBLEM - bacula2 Puppet on bacula2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:47:43] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:47:44] PROBLEM - mw6 Puppet on mw6 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:47:47] PROBLEM - rdb1 Puppet on rdb1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:03] PROBLEM - gluster2 Puppet on gluster2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:04] PROBLEM - jobrunner1 Puppet on jobrunner1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:08] PROBLEM - cp7 Puppet on cp7 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:08] PROBLEM - gluster1 Puppet on gluster1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:10] PROBLEM - cp6 Puppet on cp6 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:11] PROBLEM - puppet2 Puppet on puppet2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:13] PROBLEM - mon1 Puppet on mon1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:14] PROBLEM - db13 Puppet on db13 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:19] PROBLEM - cloud3 Puppet on cloud3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:19] PROBLEM - test2 Puppet on test2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:20] PROBLEM - cp9 Puppet on cp9 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:31] PROBLEM - ns2 Puppet on ns2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:44] PROBLEM - mail1 Puppet on mail1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:45] PROBLEM - mw5 Puppet on mw5 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:55] PROBLEM - rdb2 Puppet on rdb2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:48:57] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:49:03] PROBLEM - jobrunner2 Puppet on jobrunner2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:49:06] known
[21:49:12] PROBLEM - services2 Puppet on services2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:49:13] PROBLEM - mw7 Puppet on mw7 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:49:26] PROBLEM - cloud2 Puppet on cloud2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:49:39] PROBLEM - mw4 Puppet on mw4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:49:45] PROBLEM - services1 Puppet on services1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:51:36] RECOVERY - db11 Puppet on db11 is OK: OK: Puppet is currently enabled, last run 15 seconds ago with 0 failures
[21:51:37] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 9 seconds ago with 0 failures
[21:51:38] RECOVERY - db7 Puppet on db7 is OK: OK: Puppet is currently enabled, last run 18 seconds ago with 0 failures
[21:51:38] RECOVERY - db12 Puppet on db12 is OK: OK: Puppet is currently enabled, last run 16 seconds ago with 0 failures
[21:51:43] RECOVERY - services1 Puppet on services1 is OK: OK: Puppet is currently enabled, last run 11 seconds ago with 0 failures
[21:51:45] RECOVERY - bacula2 Puppet on bacula2 is OK: OK: Puppet is currently enabled, last run 23 seconds ago with 0 failures
[21:51:47] RECOVERY - rdb1 Puppet on rdb1 is OK: OK: Puppet is currently enabled, last run 19 seconds ago with 0 failures
[21:52:01] RECOVERY - gluster1 Puppet on gluster1 is OK: OK: Puppet is currently enabled, last run 23 seconds ago with 0 failures
[21:52:02] RECOVERY - gluster2 Puppet on gluster2 is OK: OK: Puppet is currently enabled, last run 18 seconds ago with 0 failures
[21:52:11] RECOVERY - mon1 Puppet on mon1 is OK: OK: Puppet is currently enabled, last run 31 seconds ago with 0 failures
[21:52:11] RECOVERY - db13 Puppet on db13 is OK: OK: Puppet is currently enabled, last run 50 seconds ago with 0 failures
[21:52:16] RECOVERY - cloud3 Puppet on cloud3 is OK: OK: Puppet is currently enabled, last run 58 seconds ago with 0 failures
[21:52:31] RECOVERY - ns2 Puppet on ns2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[21:52:40] RECOVERY - mail1 Puppet on mail1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[21:52:50] RECOVERY - rdb2 Puppet on rdb2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[21:53:07] RECOVERY - services2 Puppet on services2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[21:53:16] RECOVERY - cloud2 Puppet on cloud2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[21:57:53] PROBLEM - gluster1 Puppet on gluster1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:58:01] PROBLEM - mon1 Puppet on mon1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:58:02] PROBLEM - gluster2 Puppet on gluster2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:58:04] PROBLEM - db13 Puppet on db13 is CRITICAL: CRITICAL: Puppet has 19 failures. Last run 2 minutes ago with 19 failures. Failed resources (up to 3 shown)
[21:58:14] PROBLEM - cloud3 Puppet on cloud3 is CRITICAL: CRITICAL: Puppet has 18 failures. Last run 2 minutes ago with 18 failures. Failed resources (up to 3 shown)
[21:58:31] PROBLEM - ns2 Puppet on ns2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:58:41] PROBLEM - mail1 Puppet on mail1 is CRITICAL: CRITICAL: Puppet has 40 failures. Last run 2 minutes ago with 40 failures. Failed resources (up to 3 shown)
[21:58:44] PROBLEM - rdb2 Puppet on rdb2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:58:49] PROBLEM - ldap1 Puppet on ldap1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:58:56] PROBLEM - services2 Puppet on services2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:59:01] PROBLEM - cloud2 Puppet on cloud2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:59:24] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Puppet has 33 failures. Last run 3 minutes ago with 33 failures. Failed resources (up to 3 shown)
[21:59:24] PROBLEM - db11 Puppet on db11 is CRITICAL: CRITICAL: Puppet has 19 failures. Last run 3 minutes ago with 19 failures. Failed resources (up to 3 shown)
[21:59:28] PROBLEM - db7 Puppet on db7 is CRITICAL: CRITICAL: Puppet has 19 failures. Last run 3 minutes ago with 19 failures. Failed resources (up to 3 shown)
[21:59:32] PROBLEM - db12 Puppet on db12 is CRITICAL: CRITICAL: Puppet has 19 failures. Last run 3 minutes ago with 19 failures. Failed resources (up to 3 shown)
[21:59:34] PROBLEM - services1 Puppet on services1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:59:40] PROBLEM - rdb1 Puppet on rdb1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[21:59:47] PROBLEM - bacula2 Puppet on bacula2 is CRITICAL: CRITICAL: Puppet has 16 failures. Last run 3 minutes ago with 16 failures. Failed resources (up to 3 shown)
[22:00:12] PROBLEM - cloud1 Puppet on cloud1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/usr/local/bin/puppet-enabled]
[22:02:32] RECOVERY - ns2 Puppet on ns2 is OK: OK: Puppet is currently enabled, last run 11 seconds ago with 0 failures
[22:02:41] RECOVERY - rdb2 Puppet on rdb2 is OK: OK: Puppet is currently enabled, last run 15 seconds ago with 0 failures
[22:02:42] RECOVERY - mail1 Puppet on mail1 is OK: OK: Puppet is currently enabled, last run 12 seconds ago with 0 failures
[22:02:50] RECOVERY - ldap1 Puppet on ldap1 is OK: OK: Puppet is currently enabled, last run 26 seconds ago with 0 failures
[22:02:51] RECOVERY - cloud2 Puppet on cloud2 is OK: OK: Puppet is currently enabled, last run 32 seconds ago with 0 failures
[22:02:52] RECOVERY - services2 Puppet on services2 is OK: OK: Puppet is currently enabled, last run 22 seconds ago with 0 failures
[22:03:16] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 50 seconds ago with 0 failures
[22:03:21] RECOVERY - db11 Puppet on db11 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:03:24] RECOVERY - db7 Puppet on db7 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:03:30] RECOVERY - db12 Puppet on db12 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:03:30] RECOVERY - services1 Puppet on services1 is OK: OK: Puppet is currently enabled, last run 58 seconds ago with 0 failures
[22:03:36] RECOVERY - rdb1 Puppet on rdb1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:03:37] RECOVERY - cp9 Puppet on cp9 is OK: OK: Puppet is currently enabled, last run 1 second ago with 0 failures
[22:03:37] RECOVERY - jobrunner1 Puppet on jobrunner1 is OK: OK: Puppet is currently enabled, last run 19 seconds ago with 0 failures
[22:03:37] RECOVERY - test2 Puppet on test2 is OK: OK: Puppet is currently enabled, last run 29 seconds ago with 0 failures
[22:03:43] RECOVERY - gluster1 Puppet on gluster1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:03:46] RECOVERY - mw6 Puppet on mw6 is OK: OK: Puppet is currently enabled, last run 31 seconds ago with 0 failures
[22:03:47] RECOVERY - bacula2 Puppet on bacula2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:03:51] RECOVERY - cp6 Puppet on cp6 is OK: OK: Puppet is currently enabled, last run 48 seconds ago with 0 failures
[22:03:57] RECOVERY - puppet2 Puppet on puppet2 is OK: OK: Puppet is currently enabled, last run 50 seconds ago with 0 failures
[22:03:58] RECOVERY - mon1 Puppet on mon1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:04:02] RECOVERY - db13 Puppet on db13 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:04:02] RECOVERY - gluster2 Puppet on gluster2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:04:07] RECOVERY - cp7 Puppet on cp7 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:04:08] RECOVERY - cloud3 Puppet on cloud3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:04:12] RECOVERY - cloud1 Puppet on cloud1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:04:43] RECOVERY - mw7 Puppet on mw7 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:04:44] RECOVERY - mw5 Puppet on mw5 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:04:58] RECOVERY - jobrunner2 Puppet on jobrunner2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:05:00] RECOVERY - mw4 Puppet on mw4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[22:07:04] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[22:57:32] .ban *!?id445178@*
[22:57:32] Please wait...
[22:57:39] .ban *!?id445178@*
[22:57:49] .op
[22:57:58] .deop
[23:38:04] !log upgrade mail1 debian to 10.5
[23:38:09] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[23:42:31] !log upgrade bacula2 debian to 10.5
[23:42:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[23:45:44] !log upgrade gluster[12] debian to 10.5
[23:47:32] RECOVERY - gluster1 GlusterFS port 49152 on gluster1 is OK: TCP OK - 0.003 second response time on 51.77.107.209 port 49152
[23:59:41] !log regenerate backups on bacula2
[23:59:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log