[00:05:39] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-2 [+1/-0/±0] 13https://git.io/fN5IB
[00:05:40] [02miraheze/puppet] 07paladox 033c388c8 - Create security.pp
[00:05:42] [02puppet] 07paladox synchronize pull request 03#805: Add security_updates.list - 13https://git.io/fN5ko
[00:06:13] [02puppet] 07paladox synchronize pull request 03#805: Add security_updates.list - 13https://git.io/fN5ko
[00:06:14] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-2 [+0/-0/±1] 13https://git.io/fN5IR
[00:06:16] [02miraheze/puppet] 07paladox 0359d7da4 - Update security.pp
[00:06:54] [02puppet] 07paladox synchronize pull request 03#805: Add security_updates.list - 13https://git.io/fN5ko
[00:06:56] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-2 [+0/-0/±1] 13https://git.io/fN5Iu
[00:06:57] [02miraheze/puppet] 07paladox 0330317d5 - Update security.pp
[00:08:42] [02puppet] 07paladox synchronize pull request 03#805: Add security_updates.list - 13https://git.io/fN5ko
[00:08:44] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-2 [+0/-0/±1] 13https://git.io/fN5Iw
[00:08:45] [02miraheze/puppet] 07paladox 034a5c8de - Update init.pp
[00:08:49] [02puppet] 07paladox synchronize pull request 03#805: Add security_updates.list - 13https://git.io/fN5ko
[00:08:50] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-2 [+0/-1/±0] 13https://git.io/fN5Io
[00:08:52] [02miraheze/puppet] 07paladox 03bde69da - Delete security_updates.list
[00:09:14] [02puppet] 07paladox closed pull request 03#805: Add security_updates.list - 13https://git.io/fN5ko
[00:09:15] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+1/-0/±1] 13https://git.io/fN5IK
[00:09:17] [02miraheze/puppet] 07paladox 0346811d0 - Add security_updates.list (#805) * Add security_updates.list * Create security_updates.list * Update init.pp * Create security.pp * Update security.pp * Update security.pp * Update init.pp * Delete security_updates.list
[00:09:18] [02puppet] 07paladox deleted branch 03paladox-patch-2 - 13https://git.io/vbiAS
[00:09:20] [02miraheze/puppet] 07paladox deleted branch 03paladox-patch-2
[00:12:22] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:12:34] PROBLEM - misc3 Puppet on misc3 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:12:42] PROBLEM - cp5 Puppet on cp5 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:12:52] PROBLEM - lizardfs2 Puppet on lizardfs2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:13:02] PROBLEM - db4 Puppet on db4 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:13:06] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:13:14] PROBLEM - misc2 Puppet on misc2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:13:20] PROBLEM - puppet1 Puppet on puppet1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:13:38] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:13:40] PROBLEM - lizardfs1 Puppet on lizardfs1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:13:42] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:14:08] PROBLEM - cp4 Puppet on cp4 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:14:10] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:14:16] PROBLEM - misc4 Puppet on misc4 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:14:18] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fN5Iy
[00:14:19] [02miraheze/puppet] 07paladox 03bdfe27e - Update security.pp
[00:14:22] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:14:32] well that was fast
[00:15:20] RECOVERY - puppet1 Puppet on puppet1 is OK: OK: Puppet is currently enabled, last run 42 seconds ago with 0 failures
[00:16:34] RECOVERY - misc3 Puppet on misc3 is OK: OK: Puppet is currently enabled, last run 16 seconds ago with 0 failures
[00:16:52] RECOVERY - lizardfs2 Puppet on lizardfs2 is OK: OK: Puppet is currently enabled, last run 27 seconds ago with 0 failures
[00:17:02] RECOVERY - db4 Puppet on db4 is OK: OK: Puppet is currently enabled, last run 35 seconds ago with 0 failures
[00:17:04] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 31 seconds ago with 0 failures
[00:17:06] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 36 seconds ago with 0 failures
[00:17:14] RECOVERY - misc2 Puppet on misc2 is OK: OK: Puppet is currently enabled, last run 58 seconds ago with 0 failures
[00:17:40] RECOVERY - lizardfs1 Puppet on lizardfs1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[00:17:42] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 31 seconds ago with 0 failures
[00:18:08] RECOVERY - cp4 Puppet on cp4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[00:18:10] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[00:18:16] RECOVERY - misc4 Puppet on misc4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[00:18:22] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[00:18:45] RECOVERY - cp5 Puppet on cp5 is OK: OK: Puppet is currently enabled, last run 51 seconds ago with 0 failures
[00:21:39] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[00:23:07] !log apt-get upgrade on *
[00:23:11] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[00:32:43] PROBLEM - cp5 Puppet on cp5 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:34:49] RECOVERY - cp5 Puppet on cp5 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[00:54:30] HELLO
[00:54:32] oops
[00:54:33] Hello*
[00:54:59] hi
[01:01:09] RECOVERY - mw3 JobQueue on mw3 is OK: JOBQUEUE OK - job queue below 300 jobs
[01:01:23] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fN5tk
[01:01:24] [02miraheze/mw-config] 07paladox 03575b338 - "HTTP/1.0 404 Not Found" -> "HTTP/1.1 404 Not Found"
[01:06:52] hi Voidwalker :)
[01:07:12] :)
[02:09:20] I've decided WSL is absolute trash kthxbai https://i.imgur.com/KKdY5qY.png
[04:51:44] Just a note that the two interwiki user rights changes recently performed by me say "requested via irc" and "logged" but it was actually requested via IRC (and is logged in the sense that if they delete the messages, they are restored by a bot)
[06:55:15] Is there a need to delete this: https://spiral.wiki/wiki/Topic:Uhefp3ynhivm4a6q ?
[06:55:17] Title: [ Best room four hands in New Yourk on User talk:2.95.198.192 ] - spiral.wiki
[06:56:13] Voidwalker?
[06:59:17] PROBLEM - cp5 Varnish Backends on cp5 is CRITICAL: 1 backends are down. mw2
[07:00:15] PROBLEM - guiasdobrasil.com.br - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[07:00:49] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3
[07:01:55] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:8902::f03c:91ff:fe07:444e/cpweb
[07:02:13] PROBLEM - bacula1 Bacula Private Git on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[07:03:25] PROBLEM - bacula1 Bacula Static Lizardfs2 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[07:04:13] RECOVERY - bacula1 Bacula Private Git on bacula1 is OK: OK: Full, 1458 files, 1.568MB, 2018-08-05 13:19:00 (5.7 days ago)
[07:04:21] RECOVERY - guiasdobrasil.com.br - LetsEncrypt on sslhost is OK: OK - Certificate 'sni61771.cloudflaressl.com' will expire on Thu 07 Feb 2019 11:59:59 PM GMT +0000.
[07:05:08] PROBLEM - bacula1 Bacula Lizardfs2 Lizardfs Chunkserver2 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[07:05:18] RECOVERY - cp5 Varnish Backends on cp5 is OK: All 3 backends are healthy
[07:05:34] RECOVERY - bacula1 Bacula Static Lizardfs2 on bacula1 is OK: OK: Full, 781846 files, 79.29GB, 2018-08-05 13:07:00 (5.7 days ago)
[07:06:20] PROBLEM - www.guiasdobrasil.com.br - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[07:06:44] PROBLEM - cp5 Puppet on cp5 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[07:07:10] RECOVERY - bacula1 Bacula Lizardfs2 Lizardfs Chunkserver2 on bacula1 is OK: OK: Full, 5 files, 123.4KB, 2018-08-05 03:33:00 (6.1 days ago)
[07:07:20] PROBLEM - puppet1 Puppet on puppet1 is CRITICAL: CRITICAL: Puppet has 2 failures. Last run 3 minutes ago with 2 failures. Failed resources (up to 3 shown): Exec[git_pull_puppet],Exec[git_pull_ssl]
[07:07:54] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[07:08:22] RECOVERY - www.guiasdobrasil.com.br - LetsEncrypt on sslhost is OK: OK - Certificate 'sni61771.cloudflaressl.com' will expire on Thu 07 Feb 2019 11:59:59 PM GMT +0000.
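PR #805 above ships a security_updates.list apt source plus a security.pp manifest to manage it. The file's actual contents never appear in the log; a plausible sketch for a Debian stretch host of that era (the path and suite are assumptions, not taken from the repo):

    # Hypothetical reconstruction of security_updates.list -- not shown in the log.
    cat <<'EOF' | sudo tee /etc/apt/sources.list.d/security_updates.list
    deb http://security.debian.org/ stretch/updates main
    deb-src http://security.debian.org/ stretch/updates main
    EOF
    sudo apt-get update && sudo apt-get upgrade   # matches the 00:23 "!log apt-get upgrade on *"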
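The 00:12 wave of "Catalog fetch fail" alerts minutes after the push, the 00:14 hotfix to security.pp, and the quick recoveries ("well that was fast") are the classic signature of a manifest that broke catalog compilation. Syntax-level breakage, at least, can be caught on a local checkout before pushing; a minimal sketch (the module path is assumed for illustration):

    # Both checks run locally and ship with Puppet / the puppet-lint gem.
    puppet parser validate modules/apt/manifests/security.pp
    puppet-lint modules/apt/manifests/security.pp   # style checks only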
[07:10:42] PROBLEM - guiasdobrasil.com.br - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[07:11:06] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Puppet has 2 failures. Last run 3 minutes ago with 2 failures. Failed resources (up to 3 shown): File[wiki.tulpa.info],Exec[mathoid_npm]
[07:12:02] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 172.104.111.8/cpweb, 2400:8902::f03c:91ff:fe07:444e/cpweb
[07:12:12] PROBLEM - cp5 Varnish Backends on cp5 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[07:12:22] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 172.104.111.8/cpweb, 2400:8902::f03c:91ff:fe07:444e/cpweb
[07:12:40] RECOVERY - guiasdobrasil.com.br - LetsEncrypt on sslhost is OK: OK - Certificate 'sni61771.cloudflaressl.com' will expire on Thu 07 Feb 2019 11:59:59 PM GMT +0000.
[07:12:50] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 3 backends are healthy
[07:13:06] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 52 seconds ago with 0 failures
[07:13:20] RECOVERY - puppet1 Puppet on puppet1 is OK: OK: Puppet is currently enabled, last run 38 seconds ago with 0 failures
[07:14:00] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[07:14:10] RECOVERY - cp5 Varnish Backends on cp5 is OK: All 3 backends are healthy
[07:14:24] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[07:32:41] RECOVERY - cp5 Puppet on cp5 is OK: OK: Puppet is currently enabled, last run 45 seconds ago with 0 failures
[07:56:26] @sau226: meh, it's not really spam since it doesn't direct you to a website or anything
[07:56:29] I wouldn't worry too much about that
[08:02:35] PROBLEM - cp5 HTTP on cp5 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:02:35] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 172.104.111.8/cpweb, 2400:8902::f03c:91ff:fe07:444e/cpweb
[08:02:35] PROBLEM - cp5 Varnish Backends on cp5 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[08:02:35] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 172.104.111.8/cpweb, 2400:8902::f03c:91ff:fe07:444e/cpweb
[08:03:43] RECOVERY - cp5 Varnish Backends on cp5 is OK: All 3 backends are healthy
[08:06:07] RECOVERY - cp5 HTTP on cp5 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 557 bytes in 3.496 second response time
[08:06:07] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[08:06:09] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[08:07:41] PROBLEM - cp5 Varnish Backends on cp5 is CRITICAL: 1 backends are down. mw2
[08:09:15] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2400:8902::f03c:91ff:fe07:444e/cpweb
[08:10:15] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 172.104.111.8/cpweb, 2400:8902::f03c:91ff:fe07:444e/cpweb
[08:11:17] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[08:11:41] RECOVERY - cp5 Varnish Backends on cp5 is OK: All 3 backends are healthy
[08:12:15] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[08:22:40] PROBLEM - wiki.ngscott.net - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:24:38] RECOVERY - wiki.ngscott.net - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.ngscott.net' will expire on Mon 08 Oct 2018 07:55:01 AM GMT +0000.
[08:25:30] PROBLEM - cp5 Varnish Backends on cp5 is CRITICAL: 1 backends are down. mw3
[08:26:42] PROBLEM - bacula1 Bacula Lizardfs2 Lizardfs Chunkserver2 on bacula1 is CRITICAL: CRITICAL: Timeout or unknown client: lizardfs2-fd
[08:26:50] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3
[08:27:24] PROBLEM - bacula1 Bacula Static Lizardfs2 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:28:00] PROBLEM - bacula1 Bacula Private Git on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:29:26] RECOVERY - cp5 Varnish Backends on cp5 is OK: All 3 backends are healthy
[08:33:21] PROBLEM - cp5 Varnish Backends on cp5 is CRITICAL: 1 backends are down. mw3
[08:34:21] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[08:36:53] PROBLEM - lizardfs2 Puppet on lizardfs2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[08:37:05] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[08:37:15] PROBLEM - misc2 Puppet on misc2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[08:37:19] PROBLEM - puppet1 Puppet on puppet1 is CRITICAL: CRITICAL: Puppet has 3 failures. Last run 2 minutes ago with 3 failures. Failed resources (up to 3 shown): Exec[git_pull_puppet],Exec[git_pull_services],Exec[git_pull_ssl]
[08:38:11] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[08:38:49] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 3 backends are healthy
[08:39:03] PROBLEM - bacula1 Puppet on bacula1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[08:39:17] RECOVERY - cp5 Varnish Backends on cp5 is OK: All 3 backends are healthy
[08:39:37] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[08:41:05] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 34 seconds ago with 0 failures
[09:03:01] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[09:03:27] PROBLEM - www.splat-teams.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:03:37] PROBLEM - wiki.gtsc.vn - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:03:41] PROBLEM - disabled.life - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:03:43] PROBLEM - wiki.nvda-nl.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:03:47] PROBLEM - wiki.jacksonheights.nyc - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:03:51] PROBLEM - wiki.zymonic.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:03:55] PROBLEM - www.guiasdobrasil.com.br - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:03:57] PROBLEM - wisdomwiki.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:04:01] PROBLEM - cp5 Varnish Backends on cp5 is CRITICAL: 2 backends are down. mw2 mw3
[09:04:03] PROBLEM - wiki.dwplive.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:04:13] PROBLEM - enc.for.uz - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:04:19] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[09:04:21] PROBLEM - kunwok.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:04:23] PROBLEM - cornetto.online - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:04:25] PROBLEM - marinebiodiversitymatrix.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:04:33] PROBLEM - www.reviwiki.info - PositiveSSLDV on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:04:37] PROBLEM - podpedia.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:04:45] PROBLEM - reviwiki.info - PositiveSSLDV on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:04:49] PROBLEM - wiki.macc.nyc - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:04:53] PROBLEM - wikipuk.cl - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:04:57] PROBLEM - dariawiki.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:04:59] PROBLEM - takethatwiki.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:05:03] PROBLEM - savage-wiki.com - RapidSSL on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:05:05] PROBLEM - wiki.autocountsoft.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:05:07] PROBLEM - nonbinary.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:05:10] PROBLEM - private.revi.wiki - Comodo on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:05:11] PROBLEM - adadevelopersacademy.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:05:15] PROBLEM - hellointernet.miraheze.org - GlobalSign on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:05:17] PROBLEM - papelor.io - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:05:20] meh
[09:05:29] PROBLEM - wiki.dobots.nl - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:05:31] PROBLEM - www.alwiki.net - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:05:59] RECOVERY - wiki.jacksonheights.nyc - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.jacksonheights.nyc' will expire on Sat 06 Oct 2018 04:36:29 PM GMT +0000.
[09:05:59] RECOVERY - wiki.zymonic.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.zymonic.com' will expire on Thu 25 Oct 2018 01:37:27 PM GMT +0000.
[09:05:59] PROBLEM - taotac.info - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:05:59] RECOVERY - saveta.org - LetsEncrypt on sslhost is OK: OK - Certificate 'saveta.org' will expire on Sat 29 Sep 2018 01:49:47 PM GMT +0000.
[09:05:59] RECOVERY - www.guiasdobrasil.com.br - LetsEncrypt on sslhost is OK: OK - Certificate 'sni61771.cloudflaressl.com' will expire on Thu 07 Feb 2019 11:59:59 PM GMT +0000.
[09:06:01] RECOVERY - miraheze.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'miraheze.wiki' will expire on Sat 06 Oct 2018 04:48:30 PM GMT +0000.
[09:06:09] RECOVERY - enc.for.uz - LetsEncrypt on sslhost is OK: OK - Certificate 'enc.for.uz' will expire on Thu 01 Nov 2018 09:40:02 AM GMT +0000.
[09:06:11] RECOVERY - embobada.com - LetsEncrypt on sslhost is OK: OK - Certificate 'embobada.com' will expire on Sun 21 Oct 2018 09:40:33 AM GMT +0000.
[09:06:13] RECOVERY - wiki.consentcraft.uk - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.consentcraft.uk' will expire on Thu 25 Oct 2018 01:36:36 PM GMT +0000.
[09:06:15] RECOVERY - kunwok.org - LetsEncrypt on sslhost is OK: OK - Certificate 'kunwok.org' will expire on Sun 23 Sep 2018 11:18:18 PM GMT +0000.
[09:06:17] PROBLEM - misc4 Puppet on misc4 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/etc/apache2/conf-available/00-defaults.conf]
[09:06:19] RECOVERY - wiki.rmbrk.sk - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.rmbrk.sk' will expire on Sat 06 Oct 2018 08:18:08 PM GMT +0000.
[09:06:21] RECOVERY - pwiki.arkcls.com - LetsEncrypt on sslhost is OK: OK - Certificate 'pwiki.arkcls.com' will expire on Fri 21 Sep 2018 03:47:01 PM GMT +0000.
[09:06:23] RECOVERY - cornetto.online - LetsEncrypt on sslhost is OK: OK - Certificate 'cornetto.online' will expire on Tue 04 Sep 2018 03:46:45 PM GMT +0000.
[09:06:27] RECOVERY - www.reviwiki.info - PositiveSSLDV on sslhost is OK: OK - Certificate 'reviwiki.info' will expire on Wed 03 Feb 2021 11:59:59 PM GMT +0000.
[09:06:31] RECOVERY - podpedia.org - LetsEncrypt on sslhost is OK: OK - Certificate 'podpedia.org' will expire on Fri 31 Aug 2018 09:53:36 PM GMT +0000.
[09:06:41] RECOVERY - reviwiki.info - PositiveSSLDV on sslhost is OK: OK - Certificate 'reviwiki.info' will expire on Wed 03 Feb 2021 11:59:59 PM GMT +0000.
[09:06:45] RECOVERY - wiki.macc.nyc - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.macc.nyc' will expire on Fri 19 Oct 2018 09:27:34 PM GMT +0000.
[09:06:49] RECOVERY - wikipuk.cl - LetsEncrypt on sslhost is OK: OK - Certificate 'wikipuk.cl' will expire on Fri 21 Sep 2018 03:50:45 PM GMT +0000.
[09:06:51] RECOVERY - mw2 MediaWiki Rendering on mw2 is OK: HTTP OK: HTTP/1.1 200 OK - 30270 bytes in 0.044 second response time
[09:06:53] RECOVERY - dariawiki.org - LetsEncrypt on sslhost is OK: OK - Certificate 'dariawiki.org' will expire on Sun 23 Sep 2018 11:05:01 PM GMT +0000.
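The sslhost checks flapping above verify certificate validity and expiry over HTTPS, so a "Socket timeout" is almost always a probe-side or network hiccup rather than a certificate problem. A rough manual equivalent of what each check does, assuming plain openssl (hostname picked from the log):

    # Print the expiry date of the certificate served for one SNI name.
    echo | openssl s_client -servername wiki.macc.nyc -connect wiki.macc.nyc:443 2>/dev/null \
      | openssl x509 -noout -enddate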
[09:06:55] RECOVERY - takethatwiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'takethatwiki.com' will expire on Sun 21 Oct 2018 09:51:53 AM GMT +0000.
[09:06:59] RECOVERY - savage-wiki.com - RapidSSL on sslhost is OK: OK - Certificate 'savage-wiki.com' will expire on Fri 04 Dec 2020 12:00:00 PM GMT +0000.
[09:07:01] RECOVERY - nonbinary.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'nonbinary.wiki' will expire on Sat 06 Oct 2018 04:44:30 PM GMT +0000.
[09:07:03] RECOVERY - madgenderscience.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'madgenderscience.wiki' will expire on Fri 31 Aug 2018 09:27:00 PM GMT +0000.
[09:07:05] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Puppet has 3 failures. Last run 2 minutes ago with 3 failures. Failed resources (up to 3 shown): File[marinebiodiversitymatrix.org],File[marinebiodiversitymatrix.org_private],File[lodge.jsnydr.com]
[09:07:09] RECOVERY - private.revi.wiki - Comodo on sslhost is OK: OK - Certificate 'private.revi.wiki' will expire on Wed 07 Nov 2018 11:59:59 PM GMT +0000.
[09:07:11] RECOVERY - hellointernet.miraheze.org - GlobalSign on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 22 Sep 2018 08:12:13 PM GMT +0000.
[09:07:15] RECOVERY - papelor.io - LetsEncrypt on sslhost is OK: OK - Certificate 'papelor.io' will expire on Fri 12 Oct 2018 06:42:54 PM GMT +0000.
[09:07:25] RECOVERY - www.alwiki.net - LetsEncrypt on sslhost is OK: OK - Certificate 'www.alwiki.net' will expire on Fri 21 Sep 2018 03:49:39 PM GMT +0000.
[09:07:29] RECOVERY - www.splat-teams.com - LetsEncrypt on sslhost is OK: OK - Certificate 'www.splat-teams.com' will expire on Tue 25 Sep 2018 11:58:12 AM GMT +0000.
[09:07:39] RECOVERY - wiki.gtsc.vn - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.gtsc.vn' will expire on Mon 10 Sep 2018 05:55:22 AM GMT +0000.
[09:07:41] RECOVERY - wiki.nvda-nl.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.nvda-nl.org' will expire on Sat 27 Oct 2018 05:31:37 PM GMT +0000.
[09:07:49] RECOVERY - taotac.info - LetsEncrypt on sslhost is OK: OK - Certificate 'taotac.info' will expire on Sat 20 Oct 2018 11:11:57 AM GMT +0000.
[09:07:55] RECOVERY - lodge.jsnydr.com - LetsEncrypt on sslhost is OK: OK - Certificate 'lodge.jsnydr.com' will expire on Fri 31 Aug 2018 10:07:43 PM GMT +0000.
[09:07:57] RECOVERY - wiki.teessidehackspace.org.uk - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.teessidehackspace.org.uk' will expire on Sun 21 Oct 2018 10:06:38 AM GMT +0000.
[09:08:01] RECOVERY - unmade.miraheze.org - GlobalSign on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 22 Sep 2018 08:12:13 PM GMT +0000.
[09:08:21] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[09:08:23] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[09:08:39] PROBLEM - misc1 Current Load on misc1 is WARNING: WARNING - load average: 1.62, 1.71, 0.88
[09:09:23] RECOVERY - bacula1 Bacula Static Lizardfs2 on bacula1 is OK: OK: Full, 781846 files, 79.29GB, 2018-08-05 13:07:00 (5.8 days ago)
[09:09:45] RECOVERY - bacula1 Bacula Lizardfs2 Lizardfs Chunkserver2 on bacula1 is OK: OK: Full, 5 files, 123.4KB, 2018-08-05 03:33:00 (6.2 days ago)
[09:10:39] RECOVERY - misc1 Current Load on misc1 is OK: OK - load average: 0.87, 1.31, 0.83
[09:10:45] RECOVERY - bacula1 Bacula Private Git on bacula1 is OK: OK: Full, 1458 files, 1.568MB, 2018-08-05 13:19:00 (5.8 days ago)
[09:10:53] RECOVERY - lizardfs2 Puppet on lizardfs2 is OK: OK: Puppet is currently enabled, last run 10 seconds ago with 0 failures
[09:11:03] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 25 seconds ago with 0 failures
[09:11:05] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 20 seconds ago with 0 failures
[09:11:15] RECOVERY - misc2 Puppet on misc2 is OK: OK: Puppet is currently enabled, last run 30 seconds ago with 0 failures
[09:11:19] RECOVERY - puppet1 Puppet on puppet1 is OK: OK: Puppet is currently enabled, last run 23 seconds ago with 0 failures
[09:11:37] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[09:12:11] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 56 seconds ago with 0 failures
[09:12:17] RECOVERY - misc4 Puppet on misc4 is OK: OK: Puppet is currently enabled, last run 53 seconds ago with 0 failures
[09:13:23] RECOVERY - cp5 Puppet on cp5 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[09:14:22] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[09:44:03] PROBLEM - programming.red - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:45:04] PROBLEM - savage-wiki.com - RapidSSL on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:45:06] PROBLEM - docs.websmart.media - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:45:08] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 81.4.109.133/cpweb, 172.104.111.8/cpweb
[09:45:14] PROBLEM - toonpedia.cf - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:45:16] PROBLEM - sdiy.info - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:45:18] PROBLEM - private.revi.wiki - Comodo on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:45:24] PROBLEM - www.programming.red - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:45:26] PROBLEM - www.iceposeidonwiki.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:45:28] PROBLEM - wiki.dobots.nl - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:45:32] PROBLEM - wiki.inebriation-confederation.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:45:42] PROBLEM - russopedia.info - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:45:56] PROBLEM - unmade.miraheze.org - GlobalSign on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:46:04] PROBLEM - taotac.info - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:46:12] PROBLEM - tensegritywiki.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:46:25] PROBLEM - wiki.ombre.io - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:46:26] PROBLEM - wiki.grottocenter.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:46:30] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 172.104.111.8/cpweb, 2400:8902::f03c:91ff:fe07:444e/cpweb
[09:46:50] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[09:47:00] RECOVERY - savage-wiki.com - RapidSSL on sslhost is OK: OK - Certificate 'savage-wiki.com' will expire on Fri 04 Dec 2020 12:00:00 PM GMT +0000.
[09:47:02] RECOVERY - docs.websmart.media - LetsEncrypt on sslhost is OK: OK - Certificate 'docs.websmart.media' will expire on Tue 02 Oct 2018 09:01:46 AM GMT +0000.
[09:47:08] RECOVERY - toonpedia.cf - LetsEncrypt on sslhost is OK: OK - Certificate 'toonpedia.cf' will expire on Sun 07 Oct 2018 04:42:31 AM GMT +0000.
[09:47:12] RECOVERY - spiral.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'spiral.wiki' will expire on Thu 01 Nov 2018 08:51:11 PM GMT +0000.
[09:47:18] RECOVERY - private.revi.wiki - Comodo on sslhost is OK: OK - Certificate 'private.revi.wiki' will expire on Wed 07 Nov 2018 11:59:59 PM GMT +0000.
[09:47:22] RECOVERY - www.programming.red - LetsEncrypt on sslhost is OK: OK - Certificate 'programming.red' will expire on Fri 19 Oct 2018 07:07:40 PM GMT +0000.
[09:47:24] RECOVERY - www.iceposeidonwiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'www.iceposeidonwiki.com' will expire on Thu 25 Oct 2018 02:00:38 PM GMT +0000.
[09:47:26] RECOVERY - www.alwiki.net - LetsEncrypt on sslhost is OK: OK - Certificate 'www.alwiki.net' will expire on Fri 21 Sep 2018 03:49:39 PM GMT +0000.
[09:47:28] RECOVERY - wiki.inebriation-confederation.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.inebriation-confederation.com' will expire on Sun 21 Oct 2018 10:00:23 AM GMT +0000.
[09:47:38] RECOVERY - russopedia.info - LetsEncrypt on sslhost is OK: OK - Certificate 'russopedia.info' will expire on Fri 12 Oct 2018 12:32:34 PM GMT +0000.
[09:47:40] RECOVERY - astrapedia.ru - LetsEncrypt on sslhost is OK: OK - Certificate 'astrapedia.ru' will expire on Mon 01 Oct 2018 02:23:12 PM GMT +0000.
[09:47:54] RECOVERY - unmade.miraheze.org - GlobalSign on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 22 Sep 2018 08:12:13 PM GMT +0000.
[09:48:00] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw3
[09:48:02] RECOVERY - taotac.info - LetsEncrypt on sslhost is OK: OK - Certificate 'taotac.info' will expire on Sat 20 Oct 2018 11:11:57 AM GMT +0000.
[09:48:08] RECOVERY - tensegritywiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'tensegritywiki.com' will expire on Thu 08 Nov 2018 09:04:54 AM GMT +0000.
[09:48:10] RECOVERY - programming.red - LetsEncrypt on sslhost is OK: OK - Certificate 'programming.red' will expire on Fri 19 Oct 2018 07:07:40 PM GMT +0000.
[09:48:12] RECOVERY - wiki.ombre.io - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.ombre.io' will expire on Mon 08 Oct 2018 06:51:11 PM GMT +0000.
[09:48:20] RECOVERY - wiki.grottocenter.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.grottocenter.org' will expire on Sun 21 Oct 2018 09:58:29 AM GMT +0000.
[09:48:30] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[09:49:10] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[09:50:00] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 3 backends are healthy
[09:51:00] PROBLEM - bacula1 Bacula Static Lizardfs2 on bacula1 is CRITICAL: CRITICAL: Timeout or unknown client: lizardfs2-fd
[09:51:28] PROBLEM - bacula1 Bacula Private Git on bacula1 is CRITICAL: CRITICAL: Timeout or unknown client: puppet1-fd
[09:52:42] PROBLEM - bacula1 Bacula Lizardfs2 Lizardfs Chunkserver2 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[09:56:52] PROBLEM - lizardfs2 Puppet on lizardfs2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[09:57:06] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[09:57:14] PROBLEM - misc2 Puppet on misc2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[09:57:20] PROBLEM - puppet1 Puppet on puppet1 is CRITICAL: CRITICAL: Puppet has 3 failures. Last run 2 minutes ago with 3 failures. Failed resources (up to 3 shown): Exec[git_pull_puppet],Exec[git_pull_services],Exec[git_pull_ssl]
[09:59:00] PROBLEM - cp5 Puppet on cp5 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[09:59:02] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[09:59:38] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[10:04:39] RECOVERY - bacula1 Bacula Static Lizardfs2 on bacula1 is OK: OK: Full, 781846 files, 79.29GB, 2018-08-05 13:07:00 (5.9 days ago)
[10:04:49] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 3 backends are healthy
[10:05:03] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 10 seconds ago with 0 failures
[10:05:05] RECOVERY - bacula1 Bacula Private Git on bacula1 is OK: OK: Full, 1458 files, 1.568MB, 2018-08-05 13:19:00 (5.9 days ago)
[10:05:07] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 57 seconds ago with 0 failures
[10:05:37] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 52 seconds ago with 0 failures
[10:05:45] RECOVERY - bacula1 Bacula Lizardfs2 Lizardfs Chunkserver2 on bacula1 is OK: OK: Full, 5 files, 123.4KB, 2018-08-05 03:33:00 (6.3 days ago)
[10:06:53] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[10:06:57] RECOVERY - cp5 Puppet on cp5 is OK: OK: Puppet is currently enabled, last run 45 seconds ago with 0 failures
[10:08:49] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3
[10:10:53] RECOVERY - lizardfs2 Puppet on lizardfs2 is OK: OK: Puppet is currently enabled, last run 19 seconds ago with 0 failures
[10:11:15] RECOVERY - misc2 Puppet on misc2 is OK: OK: Puppet is currently enabled, last run 43 seconds ago with 0 failures
[10:11:19] RECOVERY - puppet1 Puppet on puppet1 is OK: OK: Puppet is currently enabled, last run 37 seconds ago with 0 failures
[10:17:05] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[10:49:01] Should we lock impersonation accounts?
[10:58:50] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 3 backends are healthy
[10:59:18] RECOVERY - cp5 Varnish Backends on cp5 is OK: All 3 backends are healthy
[11:01:06] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 14 seconds ago with 0 failures
[11:46:12] sau226 hi, yes. I think that's what John is doing.
[12:39:10] Is it just me or are all the stewards pretty much in the same time zone?
[12:42:10] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[12:42:42] PROBLEM - cp5 Puppet on cp5 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[12:43:02] PROBLEM - bacula1 Puppet on bacula1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[12:43:38] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[12:45:24] PROBLEM - bacula1 Bacula Static Lizardfs2 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[12:45:46] sau226 John is in the same time zone as me
[12:46:05] +1 in the summer and +0 in the winter
[12:46:18] PROBLEM - bacula1 Bacula Lizardfs2 Lizardfs Chunkserver2 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[12:46:22] PROBLEM - bacula1 Bacula Private Git on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[12:46:52] PROBLEM - lizardfs2 Puppet on lizardfs2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[12:47:14] PROBLEM - misc2 Puppet on misc2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[12:47:20] PROBLEM - puppet1 Puppet on puppet1 is CRITICAL: CRITICAL: Puppet has 3 failures. Last run 2 minutes ago with 3 failures. Failed resources (up to 3 shown): Exec[git_pull_puppet],Exec[git_pull_services],Exec[git_pull_ssl]
[12:47:22] RECOVERY - bacula1 Bacula Static Lizardfs2 on bacula1 is OK: OK: Full, 781846 files, 79.29GB, 2018-08-05 13:07:00 (6.0 days ago)
[12:48:17] RECOVERY - bacula1 Bacula Private Git on bacula1 is OK: OK: Full, 1458 files, 1.568MB, 2018-08-05 13:19:00 (6.0 days ago)
[12:48:19] RECOVERY - bacula1 Bacula Lizardfs2 Lizardfs Chunkserver2 on bacula1 is OK: OK: Full, 5 files, 123.4KB, 2018-08-05 03:33:00 (6.4 days ago)
[12:50:53] RECOVERY - lizardfs2 Puppet on lizardfs2 is OK: OK: Puppet is currently enabled, last run 7 seconds ago with 0 failures
[12:51:15] RECOVERY - misc2 Puppet on misc2 is OK: OK: Puppet is currently enabled, last run 33 seconds ago with 0 failures
[12:51:19] RECOVERY - puppet1 Puppet on puppet1 is OK: OK: Puppet is currently enabled, last run 33 seconds ago with 0 failures
[12:51:37] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 44 seconds ago with 0 failures
[12:54:11] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[12:55:07] PROBLEM - docs.websmart.media - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:55:27] PROBLEM - taotac.info - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:55:29] PROBLEM - www.splat-teams.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:56:35] h
[12:57:05] RECOVERY - docs.websmart.media - LetsEncrypt on sslhost is OK: OK - Certificate 'docs.websmart.media' will expire on Tue 02 Oct 2018 09:01:46 AM GMT +0000.
[12:57:11] PROBLEM - cp5 Varnish Backends on cp5 is CRITICAL: 1 backends are down. mw3
[12:57:29] RECOVERY - www.splat-teams.com - LetsEncrypt on sslhost is OK: OK - Certificate 'www.splat-teams.com' will expire on Tue 25 Sep 2018 11:58:12 AM GMT +0000.
[12:57:31] RECOVERY - taotac.info - LetsEncrypt on sslhost is OK: OK - Certificate 'taotac.info' will expire on Sat 20 Oct 2018 11:11:57 AM GMT +0000.
[12:57:45] PROBLEM - bacula1 Bacula Static Lizardfs2 on bacula1 is CRITICAL: CRITICAL: Timeout or unknown client: lizardfs2-fd
[12:58:09] PROBLEM - bacula1 Bacula Private Git on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[12:58:49] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3
[12:59:39] PROBLEM - bacula1 Bacula Lizardfs2 Lizardfs Chunkserver2 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[13:06:54] PROBLEM - lizardfs2 Puppet on lizardfs2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[13:07:06] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[13:07:16] PROBLEM - misc2 Puppet on misc2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[13:07:20] PROBLEM - puppet1 Puppet on puppet1 is CRITICAL: CRITICAL: Puppet has 3 failures. Last run 2 minutes ago with 3 failures. Failed resources (up to 3 shown): Exec[git_pull_puppet],Exec[git_pull_services],Exec[git_pull_ssl]
[13:07:40] RECOVERY - bacula1 Bacula Lizardfs2 Lizardfs Chunkserver2 on bacula1 is OK: OK: Full, 5 files, 123.4KB, 2018-08-05 03:33:00 (6.4 days ago)
[13:08:00] RECOVERY - bacula1 Bacula Static Lizardfs2 on bacula1 is OK: OK: Full, 781846 files, 79.29GB, 2018-08-05 13:07:00 (6.0 days ago)
[13:08:12] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[13:08:50] RECOVERY - bacula1 Bacula Private Git on bacula1 is OK: OK: Full, 1458 files, 1.568MB, 2018-08-05 13:19:00 (6.0 days ago)
[13:08:58] RECOVERY - cp5 Varnish Backends on cp5 is OK: All 3 backends are healthy
[13:09:38] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[13:10:54] RECOVERY - lizardfs2 Puppet on lizardfs2 is OK: OK: Puppet is currently enabled, last run 17 seconds ago with 0 failures
[13:11:02] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 32 seconds ago with 0 failures
[13:11:06] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 29 seconds ago with 0 failures
[13:11:14] RECOVERY - misc2 Puppet on misc2 is OK: OK: Puppet is currently enabled, last run 46 seconds ago with 0 failures
[13:11:20] RECOVERY - puppet1 Puppet on puppet1 is OK: OK: Puppet is currently enabled, last run 40 seconds ago with 0 failures
[13:11:38] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[13:12:12] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[13:12:42] RECOVERY - cp5 Puppet on cp5 is OK: OK: Puppet is currently enabled, last run 44 seconds ago with 0 failures
[13:31:09] PROBLEM - mw3 JobQueue on mw3 is CRITICAL: JOBQUEUE CRITICAL - job queue greater than 300 jobs. Current queue: 2943
[14:17:36] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+6/-0/±92] 13https://git.io/fN5RQ
[14:17:38] [02miraheze/puppet] 07paladox 03127ec5f - Update apt module to 5.0.1
[14:29:44] [02miraheze/mw-config] 07Reception123 pushed 031 commit to 03master [+0/-0/±2] 13https://git.io/fN504
[14:29:46] [02miraheze/mw-config] 07Reception123 03a2b7a39 - restrict new account creation to autoconfirmed on meta There's no reason for non-autoconfirmed users to be creating new accounts
[15:14:58] Hello
[15:15:16] hi
[15:16:06] a lot of work? :P
[15:16:13] heh
[15:16:25] !log upgrade phabricator on misc4
[15:16:29] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:22:25] !log ./bin/repository rebuild-identities --all on misc4
[15:22:29] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:39:46] [02miraheze/mediawiki] 07paladox pushed 031 commit to 03REL1_31 [+1/-0/±1] 13https://git.io/fN5zq
[15:39:48] [02miraheze/mediawiki] 07paladox 032b5f04a - Add PollNY mw extension per T2935
[15:41:53] !log /usr/local/bin/foreachwikiindblist /srv/mediawiki/dblist/all.dblist /srv/mediawiki/w/maintenance/sql.php /srv/mediawiki/w/extensions/PollNY/sql/poll.sql on mw1 in a screen
[15:41:57] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:43:09] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fN5zG
[15:43:11] [02miraheze/mw-config] 07paladox 03f8ae15f - Add PollNY sql to wgCreateWikiSQLfiles
[15:45:24] [02miraheze/mw-config] 07paladox pushed 031 commit to 03paladox-patch-1 [+0/-0/±1] 13https://git.io/fN5zn
[15:45:26] [02miraheze/mw-config] 07paladox 03ef76487 - Add PollNY to ManageWiki
[15:45:27] [02mw-config] 07paladox created branch 03paladox-patch-1 - 13https://git.io/vbvb3
[15:45:29] [02mw-config] 07paladox opened pull request 03#2358: Add PollNY to ManageWiki - 13https://git.io/fN5zc
[15:46:10] [02miraheze/mw-config] 07paladox pushed 031 commit to 03paladox-patch-1 [+0/-0/±1] 13https://git.io/fN5zW
[15:46:12] [02miraheze/mw-config] 07paladox 03cf12409 - Update LocalSettings.php
[15:46:13] [02mw-config] 07paladox synchronize pull request 03#2358: Add PollNY to ManageWiki - 13https://git.io/fN5zc
[15:46:23] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core]
[15:46:30] [02miraheze/mw-config] 07paladox pushed 031 commit to 03paladox-patch-1 [+0/-0/±1] 13https://git.io/fN5zl
[15:46:32] [02miraheze/mw-config] 07paladox 03478c59e - Update extension-list
[15:46:33] [02mw-config] 07paladox synchronize pull request 03#2358: Add PollNY to ManageWiki - 13https://git.io/fN5zc
[15:46:34] miraheze/mw-config/paladox-patch-1/ef76487 - paladox The build was fixed. https://travis-ci.org/miraheze/mw-config/builds/414895076
[15:47:04] [02miraheze/mw-config] 07paladox pushed 031 commit to 03paladox-patch-1 [+0/-0/±1] 13https://git.io/fN5z8
[15:47:05] miraheze/mw-config/paladox-patch-1/cf12409 - paladox The build was fixed. https://travis-ci.org/miraheze/mw-config/builds/414895165
[15:47:06] [02miraheze/mw-config] 07paladox 03def358a - Update LocalExtensions.php
[15:47:07] [02mw-config] 07paladox synchronize pull request 03#2358: Add PollNY to ManageWiki - 13https://git.io/fN5zc
[15:49:07] PROBLEM - db4 Disk Space on db4 is WARNING: DISK WARNING - free space: / 76650 MB (20% inode=94%);
[15:50:21] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 43 seconds ago with 0 failures
[15:53:36] !log PURGE BINARY LOGS BEFORE '2018-08-11 16:53:00'; on db4
[15:53:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:55:08] RECOVERY - db4 Disk Space on db4 is OK: DISK OK - free space: / 93513 MB (25% inode=94%);
[15:55:44] [02mw-config] 07paladox closed pull request 03#2358: Add PollNY to ManageWiki - 13https://git.io/fN5zc
[15:55:46] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±4] 13https://git.io/fN5zy
[15:55:47] [02miraheze/mw-config] 07paladox 0326999e2 - Add PollNY to ManageWiki (#2358) * Add PollNY to ManageWiki * Update LocalSettings.php * Update extension-list * Update LocalExtensions.php
[15:55:49] [02mw-config] 07paladox deleted branch 03paladox-patch-1 - 13https://git.io/vbvb3
[15:55:50] [02miraheze/mw-config] 07paladox deleted branch 03paladox-patch-1
[15:55:51] paladox: are you purging them due to storage issues?
[15:57:19] SPF|Cloud yes it showed a warning
[15:57:26] "PROBLEM - db4 Disk Space on db4 is WARNING: DISK WARNING - free space: / 76650 MB (20% inode=94%);
[15:57:26] "
[15:57:32] ../me looks
[15:57:57] SPF|Cloud seems the binlogs are not removing themselves
[15:58:10] we should get mysql to remove them after a few days
[15:58:25] well we shouldn't remove them actually
[15:58:35] (or at least keep a few days' worth of them)
[15:58:40] oh
[15:58:48] !log sudo -u www-data php /srv/mediawiki/w/maint*/rebuildLocalisationCache.php --wiki test1wiki on mw*
[15:58:49] and 20% seems extreme to me
[15:58:51] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:59:18] it's that mysql-slow.log issue. I'll fix it and document the steps right away
[15:59:44] SPF|Cloud mysql-slow.log?
[16:00:10] https://dev.mysql.com/doc/refman/8.0/en/slow-query-log.html
[16:00:11] Title: [ MySQL :: MySQL 8.0 Reference Manual :: 5.4.5 The Slow Query Log ] - dev.mysql.com
[16:01:02] oh
[16:10:11] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fN5gI
[16:10:12] [02miraheze/services] 07MirahezeSSLBot 0332c6ada - BOT: Updating services config for wikis
[16:14:21] https://meta.miraheze.org/wiki/Tech:MariaDB#Freeing_up_space here you are
[16:14:22] Title: [ Tech:MariaDB - Miraheze Meta ] - meta.miraheze.org
[16:14:44] SPF|Cloud thanks!
[16:26:26] !log sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=ucroniawiki --report=1 Wikipedia-20180729190338.xml on mw1
[16:26:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[16:28:12] !log sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=mexicopediawiki --report=1 Wikipedia-20180729190338.xml.1 on mw1
[16:28:16] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[16:35:50] paladox:
[16:36:09] paladox: import of TallerCentralWiki.xml?
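Picking up the 15:49-16:14 disk-space thread: db4 freed roughly 17 GB by purging binary logs, and the follow-up point stands that a retention window beats ad-hoc purges. A minimal sketch of both approaches, assuming a MariaDB of that era; the 7-day window and slow-log path are illustrative, not Miraheze's documented values:

    # One-off purge, as in the !log entry above (frees space immediately):
    sudo mysql -e "PURGE BINARY LOGS BEFORE '2018-08-11 16:53:00';"
    # Automatic retention instead of manual purges; mirror this in my.cnf so it
    # survives restarts:
    sudo mysql -e "SET GLOBAL expire_logs_days = 7;"
    # The mysql-slow.log mentioned at 15:59 also grows without bound; after
    # truncating it, tell the server to reopen the file (path assumed):
    sudo truncate -s 0 /var/lib/mysql/mysql-slow.log
    sudo mysql -e "FLUSH SLOW LOGS;"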
[16:36:19] paladox: cancel
[16:36:23] ok
[16:36:40] Wiki-1776 i was following what was on
[16:36:41] https://phabricator.miraheze.org/T3425
[16:36:42] Title: [ ⚓ T3425 Import XML on ucroniawiki ] - phabricator.miraheze.org
[16:36:44] the dump was:
[16:36:59] Wikipedia-20180729190338.xml
[16:37:00] Wiki-1776 ^^
[16:37:12] oh
[16:37:18] the dump you wanted was hidden
[16:37:27] Wiki-1776 could you update the descriptions please? :)
[16:37:31] with the correct dumps?
[16:37:37] paladox: correct file https://phabricator.miraheze.org/T3426#65632 TallerCentral-20180807182946.xml (20 MB)
[16:37:38] Title: [ ⚓ T3426 Import XML on mexicopediawiki ] - phabricator.miraheze.org
[16:38:10] ok
[16:39:08] Hello
[16:39:31] done paladox
[16:39:41] sorry
[16:40:22] thanks
[16:40:41] Wiki-1776 description was not updated on https://phabricator.miraheze.org/T3425
[16:40:42] Title: [ ⚓ T3425 Import XML on ucroniawiki ] - phabricator.miraheze.org
[16:41:19] thanks!
[16:41:36] ya
[16:42:03] !log sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=ucroniawiki --report=1 TallerCentral-20180807182946.xml on mw1
[16:42:07] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[16:42:10] :)
[16:42:37] !log sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki= mexicopediawiki --report=1 TallerCentral-20180807182946.xml on mw1
[16:42:41] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[16:44:25] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[ufw-allow-tcp-from-any-to-any-port-143]
[16:45:08] [02mw-config] 07Amanda-Catherine opened pull request 03#2359: Disable global blocks on Weather Wiki again - 13https://git.io/fN528
[16:45:25] [02mw-config] 07paladox closed pull request 03#2359: Disable global blocks on Weather Wiki again - 13https://git.io/fN528
[16:45:27] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fN52R
[16:45:28] [02miraheze/mw-config] 07Amanda-Catherine 0322e5b43 - Disable global blocks on Weather Wiki again (#2359) Way, way, way too many global blocks are being made, some of which even overlap each other
[16:45:50] I don't understand why we are obsessed with global blocks
[16:48:21] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[16:49:05] Hmm... account impersonation of a WMF developer https://meta.miraheze.org/wiki/Special:Contributions/Legoktm
[16:49:07] Title: [ User contributions for Legoktm - Miraheze Meta ] - meta.miraheze.org
[16:49:22] ...and an account impersonation of a Wikia staff member https://meta.miraheze.org/wiki/Special:Contributions/Kirkburn
[16:49:24] Title: [ User contributions for Kirkburn - Miraheze Meta ] - meta.miraheze.org
[16:49:31] What was the "default" size for the wiki logo?
[16:49:40] * AmandaCatherine is confused
[16:49:58] 135px × 135px?
[16:52:01] AmandaCatherine: global accounts for those don't exist. The passwords were sent via email with the email set to cvt@miraheze.org
[16:52:18] Hmm.. that's more suspicious
[16:52:19] So we are in control of them
[16:52:26] Was that done deliberately?
[16:52:38] (As in, doppelgängers to prevent real impersonations)
[16:52:43] Don't know, wasn't us.
[16:53:02] It was an LTA afaik
[16:53:22] Why would any troll/spammer/vandal send an account password to a staff email address?
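One note on the importDump.php runs above: after a sizeable XML import, MediaWiki's import documentation recommends rebuilding derived data so recent changes and site statistics reflect the imported pages. A sketch of the usual follow-up, reusing the paths and wiki name from the log (the log itself does not show these being run):

    sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildrecentchanges.php --wiki=ucroniawiki
    sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php --update --wiki=ucroniawiki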
[16:53:27] That's boomerang-ish
[17:07:09] I don't understand why we are obsessed with global blocks
[17:07:12] I do, it's to prevent the spam
[17:07:21] that we have been receiving over the past days
[17:07:30] Global blocks ensure that the spam will not persist on other wikis
[17:08:58] Yes, but I don't understand why we are blocking thousands of ranges that overlap/are redundant to each other, and/or are probably shared
[17:09:19] Why not just block the IP that spammed/lock the account that spammed, and handle on a case-by-case basis
[17:09:28] AmandaCatherine: that might be the unfortunate consequences of blocks, but it must be done to prevent further vandalism
[17:09:40] AmandaCatherine: That would be a waste of time, as IPs from the same range would reappear
[17:09:50] I don't understand why you are in support of less extreme measures again spam
[17:09:53] *against
[17:10:16] Because I don't feel that we need to take extreme measures
[17:10:26] We need to take adequate measures for each instance
[17:10:41] And I'm someone who does everything I can to avoid collateral damage
[17:10:58] AmandaCatherine: 1) You're not the one dealing with the vandalism globally 2) adequate measures are ensuring that these ranges are blocked
[17:11:11] Yes, collateral damage is avoided by having the "if in error contact cvt@" notice
[17:11:25] Hello Reception123 :)
[17:11:26] This way, anyone affected can easily contact CVT and it will be dealt with
[17:11:33] So far, we have not had any such reports
[17:11:34] Wiki-1776: hello
[17:11:41] You are entitled to your opinion, but I am entitled to mine
[17:11:51] I feel that recently things have been taken too far
[17:12:24] AmandaCatherine that is a lie because you blocked me because you disagreed with what i said.
[17:12:25] AmandaCatherine: Yes, but last I checked CVT has been elected by the community
[17:12:53] paladox: no, I blocked you for not assuming good faith
[17:12:58] Not just because of what you said
[17:12:58] AmandaCatherine ?
[17:13:00] All your proposals would do IMO is 1) make our job harder 2) keep the spam going
[17:13:05] i never edited your wiki
[17:13:06] AmandaCatherine: how did Paladox not assume good faith?
[17:13:09] AmandaCatherine thus you just lied
[17:13:28] Cross-wiki issues paladox is a valid reason for a block
[17:13:35] AmandaCatherine no it is not
[17:13:44] That is abuse of
[17:13:46] the blocking tool
[17:13:48] Reception123: there was a heated discussion a couple of days ago
[17:13:51] paladox: no it is not
[17:13:54] + I don't personally see how someone can be blocked on a wiki without editing it, AmandaCatherine you are the one not assuming good faith since you assume that Paladox would do something to your wiki and preemptively block him
[17:13:54] AmandaCatherine he is aware
[17:14:05] AmandaCatherine yes it is
[17:14:19] JohnLewis has said that wikis can block for whatever reasons they see fit
[17:14:24] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[ufw-allow-tcp-from-any-to-any-port-80]
[17:14:28] As long as it doesn't violate global policy
[17:14:33] AmandaCatherine: I did not read the discussion since I was not available. It is completely your right to block someone on your wiki, though I'm just personally curious as to what these "cross-wiki issues" were
[17:14:34] AmandaCatherine so does that mean i can block you?
[17:14:28] As long as it doesn’t violate global policy
[17:14:33] AmandaCatherine: I did not read the discussion since I was not available. It is completely your right to block someone on your wiki, though I'm just personally curious as to what these "cross-wiki issues" were
[17:14:34] AmandaCatherine so does that mean i can block you?
[17:14:44] no it does not
[17:14:48] because a: it's wrong
[17:14:52] paladox: you can block me on your wiki(s) if you see fit
[17:14:57] AmandaCatherine not on my wiki
[17:14:59] on meta
[17:15:22] On Meta we have policies, and unless Amanda explicitly defies them or common sense, a block there would not be appropriate
[17:15:25] If you want to, go ahead, but I’d have a feeling that others would disagree with that action
[17:15:43] AmandaCatherine: I'd still like to know the reason why you blocked Paladox, just for my curiosity
[17:16:27] AmandaCatherine that is why i would never do it
[17:16:30] Reception123: both paladox and Voidwalker were blocked for immediately locking any accounts whose usernames looked suspicious, without considering the fact that they may be acting in good faith
[17:16:30] because it's abuse
[17:16:40] Reception123: was there a recommended size for wiki logos?
[17:16:48] AmandaCatherine i never locked any accounts
[17:17:01] But you defended Voidwalker’s locking of the accounts
[17:17:05] yes
[17:17:09] i live in the UK
[17:17:14] freedom of speech applies
[17:17:21] AmandaCatherine: how does that affect you directly?
[17:17:22] i don't know if that exists in canada
[17:17:27] Reception123: the accounts with names of the Royal Family, for example, were not necessarily LTAs
[17:17:33] AmandaCatherine this is looking like you want spam?
[17:17:35] They could be, but they could also not be
[17:17:47] AmandaCatherine: they were obviously LTAs based on past behaviour + Voidwalker as a steward used CheckUser to confirm that
[17:17:47] Same goes with the Trump family names
[17:18:06] AmandaCatherine fyi they impersonated me by using half my email address
[17:18:11] AmandaCatherine: If they are, which is extremely unlikely based on CU evidence and past evidence, they can appeal
[17:18:31] AmandaCatherine: as I said before, I still don't quite understand your defense of these vandals, and frankly it's pretty suspicious in my view
[17:18:40] ^^
[17:18:46] As I said then, what if someone wanted to create a wiki about the Royal Family, and created a username with a Royal Family name...
[17:18:50] We don’t have a username policy
[17:19:01] AmandaCatherine but we have already told you
[17:19:02] he gives cookies and soda to Reception123 paladox AmandaCatherine
[17:19:12] Wiki-1776 thanks :)
[17:19:13] Thanks Wiki-1776
[17:19:13] AmandaCatherine: they could, but what are the chances that someone did, a day after vandalism with these names on Meta?
[17:19:15] AmandaCatherine they created other usernames.
[17:19:18] The timing is very suspicious
[17:19:31] and I will also point out that this spam only started occurring this month.
[17:19:35] AmandaCatherine if you choose not to listen then that's up to you.
[17:19:42] Agreed it’s suspicious, but it doesn’t mean that it’s necessarily the same
[17:20:00] Suspicion alone is not a valid reason to take an extreme action like globally locking IMHO
[17:20:03] You need hard evidence
[17:20:09] AmandaCatherine well you blocked me
[17:20:16] so......
[17:20:22] paladox: because I have hard evidence you were not assuming good faith
[17:20:27] AmandaCatherine: I'm saying it's suspicious that you are defending these spammers/LTAs
[17:20:30] really?
[17:20:32] where?
[17:20:53] I’m not defending them, I’m defending the assumption of good faith on behalf of anyone and everyone
[17:21:01] AmandaCatherine you are.
[17:21:16] paladox: by endorsing Voidwalker’s assumption of bad faith, you too are assuming bad faith
[17:21:25] AmandaCatherine: And you cannot blame Paladox for "supporting" Void, as 1) Paladox has no actual say in it, he's not a steward 2) that's like saying you're a spammer because you're defending the spammers
[17:21:48] AmandaCatherine: and again, there is hard evidence, Voidwalker ran a CheckUser which proved that the accounts were indeed LTAs
[17:22:00] Like I said, I’m not defending these particular spammers, I’m defending the assumption of good faith
[17:22:12] Just an IP address or a UA string is only part of the puzzle
[17:22:21] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 8 seconds ago with 0 failures
[17:22:41] ALWAYS assume good faith
[17:22:53] Regardless of what you *think* something may or may not be
[17:23:06] You cannot assume good faith when CheckUser points to an LTA
[17:23:18] AmandaCatherine this is not up for discussion on irc. You can open an RfC on meta.
[17:23:23] and (even though it's fully your right) I don't see what blocking Paladox on your wiki could possibly resolve
[17:23:26] global policies are not changed by one user
[17:23:28] Seeing as he did not even edit your wiki
[17:23:33] and they are not held to ransom either
[17:24:02] I don't think that your blocking of Void or Paladox on your wiki will influence their further decisions in any way, if that was the intent
[17:24:08] I’m not going to go through the drama of an RfC. That’s why I essentially just segregated my wiki away from this
[17:24:44] AmandaCatherine because you know no one else will endorse your proposal.
[17:25:15] Not necessarily - just the history of RfCs has been ugly
[17:25:19] Enough! If you cannot act like an adult and open an RfC then you need to stop bringing up this topic on IRC. i may not be an OP on this channel but i've had enough
[17:25:25] I personally think it is absurd to have less harsh measures for users whose goal is to vandalize Miraheze
[17:25:37] But you don’t know that until they do it
[17:25:56] AmandaCatherine: you don't always wait for people to act, preemptive blocks are a thing
[17:25:58] please stop second-guessing the actions of staff
[17:26:05] ^^
[17:26:31] I don’t agree with what was done, and I’m not budging on that
[17:26:37] The amount of drama in this channel has gone through the roof, it needs to stop
[17:26:39] You can’t change my opinion
[17:26:42] Again, staff's actions are legitimate because they were elected by the community. If you want to challenge that by proposing the revocation of a steward, feel free
[17:26:49] No one is stopping you
[17:26:56] I don’t think this warrants revocation of a steward
[17:26:58] but ranting about stewards' actions here will not do anything
[17:27:01] At least not yet
[17:27:19] I’m not ranting, I’m defending my opinions when they are being attacked
[17:27:24] well I don't see who else would vote for the revocation of any steward.. people who are doing their jobs to keep Miraheze safe from spammers
[17:27:54] AmandaCatherine: I'm just attacking 1) the attacks on Stewards' actions 2) the absurd blocks on your wiki 3) the defense of spammers who are vandalizing Meta
[17:28:25] There’s a difference between attacking a steward’s actions and critiquing them
[17:28:28] If you want something changed, open an RfC instead of ranting and raving on IRC, cause frankly i give a shit less about what is said on IRC.
[17:28:47] I don’t want anything changed globally. I want you to leave me alone
[17:28:52] AmandaCatherine: yes, but what will you achieve by "critiquing" their actions on IRC?
[17:29:00] And let me have my opinions
[17:29:06] Do you think Stewards will just stop blocking spammers and let the spam run loose on Meta
[17:29:08] AmandaCatherine you did not let us
[17:29:09] Then perhaps you should drop it? Attacking people's actions isn't going to get you left alone.
[17:29:10] have ours
[17:29:18] you blocked us for ours
[17:29:24] so it is not an opinion
[17:29:31] AmandaCatherine: you are entitled to your opinions, but in this case you cannot possibly believe that we should not block spammers on Meta
[17:29:33] I did not block you for your opinion. I blocked you for not assuming good faith
[17:29:46] AmandaCatherine that is exactly it
[17:29:47] AmandaCatherine: so should we block you for not assuming good faith?
[17:29:53] AmandaCatherine: As I have asked above, what do you achieve with that block?
[17:29:53] you said because i endorsed void
[17:29:55] this isn't going anywhere, so I suggest all parties stop
[17:30:03] ^^
[17:30:05] Reception123: You should block spammers, but you should not block users who you suspect to be spammers but don’t know for sure
[17:30:05] * paladox stops
[17:30:13] For me, these blocks would be considered as a "threat" in order to get stewards to do as you say
[17:30:28] Yes, I agree with Voidwalker that this is pointless and that everyone should stop this useless discussion
[17:30:58] AmandaCatherine: Last point. These users created emails on cvt@, my email and paladox's, so that is HARD EVIDENCE that it is an LTA
[17:31:35] Okay, that should be in the lock summary
[17:31:37] Also you don't see the private discussions we have between CVT members
[17:34:20] and what happened to CommonsWiki?
[17:47:48] I just realized one thing
[17:48:04] Reception123: pm?
[17:48:20] (or u still there? :-p)
[17:48:26] On it
[17:48:32] oh k
[17:48:39] revi: feel free to send
[17:49:22] yeah was sending lol
[18:36:37] "Error 503 Backend fetch failed, forwarded for -, 127.0.0.1 (Varnish XID 233710439) via cp2 at Sat, 11 Aug 2018 18:35:46 GMT"
[18:37:45] AlvaroMolina: strange, I don't get it
[18:37:47] paladox: ^
[18:37:59] Reception123 i saw that
[18:38:01] 503 here too
[18:38:02] in icinga
[18:38:07] ah, not now
[18:38:09] just cleared up for me
[18:38:22] i wonder if that was caused by the high load on db4
[18:39:03] http://prntscr.com/khgt3o
[18:39:04] Title: [ Screenshot by Lightshot ] - prntscr.com
[18:39:13] Now solved.
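Editor's note: the quoted "Error 503 Backend fetch failed" is Varnish reporting that its backend (here suspected to be slow due to load on db4) did not return a usable response. Below is a sketch of one way such a 503 is typically investigated on the cache proxy, assuming Varnish 4+ and shell access to the cp host; it is not a record of what was actually done here.

```bash
# List backends and their health as Varnish sees them:
sudo varnishadm backend.list

# Watch live transactions that ended in a 503, grouped per client request,
# to see which backend request failed and why:
sudo varnishlog -q 'RespStatus == 503' -g request
```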
[19:48:08] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fN5rj
[19:48:10] [02miraheze/puppet] 07paladox 037d356c8 - Remove no existent script from conduct
[19:48:54] Why just conduct and not the rest??
[19:49:48] JohnLewis: good question, why didn’t I do that?
[19:50:57] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/fN5oJ
[19:50:59] [02miraheze/puppet] 07paladox 03b6076e6 - Update aliases
[19:51:12] JohnLewis: done, also that question wasn't sarcastic :D
[21:41:09] RECOVERY - mw3 JobQueue on mw3 is OK: JOBQUEUE OK - job queue below 300 jobs
[21:53:08] PROBLEM - mw3 JobQueue on mw3 is CRITICAL: JOBQUEUE CRITICAL - job queue greater than 300 jobs. Current queue: 1942
[22:18:16] Does anybody want to give me the tl;dr of this channel?
[22:18:49] PuppyKun: a what?
[22:19:27] Zppix: I would appreciate you not making comments that you have to fix the implications of by explicitly pointing out that you do not have op privileges in this channel. If you think it's that bad, please query an actual op.
[22:19:40] paladox: wondering why this channel is so heated.
[22:20:37] PuppyKun: I wonder the same. Seems like it was unnecessary escalation?
[22:22:52] JohnLewis: yes it was
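Editor's note: the mw3 JobQueue alerts above track the length of the MediaWiki job queue against an alert threshold of 300 jobs. A minimal sketch of how such a backlog is usually inspected and drained, assuming the same wiki-farm layout as the import commands earlier; the wiki ID is a placeholder, and manually running jobs like this is just one option besides letting the job runners catch up.

```bash
# Show the backlog broken down by job type:
sudo -u www-data php /srv/mediawiki/w/maintenance/showJobs.php --wiki=examplewiki --group

# Drain a bounded batch rather than the whole queue, to limit server load:
sudo -u www-data php /srv/mediawiki/w/maintenance/runJobs.php --wiki=examplewiki --maxjobs 500
```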