[03:33:44] PROBLEM - wiki.exnihilolinux.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:33:45] PROBLEM - wiki.mxlinuxusers.de - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:33:49] PROBLEM - infectowiki.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:34:12] PROBLEM - or.gyaanipedia.co.in - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:34:26] PROBLEM - www.marinebiodiversitymatrix.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:34:26] PROBLEM - wiki.contraao.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:34:32] PROBLEM - unrecnations.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:34:37] PROBLEM - ml.gyaanipedia.co.in - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:34:38] PROBLEM - mai.gyaanipedia.co.in - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:34:54] PROBLEM - wiki.ldmsys.net - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:34:58] PROBLEM - nonlinearly.com - CloudFlare on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:35:13] PROBLEM - wiki.autocountsoft.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:35:19] PROBLEM - bebaskanpengetahuan.id - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:35:23] PROBLEM - publictestwiki.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:35:47] RECOVERY - infectowiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'infectowiki.com' will expire on Wed 13 Nov 2019 02:26:38 PM GMT +0000.
[03:35:47] RECOVERY - wiki.exnihilolinux.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.exnihilolinux.org' will expire on Wed 30 Oct 2019 08:28:25 PM GMT +0000.
[03:36:11] RECOVERY - or.gyaanipedia.co.in - LetsEncrypt on sslhost is OK: OK - Certificate 'en.gyaanipedia.co.in' will expire on Thu 31 Oct 2019 03:14:12 PM GMT +0000.
[03:36:17] PROBLEM - nonbinary.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:36:21] RECOVERY - wiki.contraao.com - LetsEncrypt on sslhost is OK: OK - Certificate 'contraao.com' will expire on Mon 11 Nov 2019 12:53:57 AM GMT +0000.
[03:36:22] RECOVERY - www.marinebiodiversitymatrix.org - LetsEncrypt on sslhost is OK: OK - Certificate 'marinebiodiversitymatrix.org' will expire on Fri 01 Nov 2019 07:40:20 PM GMT +0000.
[03:36:26] RECOVERY - unrecnations.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'unrecnations.wiki' will expire on Sat 21 Sep 2019 10:53:21 PM GMT +0000.
[03:36:31] RECOVERY - ml.gyaanipedia.co.in - LetsEncrypt on sslhost is OK: OK - Certificate 'en.gyaanipedia.co.in' will expire on Thu 31 Oct 2019 03:14:12 PM GMT +0000.
[03:36:37] RECOVERY - mai.gyaanipedia.co.in - LetsEncrypt on sslhost is OK: OK - Certificate 'en.gyaanipedia.co.in' will expire on Thu 31 Oct 2019 03:14:12 PM GMT +0000.
[03:36:50] RECOVERY - wiki.ldmsys.net - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.ldmsys.net' will expire on Mon 11 Nov 2019 10:52:25 AM GMT +0000.
[03:36:53] RECOVERY - nonlinearly.com - CloudFlare on sslhost is OK: OK - Certificate 'sni.cloudflaressl.com' will expire on Tue 14 Apr 2020 12:00:00 PM GMT +0000.
[03:37:08] RECOVERY - wiki.autocountsoft.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.autocountsoft.com' will expire on Sun 10 Nov 2019 11:13:25 AM GMT +0000.
[03:37:17] RECOVERY - bebaskanpengetahuan.id - LetsEncrypt on sslhost is OK: OK - Certificate 'bebaskanpengetahuan.id' will expire on Sat 16 Nov 2019 01:49:33 PM GMT +0000.
[03:37:21] RECOVERY - publictestwiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'publictestwiki.com' will expire on Tue 29 Oct 2019 06:45:33 PM GMT +0000.
[03:37:44] RECOVERY - wiki.mxlinuxusers.de - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.mxlinuxusers.de' will expire on Sun 13 Oct 2019 02:55:22 PM GMT +0000.
[03:38:11] RECOVERY - nonbinary.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'nonbinary.wiki' will expire on Mon 11 Nov 2019 11:04:33 AM GMT +0000.
[04:12:35] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.61, 1.71, 1.31
[04:16:31] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.57, 1.77, 1.43
[04:18:29] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.72, 2.07, 1.58
[04:20:27] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.18, 1.83, 1.56
[04:24:23] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.43, 2.23, 1.76
[04:26:21] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.44, 1.98, 1.73
[04:28:19] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 1.04, 1.63, 1.62
[04:59:41] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 1.07, 2.13, 1.82
[05:01:40] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 0.87, 1.78, 1.72
[05:02:38] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 2 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb
[05:03:38] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 0.70, 1.50, 1.63
[05:04:38] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[06:25:05] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Puppet has 2 failures. Last run 2 minutes ago with 2 failures. Failed resources (up to 3 shown): Package[php7.2-apcu],Package[php7.2-redis]
[06:25:55] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 2996 MB (12% inode=94%);
[06:33:04] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 36 seconds ago with 0 failures
[07:39:08] PROBLEM - bacula1 Bacula Static on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[07:39:09] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[07:41:13] PROBLEM - bacula1 Bacula Databases db5 on bacula1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[07:41:19] PROBLEM - bacula1 Bacula Static on bacula1 is WARNING: WARNING: Full, 4580627 files, 388.1GB, 2019-08-10 08:02:00 (2.6 weeks ago)
[07:41:20] PROBLEM - bacula1 Bacula Phabricator Static on bacula1 is WARNING: WARNING: Full, 80402 files, 2.603GB, 2019-08-10 12:53:00 (2.5 weeks ago)
[07:43:11] PROBLEM - bacula1 Bacula Databases db5 on bacula1 is WARNING: WARNING: Full, 422 files, 19.13GB, 2019-08-10 13:18:00 (2.5 weeks ago)
[07:59:46] PROBLEM - cp3 Varnish Backends on cp3 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[07:59:57] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 1 backends are down. mw3
[07:59:59] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb
[08:01:18] Reception123: ^
[08:01:23] What's up?
[08:01:46] RECOVERY - cp3 Varnish Backends on cp3 is OK: All 5 backends are healthy
[08:01:57] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[08:01:59] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[08:02:03] Back
[08:26:17] RhinosF1: really don't know, it's been happening for a while
[08:26:22] We'll have to ask paladox
[08:26:30] Ok, thx
[08:26:43] Reception123: don't forget disk space on DBs
[08:26:54] RhinosF1: true, thanks :)
[08:26:59] :)
[08:27:38] !log purge binary logs before '2019-08-27 22:00:00';
[08:27:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[08:27:49] whoops, should've mentioned db4 now that there's two
[08:28:06] Reception123: !log on db4
[08:28:23] Done on wiki
[08:28:26] yup :)
[08:29:29] Reception123: I'm just sighing at a major financial institution running software that's frankly a ticking time bomb and moaning it's slow
[08:29:50] heh, which one?
[08:30:11] Reception123: a local building society to me
[08:30:15] ah
[08:51:54] Reception123: I'm not sure either ;)
[08:51:55] Needs investigation
[08:53:08] yeah
[08:55:21] [miraheze/mw-config] Reception123 pushed 1 commit to master [+0/-0/±1] https://git.io/fjxJP
[08:55:22] [miraheze/mw-config] Reception123 2e5effe - REMOTE_ADDR -> HTTP_X_REAL_IP
[09:04:23] paladox: ^ even with that it doesn't work...
[09:07:55] PROBLEM - test1 Puppet on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[09:08:21] Ok
[09:08:36] paladox: how else do you think we can do it if neither works?
[09:11:20] I'm not sure
[09:24:36] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 28 seconds ago with 0 failures
[09:25:18] paladox: yeah, actually that didn't work at all
[09:25:21] even test1 doesn't worknow
[09:25:22] *work now
[09:26:50] [miraheze/mw-config] Reception123 pushed 1 commit to master [+0/-0/±1] https://git.io/fjxUr
[09:26:51] [miraheze/mw-config] Reception123 2434692 - HTTP_X_REAL_IP -> REMOTE_ADDR not working (even on test1)
[09:30:45] Reception123: are you still looking at the cookie issue
[09:31:30] RhinosF1: yeah, and not getting any good results :(
[09:31:45] :( I can see
[09:32:21] varnish being annoying
[09:34:00] Ah
[09:36:57] now test1 works again but the others, nope
[09:40:07] It's varnish then - you just need to get it to work
[09:40:11] Right
[09:40:29] * RhinosF1 is not sure past that
[09:40:31] yeah, though I really know nothing about varnish
[09:40:40] need to wait for SPF, he's the Varnish expert
[09:40:56] SPF|Cloud: ^
[09:41:28] Reception123: I'm MediaWiki config, python and bots - some wiki bots for editing and IRC bots
[09:41:37] yeah
[09:41:39] Although my bots are basic
[09:42:30] You ask me about any of them and I should at least have a good idea
[09:51:58] yeah, there must be some way to disable that cookie warning for the service only
[09:52:10] since we have to have it for Europe users per the GDPR, so removing it completely would not be an option
[09:52:14] How though?
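For context, the fix being attempted in LocalWiki.php amounts to a conditional like the sketch below. This is a minimal illustration, not the actual Miraheze config: the service address is a placeholder, it assumes CookieWarning's $wgCookieWarningEnabled switch, and the fallback chain mirrors the REMOTE_ADDR -> HTTP_X_REAL_IP swap tried above.

    <?php
    // Minimal sketch (assumptions noted above). Behind a proxy chain such
    // as Varnish + nginx, the caller's address often arrives in the
    // HTTP_X_REAL_IP header rather than in REMOTE_ADDR, hence the fallback;
    // the null-coalescing operator also avoids PHP's "Undefined index"
    // notice when a key is missing.
    $requestIp = $_SERVER['HTTP_X_REAL_IP'] ?? $_SERVER['REMOTE_ADDR'] ?? '';

    // '10.0.0.5' is a placeholder for the ElectronPDF render host;
    // the real whitelist lives in Miraheze's own config.
    if ( $requestIp === '10.0.0.5' ) {
        $wgCookieWarningEnabled = false; // hide the banner for the renderer only
    }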
[09:52:18] Yeah
[09:52:36] * RhinosF1 thinks GDPR is good but has its annoyances
[09:52:41] No idea beyond this
[09:52:57] Yeah, I like what the GDPR does but this cookie thing is meh, no one reads it or anything, everyone always just clicks ok
[09:53:01] * RhinosF1 looks over at SPF|Cloud again
[09:53:08] Reception123: true
[09:53:24] I've read the policies
[09:53:28] (for once)
[11:26:27] RhinosF1: if phab tasks are assigned to me they will eventually be taken care of
[11:26:46] I don't have access to my laptop thus can't look from here
[11:28:10] Though I do want to say REMOTE_ADDR should work regardless of being behind varnish or not, so that's an interesting issue
[11:59:24] SPF|Cloud: okay, I'll add you to the task then
[11:59:39] SPF|Cloud: though neither REMOTE_ADDR nor the other option you gave me works..
[12:03:23] Reception123: what do they return?
[12:04:13] JohnLewis: hm? how would I find that out? All I've seen is that with both of them the notice is still on the PDFs while it isn't for test1
[12:04:55] var_dump :)
[12:08:08] oh yeah
[12:08:11] return=eval.php
[12:08:43] JohnLewis: well even without running it I get "Notice: Undefined index: REMOTE_ADDR in /srv/mediawiki/config/LocalWiki.php on line 76"
[12:08:55] tada
[12:09:14] JohnLewis: but then why does it work on test1 is my question, and 2) how do I define it :P?
[12:09:47] https://www.php.net/manual/en/reserved.variables.server.php
[12:09:48] [ PHP: $_SERVER - Manual ] - www.php.net
[12:12:40] so haven't I done that? and again why would it magically work on test1 then?
[12:12:58] I see an example on that page identical to what I did, so not sure what's wrong
[12:18:11] JohnLewis: ^
[12:19:42] sounds like a webserver thing, talk to SPF|Cloud :P
[12:20:37] ok then...
[12:43:15] Reception123:
[12:43:16] http://pingbin.com/2012/01/nginx-php-remote_addr-proxy-varnish-cache/
[12:43:17] [ Nginx PHP REMOTE_ADDR with Proxy (Varnish Cache) | PingBin ] - pingbin.com
[12:45:10] paladox: so would that have to be done in puppet? Since it mentions a module
[12:45:19] paladox: could you create a quick script that var_dumps the server variable?
[12:45:43] Can be on any mw* server in /srv/mediawiki
[12:46:12] SPF|Cloud: I'm in the middle of Warwick Castle :)
[12:46:49] This pingbin thing doesn't really apply to us btw because our configuration is different
[12:47:02] Well eh Reception123, do you have access?
[12:47:17] I have access but not knowledge :P
[12:47:32] var_dump( $_SERVER )
[12:47:34] boom
[12:47:55] Go to mw2, create /srv/mediawiki/server-iptest.php with "<?php var_dump( $_SERVER ); ?>"
Ok
[12:48:53] SPF|Cloud: done
[12:49:26] ["REMOTE_ADDR"]=> string(11) ""
[12:50:24] The IPs are correct, so the cause is either your solution or varnish is serving cached responses
[12:50:53] well I don't think it's the solution because test1 worked, so it's probably Varnish
[12:50:56] though what can we do about that?
[12:53:42] Follow the link I have and follow it :)
[12:54:08] Have
[12:54:11] Gave
[12:54:45] Reception123:
[12:54:46] HTTP_X_FORWARDED_FOR
[12:54:50] What does that give you?
[12:55:01] Give where?
[12:55:58] In the script you created
[12:57:07] Where do I include it though?
[12:58:41] What about telling varnish to pass all requests from the restbase UA to the backends?
[12:58:54] (as in bypassing the cache)
[12:59:11] paladox: ^
[12:59:25] Would that be easy to do?
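The cache-bypass idea floated here could also be tried on the PHP side by keying on the service's User-Agent rather than its address. A speculative sketch, again assuming CookieWarning's $wgCookieWarningEnabled switch; the 'ElectronPdfService' marker is a guess at what the renderer sends, so the real string would need checking in the nginx access logs first (as suggested below):

    $ua = $_SERVER['HTTP_USER_AGENT'] ?? '';

    // 'ElectronPdfService' is a hypothetical User-Agent fragment, not a
    // confirmed value; verify against the access logs before relying on it.
    if ( stripos( $ua, 'ElectronPdfService' ) !== false ) {
        $wgCookieWarningEnabled = false;
    }

Note that any such server-side check is still defeated while Varnish serves a cached copy of the page that already carries the banner, which is why the temporary fix below ends up in the Varnish layer via puppet.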
[13:02:19] I have no idea how you'll do that
[13:02:27] Well actually...
[13:02:34] We need mw* to have the op
[13:02:37] *ip
[13:02:47] Since that's where we are whitelisting things
[13:03:27] You can whitelist everything you want but if varnish serves cached responses it won't work :)
[13:04:20] so then couldn't we do your option of bypassing the cache for that UA?
[13:04:34] Try grepping for the misc1(?) IP in the nginx access logs on cp4
[13:04:47] Oh
[13:05:30] misc3 I think
[13:06:44] SPF|Cloud: see -staff
[13:54:38] [puppet] Reception123 created branch Reception123-patch-2 - https://git.io/vbiAS
[13:54:40] [miraheze/puppet] Reception123 pushed 1 commit to Reception123-patch-2 [+0/-0/±1] https://git.io/fjxLA
[13:54:41] [miraheze/puppet] Reception123 9fa6bd7 - temp fix electronpdfservice from displaying CookieWarning. Do not merge unless you are sure of this change.
[13:54:43] [puppet] Reception123 opened pull request #1074: temp fix electronpdfservice from displaying CookieWarning - https://git.io/fjxLp
[13:55:03] ^ SPF|Cloud this correct?
[13:58:21] Reception123: there's a block with that name already
[13:58:31] Please put the if inside that block
[13:58:35] ok
[14:00:01] [miraheze/puppet] Reception123 pushed 1 commit to Reception123-patch-2 [+0/-0/±1] https://git.io/fjxtf
[14:00:02] [miraheze/puppet] Reception123 fddbe46 - changes per SPF
[14:00:04] [puppet] Reception123 synchronize pull request #1074: temp fix electronpdfservice from displaying CookieWarning - https://git.io/fjxLp
[14:00:31] SPF|Cloud: done ^
[14:00:41] That's perfect, you can deploy
[14:01:26] SPF|Cloud: okay, thanks :)
[14:01:33] hope it works or else we'll really be out of solutions
[14:01:47] [puppet] Reception123 closed pull request #1074: temp fix electronpdfservice from displaying CookieWarning - https://git.io/fjxLp
[14:01:48] [miraheze/puppet] Reception123 pushed 3 commits to master [+0/-0/±3] https://git.io/fjxtT
[14:01:50] [miraheze/puppet] Reception123 ae6d1dd - Merge pull request #1074 from miraheze/Reception123-patch-2 temp fix electronpdfservice from displaying CookieWarning
[14:02:42] Reception123: let me know when it's fully deployed and I'll check
[14:02:49] RhinosF1: sure thing
[14:08:29] RhinosF1: try it in 4 mins just to be safe :)
[14:08:45] Reception123: :)
[14:12:46] No change yet
[14:12:55] then it probably didn't work :(
[14:14:32] SPF|Cloud: paladox seems to not work either :(
[14:17:55] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2648 MB (10% inode=94%);
[14:19:58] Reception123: yeah, still fine on test1 but test still shows cookie banner
[14:20:09] yeah :(
[14:29:01] !log added cwarswiki to discord webhooks in hiera
[14:29:06] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[14:32:22] PROBLEM - lizardfs4 Puppet on lizardfs4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:32:31] PROBLEM - misc3 Puppet on misc3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:32:32] PROBLEM - puppet1 Puppet on puppet1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:32:41] PROBLEM - misc4 Puppet on misc4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:32:46] PROBLEM - lizardfs2 Puppet on lizardfs2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:32:49] PROBLEM - db5 Puppet on db5 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:32:52] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:32:54] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:32:57] PROBLEM - mw1 Puppet on mw1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:33:02] PROBLEM - mw2 Puppet on mw2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:33:04] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:33:17] Yey puppet spam
[14:33:27] PROBLEM - test1 Puppet on test1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:33:32] PROBLEM - bacula1 Puppet on bacula1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:33:33] ^ fixed
[14:33:35] PROBLEM - cp2 Puppet on cp2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:33:37] RhinosF1: that's my fault
[14:33:40] PROBLEM - db4 Puppet on db4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:33:43] PROBLEM - lizardfs3 Puppet on lizardfs3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:33:44] PROBLEM - mw3 Puppet on mw3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:33:56] PROBLEM - cp4 Puppet on cp4 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:34:00] PROBLEM - lizardfs5 Puppet on lizardfs5 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:34:05] PROBLEM - misc2 Puppet on misc2 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:34:08] Reception123: :) giving us our daily dose!
[14:34:20] PROBLEM - lizardfs1 Puppet on lizardfs1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:44:32] RECOVERY - puppet1 Puppet on puppet1 is OK: OK: Puppet is currently enabled, last run 21 seconds ago with 0 failures
[14:52:22] RECOVERY - lizardfs1 Puppet on lizardfs1 is OK: OK: Puppet is currently enabled, last run 12 seconds ago with 0 failures
[14:52:22] RECOVERY - lizardfs4 Puppet on lizardfs4 is OK: OK: Puppet is currently enabled, last run 15 seconds ago with 0 failures
[14:52:31] RECOVERY - misc3 Puppet on misc3 is OK: OK: Puppet is currently enabled, last run 13 seconds ago with 0 failures
[14:52:41] RECOVERY - misc4 Puppet on misc4 is OK: OK: Puppet is currently enabled, last run 17 seconds ago with 0 failures
[14:52:46] RECOVERY - lizardfs2 Puppet on lizardfs2 is OK: OK: Puppet is currently enabled, last run 37 seconds ago with 0 failures
[14:52:49] RECOVERY - db5 Puppet on db5 is OK: OK: Puppet is currently enabled, last run 44 seconds ago with 0 failures
[14:52:52] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 47 seconds ago with 0 failures
[14:52:57] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 2 seconds ago with 0 failures
[14:53:02] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 14 seconds ago with 0 failures
[14:53:04] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 37 seconds ago with 0 failures
[14:53:27] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 9 seconds ago with 0 failures
[14:53:32] RECOVERY - bacula1 Puppet on bacula1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:53:35] RECOVERY - cp2 Puppet on cp2 is OK: OK: Puppet is currently enabled, last run 7 seconds ago with 0 failures
[14:53:40] RECOVERY - db4 Puppet on db4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:53:43] RECOVERY - lizardfs3 Puppet on lizardfs3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:53:44] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 47 seconds ago with 0 failures
[14:53:56] RECOVERY - cp4 Puppet on cp4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:53:59] RECOVERY - lizardfs5 Puppet on lizardfs5 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:54:05] RECOVERY - misc2 Puppet on misc2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:54:54] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 56 seconds ago with 0 failures
[17:03:33] paladox: SPF|Cloud have you seen the message from before?
[17:04:51] Yeh
[17:05:34] paladox: so how could even that not work?
[17:05:49] * paladox is not sure
[18:01:25] Reception123, paladox, JohnLewis: Error 503 Backend fetch failed, forwarded for 2a00:23c4:9e1e:e900:8cd7:2ad3:2e90:e442, 127.0.0.1
[18:01:25] (Varnish XID 38666349) via cp4 at Wed, 28 Aug 2019 18:00:51 GMT.
[18:01:56] See Icinga web for more information
[18:02:37] well I just have no idea why all these 503s are appearing once again
[18:03:38] Reception123: recovered but it's misc1 this time and all backends failed
[18:03:58] 6 datacentres reported down on Icinga
[18:04:18] We might want to make it report earlier and not go through the soft thing to track it better
[18:21:35] yeah
[18:21:43] though no idea why
[18:21:59] Reception123: hmm
[18:22:30] I wonder if it's the connection to either misc3 or misc2
[18:22:31] (Lizardfs master / redis)
[18:25:13] paladox: maybe because of your transfer?
[18:25:45] Maybe
[18:25:46] Dunno though
[18:50:13] Hello SA99! If you have any questions feel free to ask and someone should answer soon.
[18:53:51] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 9.61, 7.70, 5.89
[18:55:51] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 4.95, 6.53, 5.68
[19:04:46] PROBLEM - mw1 Current Load on mw1 is CRITICAL: CRITICAL - load average: 8.27, 6.92, 5.63
[19:06:45] RECOVERY - mw1 Current Load on mw1 is OK: OK - load average: 6.42, 6.79, 5.75
[19:50:14] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjx3j
[19:50:15] [miraheze/services] MirahezeSSLBot 1999854 - BOT: Updating services config for wikis
[19:56:46] PROBLEM - mw2 Current Load on mw2 is WARNING: WARNING - load average: 7.85, 7.11, 4.91
[19:57:43] with regards to https://meta.miraheze.org/w/index.php?title=Community_noticeboard#ManageWiki_Sidebar_Links , do folks think this feature would be better changed to opt-in, or opt-out?
[19:57:44] [ Community noticeboard - Miraheze Meta ] - meta.miraheze.org
[19:58:09] Voidwalker: opt in / opt out please
[19:58:20] Preferably at both wiki and user level
[19:58:30] I'm leaning opt-in myself
[19:58:41] if it's at a per-user level, then opt-in sounds best
[19:58:46] RECOVERY - mw2 Current Load on mw2 is OK: OK - load average: 3.13, 5.58, 4.62
[19:59:05] Voidwalker: either good to me, if opt-in can it be on global preferences
[20:00:05] !log added reviwikiwiki discord webhook to hiera
[20:00:48] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[20:15:02] RhinosF1: I've found why CookieWarning won't go away, ElectronPDF just doesn't want Miraheze to track its cookies so it isn't agreeing to our policy, and won't click ok :P
[20:15:18] today's bots are becoming too smart... soon they'll take over
[20:15:30] RhinosF1: I dread the day where Miraheze will be completely run by icinga-miraheze ...
[20:15:43] when*
[20:15:51] Reception123: do I need to get icinga-miraheze_ out again?
[20:16:10] it's coming soon, the bot is quietly observing and one day it'll op itself and take over IRC
[20:16:27] :)
[20:16:37] and we'll have to spam to get its attention
[20:16:45] Wow
[20:17:00] plotting its revenge for us making it work without pay for all these years
[20:17:15] Ah
[20:20:05] It used to like when there were no critical errors but now it resents us so much it can't wait to spam warnings and criticals all day
[20:20:29] And don't get me started on logbot... we even make it call us "master"
[20:23:51] PROBLEM - mw3 Current Load on mw3 is CRITICAL: CRITICAL - load average: 11.64, 7.38, 5.48
[20:24:36] See RhinosF1, I told you it's listening, here's the proof ^
[20:27:31] [ManageWiki] The-Voidwalker opened pull request #116: make sidebar for all users opt-in - https://git.io/fjxsz
[20:27:51] PROBLEM - mw3 Current Load on mw3 is WARNING: WARNING - load average: 4.04, 7.15, 5.98
[20:29:51] RECOVERY - mw3 Current Load on mw3 is OK: OK - load average: 3.71, 6.02, 5.71
[20:30:51] [ManageWiki] The-Voidwalker synchronize pull request #116: make sidebar for all users opt-in - https://git.io/fjxsz
[20:34:02] Reception123, ^
[20:36:54] Voidwalker: Travis seems unhappy
[20:37:25] sonarcloud doesn't seem to work on pull requests
[20:39:59] most pull requests to that repo since around Jun 22 have behaved similarly, despite the code working
[20:42:18] Reception123: ah
[20:42:31] ok, will merge then
[20:42:56] it kinda makes it a little harder to tell when there is actually an issue though :P
[20:43:03] yeah
[20:43:13] will have to see with paladox about that oe
[20:43:14] *one
[20:43:48] [ManageWiki] Reception123 closed pull request #116: make sidebar for all users opt-in - https://git.io/fjxsz
[20:43:49] [miraheze/ManageWiki] Reception123 pushed 1 commit to master [+0/-0/±3] https://git.io/fjxsH
[20:43:51] [miraheze/ManageWiki] The-Voidwalker 7112563 - make sidebar for all users opt-in (#116) * make sidebar for all users opt-in and also provide a preference option to force the display * fix i18n * fix
[20:45:47] Voidwalker: but does this keep the tab for admins? because if not it'll be hard for them to find /perms /namespaces etc.
[20:45:59] yup, it should
[20:46:18] ok :)
[20:46:59] [miraheze/mediawiki] Reception123 pushed 1 commit to REL1_33 [+0/-0/±1] https://git.io/fjxs7
[20:47:01] [miraheze/mediawiki] Reception123 bdafe7f - Update ManageWiki && CreateWiki
[20:48:03] next thing would probably be putting the per-wiki opt-in setting into ManageWikiSettings :)
[20:48:52] yup
[20:49:23] Voidwalker: will that work in global preferences as well?
[20:49:41] and ye we need the ability for a per-wiki default
[20:50:01] if the preference works at all, then it should work globally
[20:51:11] Reception123: [069923a29f0108a17c2e4241] 2019-08-28 20:51:00: Fatal exception of type "MWException"
[20:51:36] Reception123: and Rebuild LC
[20:51:43] what page?
[20:52:07] waiting for puppet to run, then will rebuild, and if it doesn't fix will revert for now
[20:52:13] Voidwalker: recovered but preferences
[20:52:13] also, that wasn't where I expected it to turn up
[20:52:30] Reception123: recovered, can you run LC? puppet has deployed
[20:52:35] !log sudo -u www-data php /srv/mediawiki/w/maint*/rebuildLocalisationCache.php --wiki loginwiki
[20:52:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[20:53:01] Voidwalker: worked
[20:53:20] worked for me too :)
[20:53:48] Voidwalker: thanks for being the dev of the day again ;)
[20:54:26] thx void
[20:54:31] strange, I'm still getting the forced display on an anon view
[20:55:09] Voidwalker: it works logged in
[20:55:18] it was off until I enabled it
[20:55:31] hmm, seems to be a caching issue for me
[20:55:59] Voidwalker: yeah fine for me
[20:56:53] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.72, 1.58, 1.24
[20:58:02] yup, aside from that hiccup, that's working
[20:58:39] Voidwalker: good
[20:58:52] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.47, 1.86, 1.38
[21:00:52] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.57, 1.74, 1.39
[21:01:55] PROBLEM - test1 Puppet on test1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds.
[21:02:52] PROBLEM - test1 Current Load on test1 is CRITICAL: CRITICAL - load average: 2.09, 1.86, 1.47
[21:03:20] Reception123: ^ is that u?
[21:03:23] icinga-miraheze: shut it
[21:04:52] PROBLEM - test1 Current Load on test1 is WARNING: WARNING - load average: 1.88, 1.94, 1.55
[21:05:44] [mw-config] The-Voidwalker opened pull request #2747: add wgManageWikiForceSidebarLinks to MWS - https://git.io/fjxGv
[21:05:58] RECOVERY - test1 Puppet on test1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[21:06:17] Voidwalker: lifesaver
[21:06:39] :)
[21:06:52] RECOVERY - test1 Current Load on test1 is OK: OK - load average: 0.80, 1.50, 1.43
[21:08:12] Reception123, ^ :)
[21:17:16] Reception123, paladox: https://phabricator.miraheze.org/T4671
[21:17:17] [ ⚓ T4671 CSR Request for aging wiki ] - phabricator.miraheze.org
[21:35:38] I'm mobile only
[21:35:52] paladox: how long for?
[21:37:36] A day
[21:37:45] paladox: k
[21:44:29] come on Not-f817
[21:45:10] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fjxGi
[21:45:11] [miraheze/services] MirahezeSSLBot 01a1d67 - BOT: Updating services config for wikis
[21:51:21] Voidwalker: https://phabricator.wikimedia.org/T212779
[21:51:22] [ ⚓ T212779 Implement Global CheckUser ] - phabricator.wikimedia.org
[21:52:11] Reception123, JohnLewis: [50210c2564fd639381ba5cf8] Caught exception of type Flow\Exception\NoParserException
[22:01:12] SPF|Cloud, JohnLewis, Reception123: wikis are slow and icinga reporting issues again
[23:40:50] RhinosF1, Reception123: See my answers to your questions on my request for permission
[23:45:16] JohnLewis: Have you seen my RfP?