[00:04:17] PROBLEM - lizardfs1 Puppet on lizardfs1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[00:06:17] RECOVERY - lizardfs1 Puppet on lizardfs1 is OK: OK: Puppet is currently enabled, last run 33 seconds ago with 0 failures
[01:07:29] !log remove php-tideways from test1
[01:07:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[08:56:15] [miraheze/dns] Southparkfan pushed 1 commit to master [+0/-0/±1] https://git.io/fA2tx
[08:56:16] [miraheze/dns] Southparkfan 1b7c1c4 - add TXT record for globalsign renewal
[09:00:44] nice!
[09:00:44] RECOVERY - piwik.miraheze.org - GlobalSign on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Wed 23 Oct 2019 08:12:13 PM GMT +0000.
[09:00:53] RECOVERY - studynotekr.miraheze.org - GlobalSign on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Wed 23 Oct 2019 08:12:13 PM GMT +0000.
[09:02:04] [miraheze/ssl] Southparkfan pushed 1 commit to master [+0/-0/±1] https://git.io/fA2qU
[09:02:06] [miraheze/ssl] Southparkfan 1f04f39 - Renewal of *.miraheze.org certificate for 2018-2019
[09:04:45] PROBLEM - piwik.miraheze.org - GlobalSign on sslhost is WARNING: WARNING - Certificate '*.miraheze.org' expires in 14 day(s) (Sat 22 Sep 2018 08:12:13 PM GMT +0000).
[09:04:53] PROBLEM - studynotekr.miraheze.org - GlobalSign on sslhost is WARNING: WARNING - Certificate '*.miraheze.org' expires in 14 day(s) (Sat 22 Sep 2018 08:12:13 PM GMT +0000).
[09:09:07] RECOVERY - wc.miraheze.org on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Wed 23 Oct 2019 08:12:13 PM GMT +0000.
[09:10:13] RECOVERY - unmade.miraheze.org - GlobalSign on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Wed 23 Oct 2019 08:12:13 PM GMT +0000.
[09:10:25] RECOVERY - hellointernet.miraheze.org - GlobalSign on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Wed 23 Oct 2019 08:12:13 PM GMT +0000.
[09:10:45] RECOVERY - piwik.miraheze.org - GlobalSign on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Wed 23 Oct 2019 08:12:13 PM GMT +0000.
[09:10:51] RECOVERY - studynotekr.miraheze.org - GlobalSign on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Wed 23 Oct 2019 08:12:13 PM GMT +0000.
[10:18:51] !m SPF|Cloud
[10:18:51] <[d__d]> You're doing good work, SPF|Cloud!
[10:18:53] (for the cert :)
[10:36:31] [miraheze/dns] Reception123 pushed 1 commit to master [+1/-0/±0] https://git.io/fA2Y7
[10:36:33] [miraheze/dns] Reception123 68e515c - add kkutu.wiki zone
[11:23:09] [miraheze/ssl] Reception123 pushed 1 commit to master [+1/-0/±1] https://git.io/fA2Oj
[11:23:10] [miraheze/ssl] Reception123 72dde62 - add kkutu.wiki cert
[11:23:17] !log renaming wiki from cpprwiki to leyesprwiki
[11:23:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[11:26:07] !log drop databse cpprwiki;
[11:26:12] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[11:40:13] [miraheze/services] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://git.io/fA23n
[11:40:15] [miraheze/services] MirahezeSSLBot af702e8 - BOT: Updating services config for wikis
[12:05:11] [miraheze/ManageWiki] JohnFLewis pushed 1 commit to master [+0/-0/±1] https://git.io/fA2sL
[12:05:13] [miraheze/ManageWiki] JohnFLewis 692152c - if $res is null, fake array
[12:06:28] [miraheze/mediawiki] paladox pushed 1 commit to REL1_31 [+0/-0/±1] https://git.io/fA2st
[12:06:29] [miraheze/mediawiki] paladox 9e484da - Update MW
[12:26:33] !log PURGE BINARY LOGS BEFORE '2018-09-08 13:26:00'; on db4
[12:26:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[12:27:52] RECOVERY - db4 Disk Space on db4 is OK: DISK OK - free space: / 91540 MB (24% inode=94%);
[13:17:01] PROBLEM - mw3 JobQueue on mw3 is CRITICAL: JOBQUEUE CRITICAL - job queue greater than 300 jobs. Current queue: 6317
[13:46:21] Hello
[13:49:25] hi
[14:19:31] !log restarting mysql on db4 due to high cpu / load
[14:19:37] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[14:21:02] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/fA2cl
[14:21:03] [miraheze/puppet] paladox 277d76b - Add cdn.syndication.twimg.com to CSP
[14:22:05] PROBLEM - db4 MySQL on db4 is CRITICAL: Can't connect to MySQL server on '81.4.109.166' (111 "Connection refused")
[14:22:23] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 172.104.111.8/cpweb, 2400:8902::f03c:91ff:fe07:444e/cpweb
[14:22:33] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 172.104.111.8/cpweb, 2400:8902::f03c:91ff:fe07:444e/cpweb
[14:22:47] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[14:22:52] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/fA2cR
[14:22:53] [miraheze/puppet] paladox 0b9bd1f - Make twimg.com a regex in CSP
[14:22:57] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 85%
[14:23:05] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[14:23:07] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is CRITICAL: CRITICAL - NGINX Error Rate is 96%
[14:23:11] PROBLEM - cp5 Varnish Backends on cp5 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[14:23:21] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[14:23:27] PROBLEM - misc2 HTTPS on misc2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 2203 bytes in 0.065 second response time
[14:23:37] PROBLEM - misc4 phabricator.miraheze.org HTTPS on misc4 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 4127 bytes in 0.043 second response time
[14:24:19] PROBLEM - mw3 MediaWiki Rendering on mw3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 3774 bytes in 0.020 second response time
[14:24:39] PROBLEM - misc2 Puppet on misc2 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[14:24:45] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[14:24:57] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is WARNING: WARNING - NGINX Error Rate is 54%
[14:25:05] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[14:25:11] RECOVERY - cp5 Varnish Backends on cp5 is OK: All 5 backends are healthy
[14:25:25] RECOVERY - misc2 HTTPS on misc2 is OK: HTTP OK: HTTP/1.1 200 OK - 40153 bytes in 0.096 second response time
[14:25:31] PROBLEM - cp5 HTTP 4xx/5xx ERROR Rate on cp5 is CRITICAL: CRITICAL - NGINX Error Rate is 62%
[14:25:37] RECOVERY - misc4 phabricator.miraheze.org HTTPS on misc4 is OK: HTTP OK: HTTP/1.1 200 OK - 17264 bytes in 0.171 second response time
[14:25:42] What keeps happening..
[14:26:06] RECOVERY - db4 MySQL on db4 is OK: Uptime: 280 Threads: 68 Questions: 64562 Slow queries: 0 Opens: 2162 Flush tables: 1 Open tables: 400 Queries per second avg: 230.578
[14:26:19] Reception123 [15:19:31] <@paladox> !log restarting mysql on db4 due to high cpu / load
[14:26:20] RECOVERY - mw3 MediaWiki Rendering on mw3 is OK: HTTP OK: HTTP/1.1 200 OK - 30864 bytes in 0.030 second response time
[14:26:32] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[14:26:42] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[14:26:46] paladox: are you sure restarting MySQL is a start idea?
[14:26:48] *smart
[14:26:58] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 4%
[14:26:58] Reception123 seems cpu has gone down.
[14:27:08] RECOVERY - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is OK: OK - NGINX Error Rate is 19%
[14:27:20] So at least we are safe for now. But we need to optimise the permission management as soon as possible.
[14:27:32] RECOVERY - cp5 HTTP 4xx/5xx ERROR Rate on cp5 is OK: OK - NGINX Error Rate is 29%
[14:27:47] paladox: what's wrong with permission management?
[14:28:05] Actually cpu just shot back up at this moment.
[14:28:12] Reception123 it's not optimising the queries
[14:28:19] paladox: has this been a problem before a few hours ago?
[14:28:45] Reception123 it started last night
[14:29:04] ok
[14:29:13] We need to find out what is causing this high load
[14:29:46] I really do not like how https://grafana.miraheze.org/d/W9MIkA7iz/miraheze-cluster?orgId=1&var-job=node&var-node=db4.miraheze.org&var-port=9100 is looking
[14:29:47] Title: [ Grafana ] - grafana.miraheze.org
[14:30:00] Yikes it even reaches almost 6 at some point
[14:30:08] Yep
[14:30:21] Reception123 I believe it's linked to ManageWikiPermissions
[14:30:27] since now all wikis are hitting the db
[14:30:34] due to it not caching / optimising the queries
[14:30:39] paladox: oh
[14:30:40] RECOVERY - misc2 Puppet on misc2 is OK: OK: Puppet is currently enabled, last run 0 seconds ago with 0 failures
[14:30:55] paladox: but why wouldn't that have happened during the original release of ManageWiki settings and extensions?
[14:30:58] Why only permissions?
[14:31:34] Reception123 um
[14:31:37] good question
[14:32:02] paladox: and have many wikis been changing their settings?
[14:32:15] I think it wouldn't be a bad idea to have some global evidence log of all ManageWiki changes
[14:32:31] Reception123 that is another good question which I don't currently have an answer to :)
[14:33:18] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 33 seconds ago with 0 failures
[14:33:24] We should maybe get it to read from all.dblist which would reduce load on db4, but maybe we need to optimise db4?
[14:34:06] paladox: Yeah, possibly. Though I do not know much at all about DB optimization
[14:34:11] SPF|Cloud would be useful right now :)
[14:34:21] Yes SPF|Cloud would be useful here :)
[14:34:57] * SPF|Cloud is here
[14:35:04] SPF|Cloud: oh, great :)
[14:35:08] SPF|Cloud see https://phabricator.miraheze.org/T3570
[14:35:09] Title: [ ⚓ T3570 High cpu / load on db4 since 09/07/18 at 10:40pm bst ] - phabricator.miraheze.org
[14:35:14] SPF|Cloud: Not sure if you read above, but DB is having issues with load for a while
[14:35:26] we're not sure what to do about that, and where it's coming from
[14:35:33] paladox thinks it's ManageWikiPermissions
[14:36:27] What does SHOW FULL PROCESSLIST; say? (remember, sql queries can contain PII!)
[14:37:03] How many connections? Is there an increase in requests (look at varnish board in grafana)
[14:37:31] SPF|Cloud Reception123 https://phabricator.miraheze.org/P109
[14:37:32] Title: [ Login ] - phabricator.miraheze.org
[14:38:11] https://grafana.miraheze.org/d/6Ym79i4ik/varnish-traffic?refresh=30s&orgId=1&from=now-2d&to=now
[14:38:12] Title: [ Grafana ] - grafana.miraheze.org
[14:38:50] Look at that frontend requests graph
[14:39:14] Wow ATT is doing a lot
[14:39:37] paladox: What if it's not ManageWikiPermissions but MatomoAnalytics?
[14:39:51] P109 does not seem to contain strange content
[14:39:59] hmm, good question, but then the other question would be why now?
[14:40:05] why not when MatomoAnalytics was deployed?
[14:40:22] paladox: well ManageWikiPermissions was deployed a few days ago, so why would issues only appear now?
[14:40:59] because it was deployed to all wikis last night
[14:41:07] Did this issue start when the frontend graph spiked?
[14:41:39] but strange thing is the high cpu started before the deploy
[14:41:40] SPF|Cloud nope
[14:41:50] Wait, ManageWiki sends out a request on each page hit?
[14:41:58] started around 21:50
[14:42:01] No caching whatsoever?
[14:42:04] SPF|Cloud apparently i think.
[14:42:08] (though not sure)
[14:42:12] What I don't get is why this wasn't a problem before?
[14:42:22] ManageWiki extensions and settings would've had the same behavior, no?
[14:43:14] depends on how it is written i guess
[14:43:16] I wonder if we could get a managewiki.log logging all changes?
[14:43:22] https://github.com/miraheze/ManageWiki/blob/master/includes/ManageWikiHooks.php#L11
[14:43:23] Title: [ ManageWiki/ManageWikiHooks.php at master · miraheze/ManageWiki · GitHub ] - github.com
[14:44:03] Well Permissions was inspired from CentralAuth that I know
[14:46:55] https://stackoverflow.com/questions/568564/how-can-i-view-live-mysql-queries#comment28263811_7470567
[14:46:56] Title: [ monitoring - How can I view live MySQL queries? - Stack Overflow ] - stackoverflow.com
[14:47:10] Turn general log on and use the tail command
[14:47:30] ^ paladox
[14:47:45] ok, SPF|Cloud isn't general log already on?
[14:47:50] also https://github.com/miraheze/puppet/blob/master/modules/mariadb/templates/config/mw.cnf.erb#L50 why is that 0?
[14:47:50] If you think a specific (kind of) query frequently pops up
[14:47:50] Title: [ puppet/mw.cnf.erb at master · miraheze/puppet · GitHub ] - github.com
[14:47:51] No
[14:47:57] Then that may be the cause
[14:48:05] !log SET GLOBAL general_log = 'ON'; on db4
[14:48:10] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[14:50:02] SPF|Cloud ok it's now writing to the log
[14:50:04] tail -f -n300 /srv/mariadb/db4.log
[14:50:10] Run the tail command on the general_log_file
[14:50:12] Great
[14:51:02] it's printing a lot
[14:51:05] Any kind of query popping up frequently?
[14:52:11] I doubt one MWPermissions query could bring this server on its knees but you never know..
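[Editor's note] The triage technique SPF|Cloud describes above (turn the general query log on, tail it, and watch for a query that frequently pops up) can be sketched as below. The log lines, table name `mw_permissions` aside, and column names are invented for illustration; the real steps were `SET GLOBAL general_log = 'ON';` and `tail -f -n300 /srv/mariadb/db4.log`.

```shell
# Hedged sketch: once the general log is on, a crude frequency count over its
# Query lines shows which statements dominate. The sample entries below are
# synthetic; real ones would be tailed from /srv/mariadb/db4.log. The
# perm_type/perm_dbname column names are assumptions, not the real schema.
cat > /tmp/db4.sample.log <<'EOF'
180908 14:50:02   101 Query  SELECT perm_type FROM mw_permissions WHERE perm_dbname = 'metawiki'
180908 14:50:02   102 Query  SELECT page_id FROM page WHERE page_title = 'Main_Page'
180908 14:50:03   103 Query  SELECT perm_type FROM mw_permissions WHERE perm_dbname = 'testwiki'
EOF
# Count queries per table: the table named most often is the first suspect.
grep -oE 'FROM [a-z_]+' /tmp/db4.sample.log | sort | uniq -c | sort -rn
```

On the synthetic sample this ranks `mw_permissions` first, which is the kind of signal the channel was looking for.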
[14:52:40] SPF|Cloud um nope, it's printing a lot from resourceloader
[14:52:42] to echo
[14:52:45] to another extension
[14:52:47] to User::
[14:53:11] That sounds normal tbh
[14:53:34] so SPF|Cloud looking at the tech:log https://meta.miraheze.org/wiki/Tech:Server_admin_log
[14:53:36] Title: [ Tech:Server admin log - Miraheze Meta ] - meta.miraheze.org
[14:53:57] at 9:36 john did "21:36 John: populating usergroup on all wikis in mw_permissions"
[14:54:54] Afaics this issue started not long after MWP deployment right?
[14:55:09] it started before the actual switching on all wikis
[14:55:26] but happened after john migrated settings to mwp in preparation
[14:55:29] for the switch over
[14:55:49] Timing is probably coincidence then
[14:56:16] Well, something did have to cause it
[14:57:55] yep
[14:58:35] more accurate time https://grafana.miraheze.org/d/W9MIkA7iz/miraheze-cluster?orgId=1&from=now-1d%2Fd&to=now-1d%2Fd&var-job=node&var-node=db4.miraheze.org&var-port=9100
[14:58:35] Title: [ Grafana ] - grafana.miraheze.org
[14:59:45] let me try setting a query cache
[15:00:09] !log SET GLOBAL query_cache_size = 1000000; on db4
[15:00:32] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[15:01:10] meh not helping
[15:01:35] Morning
[15:01:49] hi
[15:02:37] paladox: I've explained to you a while ago why we don't have q_c_s
[15:02:41] q_c*
[15:02:51] Ah ok (i forgot) :)
[15:02:55] !m JohnLewis (for ManageWikiPermissions)
[15:02:55] <[d__d]> You're doing good work, JohnLewis (for ManageWikiPermissions)!
[15:03:03] disabled again
[15:03:07] (query cache)
[15:03:25] paladox: ??
[15:03:35] AmandaCatherine we are having db troubles
[15:03:35] AmandaCatherine: He disabled the query cache..
[15:03:38] high cpu / load
[15:03:57] Oh, John answered my question
[15:04:02] Oh, did that disable MWP everywhere including the limited deployment wikis?
[15:04:17] AmandaCatherine: it didn't?
[15:04:35] is it possible to disable MWP?
[15:04:46] Ok, it sounded like paladox had disabled MWP
[15:04:48] SPF|Cloud: well yeah, there's a setting
[15:04:56] SPF|Cloud just reverting this commit:
[15:05:00] SPF|Cloud: why would we want to?
[15:05:13] https://github.com/miraheze/mw-config/commit/e234bb69298eb18ddfa43576dc667bc2d772bfc7
[15:05:14] Title: [ convert GroupPermissions to ManageWiki · miraheze/mw-config@e234bb6 · GitHub ] - github.com
[15:05:20] AmandaCatherine due to high cpu on db4
[15:05:34] because we are dealing with server issues and I cannot certify it's not from the MWP deployment
[15:05:42] Oh
[15:06:02] SPF|Cloud so i revert (just making sure)?
[15:06:10] Maybe deploying something on 2000+ wikis at once wasn't a good idea
[15:06:13] not yet, hold on
[15:06:17] ok
[15:06:26] SPF|Cloud: paladox what I'm not sure of, is if we disable, do default permissions and permissions set still apply?
[15:06:39] In theory, disabling should just disable the UI, not the actual settings, right?
[15:06:42] writing a quick fix to cache the results might be even faster
[15:06:45] Reception123 they go back to LS wgGroupPermissions
[15:07:09] paladox: oh, yeah but then the new permissions set by users would be gone
[15:07:20] Yep
[15:07:27] but only temp until the root cause is fixed
[15:07:30] ie caching etc
[15:07:54] misc1 is redis right?
[15:08:01] Which server(s) are having problems? (db4 just means "database 4" I think)
[15:08:03] or was it misc2? (need this info quick)
[15:08:35] nvm, seems misc2. gonna look up caching examples and see if I can implement that
[15:09:07] AmandaCatherine: The database server is db4
[15:09:19] It's called db4 because previously we had 3 other database servers which are now decommissioned
[15:09:22] (see Meta docs)
[15:10:08] Hola Voidwalker :)
[15:10:21] Hello* hehe
[15:11:28] hi
[15:12:00] hi
[15:21:55] paladox: what else is hosted on misc4 other than Phabricator?
[15:22:03] Reception123 lizardfs master
[15:22:08] paladox: ok
[15:22:12] paladox: docs should be updated then
[15:22:51] :)
[15:23:08] done
[15:23:15] though https://meta.miraheze.org/wiki/Tech:Misc4
[15:23:16] Title: [ Tech:Misc4 - Miraheze Meta ] - meta.miraheze.org
[15:23:21] already said lizardfs master
[15:28:47] [miraheze/ManageWiki] Southparkfan pushed 1 commit to master [+0/-0/±1] https://git.io/fA2lV
[15:28:49] [miraheze/ManageWiki] Southparkfan e25038b - Add quick caching for MWPermissions
[15:29:13] paladox ^ please update mediawiki repo quickly and test this change on test1 before deploying
[15:29:20] ok
[15:29:29] i will disable puppet on mw*
[15:29:57] I'm also rather clueless what else it could be to be honest
[15:30:51] [miraheze/mediawiki] paladox pushed 1 commit to REL1_31 [+0/-0/±1] https://git.io/fA2l1
[15:30:52] [miraheze/mediawiki] paladox 1340a0a - Update MW
[15:31:22] SPF|Cloud fatal exception /me looks
[15:31:23] https://test1.miraheze.org/wiki/
[15:31:25] Title: [ Internal error - {{SITENAME}} ] - test1.miraheze.org
[15:32:00] SPF|Cloud RuntimeException from line 4534 of /srv/mediawiki/w/includes/libs/rdbms/database/Database.php: Database serialization may cause problems, since the connection is not restored on wakeup.
[15:32:06] PROBLEM - mw1 Puppet on mw1 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 11 minutes ago with 0 failures
[15:32:08] #1 /srv/mediawiki/w/extensions/ManageWiki/includes/ManageWikiHooks.php(34): serialize(Wikimedia\Rdbms\ResultWrapper)
[15:33:00] PROBLEM - mw2 Puppet on mw2 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 12 minutes ago with 0 failures
[15:33:30] argh, revert your commit ..
[15:33:48] my*
[15:33:54] PROBLEM - mw3 Puppet on mw3 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 12 minutes ago with 0 failures
[15:33:58] ok
[15:34:28] [miraheze/mediawiki] paladox pushed 1 commit to REL1_31 [+0/-0/±1] https://git.io/fA2lb
[15:34:29] [miraheze/mediawiki] paladox a76ae65 - Revert "Update MW" This reverts commit 1340a0a6628f9cc100fb642c208dc86af715b750.
[15:35:44] storing DatabaseResult objects is not the way to go apparently
[15:36:29] don't have much time left to write a 'nice caching solution that works' so my suggestion is to revert MWP deployment, paladox
[15:36:38] SPF|Cloud why not do ok
[15:36:41] meh
[15:36:44] i meant ok
[15:36:51] or wait until John and/or I have time again if there is no rush for this
[15:36:55] yep
[15:37:19] [miraheze/ManageWiki] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/fA28v
[15:37:21] [miraheze/ManageWiki] paladox 1902d13 - Revert "Add quick caching for MWPermissions" This reverts commit e25038b807e62d612d987c382735acb050c2c5ec.
[15:37:44] unless wikis literally break apart John can invent something I guess.
[15:37:46] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±3] https://git.io/fA28f
[15:37:47] [miraheze/mw-config] paladox e08a701 - Revert "convert GroupPermissions to ManageWiki" This reverts commit e234bb69298eb18ddfa43576dc667bc2d772bfc7. Reverting this temp until john is around.
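[Editor's note] The rollbacks above are plain `git revert` by commit sha (e25038b… for the caching attempt, e234bb6… for the mw-config conversion). A minimal illustration of that workflow in a throwaway repo; the file name and commit content are invented stand-ins, only the commit subject is taken from the chat:

```shell
# Create a scratch repo, land a change, then revert it by sha. git revert
# adds a new commit that applies the change in reverse, leaving history intact.
repo=$(mktemp -d); cd "$repo"
git init -q
git -c user.email=ops@example.org -c user.name=ops commit -q --allow-empty -m "base"
echo "wgManageWikiPermissionsManagement = true" > ManageWiki.conf   # stand-in change
git add ManageWiki.conf
git -c user.email=ops@example.org -c user.name=ops commit -q -m "convert GroupPermissions to ManageWiki"
sha=$(git rev-parse HEAD)
git -c user.email=ops@example.org -c user.name=ops revert --no-edit "$sha" >/dev/null
git log --pretty=%s   # newest commit is Revert "convert GroupPermissions to ManageWiki"
```

Because the revert is itself a commit, it can later be "re-reverted" to bring the change back once the root cause is fixed, which is exactly what the channel discusses.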
[15:38:06] RECOVERY - mw1 Puppet on mw1 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[15:38:58] RECOVERY - mw2 Puppet on mw2 is OK: OK: Puppet is currently enabled, last run 36 seconds ago with 0 failures
[15:39:19] paladox: unless absolutely necessary MWP can stay enabled, fyi, at your discretion now
[15:39:42] SPF|Cloud ok, it has been disabled by my revert
[15:39:54] RECOVERY - mw3 Puppet on mw3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[15:39:59] john enabled it at the same time as moving wgGroupPermissions to ManageWikiPermissions setting
[15:40:04] I cannot estimate now if the situation will get worse in a few hours, so
[15:40:13] SPF|Cloud load has gone down
[15:40:16] since the revert
[15:40:18] and cpu too
[15:41:45] | 150735 | mediawiki | 185.52.1.75:55780 | metawiki | Sleep | 1969 |
[15:41:55] is someone running a script?
[15:42:06] though MWP is still enabled on meta
[15:42:12] as i enabled it a few days before
[15:44:02] load looks better indeed
[15:48:58] SPF|Cloud I'm thinking this https://github.com/miraheze/ManageWiki/blob/master/includes/ManageWikiHooks.php#L12 is the problem
[15:48:59] Title: [ ManageWiki/ManageWikiHooks.php at master · miraheze/ManageWiki · GitHub ] - github.com
[15:50:06] https://github.com/wikimedia/mediawiki/blob/8cea6e052c944bec3abe241463d6b86f9a706a3a/includes/Title.php#L3357 the solution is something like this btw
[15:50:06] Title: [ mediawiki/Title.php at 8cea6e052c944bec3abe241463d6b86f9a706a3a · wikimedia/mediawiki · GitHub ] - github.com
[15:56:27] !log SET GLOBAL general_log = 'OFF'; on db4
[15:56:37] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[16:04:32] https://meta.miraheze.org/wiki/Community_noticeboard#About_Copyrighted_Images_.28and_that_they_use_wgAllowExternalImages.29
[16:04:33] Title: [ Community noticeboard - Miraheze Meta ] - meta.miraheze.org
[16:05:43] A doubt I have and it should be resolved.
[16:06:16] [miraheze/ManageWiki] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/fA24H
[16:06:18] [miraheze/ManageWiki] paladox e3e552c - Update ManageWiki.php
[16:28:42] !log upgrade phabricator on misc4
[16:28:47] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log, Master
[16:31:06] paladox, have you ever seen something like https://testwiki.wiki/images/f/f8/Phab_bug.png
[16:31:38] Voidwalker that seems to be connected to mw
[16:31:41] i guess it's 500?
[16:31:44] ohhhh
[16:31:51] paladox, not for mh
[16:31:53] did you composer install in OAuth
[16:31:57] Voidwalker yep ^^
[16:33:18] hmm, I don't really touch composer
[16:33:50] Voidwalker needs a composer install
[16:34:20] https://github.com/wikimedia/mediawiki-extensions-OAuth/blob/master/composer.json#L8
[16:34:21] Title: [ mediawiki-extensions-OAuth/composer.json at master · wikimedia/mediawiki-extensions-OAuth · GitHub ] - github.com
[16:39:03] I was wondering what the cause of that was!
[16:39:08] Are we going to have a new MediaWiki update in December?
[16:39:37] well, there's your problem: "-bash: composer: command not found"
[16:39:43] Wiki-1776: we always try to use the latest version of MediaWiki so yes
[16:40:28] ok
[16:41:42] Voidwalker curl -sS https://getcomposer.org/installer | php && php composer.phar install
[16:42:56] Composer I believe should work from /var/www/html since I believe that's where I installed it
[16:43:32] void@server:/var/www/html$ composer
[16:43:32] -bash: composer: command not found
[16:44:01] I don't have access right now otherwise I would help
[16:44:40] I thought I installed it once but ok
[16:44:41] Voidwalker where the OAuth extension is installed do "curl -sS https://getcomposer.org/installer | php && php composer.phar install" :)
[16:45:56] MacFan4000, should I run the install in /var/www/html?
[16:46:15] Voidwalker well.....
[16:46:19] you could do this:
[16:46:35] curl -sS https://getcomposer.org/installer && mv composer.phar /usr/bin/
[16:46:43] I don't care where it gets installed as long as it works :)
[16:46:46] which would allow you to do composer install
[16:46:54] ^^
[16:47:44] actually
[16:47:50] curl -sS https://getcomposer.org/installer && mv composer.phar /usr/bin/composer
[16:48:00] otherwise it would be composer.phar install heh
[16:48:18] and then php /usr/bin/composer/composer.phar install ?
[16:48:38] nope then composer install
[16:48:53] oh :P
[16:48:55] /usr/bin/composer/composer.phar shoudl be /usr/bin/composer/composer
[16:48:58] *should
[16:49:01] uh
[16:49:07] wait mv /usr/bin/composer/composer.phar /usr/bin/composer
[16:49:21] there should be no composer directory :P
[16:49:28] it's a single file that's installed :)
[16:54:28] paladox, running curl -sS https://getcomposer.org/installer instead returned a bunch of cert files
[16:54:40] cert?
[16:54:55] Voidwalker what OS version are you running?
[16:55:54] sudo apt-get update && sudo apt-get install curl (needs to be done as root)
[16:57:17] Voidwalker also sudo apt-get install ca-certificates
[16:58:16] ca-certificates is already the newest version (20161130+nmu1+deb9u1).
[16:59:14] paladox, I'm going to try following the instructions listed on the downloads page
[16:59:20] ok
[16:59:46] paladox: it's Debian stretch
[16:59:50] ok
[17:00:02] The cert that the site uses is LE
[17:00:39] paladox, I've already got composer.phar
[17:00:44] ok
[17:01:45] Ah yeh, I downloaded it a while ago but never implemented it as a command
[17:01:58] and done
[17:02:19] ok
[17:02:28] Voidwalker so oauth works now? :)
[17:03:04] paladox, I just did composer update
[17:03:04] paladox: yes
[17:03:10] :)
[17:03:47] confirmed working
[17:05:01] :)
[17:05:36] !m paladox
[17:05:36] <[d__d]> You're doing good work, paladox!
[17:05:44] :)
[17:05:47] !m Voidwalker
[17:05:48] <[d__d]> You're doing good work, Voidwalker!
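[Editor's note] Two details tripped people up in the exchange above: the installer has to be piped into php (running curl on its own just prints the installer source to the terminal, which is presumably what was mistaken for "a bunch of cert files"), and `/usr/bin/composer` should end up as a single file, not a directory. The network steps are shown as comments; the rename-onto-PATH step is then demonstrated offline with a stand-in script, so the paths and the stand-in are assumptions for illustration:

```shell
# Network steps pieced together from the chat (need getcomposer.org):
#   curl -sS https://getcomposer.org/installer | php   # writes composer.phar
#   sudo mv composer.phar /usr/bin/composer            # single file on PATH
#   cd /path/to/extensions/OAuth && composer install   # extension path varies
#
# Offline demonstration of the mv-onto-PATH step using a stand-in "phar":
bindir=$(mktemp -d)
printf '#!/bin/sh\necho "stand-in composer ok"\n' > "$bindir/composer.phar"
chmod +x "$bindir/composer.phar"
mv "$bindir/composer.phar" "$bindir/composer"   # rename to drop the .phar suffix
PATH="$bindir:$PATH" composer                   # now callable as plain `composer`
```

The same rename is all the real install needs once composer.phar exists; php executes the phar via its shebang-equivalent stub, so no wrapper script is required.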
[17:06:03] I'll respond to the user
[17:06:33] They reported in IRC and have since left the channel
[17:07:06] and also email and Community portal
[17:11:39] Voidwalker: MacFan4000 Reception123 see MetaWiki
[17:11:50] JohnLewis: SPF|Cloud PuppyKun
[17:12:16] Weathereiki too
[17:12:45] thanks
[17:14:30] heh, we should re-enable the wiki request throttle
[17:15:00] https://meta.miraheze.org/wiki/Special:Log/farmer lol
[17:15:01] Title: [ Farmer log - Miraheze Meta ] - meta.miraheze.org
[17:21:07] @Void Whoops, accidentally commented on the same wiki request as you
[18:08:18] [mw-config] The-Voidwalker opened pull request #2440: allow stewards to globalblock - https://git.io/fA2ue
[18:08:46] [mw-config] paladox closed pull request #2440: allow stewards to globalblock - https://git.io/fA2ue
[18:08:48] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/fA2uf
[18:08:49] [miraheze/mw-config] The-Voidwalker ee433c3 - allow stewards to globalblock (#2440)
[18:08:54] ty paladox
[18:09:00] you're welcome :)
[18:09:12] should be deployed in 1 minute (as cron does it every 10 mins)
[18:19:13] miraheze/mw-config/master/ee433c3 - The-Voidwalker The build has errored. https://travis-ci.org/miraheze/mw-config/builds/426157490
[18:45:59] [miraheze/MatomoAnalytics] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/fA2z3
[18:46:01] [miraheze/MatomoAnalytics] paladox baadc3f - Update hidden text to use "matomo"
[18:46:34] [miraheze/mediawiki] paladox pushed 1 commit to REL1_31 [+0/-0/±2] https://git.io/fA2zG
[18:46:35] [miraheze/mediawiki] paladox b1b1c34 - Update MA
[18:56:39] PROBLEM - misc1 GDNSD Datacenters on misc1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 172.104.111.8/cpweb, 2400:8902::f03c:91ff:fe07:444e/cpweb
[18:56:51] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 6 datacenters are down: 107.191.126.23/cpweb, 2604:180:0:33b::2/cpweb, 81.4.109.133/cpweb, 2a00:d880:5:8ea::ebc7/cpweb, 172.104.111.8/cpweb, 2400:8902::f03c:91ff:fe07:444e/cpweb
[18:56:57] paladox ^ there you go
[18:57:03] ah ok
[18:57:04] thanks
[18:57:05] PROBLEM - cp2 Varnish Backends on cp2 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[18:57:31] PROBLEM - cp5 Varnish Backends on cp5 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[18:57:47] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 3 backends are down. mw1 mw2 mw3
[18:58:57] PROBLEM - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is CRITICAL: CRITICAL - NGINX Error Rate is 82%
[18:59:09] PROBLEM - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is CRITICAL: CRITICAL - NGINX Error Rate is 94%
[19:00:44] err it's not coming back up :/
[19:01:49] PROBLEM - cp5 HTTP 4xx/5xx ERROR Rate on cp5 is CRITICAL: CRITICAL - NGINX Error Rate is 67%
[19:02:25] fatal exception
[19:02:57] RECOVERY - cp4 HTTP 4xx/5xx ERROR Rate on cp4 is OK: OK - NGINX Error Rate is 30%
[19:03:20] found the issue
[19:03:26] [miraheze/mediawiki] paladox pushed 1 commit to REL1_31 [+0/-0/±1] https://git.io/fA2z5
[19:03:28] [miraheze/mediawiki] paladox 50c738e - Update MW
[19:03:32] it was a stupid mistake, i re-reverted the change i had reverted by SPF|Cloud
[19:04:42] Needs an incident report i guess but later (will discuss with john first)
[19:04:53] RECOVERY - misc1 GDNSD Datacenters on misc1 is OK: OK - all datacenters are online
[19:05:09] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[19:05:16] Voidwalker fixed
[19:05:58] thanks
[19:07:08] RECOVERY - cp2 HTTP 4xx/5xx ERROR Rate on cp2 is OK: OK - NGINX Error Rate is 3%
[19:07:50] PROBLEM - cp5 HTTP 4xx/5xx ERROR Rate on cp5 is WARNING: WARNING - NGINX Error Rate is 54%
[19:09:06] RECOVERY - cp2 Varnish Backends on cp2 is OK: All 5 backends are healthy
[19:09:28] RECOVERY - cp5 Varnish Backends on cp5 is OK: All 5 backends are healthy
[19:09:37] Error 503 Backend fetch failed, forwarded for 31.173.87.79, 127.0.0.1 (Varnish XID 6193636) via cp4 at Sat, 08 Sep 2018 19:07:53 GMT.
[19:09:48] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[19:10:15] @WerySkok already being resolved
[19:10:45] ok, waiting...
[19:11:50] RECOVERY - cp5 HTTP 4xx/5xx ERROR Rate on cp5 is OK: OK - NGINX Error Rate is 29%
[19:15:51] oh, already fixed
[19:19:46] pretty much, yeah
[19:41:48] PROBLEM - cp4 Varnish Backends on cp4 is CRITICAL: 1 backends are down. mw2
[19:47:48] RECOVERY - cp4 Varnish Backends on cp4 is OK: All 5 backends are healthy
[21:24:30] Voidwalker: as I've mentioned before, please don't revert vandalism on my wiki unless it is something serious (libel, other NPA violations, copyright infringement, etc)
[21:24:57] I'm not the kind of person who insists that any vandalism be removed immediately
[21:25:12] gotcha
[21:28:59] How would I go about creating an "extendedconfirmed" user group that users would be automatically added to by the software when certain conditions are met, but that is not an implicit group?
[21:29:09] (See Wikipedia for an example)
[21:32:34] Hmm... why are all of the $wgAddGroups and $wgRemoveGroups listed twice in Special:ManageWikiPermissions?
[21:32:39] paladox ? ^
[21:32:47] Twice?
[21:33:02] https://i.imgur.com/K8dhbg6.jpg
[21:33:40] Oh, not sure, could you file a task for john (who would know) please? :)
[21:33:51] Ok
[21:36:43] is there any way to embed iframes in miraheze articles? (not YouTube, I already got that working)
[21:37:19] .tell JohnLewis please investigate https://phabricator.miraheze.org/T3574 ASAP
[21:37:19] AmandaCatherine: I'll pass that on when JohnLewis is around.
[21:37:21] Title: [ ⚓ T3574 $wgAddGroups and $wgRemoveGroups listed twice in MWP ] - phabricator.miraheze.org
[23:20:27] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/fA2it
[23:20:28] [miraheze/puppet] paladox 92cb7d3 - Add astrobiologywiki and doomsdaydebunkedwiki too wiki dump every fortnight fixes T3575
[23:20:43] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/fA2iq
[23:20:44] [miraheze/puppet] paladox d9c5ca8 - Update xml_dump.yaml