[00:01:33] RECOVERY - cp3 Disk Space on cp3 is OK: DISK OK - free space: / 3821 MB (15% inode=93%); [01:07:14] !log set up 2G swap on db11, non-persistent [01:07:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:22:20] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 7.36, 5.94, 4.82 [01:24:20] RECOVERY - jobrunner1 Current Load on jobrunner1 is OK: OK - load average: 5.70, 5.99, 4.98 [02:04:39] [02miraheze/mw-config] 07Southparkfan pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJWi6 [02:04:41] [02miraheze/mw-config] 07Southparkfan 036594520 - Add global 5y anniversary notice, with opt out [02:06:19] miraheze/mw-config/master/6594520 - Ferran Tufan The build was broken. https://travis-ci.org/miraheze/mw-config/builds/710599913 [02:07:21] [02miraheze/mw-config] 07Southparkfan pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJWiX [02:07:22] [02miraheze/mw-config] 07Southparkfan 03318ac97 - Increase font size [02:09:01] miraheze/mw-config/master/318ac97 - Ferran Tufan The build is still failing. https://travis-ci.org/miraheze/mw-config/builds/710600569 [02:12:12] ^ can ignore build error, syntax is invalid in php 7.2 but it's valid in php 7.3 [02:12:18] night :) [02:17:43] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [02:17:44] PROBLEM - misc1 Current Load on misc1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [02:17:59] PROBLEM - misc1 NTP time on misc1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [02:18:15] PROBLEM - misc1 IMAP on misc1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [02:19:17] PROBLEM - misc1 SSH on misc1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [02:20:37] PROBLEM - misc1 Disk Space on misc1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [02:21:33] PROBLEM - misc1 SMTP on misc1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [02:21:37] PROBLEM - misc1 APT on misc1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [02:21:57] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 10 minutes ago with 0 failures [02:22:01] RECOVERY - misc1 NTP time on misc1 is OK: NTP OK: Offset -0.0006358027458 secs [02:22:25] RECOVERY - misc1 IMAP on misc1 is OK: IMAP OK - 0.036 second response time on 185.52.1.76 port 143 [* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS LOGINDISABLED] Dovecot ready.] [02:22:36] RECOVERY - misc1 Disk Space on misc1 is OK: DISK OK - free space: / 34634 MB (84% inode=99%); [02:23:25] RECOVERY - misc1 SSH on misc1 is OK: SSH OK - OpenSSH_7.4p1 Debian-10+deb9u7 (protocol 2.0) [02:23:30] RECOVERY - misc1 SMTP on misc1 is OK: SMTP OK - 0.068 sec. response time [02:23:34] RECOVERY - misc1 APT on misc1 is OK: APT OK: 26 packages available for upgrade (0 critical updates). [02:29:24] Hello happy5thbirthday! If you have any questions, feel free to ask and someone should answer soon. 
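The !log entry above only records that a temporary 2G swap was added on db11. As a rough illustration of what such a change usually involves (the file path and exact commands are assumptions, not taken from the log), something like the following creates swap that does not survive a reboot:

```bash
# Minimal sketch of a non-persistent 2G swap file (run as root; /swapfile is an assumed path).
fallocate -l 2G /swapfile   # reserve 2 GiB for the swap file
chmod 600 /swapfile         # swap files must not be world-readable
mkswap /swapfile            # write the swap signature
swapon /swapfile            # enable it immediately
# No /etc/fstab entry is added, so the swap disappears on reboot -- hence "non-persistent".
free -h                     # confirm the extra swap is visible
```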
[02:30:02] 🎉 [02:37:32] PROBLEM - misc1 Current Load on misc1 is WARNING: WARNING - load average: 0.01, 0.56, 1.98 [02:41:32] RECOVERY - misc1 Current Load on misc1 is OK: OK - load average: 0.05, 0.28, 1.54 [04:04:20] PROBLEM - jobrunner1 Current Load on jobrunner1 is CRITICAL: CRITICAL - load average: 8.03, 7.10, 5.49 [04:06:20] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 7.17, 7.10, 5.68 [04:08:20] RECOVERY - jobrunner1 Current Load on jobrunner1 is OK: OK - load average: 4.74, 6.33, 5.57 [04:14:20] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 5.72, 6.95, 6.16 [04:16:20] RECOVERY - jobrunner1 Current Load on jobrunner1 is OK: OK - load average: 6.80, 6.72, 6.16 [06:10:22] !log renamed animewiki to shihouwik [06:10:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [06:15:13] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJWHf [06:15:14] [02miraheze/services] 07MirahezeSSLBot 03ea401b4 - BOT: Updating services config for wikis [06:16:59] PROBLEM - dreamsit.com.br - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'dreamsit.com.br' expires in 15 day(s) (Fri 07 Aug 2020 06:08:05 GMT +0000). [06:17:38] [02miraheze/ssl] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJWHL [06:17:40] [02miraheze/ssl] 07MirahezeSSLBot 0322b33c8 - Bot: Update SSL cert for dreamsit.com.br [06:23:41] RECOVERY - dreamsit.com.br - LetsEncrypt on sslhost is OK: OK - Certificate 'dreamsit.com.br' will expire on Tue 20 Oct 2020 05:17:32 GMT +0000. [06:55:41] Happy Birthday Miraheze! [06:55:46] Morning Reception123 [06:56:12] Morning [06:56:23] Happy birthday Miraheze :D [06:56:27] 5 years... [06:57:02] Yep [07:00:15] PROBLEM - wiki.valentinaproject.org - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.valentinaproject.org' expires in 15 day(s) (Fri 07 Aug 2020 06:56:21 GMT +0000). [07:00:43] [ANNOUNCEMENT] Miraheze celebrates its 5th Birthday today. Everyone behind MirahezeBot would like to wish Miraheze a very happy birthday. Congrats and good luck for many more years! Join the celebration via https://meta.miraheze.org/wiki/Miraheze-5-year [07:00:54] Morning, RhinosF1 and Reception123. 🙂 [07:01:06] Hey [07:01:11] Hi [07:01:14] And now it's just past midnight on the west coast so it's time for my sleep. 😛 [07:01:36] Have a good night [07:01:46] thanks [07:01:50] Good night :) [07:03:48] [02miraheze/ssl] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJW7z [07:03:49] [02miraheze/ssl] 07MirahezeSSLBot 03f0a4648 - Bot: Update SSL cert for wiki.valentinaproject.org [07:11:59] .status mhmeta Happy Birthday Miraheze! [07:12:01] RhinosF1: Updating User:RhinosF1/Status to Happy Birthday Miraheze!! [07:12:09] RhinosF1: Updated! [07:13:55] RECOVERY - wiki.valentinaproject.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.valentinaproject.org' will expire on Tue 20 Oct 2020 06:03:41 GMT +0000. 
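The "LetsEncrypt on sslhost" warnings above fire when a certificate is close to expiry and clear once MirahezeSSLBot pushes a renewed cert. The actual Icinga plugin is not shown in the log; the sketch below is only a stand-in for the idea, with the domain and threshold copied from the alert text:

```bash
# Hypothetical expiry check, not Miraheze's real plugin.
HOST="dreamsit.com.br"   # domain taken from the alert above
WARN_DAYS=16             # the alert fired at 15 days remaining, so assume a threshold around here

expiry=$(echo | openssl s_client -connect "$HOST:443" -servername "$HOST" 2>/dev/null \
          | openssl x509 -noout -enddate | cut -d= -f2)
expiry_ts=$(date -d "$expiry" +%s)
days_left=$(( (expiry_ts - $(date +%s)) / 86400 ))

if [ "$days_left" -lt "$WARN_DAYS" ]; then
    echo "WARNING - Certificate '$HOST' expires in $days_left day(s)"
else
    echo "OK - Certificate '$HOST' will expire on $expiry"
fi
```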
[09:18:20] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 6.76, 6.86, 5.19 [09:20:20] RECOVERY - jobrunner1 Current Load on jobrunner1 is OK: OK - load average: 4.30, 5.90, 5.03 [11:32:21] PROBLEM - jobrunner1 Current Load on jobrunner1 is CRITICAL: CRITICAL - load average: 8.62, 7.26, 5.69 [11:34:22] RECOVERY - jobrunner1 Current Load on jobrunner1 is OK: OK - load average: 4.32, 6.10, 5.46 [13:27:03] PROBLEM - db7 Check MariaDB Replication on db7 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 131s [13:27:22] SPF|Cloud, paladox: ^ [13:28:58] it'll catch up. [13:29:34] 131s is hardly much diff [13:29:35] Zppix: 2020-07-21 - 20:26:37CDT tell Zppix ? [13:32:19] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 23.84, 19.66, 15.03 [13:35:02] Zppix: then it shouldn't be alerting [13:35:12] paladox: does it need to alert critical every day [13:35:40] I guess we should tweek it, going to leave it to SPF|Cloud as he's more an expert in the db side of things :) [13:36:16] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 33.25, 25.19, 18.16 [13:37:50] paladox: okay! [13:38:03] It's tweak though :) [13:38:21] PROBLEM - cp7 Current Load on cp7 is CRITICAL: CRITICAL - load average: 6.45, 8.49, 4.96 [13:40:14] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.67, 22.34, 18.65 [13:40:20] RECOVERY - cp7 Current Load on cp7 is OK: OK - load average: 3.17, 6.60, 4.70 [13:42:30] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 71.47, 39.03, 25.02 [13:44:16] PROBLEM - zw.fontainebleau-avon.fr - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210' [13:44:20] PROBLEM - permanentfuturelab.wiki - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' [13:44:27] PROBLEM - www.lab612.at - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:105a::10,51.222.27.129' [13:44:29] PROBLEM - wiki.villagecollaborative.net - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.222.27.129' [13:44:35] PROBLEM - tep.wiki - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' [13:44:37] PROBLEM - sims.miraheze.org - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.222.27.129' [13:44:44] paladox: ^ is another going off too fern [13:44:46] Often [13:44:47] PROBLEM - enc.for.uz - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' [13:47:45] i have no idea why it tried using cp9 [13:47:51] SPF|Cloud ^ [13:50:20] [02miraheze/services] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJlti [13:50:22] 
[02miraheze/services] 07MirahezeSSLBot 0315263ec - BOT: Updating services config for wikis [13:50:31] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.65, 23.09, 22.80 [13:50:58] RECOVERY - zw.fontainebleau-avon.fr - DNS on sslhost is OK: DNS OK: 0.034 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [13:51:03] RECOVERY - permanentfuturelab.wiki - DNS on sslhost is OK: DNS OK: 0.036 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [13:51:15] RECOVERY - www.lab612.at - DNS on sslhost is OK: DNS OK: 0.053 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [13:51:23] RECOVERY - sims.miraheze.org - DNS on sslhost is OK: DNS OK: 0.038 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [13:51:24] RECOVERY - wiki.villagecollaborative.net - DNS on sslhost is OK: DNS OK: 0.034 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [13:51:35] RECOVERY - tep.wiki - DNS on sslhost is OK: DNS OK: 0.034 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [13:51:38] RECOVERY - enc.for.uz - DNS on sslhost is OK: DNS OK: 0.037 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [13:52:44] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 55.50, 30.48, 25.26 [13:58:42] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 16.61, 21.89, 23.16 [14:00:41] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.62, 23.71, 23.69 [14:02:39] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 13.88, 20.69, 22.64 [14:06:36] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 31.23, 25.08, 24.04 [14:08:35] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 22.66, 23.82, 23.69 [14:10:33] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 31.07, 25.96, 24.41 [14:12:35] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 19.31, 23.50, 23.72 [14:16:33] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 32.94, 25.17, 24.11 [14:30:34] PROBLEM - cp7 Current Load on cp7 is CRITICAL: CRITICAL - load average: 8.96, 7.36, 5.01 [14:32:31] PROBLEM - cp7 Current Load on cp7 is WARNING: WARNING - load average: 6.57, 6.88, 5.10 [14:34:28] PROBLEM - cp7 Current Load on cp7 is CRITICAL: CRITICAL - load average: 38.38, 18.83, 9.65 [14:42:43] PROBLEM - cp7 Current Load on cp7 is WARNING: WARNING - load average: 3.47, 7.77, 7.94 [14:43:30] RECOVERY - db7 Check MariaDB Replication on db7 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s [14:48:32] RECOVERY - cp7 Current Load on cp7 is OK: OK - load average: 2.46, 4.27, 6.30 [14:49:12] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 9.45, 16.57, 23.95 [14:57:07] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 9.23, 13.15, 19.39 [15:42:33] Hello PichuVI! If you have any questions, feel free to ask and someone should answer soon. 
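The recurring "DNS on sslhost" alerts above are comparing the records a hostname actually resolves to against the full expected set of cache-proxy addresses; a partial answer triggers CRITICAL. As a rough sketch of that comparison (not the plugin Icinga actually runs here; the expected list is copied from the alert text):

```bash
# Illustrative expected-vs-got DNS comparison using dig.
DOMAIN="sims.miraheze.org"
EXPECTED="2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142"

# Resolve A and AAAA records and normalise both sides to a sorted comma-separated list.
GOT=$( { dig +short A "$DOMAIN"; dig +short AAAA "$DOMAIN"; } | sort | paste -sd, - )
WANT=$( echo "$EXPECTED" | tr ',' '\n' | sort | paste -sd, - )

if [ "$GOT" = "$WANT" ]; then
    echo "DNS OK: sslhost returns $GOT"
else
    echo "DNS CRITICAL - expected '$WANT' but got '$GOT'"
fi
```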
[16:25:32] PROBLEM - cp3 Disk Space on cp3 is WARNING: DISK WARNING - free space: / 2645 MB (10% inode=93%); [16:31:29] Wow, #irc-relay has been quiet for the past 9-10 hours. That's not a lot of conversation in that span. [16:43:53] hello [16:44:24] if I do a disambiguation page, that page counts as a real page in the counter. Is there a way that page counts like it was a redirection? [16:45:04] No it doesn't get counted as a page. [16:45:26] I think it might need turning on via Extension:Disambiguator though [16:48:53] maybe it's because I did it as a normal page, as I don't know how to make a real disambiguation page. Do I need to request that extension or can I activate it myself? [16:49:16] you can activate it in ManageWiki [16:55:49] ^ [16:57:30] gracias / thank you [16:58:10] I spent a minute trying to figure out where the "tabulator" (tabulador) was, and it was a tab (pestaña). I think that can be worded differently [16:58:30] in Spanish [16:59:06] Ah! [16:59:13] "Una vez que termines, presiona la pestaña llamada "enviar" para guardar tus cambios" instead "Una vez que has hecho, presiona el tabulador que se llama "presenta" para guardar tus cambios." [17:00:22] PROBLEM - wiki.grottocenter.org - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' [17:00:27] Can I propose that change officially somewhere? in phabricator? or do you have another service to translate strings? [17:01:09] Jakeukalane: which url do you see it on and I'll find where to fix the translation [17:01:16] Is it for Spanish? [17:02:03] yes [17:02:07] wiki/Especial:ManageWiki/extensions [17:02:31] I will reword the whole paragraph [17:02:43] https://usercontent.irccloud-cdn.com/file/wsGF5SmC/ManageWiki.png [17:02:48] RhinosF1: ^ [17:05:31] Jakeukalane: https://translatewiki.net/w/i.php?title=Special:Translate&showMessage=managewiki-header-extensions&group=mwgithub-managewiki&language=es&filter=&optional=1&action=translate [17:05:32] [ Translate - translatewiki.net ] - translatewiki.net [17:05:33] https://pastebin.com/raw/yzGmWhYC [17:05:55] thank you [17:06:11] Jakeukalane: you can sign up for translatewiki and edit it there [17:06:58] that's what I wanted, I am in several of those pages already :) [17:07:07] RECOVERY - wiki.grottocenter.org - DNS on sslhost is OK: DNS OK: 0.037 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [17:07:34] Good [17:08:35] Hello Jakeukalane_! If you have any questions, feel free to ask and someone should answer soon.
[18:09:29] just received the mail that I have permissions to edit, now [18:11:46] just send my translation [18:11:51] sent* [18:15:00] PROBLEM - jobrunner1 Current Load on jobrunner1 is CRITICAL: CRITICAL - load average: 19.60, 11.40, 6.97 [18:26:35] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 6.44, 7.81, 7.37 [18:30:26] RECOVERY - jobrunner1 Current Load on jobrunner1 is OK: OK - load average: 3.83, 5.94, 6.73 [19:30:58] [02mw-config] 07dmehus commented on pull request 03#3168: Update LocalSettings.php - 13https://git.io/JJluI [19:57:10] PROBLEM - jobrunner1 Current Load on jobrunner1 is CRITICAL: CRITICAL - load average: 8.15, 6.89, 5.55 [19:59:06] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 5.77, 7.42, 5.98 [20:01:09] RECOVERY - jobrunner1 Current Load on jobrunner1 is OK: OK - load average: 3.12, 5.83, 5.56 [20:09:53] this is a known visualization problem https://usercontent.irccloud-cdn.com/file/nMYzlBra/visualizationproblem.png [20:13:47] PROBLEM - jobrunner1 Current Load on jobrunner1 is CRITICAL: CRITICAL - load average: 12.23, 8.74, 6.85 [20:16:09] hispano76: if it's an svg, yeah [20:17:43] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 7.70, 7.87, 6.93 [20:19:39] PROBLEM - jobrunner1 Current Load on jobrunner1 is CRITICAL: CRITICAL - load average: 13.26, 9.28, 7.51 [20:22:07] not SVG [20:22:44] happens with jpg and png [20:23:58] the image is of https://commons.wikimedia.org/wiki/File:Andres_manuel_lopez_obrador_oct05.jpg RhinosF1 [20:23:59] [ File:Andres manuel lopez obrador oct05.jpg - Wikimedia Commons ] - commons.wikimedia.org [20:24:15] Hmm [20:24:17] paladox: ^ [20:24:48] (problem also occurs with images hosted on Miraheze Commons) [20:26:35] hmm? [20:27:47] paladox: same as the svg issue or? [20:28:08] what's the same as the svg issue? That's commons.wikimedia.org, we don't control that domain. [20:28:29] https://usercontent.irccloud-cdn.com/file/1HMvgXSg/Example%20on%20MHCommons.png [20:28:37] hmm [20:28:43] that's the same as the svg image [20:28:48] it's doing 10px [20:29:01] paladox: RhinosF1 ^ [20:29:15] I'll let paladox look into it [20:29:30] is this only occuring on photos sourced from wikimedia and miraheze commons? [20:29:46] [02mw-config] 07paladox closed pull request 03#3168: Update LocalSettings.php - 13https://git.io/JJskA [20:29:48] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJlgQ [20:29:49] [02miraheze/mw-config] 07dmehus 03f8b2af7 - Update LocalSettings.php (#3168) In the case of "wikitech," per discussion with @RhinosF1 on Discord following a declined wiki creation request. Though we may not need it imminently, it seems prudent to reserve this subdomain for potential Miraheze use as a public or private wiki in the future. Likewise, for "gazetteer," per the several declined wiki requests which [20:29:49] @Pix1234 and I have declined, this one seeks to replicate our Gazetteer of Wikis page on Meta without some sort of community discussion and clearly articulating how we could not improve upon the data available in/provided by the WikiDiscover special page. We may want to wildcard "gazetteer" to include "gazeteer" as a misspelling. [20:31:21] miraheze/mw-config/master/f8b2af7 - Doug Mehus The build is still failing. 
https://travis-ci.org/miraheze/mw-config/builds/710884565 [20:34:19] paladox: ^ [20:34:52] Sitenotice on ohp7.2 [20:35:38] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJl23 [20:35:40] [02miraheze/mw-config] 07paladox 03dddcc24 - Fix [20:35:48] php7.2 error, works on php7.3 [20:37:10] miraheze/mw-config/master/dddcc24 - paladox The build was fixed. https://travis-ci.org/miraheze/mw-config/builds/710886185 [20:37:12] An extension has broken images [20:37:28] i just have to figure out which extension.. [20:37:45] paladox: I know [20:37:55] you know? because i didn't [20:38:55] paladox: no, for php7.2 [20:38:59] oh [20:39:01] 21:35:49 php7.2 error, works on php7.3 [20:39:23] RhinosF1 in you're opinion which extension would you think would mostly affect images? [20:39:27] *your [20:39:36] paladox: is it affecting every wiki? [20:39:50] yes, any wiki that has it enabled [20:39:57] Since it works on Johns test delete wiki [20:40:05] That's not every wiki [20:40:11] What extension? [20:40:14] If it's working on some not others [20:40:20] Then work out the difference [20:40:29] And slowly turn things off [20:40:33] Until it works [20:45:56] yes i know, that's why i asked you if you had an extension in mind [20:46:59] hum, check skin citizen [20:48:08] is one of the most recent extensions/skins I've enabled [20:48:59] paladox: best to do is just look at the minimally reproducible set and work from there [20:49:10] And yeah look at changes around when it broke [20:49:28] @RhinosF1 and @Hispano76 Regarding the blurry images, I've heard some people on [[community noticeboard]] say they found a workaround whereby they create their SVG images in a desktop graphics program like Inkscape instead of ImageMagick and it seems to resolve any blurry image problems. I've personally never used an online graphics rendering / conversion program, other than, well, maybe Facebook's or Twitter's when cropping my profile [20:49:29] picture/avatar. [20:49:42] oh [20:49:46] it's citizen [20:49:48] RhinosF1 hispano76 ^ [20:50:07] paladox: ah! I'll talk to the developer then. [20:50:13] @paladox what's "citizen"? [20:50:18] a skin [20:50:23] ah [20:50:24] I use inkscape for image rendering [20:51:14] [02miraheze/mediawiki] 07paladox pushed 031 commit to 03REL1_34 [+0/-0/±1] 13https://git.io/JJl2H [20:51:16] [02miraheze/mediawiki] 07paladox 031571d75 - Update Citizen [20:52:04] RhinosF1 https://github.com/StarCitizenTools/mediawiki-skins-Citizen/blob/524ba6b9b9949bf45223aa3601dfb06af53c88a5/includes/CitizenHooks.php#L88 [20:52:04] [ mediawiki-skins-Citizen/CitizenHooks.php at 524ba6b9b9949bf45223aa3601dfb06af53c88a5 · StarCitizenTools/mediawiki-skins-Citizen · GitHub ] - github.com [20:55:06] paladox: that's your issue then! [20:56:59] :) [21:02:23] PROBLEM - mw5 Puppet on mw5 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core] [21:02:54] PROBLEM - mw4 Puppet on mw4 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core] [21:03:26] PROBLEM - jobrunner1 Puppet on jobrunner1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. 
Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core] [21:05:22] RECOVERY - jobrunner1 Puppet on jobrunner1 is OK: OK: Puppet is currently enabled, last run 49 seconds ago with 0 failures [21:05:45] PROBLEM - cyberneticeye.xyz - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:05:51] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 24.49, 24.31, 19.67 [21:06:04] paladox: can we Revert them dns alerts? [21:06:19] PROBLEM - mw6 Puppet on mw6 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core] [21:06:20] PROBLEM - mw7 Puppet on mw7 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core] [21:06:25] They're going off for nothing way too often [21:07:51] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 15.46, 21.11, 19.07 [21:08:14] RECOVERY - mw5 Puppet on mw5 is OK: OK: Puppet is currently enabled, last run 51 seconds ago with 0 failures [21:08:54] RECOVERY - mw4 Puppet on mw4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [21:09:51] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 14.50, 19.63, 18.82 [21:10:01] Hey Voidwalker [21:11:08] hi [21:11:43] [02miraheze/mediawiki] 07paladox pushed 031 commit to 03REL1_34 [+0/-0/±1] 13https://git.io/JJlaD [21:11:44] [02miraheze/mediawiki] 07paladox 03988d1f7 - Update Citizen [21:12:32] RECOVERY - cyberneticeye.xyz - DNS on sslhost is OK: DNS OK: 0.036 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:13:11] How's Voidwalker on our 5th Birthday [21:13:53] doin all right [21:14:22] Great! 
[21:14:50] !log rebuild lc on mw* and jobrunner1 [21:14:50] I gonna try and finish my big MirahezeBot update at some point this week [21:15:00] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [21:16:18] RECOVERY - mw6 Puppet on mw6 is OK: OK: Puppet is currently enabled, last run 54 seconds ago with 0 failures [21:16:20] RECOVERY - mw7 Puppet on mw7 is OK: OK: Puppet is currently enabled, last run 47 seconds ago with 0 failures [21:20:51] [02miraheze/mediawiki] 07paladox pushed 031 commit to 03REL1_34 [+0/-0/±1] 13https://git.io/JJlVk [21:20:52] [02miraheze/mediawiki] 07paladox 0315278ba - Update Tweeki [21:23:04] PROBLEM - mw5 Current Load on mw5 is WARNING: WARNING - load average: 6.91, 5.97, 4.42 [21:23:25] hispano76 should work now [21:23:45] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 37.90, 26.38, 21.43 [21:24:01] PROBLEM - gluster1 Current Load on gluster1 is CRITICAL: CRITICAL - load average: 8.69, 5.85, 4.08 [21:24:11] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 4 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb, 51.89.160.142/cpweb, 2607:5300:205:200::2ac4/cpweb [21:24:47] PROBLEM - mw4 Current Load on mw4 is CRITICAL: CRITICAL - load average: 8.25, 6.99, 5.04 [21:25:54] PROBLEM - innersphere.tk - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:25:57] PROBLEM - theatlas.pw - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:25:58] PROBLEM - wiki.ripto.gq - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:25:58] RECOVERY - gluster1 Current Load on gluster1 is OK: OK - load average: 6.59, 6.26, 4.46 [21:26:07] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [21:26:12] PROBLEM - vmcodex.net - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210' [21:26:14] PROBLEM - sahitya.shaunak.in - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:26:19] PROBLEM - orain.org - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210' [21:26:23] PROBLEM - wiki.grottocenter.org - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210' [21:26:23] PROBLEM - sims.miraheze.org - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210' [21:26:38] PROBLEM - dreamsit.com.br - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:26:40] PROBLEM - www.programming.red - DNS on 
sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210' [21:26:40] PROBLEM - wiki.campaign-labour.org - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210' [21:26:44] PROBLEM - storytime.jdstroy.cf - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:26:45] PROBLEM - vault.aics.cf - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:26:46] PROBLEM - spiral.wiki - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210' [21:26:46] PROBLEM - duepedia.uk.to - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210' [21:26:46] PROBLEM - mr.gyaanipedia.co.in - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210' [21:26:47] PROBLEM - secularknowledge.com - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:26:50] PROBLEM - test1.miraheze.org - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210' [21:26:53] PROBLEM - www.mh142.com - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210' [21:26:59] PROBLEM - disabled.life - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210' [21:27:00] PROBLEM - www.modesofdiscourse.com - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:27:02] PROBLEM - mw5 Current Load on mw5 is CRITICAL: CRITICAL - load average: 9.94, 7.75, 5.48 [21:27:06] PROBLEM - wiki.animalrebellion.cz - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210' [21:27:07] PROBLEM - bn.gyaanipedia.co.in - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:27:18] PROBLEM - guia.esporo.net - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:27:21] PROBLEM - adadevelopersacademy.wiki - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got 
'2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210' [21:27:21] PROBLEM - wiki.mobilityengineer.com - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:27:23] PROBLEM - wiki.openhatch.org - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:27:45] PROBLEM - wiki.bullshit.systems - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:28:04] flood XD [21:28:38] Yeah! works! thanks! :D [21:28:52] PROBLEM - mw4 Current Load on mw4 is WARNING: WARNING - load average: 6.62, 7.70, 5.85 [21:29:02] PROBLEM - mw5 Current Load on mw5 is WARNING: WARNING - load average: 6.36, 7.28, 5.60 [21:29:07] [02miraheze/dns] 07paladox pushed 031 commit to 03paladox-patch-1 [+0/-0/±1] 13https://git.io/JJlV8 [21:29:08] [02miraheze/dns] 07paladox 0388dc06b - Up timeout to 10 [21:29:10] [02dns] 07paladox created branch 03paladox-patch-1 - 13https://git.io/vbQXl [21:29:11] [02dns] 07paladox opened pull request 03#169: Up timeout to 10 - 13https://git.io/JJlV4 [21:29:25] paladox: I was just about to say deal with that [21:30:47] RECOVERY - mw4 Current Load on mw4 is OK: OK - load average: 4.82, 6.68, 5.69 [21:31:01] RECOVERY - mw5 Current Load on mw5 is OK: OK - load average: 5.31, 6.57, 5.53 [21:31:24] PROBLEM - meta.gyaanipedia.co.in - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,51.77.107.210,51.89.160.142' [21:31:28] PROBLEM - wiki.casadocarvalho.net - DNS on sslhost is CRITICAL: DNS CRITICAL - expected '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142' but got '2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.89.160.142' [21:31:39] Go away icinga-miraheze [21:31:46] There's nothing wrong [21:32:33] RECOVERY - innersphere.tk - DNS on sslhost is OK: DNS OK: 0.077 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:32:39] RECOVERY - theatlas.pw - DNS on sslhost is OK: DNS OK: 0.037 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:32:41] RECOVERY - wiki.ripto.gq - DNS on sslhost is OK: DNS OK: 0.036 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:02] RECOVERY - orain.org - DNS on sslhost is OK: DNS OK: 0.041 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:06] RECOVERY - vmcodex.net - DNS on sslhost is OK: DNS OK: 0.055 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:08] RECOVERY - sahitya.shaunak.in - DNS on sslhost is OK: DNS OK: 0.036 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:08] RECOVERY - wiki.grottocenter.org - DNS on sslhost is OK: DNS OK: 0.041 seconds response time. 
sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:09] RECOVERY - sims.miraheze.org - DNS on sslhost is OK: DNS OK: 0.040 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:18] RECOVERY - dreamsit.com.br - DNS on sslhost is OK: DNS OK: 0.052 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:23] RECOVERY - www.programming.red - DNS on sslhost is OK: DNS OK: 0.052 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:23] RECOVERY - wiki.campaign-labour.org - DNS on sslhost is OK: DNS OK: 0.078 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:28] RECOVERY - storytime.jdstroy.cf - DNS on sslhost is OK: DNS OK: 0.064 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:28] RECOVERY - vault.aics.cf - DNS on sslhost is OK: DNS OK: 0.061 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:36] RECOVERY - spiral.wiki - DNS on sslhost is OK: DNS OK: 0.038 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:37] RECOVERY - duepedia.uk.to - DNS on sslhost is OK: DNS OK: 0.061 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:37] RECOVERY - mr.gyaanipedia.co.in - DNS on sslhost is OK: DNS OK: 0.101 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:37] RECOVERY - secularknowledge.com - DNS on sslhost is OK: DNS OK: 0.155 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:41] RECOVERY - disabled.life - DNS on sslhost is OK: DNS OK: 0.153 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:41] RECOVERY - www.modesofdiscourse.com - DNS on sslhost is OK: DNS OK: 0.028 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:42] RECOVERY - test1.miraheze.org - DNS on sslhost is OK: DNS OK: 0.032 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:46] ._. [21:33:49] RECOVERY - www.mh142.com - DNS on sslhost is OK: DNS OK: 0.056 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:54] RECOVERY - wiki.animalrebellion.cz - DNS on sslhost is OK: DNS OK: 0.036 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:54] RECOVERY - bn.gyaanipedia.co.in - DNS on sslhost is OK: DNS OK: 0.097 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:33:56] RECOVERY - guia.esporo.net - DNS on sslhost is OK: DNS OK: 0.051 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:34:03] RECOVERY - adadevelopersacademy.wiki - DNS on sslhost is OK: DNS OK: 0.038 seconds response time. 
sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:34:03] RECOVERY - wiki.mobilityengineer.com - DNS on sslhost is OK: DNS OK: 0.047 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:34:08] RECOVERY - wiki.openhatch.org - DNS on sslhost is OK: DNS OK: 0.031 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:34:30] !log MariaDB [incidents]> update incidents set i_published = NULL where i_id = 33; - db11 [21:34:30] RECOVERY - wiki.bullshit.systems - DNS on sslhost is OK: DNS OK: 0.043 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:34:38] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [21:36:19] PROBLEM - mw6 Puppet on mw6 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core] [21:36:20] PROBLEM - mw7 Puppet on mw7 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core] [21:37:29] Happy birthday ;) [21:37:38] 🥳🥳🥳🥳🥳 [21:38:16] RECOVERY - meta.gyaanipedia.co.in - DNS on sslhost is OK: DNS OK: 0.025 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:38:26] RECOVERY - wiki.casadocarvalho.net - DNS on sslhost is OK: DNS OK: 0.035 seconds response time. sslhost returns 2001:41d0:800:1056::2,2001:41d0:800:105a::10,51.77.107.210,51.89.160.142 [21:39:12] Thanks eth01 [21:40:22] Is eth01 a human or bot users? [21:41:27] Human [21:41:33] He owns fosshost [21:41:46] Which provides the servers for bots+tools [21:42:04] Plus I think we're looking at a few cache proxies [21:45:41] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 10.34, 15.97, 22.36 [21:46:18] RECOVERY - mw6 Puppet on mw6 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [21:46:20] RECOVERY - mw7 Puppet on mw7 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [21:49:41] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 11.00, 14.31, 20.27 [21:56:51] !log upgrade gluster to 7.7 on gluster 1 & 2 [21:56:55] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [21:57:56] !log upgrade puppet-agent on gluster 1 & 2 [21:58:00] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [21:59:42] !log upgrade gluster & puppet-agent on mw 4,5,6,7 [21:59:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:01:03] !log upgrade phabricator on phab1 [22:01:07] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:06:14] PROBLEM - mw5 Puppet on mw5 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [22:06:15] !log upgrade gluster & puppet-agent on jobrunner1 & test2 [22:06:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:06:22] PROBLEM - cp7 Current Load on cp7 is CRITICAL: CRITICAL - load average: 8.40, 4.87, 3.48 [22:06:38] paladox: why's mw5 puppet upset? [22:06:41] .!log paladox is logging a lot [22:06:54] mutante: it's update time! [22:07:04] PROBLEM - phab1 Puppet on phab1 is CRITICAL: CRITICAL: Puppet has 1 failures. 
Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Package[php7.3-apcu] [22:07:20] RhinosF1: PHP version too ?:) [22:07:24] oh heh [22:07:29] didn't realise mutante you are here :P [22:07:31] paladox: 7.4 ? [22:07:39] mutante php 7.3 [22:07:42] paladox: just sometimes :) [22:07:51] RhinosF1 it likely was running when i was doing the upgrade [22:07:55] paladox: but it cant find php7.3-apcu anymore? [22:08:11] nope [22:08:21] RECOVERY - cp7 Current Load on cp7 is OK: OK - load average: 3.23, 4.11, 3.37 [22:08:21] mutante it's php-apcu [22:08:26] This doesn't seem to provide any information: https://meta.miraheze.org/wiki/Special:IncidentReports/32 [22:08:27] [ Incident Reports - Miraheze Meta ] - meta.miraheze.org [22:08:34] paladox: ah, right! [22:08:43] It shows that there was a technical issue with MediaWiki, but what and why isn't listed [22:08:46] mutante: puppet likes doing weird stuff, it normally talks random stuff and fixes itself 10 mins later [22:09:01] RhinosF1: that's .. not normal :) [22:09:23] RhinosF1: the issue is the PHP package name changes in older Debian version [22:09:35] they removed the version from it.. which is good [22:09:45] up until some point there was still a transitional package [22:09:46] mutante: every MediaWiki deploy it times out and has to be ran twice. I kind get used to waiting 10 mins or kicking it when puppet moans. [22:09:54] so it would keep working until that also gets removed [22:10:51] I need to fix the icinga check for puppet on the bots repos [22:10:55] RhinosF1: i have no idea how Miraheze deploys MW. At WMF that would be completely separate from puppet. scap and puppet don't even know each other [22:11:10] mutante: puppet pulls the repos every 10 mins [22:11:36] RhinosF1: ok and what do you mean by "puppet moans" exactly [22:11:59] mutante: comes up with some failure of some sort [22:12:16] what would that failure be? [22:12:25] it just runs git pull... right [22:12:33] mutante: the most common seems to be git timing out [22:12:40] so maybe the server it pulls from gets overloaded ? [22:12:48] Or depends cycles [22:12:52] Dependancy* [22:13:05] RhinosF1: that seems to be related to pulling with many clients from a single server at the same time [22:13:11] RhinosF1: so not really puppet [22:13:27] mutante: it takes longer than the timeout on first attempt, yeah we should do something about it tbh. [22:13:39] if you would just run "git pull" in a shell script in cron .. would that not do the same? 
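On the php7.3-apcu / php-apcu failure discussed just above: mutante's point is that newer Debian PHP packaging dropped the version from the package name, with only a transitional package covering the old name for a while. A quick way to check which name a given host still knows about (output depends on the Debian release; nothing here is captured from phab1):

```bash
# Compare the old versioned name against the unversioned replacement.
apt-cache policy php7.3-apcu php-apcu
# Inspect the package that should be installed going forward.
apt-cache show php-apcu | grep -i -E '^(Package|Depends|Description)'
```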
[22:13:47] Yes [22:13:54] Assuming the timeout is the same [22:14:11] so then we can agree "puppet does random things" is actually not puppet at all [22:14:13] The timeout is set so 2 puppet runs can't overlap [22:14:37] mutante: A lot are random quirks yes that we probably should do something about [22:14:49] Like disabling puppet when updating puppet-agent [22:15:02] RECOVERY - phab1 Puppet on phab1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:15:05] RhinosF1: you are basically describing the reason why git pull isn't working as a deployment method if you have a) many servers b) they all need to be updated at the same time [22:15:06] RhinosF1 do you remember what i said for why i failed over the db for https://meta.miraheze.org/wiki/Special:IncidentReports/32 [22:15:08] [ Incident Reports - Miraheze Meta ] - meta.miraheze.org [22:15:10] Or not running puppet every 10 mins so we can increase the timeout [22:15:51] RhinosF1: but if you have different MW versions on different servers that is not going to be good [22:16:01] paladox: was that when the Mac addresses or IPs were impacted by another server being impacted. [22:16:08] ahhh [22:16:09] yes [22:16:14] RECOVERY - mw5 Puppet on mw5 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:16:25] how many mw servers do you have now? [22:16:43] 4 prod, 1 test and a jobrunner1 [22:16:51] So 6 with MediaWiki installed [22:18:06] RhinosF1: ok, so you could do "poor man's scap". which would be a bash script that rsyncs the MW files from 1 server to the 5 others [22:18:24] that should fix the git pull issue [22:18:33] and still doesnt mean you need all of scap [22:18:41] basically just a few rsync lines [22:19:02] mutante: we need to consider other solutions. Services could do with being moved to a cron or something as well rather than in puppet [22:19:03] then slap them into deploy.sh or something so you can run that [22:19:30] Then yeah with Rsync and services on it's own system it might be better [22:19:32] RhinosF1: i don't think cron is a solution for this case [22:19:50] mutante: services is the list of wikis with parsoid enabled [22:19:54] look.. you will have one of 2 problems [22:20:07] either you are running the git pull all at the same time.. then you will have the timeouts [22:20:17] for this it doesnt matter if puppet runs it or cron or a human [22:20:20] That's the only reason puppet runs 10 minutely anyway as it's pulled from github [22:20:39] or you do NOT run them at the same time.. then you avoid the timeouts but you have different MW versions with unpredictable results [22:20:46] mutante: if you move it to pulling on 1 server then and rsync that [22:20:47] rsync fixes both [22:20:54] RhinosF1: exactly [22:20:59] tbh, there probably is a better way to do it than pulling git, since git has overhead that slows things down [22:21:32] you need to pull it once.. just not 6 times from the same source [22:21:42] mutante: what about the list of wikis with parsoid on. 
That currently is the only reason to be honest puppet is every 10 minutes [22:21:54] after you pulled it once (to the deployment server or the first server or wherever) then you can rsync it to the rest [22:21:54] If we could unpuppet that, that would be good as well [22:22:15] RhinosF1: imho all of this has very little to do with puppet [22:22:29] puppet just runs a command for you [22:22:33] mutante: yeah a lot probably aren't puppets fault [22:22:36] the issues are with the command itself [22:22:44] It's the way we have it doing things [22:22:53] Voidwalker https://meta.miraheze.org/wiki/Special:IncidentReports/32/ better? [22:22:55] [ Incident Reports - Miraheze Meta ] - meta.miraheze.org [22:23:07] you should run puppet agent at random times [22:23:08] paladox: any thought on what mutante is saying [22:23:16] RhinosF1 tldr? [22:23:18] so that each server runs it at a different minute [22:23:37] mutante: okay [22:23:40] i mean give me the quick summery of what was said/suggested [22:23:41] I hear that [22:23:53] plus 1 to what mutante says [22:23:56] paladox: have MediaWiki pulled to 1 server and rsync that [22:24:03] paladox: deploy MediaWiki with a bash script that does a) git pull ONCE b) rsyncs from there to other servers [22:24:09] paladox: poor man's scap [22:24:12] oh [22:24:22] Then move services.yaml to not using puppet like you were going to [22:24:22] that would be safe actually i think [22:24:32] And yeah have puppet then run random times [22:24:40] i don't know how services.yaml or parsoid is related to this yet [22:24:46] so i said nothing about that or puppet [22:24:58] all i am saying is you don't want to git pull on all servers at once [22:24:59] better, thanks! [22:25:02] It's one other thing that shouldn't use puppet [22:25:19] And probably could go the same way with rsync and pull [22:25:36] Or be generated somewhat saner [22:25:49] you know.. you can also let puppet do the rsync :p [22:25:59] mutante when wikimedia was evaluating logging solutions, did it do a graylog vs logstash? or only went with logstash? [22:26:11] mutante: we could [22:26:27] I'm going to write this in a phab task when I'm awake [22:26:49] paladox: i don't know but graylog has many hits in phab :p [22:27:03] heh [22:27:03] RhinosF1: what is it about parsoid and services.yaml ? [22:28:16] mutante: it's the only reason puppet runs every 10 minutes. We have a bot detevt when parsoid needy things are enabled, push a commit to git, then puppet has to pull the change to all 6 servers [22:28:26] Most of the time just because VE was enabled [22:28:43] And it means from VE being enabled to useful takes 10-15 minutes [22:29:14] Aren't they looking making VE work out of box for 1.35 [22:29:17] Did they do that? [22:29:22] RhinosF1: so people deploy changes and don't want to wait? [22:29:26] mutante we generate services.yaml for parsoid/restbase (we push the file to a git repo, and we have it pull on puppet where we foreach over it) [22:30:31] paladox: ok, so it's all about making deployment faster? is there a problem with puppet running every 10 minutes though? [22:30:45] i mean yeh [22:30:51] it jampacks the master :P [22:30:57] mutante: people want VE instantly and if we scrapped it then puppet could run whenever it is best [22:31:00] puppetserver performance is crap [22:31:05] paladox: because the agents all run at the same time? 
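mutante's "poor man's scap" really is just a few rsync lines wrapped in a script. A minimal sketch, assuming /srv/mediawiki as the MediaWiki root and the six hosts named elsewhere in this log (mw4-mw7, test2, jobrunner1); the real paths, SSH setup and any post-sync steps are not specified in the conversation:

```bash
#!/bin/bash
# deploy.sh -- "poor man's scap": pull once, then fan out with rsync.
# Paths, host list and rsync flags are assumptions for illustration, not Miraheze's actual tooling.
set -euo pipefail

MEDIAWIKI_DIR="/srv/mediawiki"                 # assumed MediaWiki root
TARGETS="mw4 mw5 mw6 mw7 test2 jobrunner1"     # the six MediaWiki hosts mentioned in the log

# a) git pull ONCE, on the host this script runs from
cd "$MEDIAWIKI_DIR"
git pull --ff-only

# b) rsync the result to every other server, so all hosts serve the same code
for host in $TARGETS; do
    rsync -a --delete "$MEDIAWIKI_DIR/" "$host:$MEDIAWIKI_DIR/"
done
```

This avoids six clients hammering the same git remote at once and keeps every host on the same revision, which are exactly the two failure modes raised in the exchange above.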
[22:31:08] puppetmaster performance 1,000 times better [22:31:09] yeh [22:31:11] Rather than having to run every 10 minutes on all servers [22:31:29] paladox: randomize the minute in each cron job [22:31:48] that's what we want to do, but cannot do that until we sort out services.yaml [22:31:54] you can have server A at 2,12,22 and server B at 4,14,24 etc [22:32:00] What paladox said [22:32:06] otherwise services 1/2 will have a different config for the minutes that puppet haven't ran [22:32:15] paladox: let me find the VE out of the box task [22:32:22] ok [22:32:23] i don't understand what you mean by "sort out services.yaml" [22:32:29] RhinosF1 that's parsoid php [22:32:34] that's not restbase [22:32:37] you just said the problem to solve is that the puppetmaster does not go down [22:32:46] paladox: oh [22:32:49] that is fixable by not running them at the same time [22:33:01] while _still_ having them run every 10 minutes [22:33:19] mutante: but if we ran puppet at different times, everything would get inconsistent [22:33:35] mutante well we generate https://github.com/miraheze/services/blob/master/services.yaml which we use to help generate restbase/parsoid config: https://github.com/miraheze/puppet/blob/master/modules/services/templates/parsoid/config.yaml#L11 [22:33:35] But yeah pushing to one server and rsyncing stuff could work [22:33:35] [ services/services.yaml at master · miraheze/services · GitHub ] - github.com [22:33:36] [ puppet/config.yaml at master · miraheze/puppet · GitHub ] - github.com [22:33:50] https://github.com/miraheze/puppet/blob/master/modules/services/templates/restbase/config.yaml.erb#L91 [22:33:51] [ puppet/config.yaml.erb at master · miraheze/puppet · GitHub ] - github.com [22:34:05] RhinosF1: if the puppetmaster can't handle 4 agents at once it might need more power in the first place [22:34:14] but if we had puppet randomise as is, services 1 could have an out dated version for a while and services 2 could have the new version. [22:34:34] mutante puppetmaster can handle 4 at once, puppetserver cannot :P [22:34:38] i mean that's the situation when i would use cumin to run puppet on all 4 hosts at once [22:34:49] paladox: then why use puppserver ?:P [22:35:07] because it's what they support now [22:35:31] mutante: I wish we could have scripts for people without fleetwide access like me to run commands on multiple servers [22:35:32] isn't it the other way around and puppetserver is the commcercial solution that costs money [22:35:36] while puppetmaster is free [22:35:41] no [22:35:48] puppetserver is the replacement to puppetmaster [22:35:53] [02dns] 07JohnFLewis commented on pull request 03#169: Up timeout to 10 - 13https://git.io/JJloI [22:36:10] paladox: if the new thing doesn't work as well as the old thing,, maybe don't replace it [22:36:36] mutante but puppetmaster is no longer supported, looks like debian buster is the last release that will support it. [22:36:40] puppetmaster is just an apache with the passenger module [22:36:53] what does "support" even mean :) [22:37:09] when's the last time you had Puppet Inc answer your support ticket, heh [22:37:32] Never, I ask you or paladox [22:37:37] [02dns] 07paladox edited pull request 03#169: Up timeout to 10 - 13https://git.io/JJlV4 [22:38:17] "While Puppet Server is designed to replace the deprecated Apache/Passenger Puppet master stack, they diverge in a handful of ways due to differences in Puppet Server's underlying architecture. See Puppet Server vs. Apache/Passenger Puppet Master for details." 
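The staggering mutante suggests ("server A at 2,12,22 and server B at 4,14,24") can be done with a per-host offset in the cron entry. An illustrative sketch only; the offset derivation, cron path and agent flags are assumptions, and puppet's own fqdn_rand() could achieve the same thing if the cron job is managed from the puppet repo:

```bash
# Stagger the 10-minute puppet runs per host so agents stop hitting puppetserver simultaneously.
OFFSET=$(( $(hostname | cksum | cut -d' ' -f1) % 10 ))   # stable per-host value between 0 and 9
echo "$OFFSET-59/10 * * * * root /usr/bin/puppet agent --onetime --no-daemonize" \
    > /etc/cron.d/puppet-agent
# e.g. one server lands on minutes 2,12,22,... and another on 4,14,24,...,
# so every host still runs puppet every 10 minutes, just not all at the same minute.
```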
[22:38:23] From https://github.com/puppetlabs/puppetserver mutante [22:38:24] [ GitHub - puppetlabs/puppetserver: Server automation framework and application ] - github.com [22:39:06] paladox: the "handful of ways" it "diverges" seems to include "can't handle 4 agents at once".. so that sucks :p [22:39:23] it's based in java and they limit how many it can handle to cpu [22:39:28] for example i have it set to 2 [22:39:35] though i suppose may as well set it to 4 [22:40:21] paladox: Java .. so the debugging and tuning turns into this huge deal like with Gerrit :p [22:40:28] lol [22:40:36] yea, upping that number from 2 seems to make sense, lol [22:42:44] done [22:43:07] paladox: !log [22:43:29] !log increased how many cpus puppetserver can used [22:43:35] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:43:52] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJlog [22:43:53] [02miraheze/puppet] 07paladox 03b63eb67 - puppetdb::command_processing_threads: Increase to 4 [22:44:37] I'm gonna sleep and catch up in the morning [22:44:59] [02dns] 07JohnFLewis commented on pull request 03#169: Up timeout to 10 - 13https://git.io/JJlow [22:45:15] good evening RhinosF1 :) [22:45:20] PROBLEM - cp7 Puppet on cp7 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [22:45:21] PROBLEM - jobrunner1 Puppet on jobrunner1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [22:45:21] PROBLEM - gluster1 Puppet on gluster1 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [22:45:37] PROBLEM - ldap1 Puppet on ldap1 is CRITICAL: CRITICAL: Puppet has 18 failures. Last run 2 minutes ago with 18 failures. Failed resources (up to 3 shown) [22:45:39] PROBLEM - db12 Puppet on db12 is CRITICAL: CRITICAL: Puppet has 18 failures. Last run 2 minutes ago with 18 failures. Failed resources (up to 3 shown) [22:45:44] PROBLEM - db7 Puppet on db7 is CRITICAL: CRITICAL: Puppet has 19 failures. Last run 2 minutes ago with 19 failures. Failed resources (up to 3 shown) [22:45:52] paladox: is this another puppet moan because you changed it as it ran [22:45:53] PROBLEM - services1 Puppet on services1 is CRITICAL: CRITICAL: Puppet has 26 failures. Last run 2 minutes ago with 26 failures. Failed resources (up to 3 shown) [22:45:55] PROBLEM - rdb2 Puppet on rdb2 is CRITICAL: CRITICAL: Puppet has 17 failures. Last run 2 minutes ago with 17 failures. Failed resources (up to 3 shown) [22:45:59] PROBLEM - mon1 Puppet on mon1 is CRITICAL: CRITICAL: Puppet has 49 failures. Last run 2 minutes ago with 49 failures. Failed resources (up to 3 shown) [22:45:59] hispano76: night! [22:46:00] yes [22:46:04] PROBLEM - ns1 Puppet on ns1 is CRITICAL: CRITICAL: Puppet has 17 failures. Last run 2 minutes ago with 17 failures. Failed resources (up to 3 shown) [22:46:04] i restarted the service [22:46:07] PROBLEM - db13 Puppet on db13 is CRITICAL: CRITICAL: Puppet has 18 failures. Last run 2 minutes ago with 18 failures. Failed resources (up to 3 shown) [22:46:12] PROBLEM - gluster2 Puppet on gluster2 is CRITICAL: CRITICAL: Puppet has 20 failures. Last run 2 minutes ago with 20 failures. Failed resources (up to 3 shown) [22:46:15] PROBLEM - mw5 Puppet on mw5 is CRITICAL: CRITICAL: Puppet has 289 failures. Last run 3 minutes ago with 289 failures. 
Failed resources (up to 3 shown) [22:46:19] PROBLEM - mw6 Puppet on mw6 is CRITICAL: CRITICAL: Puppet has 289 failures. Last run 2 minutes ago with 289 failures. Failed resources (up to 3 shown) [22:46:20] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 4.26, 5.10, 7.85 [22:46:20] PROBLEM - mw7 Puppet on mw7 is CRITICAL: CRITICAL: Puppet has 289 failures. Last run 2 minutes ago with 289 failures. Failed resources (up to 3 shown) [22:46:21] PROBLEM - ns2 Puppet on ns2 is CRITICAL: CRITICAL: Puppet has 17 failures. Last run 3 minutes ago with 17 failures. Failed resources (up to 3 shown) [22:46:21] * RhinosF1 wonders if stopping puppet while we make changes to it would be good [22:46:23] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [22:46:26] PROBLEM - db11 Puppet on db11 is CRITICAL: CRITICAL: Puppet has 18 failures. Last run 3 minutes ago with 18 failures. Failed resources (up to 3 shown) [22:46:27] PROBLEM - mail1 Puppet on mail1 is CRITICAL: CRITICAL: Puppet has 41 failures. Last run 3 minutes ago with 41 failures. Failed resources (up to 3 shown) [22:46:35] PROBLEM - cp6 Puppet on cp6 is CRITICAL: CRITICAL: Puppet has 276 failures. Last run 3 minutes ago with 276 failures. Failed resources (up to 3 shown) [22:46:36] PROBLEM - services2 Puppet on services2 is CRITICAL: CRITICAL: Puppet has 26 failures. Last run 3 minutes ago with 26 failures. Failed resources (up to 3 shown) [22:46:41] PROBLEM - mw4 Puppet on mw4 is CRITICAL: CRITICAL: Puppet has 289 failures. Last run 3 minutes ago with 289 failures. Failed resources (up to 3 shown) [22:46:41] PROBLEM - cloud3 Puppet on cloud3 is CRITICAL: CRITICAL: Puppet has 17 failures. Last run 3 minutes ago with 17 failures. Failed resources (up to 3 shown) [22:46:43] PROBLEM - bacula2 Puppet on bacula2 is CRITICAL: CRITICAL: Puppet has 15 failures. Last run 3 minutes ago with 15 failures. Failed resources (up to 3 shown) [22:47:00] PROBLEM - cloud2 Puppet on cloud2 is CRITICAL: CRITICAL: Puppet has 18 failures. Last run 3 minutes ago with 18 failures. Failed resources (up to 3 shown) [22:47:02] PROBLEM - rdb1 Puppet on rdb1 is CRITICAL: CRITICAL: Puppet has 18 failures. Last run 3 minutes ago with 18 failures. Failed resources (up to 3 shown) [22:47:07] PROBLEM - phab1 Puppet on phab1 is CRITICAL: CRITICAL: Puppet has 26 failures. Last run 3 minutes ago with 26 failures. Failed resources (up to 3 shown) [22:47:10] PROBLEM - cloud1 Puppet on cloud1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/usr/local/bin/puppet-enabled] [22:47:16] PROBLEM - misc1 Puppet on misc1 is CRITICAL: CRITICAL: Puppet has 33 failures. Last run 3 minutes ago with 33 failures. Failed resources (up to 3 shown) [22:48:08] PROBLEM - cp9 Puppet on cp9 is CRITICAL: CRITICAL: Puppet has 239 failures. Last run 3 minutes ago with 239 failures. 
Failed resources (up to 3 shown): File[/usr/local/bin/puppet-enabled],File[/etc/rsyslog.d],File[/etc/rsyslog.conf],File[authority certificates] [22:48:40] [02dns] 07JohnFLewis commented on pull request 03#169: Up timeout to 10 - 13https://git.io/JJloF [22:49:04] [02dns] 07paladox commented on pull request 03#169: Up timeout to 10 - 13https://git.io/JJloA [22:49:06] [02dns] 07paladox closed pull request 03#169: Up timeout to 10 - 13https://git.io/JJlV4 [22:49:07] [02dns] 07paladox deleted branch 03paladox-patch-1 - 13https://git.io/vbQXl [22:49:09] [02miraheze/dns] 07paladox deleted branch 03paladox-patch-1 [22:52:20] RECOVERY - jobrunner1 Current Load on jobrunner1 is OK: OK - load average: 4.11, 4.21, 6.54 [22:52:27] RhinosF1: no, you don't need to stop puppet "while working on it" [22:52:41] RECOVERY - cloud3 Puppet on cloud3 is OK: OK: Puppet is currently enabled, last run 8 seconds ago with 0 failures [22:52:43] RECOVERY - bacula2 Puppet on bacula2 is OK: OK: Puppet is currently enabled, last run 7 seconds ago with 0 failures [22:52:48] unless "working on it" is defined as making multiple code changes and the first one failing [22:53:01] RECOVERY - cloud2 Puppet on cloud2 is OK: OK: Puppet is currently enabled, last run 24 seconds ago with 0 failures [22:53:03] RECOVERY - rdb1 Puppet on rdb1 is OK: OK: Puppet is currently enabled, last run 33 seconds ago with 0 failures [22:53:03] RECOVERY - phab1 Puppet on phab1 is OK: OK: Puppet is currently enabled, last run 24 seconds ago with 0 failures [22:53:08] RECOVERY - cloud1 Puppet on cloud1 is OK: OK: Puppet is currently enabled, last run 31 seconds ago with 0 failures [22:53:16] RECOVERY - misc1 Puppet on misc1 is OK: OK: Puppet is currently enabled, last run 38 seconds ago with 0 failures [22:53:20] RECOVERY - cp7 Puppet on cp7 is OK: OK: Puppet is currently enabled, last run 14 seconds ago with 0 failures [22:53:21] RECOVERY - gluster1 Puppet on gluster1 is OK: OK: Puppet is currently enabled, last run 40 seconds ago with 0 failures [22:53:36] RECOVERY - ldap1 Puppet on ldap1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:53:39] RECOVERY - db12 Puppet on db12 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:53:44] RECOVERY - db7 Puppet on db7 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:53:52] RECOVERY - services1 Puppet on services1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:53:55] RECOVERY - rdb2 Puppet on rdb2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:53:59] RECOVERY - mon1 Puppet on mon1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:54:04] RECOVERY - ns1 Puppet on ns1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:54:04] RECOVERY - db13 Puppet on db13 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:54:07] RECOVERY - cp9 Puppet on cp9 is OK: OK: Puppet is currently enabled, last run 27 seconds ago with 0 failures [22:54:12] RECOVERY - gluster2 Puppet on gluster2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:54:14] RECOVERY - mw5 Puppet on mw5 is OK: OK: Puppet is currently enabled, last run 52 seconds ago with 0 failures [22:54:18] RECOVERY - mw6 Puppet on mw6 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:54:20] RECOVERY - mw7 Puppet on mw7 is OK: OK: Puppet is currently enabled, 
last run 1 minute ago with 0 failures [22:54:21] RECOVERY - ns2 Puppet on ns2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:54:26] !log set max-active-instances to 3 on puppet2 [22:54:27] RECOVERY - db11 Puppet on db11 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:54:27] RECOVERY - mail1 Puppet on mail1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:54:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:54:35] RECOVERY - cp6 Puppet on cp6 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:54:36] RECOVERY - services2 Puppet on services2 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:54:42] RECOVERY - mw4 Puppet on mw4 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:55:13] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJlKI [22:55:15] [02miraheze/puppet] 07paladox 03a9554c6 - puppetdb::command_processing_threads: set to 3 [22:55:20] RECOVERY - jobrunner1 Puppet on jobrunner1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [22:56:12] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JJlKL [22:56:13] [02miraheze/puppet] 07paladox 03d5d315f - Revert "puppetdb::command_processing_threads: set to 3" This reverts commit a9554c6de33e55e86697607a634bfa1d4d3e6937. [23:06:23] RECOVERY - cp3 Puppet on cp3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [23:15:02] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 16.38, 21.10, 17.54 [23:17:00] PROBLEM - cloud2 Current Load on cloud2 is CRITICAL: CRITICAL - load average: 31.42, 23.21, 18.59 [23:20:22] PROBLEM - cp7 Current Load on cp7 is WARNING: WARNING - load average: 3.60, 7.76, 5.56 [23:22:21] RECOVERY - cp7 Current Load on cp7 is OK: OK - load average: 5.12, 6.64, 5.40 [23:22:58] PROBLEM - cloud2 Current Load on cloud2 is WARNING: WARNING - load average: 18.67, 22.37, 20.09 [23:23:57] PROBLEM - jobrunner1 Current Load on jobrunner1 is CRITICAL: CRITICAL - load average: 14.10, 10.95, 7.78 [23:24:58] RECOVERY - cloud2 Current Load on cloud2 is OK: OK - load average: 15.73, 19.84, 19.42 [23:30:17] @RhinosF1 Ah, cool! (re: eth01 and fosshost) 🙂 [23:35:45] >  mutante: people want VE instantly and if we scrapped it then puppet could run whenever it is best @RhinosF1 I don't know all the technical reasons behind that, but that sounds good to me. That would be my preference; just enable VE as a regular, non-beta feature, in some or all namespaces, but make New Wikitext Editor available in all namespaces for those that want to use that as their source editor. Then, you could just [23:35:45] let people disable VE if they absolutely don't want it. Outreach Wiki just recent voted to enable VE in this way, and I think Wikidata, MediaWiki, and maybe one or two other English wikis do this. 🙂 [23:36:04] Plus, then we wouldn't need to bother with a big global RfC to get this done. [23:49:03] PROBLEM - jobrunner1 Current Load on jobrunner1 is WARNING: WARNING - load average: 5.48, 6.95, 7.96 [23:50:58] PROBLEM - jobrunner1 Current Load on jobrunner1 is CRITICAL: CRITICAL - load average: 12.39, 8.64, 8.42
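The max-active-instances value !logged above controls how many JRuby interpreters the Java-based puppetserver keeps in its pool, and therefore how many agents it can compile catalogs for concurrently; by default it is derived from the CPU count, which lines up with paladox's earlier comment that it was set to 2 while 4 agents were checking in together. Below is a hedged sketch of managing that setting from Puppet, assuming the puppetlabs/hocon module and the stock config path; Miraheze's actual module may set it differently.

    # Sketch under assumptions: hocon_setting comes from the puppetlabs/hocon
    # module and the path is the default puppetserver.conf location.
    # puppetserver has to be restarted for the new pool size to take effect.
    hocon_setting { 'puppetserver jruby pool size':
      ensure  => present,
      path    => '/etc/puppetlabs/puppetserver/conf.d/puppetserver.conf',
      setting => 'jruby-puppet.max-active-instances',
      value   => 3,
    }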