[00:04:01] PROBLEM - dbbackup1 APT on dbbackup1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [00:08:18] RECOVERY - dbbackup1 APT on dbbackup1 is OK: APT OK: 11 packages available for upgrade (0 critical updates). [00:08:36] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 7.45, 6.23, 5.10 [00:12:36] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 5.78, 6.22, 5.37 [00:21:53] RECOVERY - test3 Puppet on test3 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [00:22:34] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 7.50, 7.17, 6.35 [00:23:31] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 7.72, 6.92, 5.84 [00:24:30] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 6.02, 6.79, 6.31 [00:25:28] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 4.94, 6.21, 5.71 [00:25:54] PROBLEM - test3 Puppet on test3 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_MediaWiki core] [00:38:04] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 7.65, 7.07, 6.37 [00:41:14] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 7.39, 7.25, 6.81 [00:45:51] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 5.64, 6.50, 6.42 [00:47:04] PROBLEM - mw10 Current Load on mw10 is CRITICAL: CRITICAL - load average: 9.08, 7.97, 7.23 [00:49:00] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 7.37, 7.49, 7.13 [00:58:44] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 5.29, 6.11, 6.61 [01:01:47] PROBLEM - dbbackup1 APT on dbbackup1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [01:03:18] PROBLEM - dbbackup1 PowerDNS Recursor on dbbackup1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [01:03:30] PROBLEM - dbbackup1 Check MariaDB Replication c2 on dbbackup1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [01:04:03] PROBLEM - dbbackup1 Puppet on dbbackup1 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [01:04:49] PROBLEM - dbbackup1 SSH on dbbackup1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [01:07:30] RECOVERY - dbbackup1 PowerDNS Recursor on dbbackup1 is OK: DNS OK: 1.931 second response time. miraheze.org returns 2607:5300:205:200::1c30,51.222.25.132 [01:07:46] PROBLEM - dbbackup1 Check MariaDB Replication c2 on dbbackup1 is UNKNOWN: NRPE: Unable to read output [01:08:04] RECOVERY - dbbackup1 Puppet on dbbackup1 is OK: OK: Puppet is currently enabled, last run 37 minutes ago with 0 failures [01:08:13] RECOVERY - dbbackup1 APT on dbbackup1 is OK: APT OK: 11 packages available for upgrade (0 critical updates). [01:08:49] RECOVERY - dbbackup1 SSH on dbbackup1 is OK: SSH OK - OpenSSH_7.9p1 Debian-10+deb10u2 (protocol 2.0) [01:10:18] PROBLEM - dbbackup1 MariaDB c2 on dbbackup1 is CRITICAL: Can't connect to MySQL server on 'dbbackup1.miraheze.org' (111) [01:11:39] PROBLEM - dbbackup1 Check MariaDB Replication c2 on dbbackup1 is CRITICAL: MariaDB replication - both - CRITICAL - Slave IO State not correct, slave stopped or replication broken! 
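The recurring "Current Load" alerts in this log come from an NRPE check_load-style plugin: the 1-, 5- and 15-minute load averages are compared against warning and critical thresholds and the worst match wins. A minimal sketch of that logic, assuming single thresholds of 7.0 (warning) and 8.0 (critical); the real plugin takes separate thresholds per averaging window and the actual values differ per host (the cp hosts clearly alert at lower levels than the mw hosts):

```python
# Minimal sketch of a check_load-style NRPE plugin; not the actual Miraheze check.
# Thresholds are assumptions inferred from this log and differ per host in reality.
import os
import sys

WARN, CRIT = 7.0, 8.0  # hypothetical warning/critical thresholds

def check_load(warn=WARN, crit=CRIT):
    one, five, fifteen = os.getloadavg()
    msg = f"load average: {one:.2f}, {five:.2f}, {fifteen:.2f}"
    if any(v >= crit for v in (one, five, fifteen)):
        print(f"CRITICAL - {msg}")
        return 2  # Nagios/Icinga exit code for CRITICAL
    if any(v >= warn for v in (one, five, fifteen)):
        print(f"WARNING - {msg}")
        return 1  # WARNING
    print(f"OK - {msg}")
    return 0      # OK

if __name__ == "__main__":
    sys.exit(check_load())
```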
[01:12:13] RECOVERY - dbbackup1 MariaDB c2 on dbbackup1 is OK: Uptime: 282 Threads: 8 Questions: 12 Slow queries: 1 Opens: 16 Flush tables: 1 Open tables: 10 Queries per second avg: 0.042 [01:45:11] PROBLEM - ping6 on bacula2 is WARNING: PING WARNING - Packet loss = 0%, RTA = 119.16 ms [01:49:51] RECOVERY - wiki.mlpwiki.net - reverse DNS on sslhost is OK: rDNS OK - wiki.mlpwiki.net reverse DNS resolves to cp11.miraheze.org [01:53:08] RECOVERY - ping6 on bacula2 is OK: PING OK - Packet loss = 0%, RTA = 89.95 ms [01:58:26] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 173s [02:02:25] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s [02:12:26] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 233s [02:14:26] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 172s [02:16:25] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 291s [02:34:26] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 44s [04:08:21] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 7.10, 6.66, 5.61 [04:10:22] PROBLEM - mw10 Current Load on mw10 is CRITICAL: CRITICAL - load average: 8.04, 6.93, 5.83 [04:12:20] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 4.59, 5.91, 5.58 [04:20:20] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 6.83, 6.53, 5.93 [04:22:21] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 4.82, 6.05, 5.84 [04:22:26] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 323s [04:30:26] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s [04:53:53] PROBLEM - wiki.mlpwiki.net - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.mlpwiki.net reverse DNS resolves to 192-185-16-85.unifiedlayer.com [06:17:20] RECOVERY - test3 APT on test3 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [06:23:35] RECOVERY - mw9 APT on mw9 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [06:23:56] RECOVERY - mw11 APT on mw11 is OK: APT OK: 25 packages available for upgrade (0 critical updates). [06:40:13] RECOVERY - mw10 APT on mw10 is OK: APT OK: 25 packages available for upgrade (0 critical updates). [06:44:04] RECOVERY - mw8 APT on mw8 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [06:45:59] RECOVERY - jobrunner3 APT on jobrunner3 is OK: APT OK: 0 packages available for upgrade (0 critical updates). 
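The dbbackup2 "Check MariaDB Replication" flapping above is driven by Seconds_Behind_Master crossing warning and critical thresholds while the IO and SQL threads keep running. A minimal sketch of such a check, assuming the PyMySQL client and hypothetical thresholds of 120s (warning) and 200s (critical); the actual plugin, credentials and thresholds may differ:

```python
# Minimal sketch of a replication-lag check; not the real Miraheze plugin.
# PyMySQL, the credentials and the thresholds are assumptions.
import sys
import pymysql

WARN_LAG, CRIT_LAG = 120, 200  # seconds behind master, hypothetical

def check_replication(host="localhost", user="monitor", password=""):
    conn = pymysql.connect(host=host, user=user, password=password)
    try:
        with conn.cursor(pymysql.cursors.DictCursor) as cur:
            cur.execute("SHOW SLAVE STATUS")
            status = cur.fetchone()
    finally:
        conn.close()
    if not status:
        print("UNKNOWN - no replication configured")
        return 3
    lag = status["Seconds_Behind_Master"]
    msg = (f"Slave_IO_Running state : {status['Slave_IO_Running']}, "
           f"Slave_SQL_Running state : {status['Slave_SQL_Running']}, "
           f"Seconds_Behind_Master : {lag}s")
    if status["Slave_IO_Running"] != "Yes" or status["Slave_SQL_Running"] != "Yes" or lag is None:
        print("CRITICAL - slave stopped or replication broken! " + msg)
        return 2
    if lag >= CRIT_LAG:
        print("CRITICAL - " + msg)
        return 2
    if lag >= WARN_LAG:
        print("WARNING - " + msg)
        return 1
    print("OK - " + msg)
    return 0

if __name__ == "__main__":
    sys.exit(check_replication())
```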
[07:00:34] RECOVERY - jobrunner4 APT on jobrunner4 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [07:37:37] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 10.51, 7.02, 3.25 [07:37:47] [02miraheze/mw-config] 07Universal-Omega pushed 031 commit to 03Universal-Omega-patch-2 [+0/-0/±1] 13https://git.io/JYkHV [07:37:48] [02miraheze/mw-config] 07Universal-Omega 03e0b2a41 - Add "Memcached" to wgIncidentReportingServices [07:37:50] [02mw-config] 07Universal-Omega created branch 03Universal-Omega-patch-2 - 13https://git.io/vbvb3 [07:37:51] [02mw-config] 07Universal-Omega opened pull request 03#3807: Add "Memcached" to wgIncidentReportingServices - 13https://git.io/JYkHw [07:41:26] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 0.48, 3.43, 2.61 [07:43:20] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 2.06, 2.95, 2.52 [07:59:15] [02miraheze/ssl] 07Reception123 pushed 031 commit to 03master [+0/-1/±1] 13https://git.io/JYkda [07:59:16] [02miraheze/ssl] 07Reception123 03112d577 - rm wiki.ddr.red cert (no longer pointing to us) [08:02:01] RECOVERY - test3 Puppet on test3 is OK: OK: Puppet is currently enabled, last run 32 seconds ago with 0 failures [08:28:43] [02mw-config] 07Reception123 closed pull request 03#3806: T4005: use firejail - 13https://git.io/JYT6i [08:28:45] [02miraheze/mw-config] 07Reception123 pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JYkps [08:28:47] [02miraheze/mw-config] 07Universal-Omega 034b2d54b - T4005: use firejail (#3806) [08:28:48] [02mw-config] 07Reception123 deleted branch 03Universal-Omega-patch-1 - 13https://git.io/vbvb3 [08:28:50] [02miraheze/mw-config] 07Reception123 deleted branch 03Universal-Omega-patch-1 [08:29:49] miraheze/mw-config - Reception123 the build passed. [08:38:25] PROBLEM - cp11 Current Load on cp11 is CRITICAL: CRITICAL - load average: 1.01, 4.80, 3.38 [08:40:25] RECOVERY - cp11 Current Load on cp11 is OK: OK - load average: 0.57, 3.36, 3.02 [08:53:10] .tell JohnLewis Hi. Isn't it quite strange that the AI is giving the same score to https://meta.miraheze.org/wiki/Special:RequestWikiQueue/17357#mw-section-request and https://meta.miraheze.org/wiki/Special:RequestWikiQueue/17365#mw-section-request ? [08:53:10] Reception123: I'll pass that on when JohnLewis is around. [08:53:11] [ Wiki requests queue - Miraheze Meta ] - meta.miraheze.org [08:53:12] [ Wiki requests queue - Miraheze Meta ] - meta.miraheze.org [08:55:57] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is CRITICAL: MariaDB replication - both - CRITICAL - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 250s [08:56:47] [02miraheze/mw-config] 07Reception123 pushed 031 commit to 03Reception123-patch-2 [+0/-0/±1] 13https://git.io/JYIfu [08:56:48] [02miraheze/mw-config] 07Reception123 03275c4ee - make none of the above first createwiki purpose [08:56:50] [02mw-config] 07Reception123 created branch 03Reception123-patch-2 - 13https://git.io/vbvb3 [08:56:51] [02mw-config] 07Reception123 opened pull request 03#3808: make none of the above first createwiki purpose - 13https://git.io/JYIfg [08:57:54] miraheze/mw-config - Reception123 the build passed. 
[08:58:15] [02mw-config] 07Reception123 edited pull request 03#3808: make none of the above first createwiki purpose - 13https://git.io/JYIfg [08:58:21] [02mw-config] 07Universal-Omega commented on pull request 03#3808: make none of the above first createwiki purpose - 13https://git.io/JYIfN [08:58:24] PROBLEM - cp11 Current Load on cp11 is CRITICAL: CRITICAL - load average: 2.43, 6.23, 4.13 [08:59:01] [02mw-config] 07Reception123 commented on pull request 03#3808: make none of the above first createwiki purpose - 13https://git.io/JYIJL [09:01:06] [02mw-config] 07Universal-Omega commented on pull request 03#3808: make none of the above first createwiki purpose - 13https://git.io/JYIJg [09:01:49] [02mw-config] 07Universal-Omega edited a comment on pull request 03#3808: make none of the above first createwiki purpose - 13https://git.io/JYIJg [09:04:22] PROBLEM - cp11 Current Load on cp11 is WARNING: WARNING - load average: 0.67, 3.01, 3.45 [09:06:21] RECOVERY - cp11 Current Load on cp11 is OK: OK - load average: 0.92, 2.37, 3.16 [09:08:44] [02mw-config] 07RhinosF1 commented on pull request 03#3808: make none of the above first createwiki purpose - 13https://git.io/JYITm [09:11:32] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 146s [09:13:32] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 78s [10:51:59] PROBLEM - cp11 Current Load on cp11 is CRITICAL: CRITICAL - load average: 1.91, 6.28, 3.82 [10:52:20] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 4.45, 3.64, 1.73 [10:53:30] JohnLewis: hi. Isn't it quite strange that the AI is giving the same score to https://meta.miraheze.org/wiki/Special:RequestWikiQueue/17357#mw-section-request and https://meta.miraheze.org/wiki/Special:RequestWikiQueue/17365#mw-section-request ? [10:53:30] [ Wiki requests queue - Miraheze Meta ] - meta.miraheze.org [10:53:31] [ Wiki requests queue - Miraheze Meta ] - meta.miraheze.org [10:53:50] (the message is going to appear again from MirahezeBot, I thought it would send it when you joined but apparently it sends only when there's activity) [10:54:07] . [10:54:07] JohnLewis: 2021-03-25 - 08:53:10UTC tell JohnLewis Hi. Isn't it quite strange that the AI is giving the same score to https://meta.miraheze.org/wiki/Special:RequestWikiQueue/17357#mw-section-request and https://meta.miraheze.org/wiki/Special:RequestWikiQueue/17365#mw-section-request ? 
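As described at 10:53:50, the bot's .tell feature queues the message and delivers it on the recipient's next activity rather than on join. A minimal sketch of that behaviour, purely illustrative and not MirahezeBot's actual code:

```python
# Illustrative ".tell" queue that delivers when the recipient next speaks,
# not when they join. Not MirahezeBot's actual implementation.
from collections import defaultdict

pending = defaultdict(list)  # lowercased nick -> list of (sender, message)

def handle_tell(sender, target, message):
    """Queue a message and acknowledge the sender."""
    pending[target.lower()].append((sender, message))
    return f"{sender}: I'll pass that on when {target} is around."

def on_channel_message(nick, say):
    """Called for every channel message; flush queued tells for the speaker."""
    for sender, message in pending.pop(nick.lower(), []):
        say(f"{nick}: {sender} asked me to tell you: {message}")

# The tell sits in the queue until the recipient next says something in-channel.
print(handle_tell("Reception123", "JohnLewis", "Isn't that AI score strange?"))
on_channel_message("JohnLewis", say=print)
```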
[10:54:20] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 1.82, 2.90, 1.69 [10:54:49] It probably is, but also could not be a coincidence, if others aren't giving similar scores [10:55:58] RECOVERY - cp11 Current Load on cp11 is OK: OK - load average: 0.26, 3.06, 3.05 [10:58:58] JohnLewis: yeah, I've found other scores to be quite weird lately [10:59:14] and here it doesn't make much sense for a one character description to get the same score as a decent one [10:59:54] RECOVERY - wiki.mlpwiki.net - reverse DNS on sslhost is OK: rDNS OK - wiki.mlpwiki.net reverse DNS resolves to cp11.miraheze.org [11:00:37] The more words, the better and more accurate a score will be - based on history, it defaults to 15 when there's insufficient length [11:01:50] hm, ok [11:03:20] !log reception@jobrunner3:~$ sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki gulpiewiki /home/reception/delbackups2/gulpiewiki.xml [11:03:25] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [11:06:44] JohnLewis: this is what I mean: https://meta.miraheze.org/wiki/Special:RequestWikiQueue/17368#mw-section-comments [11:06:45] [ Wiki requests queue - Miraheze Meta ] - meta.miraheze.org [11:07:00] it's quite weird that this request gets a 0.77 but the other one that was arguably more detailed got 0.15 [11:17:44] It depends on other requests [11:18:02] It might be more detailed, but if the words are mostly used in declined requests, that request will be declined [11:21:17] It's also a limited data set because of memory limitations, and it's not been updated since it was introduced, so probably the last 1000 requests have no weight on decisions. [11:26:26] oh, I thought we did want the latest requests to have weight considering the new canned response system [11:33:35] It has to be updated manually, and there is no code to give weights to canned responses [11:40:21] oh, I understood there was [11:40:27] what kind of manual update? [12:00:08] There’s a maintenance script in CW [12:41:42] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 6.99, 5.98, 5.05 [12:41:50] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 7.21, 6.13, 5.11 [12:43:41] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 5.72, 5.70, 5.05 [12:43:50] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 4.41, 5.45, 4.99 [12:54:42] PROBLEM - mw11 Check Gluster Clients on mw11 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [13:03:41] PROBLEM - mw10 Current Load on mw10 is CRITICAL: CRITICAL - load average: 8.48, 6.88, 5.83 [13:05:40] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 6.64, 7.05, 6.05 [13:07:40] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 4.53, 6.26, 5.89 [13:07:58] PROBLEM - mw11 Puppet on mw11 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[/mnt/mediawiki-static] [13:21:29] so would I run it just like that?
[13:23:32] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 7.86, 6.88, 5.81 [13:25:32] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 4.80, 5.92, 5.58 [13:29:32] PROBLEM - mw8 Current Load on mw8 is CRITICAL: CRITICAL - load average: 8.52, 6.86, 5.98 [13:31:31] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 5.64, 6.41, 5.93 [13:32:45] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 7.58, 6.46, 5.60 [13:35:26] PROBLEM - mw10 Current Load on mw10 is CRITICAL: CRITICAL - load average: 8.28, 7.37, 6.39 [13:35:28] PROBLEM - mw8 Current Load on mw8 is CRITICAL: CRITICAL - load average: 12.30, 8.75, 6.93 [13:37:23] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 5.51, 6.51, 6.18 [13:37:27] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 5.37, 7.31, 6.62 [13:37:44] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 7.23, 7.36, 6.43 [13:38:49] PROBLEM - mw11 Current Load on mw11 is CRITICAL: CRITICAL - load average: 8.42, 7.32, 6.18 [13:40:44] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 5.46, 6.48, 6.00 [13:41:24] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 5.30, 6.54, 6.51 [13:41:39] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 6.41, 6.67, 6.37 [13:53:58] Is there any procedure for requesting an unmaintained extension to be replaced with a new, largely rewritten version of the extension (in another codebase) maintained by the same group or org? [13:56:31] [02mw-config] 07dmehus commented on pull request 03#3808: make none of the above first createwiki purpose - 13https://git.io/JYLIG [13:56:45] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 19.38, 21.23, 9.81 [13:57:06] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 19.16, 25.18, 17.93 [13:57:28] [02mw-config] 07dmehus commented on pull request 03#3808: make none of the above first createwiki purpose - 13https://git.io/JYLIw [13:59:06] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 11.15, 20.14, 16.96 [14:10:51] PROBLEM - wiki.mlpwiki.net - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.mlpwiki.net reverse DNS resolves to 192-185-16-85.unifiedlayer.com [14:14:20] PROBLEM - cp10 Current Load on cp10 is WARNING: WARNING - load average: 0.61, 1.48, 3.77 [14:15:42] > Is there any procedure for requesting an unmaintained extension to be replaced with a new, largely rewritten version of the extension (in another codebase) maintained by the same group or org? [14:15:43] R4356th, not sure. Assuming we have the unmaintained extension installed, we'd probably just conduct a security review on the new, replacement extension, and once that passed, install it. If the functionality was identical or substantially identical, it could probably replace the unmaintained extension, yeah, and discussion about that would likely take place on the Phabricator task? [14:18:20] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 0.68, 1.08, 3.08 [14:27:35] 🤔 [14:39:37] > More words, the better and more accurate a score will be - based on history, it defaults to 15 when theres insufficient length [14:39:38] JohnLewis, can you clarify that a bit? Do you mean that so long as the minimum description length is met, the greater the number of words, the higher the CW AI score will be? 
[14:45:52] No, but it follows that the more words there are, the more accurate a score will be [15:08:32] [02mw-config] 07dmehus opened pull request 03#3809: Adding Business & Finance category - 13https://git.io/JYLlm [15:09:40] miraheze/mw-config - dmehus the build passed. [15:12:26] > No, but it follows that the more words there are, the more accurate a score will be [15:12:26] Well, yeah, but the issue is that many users add extraneous sentences to their description, like why they dislike Fandom so much or that they've been blocked on another wiki, or, worse, that users will be wise to the CW AI scoring and merely add extraneous words to get their wiki request auto-approved [15:18:00] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 7.11, 6.84, 6.07 [15:21:54] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 6.00, 6.77, 6.24 [15:25:35] PROBLEM - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is WARNING: MariaDB replication - both - WARNING - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 197s [15:27:33] RECOVERY - dbbackup2 Check MariaDB Replication c3 on dbbackup2 is OK: MariaDB replication - both - OK - Slave_IO_Running state : Yes, Slave_SQL_Running state : Yes, Seconds_Behind_Master : 0s [15:29:28] More words == more accurate, not higher score [15:36:27] JohnLewis, though yeah at least that it's more words == more accurate in theory, not necessarily in practice, but ack that it != higher score [15:37:13] In practice it is more accurate, as accurate as the data and previous results can possibly allow [15:37:49] the only harm in accuracy is because of humans making decisions that aren’t justifiable to an automatic algorithm [15:41:27] PROBLEM - services4 APT on services4 is CRITICAL: APT CRITICAL: 26 packages available for upgrade (3 critical updates). [15:43:04] PROBLEM - cp3 APT on cp3 is CRITICAL: APT CRITICAL: 2 packages available for upgrade (2 critical updates). [15:43:27] PROBLEM - services3 APT on services3 is CRITICAL: APT CRITICAL: 26 packages available for upgrade (3 critical updates). [15:45:46] [02mw-config] 07Universal-Omega closed pull request 03#3809: Adding Business & Finance category - 13https://git.io/JYLlm [15:45:47] [02miraheze/mw-config] 07Universal-Omega pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JYL2S [15:45:49] [02miraheze/mw-config] 07dmehus 038771a6c - Adding Business & Finance category (#3809) [15:46:10] PROBLEM - cloud4 APT on cloud4 is CRITICAL: APT CRITICAL: 46 packages available for upgrade (2 critical updates). [15:46:51] [02mw-config] 07Universal-Omega closed pull request 03#3807: Add "Memcached" to wgIncidentReportingServices - 13https://git.io/JYkHw [15:46:53] [02miraheze/mw-config] 07Universal-Omega pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JYLaJ [15:46:53] PROBLEM - graylog2 APT on graylog2 is CRITICAL: APT CRITICAL: 24 packages available for upgrade (2 critical updates). [15:46:54] [02miraheze/mw-config] 07Universal-Omega 03f9cf5d0 - Add "Memcached" to wgIncidentReportingServices (#3807) [15:46:56] [02mw-config] 07Universal-Omega deleted branch 03Universal-Omega-patch-2 - 13https://git.io/vbvb3 [15:46:57] [02miraheze/mw-config] 07Universal-Omega deleted branch 03Universal-Omega-patch-2 [15:46:59] miraheze/mw-config - Universal-Omega the build passed.
[15:47:01] > In practice it is more accurate, as accurate as the data and previous results can possibly allow [15:47:01] Okay, fair enough [15:47:01] > the only harm in accuracy is because of humans making decisions that aren't justifiable to an automatic algorithm [15:47:01] Right, but the concern is that until we can have the AI overcome humans creating wiki descriptions that are extraneously long and irrelevant, I wouldn't feel comfortable with the CW AI auto-approving requests below a 0.95 score. [15:47:07] thanks, Universal_Omega [15:47:24] No problem dmehus! [15:47:59] miraheze/mw-config - Universal-Omega the build passed. [15:48:35] RECOVERY - graylog2 APT on graylog2 is OK: APT OK: 22 packages available for upgrade (0 critical updates). [15:48:41] RECOVERY - services3 APT on services3 is OK: APT OK: 23 packages available for upgrade (0 critical updates). [15:48:48] RECOVERY - cloud4 APT on cloud4 is OK: APT OK: 44 packages available for upgrade (0 critical updates). [15:49:05] RECOVERY - cp3 APT on cp3 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [15:49:30] RECOVERY - services4 APT on services4 is OK: APT OK: 23 packages available for upgrade (0 critical updates). [15:49:37] !log install security updates on all hosts [15:49:41] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [15:50:48] RECOVERY - mw11 Check Gluster Clients on mw11 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [15:52:20] PROBLEM - mon2 APT on mon2 is CRITICAL: APT CRITICAL: 9 packages available for upgrade (3 critical updates). [15:53:16] AI is good at yes/no tasks, which is what creating a wiki should be. If people make it a complex task, you’ll never be able to make it an automatic method without creating such a complex model we can’t run on our current platform [15:53:44] You’d be entering neural network technology if it’s no longer a yes/no task [15:54:46] RECOVERY - mon2 APT on mon2 is OK: APT OK: 6 packages available for upgrade (0 critical updates).
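For readers following the scoring discussion: the idea described here is a yes/no text classifier trained on past request descriptions and their approve/decline outcomes, with words as the only feature and a fixed fallback score for very short descriptions. A minimal sketch of that general approach; this is not the actual CreateWiki model or its createPersistentModel.php script, and scikit-learn, the word threshold and the toy data are assumptions:

```python
# Minimal sketch of the yes/no request classifier idea discussed above.
# NOT the actual CreateWiki model; scikit-learn and the toy data are assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy history of (request description, approved?) outcomes.
history = [
    ("a wiki to document the characters and lore of my game project", 1),
    ("a collaborative knowledge base for our robotics club", 1),
    ("a wiki collecting recipes and techniques passed down in my family", 1),
    ("test", 0),
    ("x", 0),
    ("my wiki", 0),
]
texts, labels = zip(*history)

# Words are the only feature; a maintenance job would retrain this on newer
# requests periodically and persist it, e.g. with joblib.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

MIN_WORDS = 5          # hypothetical minimum description length
DEFAULT_SCORE = 0.15   # fallback for too-short descriptions (the "15" above)

def score(description):
    """Approval probability between 0 and 1."""
    if len(description.split()) < MIN_WORDS:
        return DEFAULT_SCORE
    return float(model.predict_proba([description])[0][1])

print(score("x"))  # falls back to 0.15: insufficient length
print(score("a wiki about the history and rolling stock of our local railway"))
```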
[15:55:36] PROBLEM - mw9 Check Gluster Clients on mw9 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [15:56:34] !log upgrade grafana on mon2 [15:56:38] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:01:05] RECOVERY - mw9 Check Gluster Clients on mw9 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [16:05:17] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 7.43, 6.84, 5.97 [16:05:23] RECOVERY - mw11 Puppet on mw11 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [16:05:40] PROBLEM - mw10 Current Load on mw10 is CRITICAL: CRITICAL - load average: 8.30, 7.28, 6.50 [16:07:16] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 4.96, 6.18, 5.83 [16:07:40] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 7.83, 7.29, 6.59 [16:11:40] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 5.76, 6.48, 6.42 [16:12:02] [02mw-config] 07Reception123 commented on pull request 03#3808: make none of the above first createwiki purpose - 13https://git.io/JYL6o [16:19:55] JohnLewis: I assume you mean https://github.com/miraheze/CreateWiki/blob/master/maintenance/createPersistentModel.php [16:19:55] [ CreateWiki/createPersistentModel.php at master · miraheze/CreateWiki · GitHub ] - github.com [16:21:46] Would be nice if the new purpose stuff could be taken into account where it exists [16:23:08] [02miraheze/ssl] 07Reception123 pushed 031 commit to 03master [+1/-0/±1] 13https://git.io/JYLXw [16:23:09] [02miraheze/ssl] 07Reception123 03aeabbc7 - add wiki.aridia.space cert [16:23:36] Yeah that’s the script [16:26:05] JohnLewis: I assume Owen would be best to look at whether the purpose stuff etc would be useful where it exists [16:26:18] [02miraheze/ssl] 07Reception123 pushed 031 commit to 03master [+1/-0/±1] 13https://git.io/JYL1l [16:26:19] [02miraheze/ssl] 07Reception123 03fbb4f72 - add wiki.wilderyogi.eu cert [16:30:27] The model used currently is a one feature model [16:32:29] The purpose field also would likely add zero value [16:33:53] JohnLewis, I thought we were going to revise the current model, though? Though I do agree that the purpose field could use some revisions. Basing it on the free form description field, which it can't interpret other than length, is a challenge [16:33:55] Okay [16:34:15] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 7.56, 6.98, 6.26 [16:34:17] dmehus: we can update the model for descriptions at any time [16:34:59] RhinosF1, yeah, that'd be good [16:35:34] ‘can’t interpret other than length’ what do you mean? [16:36:15] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 4.99, 6.24, 6.08 [16:36:26] JohnLewis, well how does it interpret the description field? [16:36:36] As words [16:37:41] JohnLewis, okay, well, either way, it needs some improvement, so it can eliminate extraneous information (e.g., information on why the user dislikes Fandom and is migrating to Miraheze) [16:38:04] That’s impossible using the models we’re able to run [16:38:15] oh :( [16:38:44] If every user complaining about Fandom gets their wikis accepted, complaining about Fandom increases the chances of future requests being accepted [16:42:32] JohnLewis, yeah...
that's part of the problem, as subsequent revisions and added information do usually cause the wiki to be approved [16:43:44] It doesn’t matter how many versions it takes [16:48:26] It'll be based on the status as of when the model was created [16:48:35] As we don't keep a history anyway [17:00:06] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 7.75, 7.34, 6.85 [17:02:03] PROBLEM - mw10 Current Load on mw10 is CRITICAL: CRITICAL - load average: 8.52, 7.61, 6.99 [17:07:56] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 5.69, 7.36, 7.20 [17:11:42] PROBLEM - cp10 Current Load on cp10 is CRITICAL: CRITICAL - load average: 1.24, 4.18, 2.55 [17:13:42] RECOVERY - cp10 Current Load on cp10 is OK: OK - load average: 0.35, 2.89, 2.27 [17:15:49] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 6.29, 6.35, 6.70 [17:20:51] [02puppet] 07Universal-Omega opened pull request 03#1722: Install ghostscript on all mediawiki servers - 13https://git.io/JYLb7 [17:22:24] [02puppet] 07Universal-Omega synchronize pull request 03#1722: Install ghostscript on all mediawiki servers - 13https://git.io/JYLb7 [17:26:16] [02puppet] 07Reception123 closed pull request 03#1722: Install ghostscript on all mediawiki servers - 13https://git.io/JYLb7 [17:26:18] [02miraheze/puppet] 07Reception123 pushed 031 commit to 03master [+1/-0/±1] 13https://git.io/JYLNd [17:26:19] [02miraheze/puppet] 07Universal-Omega 038b4c637 - Install ghostscript on all mediawiki servers (#1722) [17:31:46] [02miraheze/mw-config] 07Universal-Omega pushed 031 commit to 03Universal-Omega-patch-1 [+0/-0/±1] 13https://git.io/JYLxO [17:31:48] [02miraheze/mw-config] 07Universal-Omega 030eb3d6c - Use firejail for PdfHandler [17:31:49] [02mw-config] 07Universal-Omega created branch 03Universal-Omega-patch-1 - 13https://git.io/vbvb3 [17:31:54] [02mw-config] 07Universal-Omega opened pull request 03#3810: Use firejail for PdfHandler - 13https://git.io/JYLxn [17:33:04] miraheze/mw-config - Universal-Omega the build passed. [17:55:30] The cvt-feed isn't relaying for some reason. [18:03:30] IRC RC feeds are backlogged due to the bluepageswiki import onto Gyannipedia wiki [18:04:00] Okay. [18:05:07] !log restart glusterd on gluster[34] [18:05:15] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:07:20] [02miraheze/dns] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JYtfJ [18:07:22] [02miraheze/dns] 07paladox 03d80fc4d - Depool cp10/11 [18:07:40] does this chat work? [18:07:52] yes [18:13:25] !log reboot cp10 and cp11 [18:13:35] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:15:04] PROBLEM - cp12 Current Load on cp12 is WARNING: WARNING - load average: 1.69, 1.71, 1.28 [18:23:05] PROBLEM - cp12 Current Load on cp12 is CRITICAL: CRITICAL - load average: 2.52, 2.17, 1.68 [18:23:36] SPF|Cloud: ye, things got slow because of net splits [18:29:05] PROBLEM - cp12 Current Load on cp12 is WARNING: WARNING - load average: 1.72, 1.94, 1.75 [18:35:06] PROBLEM - cp12 Current Load on cp12 is CRITICAL: CRITICAL - load average: 2.16, 2.23, 1.93 [18:36:59] paladox: is Grafana broken, or is it just me? Asking because "!log upgrade grafana on mon2"/ https://grafana.miraheze.org/explore has been loading for 20 minutes for me. [18:37:00] [ Grafana ] - grafana.miraheze.org [18:37:05] works for me [18:38:03] paladox: Hmm. Guess it's just me then. Thanks!
[18:38:41] Mine is just stuck saying "Loading Grafana" [18:39:06] PROBLEM - cp12 Current Load on cp12 is CRITICAL: CRITICAL - load average: 2.35, 2.16, 1.96 [18:40:22] PROBLEM - mw10 Current Load on mw10 is CRITICAL: CRITICAL - load average: 8.65, 7.01, 6.02 [18:41:07] PROBLEM - cp12 Current Load on cp12 is WARNING: WARNING - load average: 1.58, 1.91, 1.89 [18:42:19] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 6.82, 6.95, 6.12 [18:43:07] PROBLEM - cp12 Current Load on cp12 is CRITICAL: CRITICAL - load average: 2.66, 2.08, 1.94 [18:44:17] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 3.31, 5.60, 5.73 [18:45:31] [02miraheze/dns] 07paladox pushed 032 commits to 03master [+0/-0/±2] 13https://git.io/JYtIA [18:45:32] [02miraheze/dns] 07paladox 030fec253 - Revert "Depool cp10/11" [18:45:34] [02miraheze/dns] 07paladox 03fc3b92a - Depool ca [18:46:07] [02miraheze/dns] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JYtLv [18:46:08] [02miraheze/dns] 07paladox 03c4bf03f - Revert "Depool ca" [18:46:09] oh phabricator is down [18:46:52] oh, it's not anymore, lol, nvm [18:47:00] We're rebooting a few things [18:47:07] [02miraheze/dns] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JYtLY [18:47:07] PROBLEM - cp12 Current Load on cp12 is WARNING: WARNING - load average: 1.06, 1.68, 1.82 [18:47:08] [02miraheze/dns] 07paladox 030f480b1 - depool ca [18:49:03] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 7.91, 6.35, 5.57 [18:49:07] RECOVERY - cp12 Current Load on cp12 is OK: OK - load average: 0.38, 1.20, 1.62 [18:50:18] PROBLEM - mw10 Current Load on mw10 is CRITICAL: CRITICAL - load average: 9.62, 6.93, 6.17 [18:50:58] PROBLEM - mw9 Current Load on mw9 is CRITICAL: CRITICAL - load average: 8.26, 6.75, 5.79 [18:51:00] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 7.32, 6.79, 6.08 [18:52:16] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 6.17, 6.75, 6.21 [18:52:53] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 7.55, 6.93, 5.96 [18:52:59] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 5.09, 6.25, 5.97 [18:56:11] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 7.02, 6.97, 6.21 [18:56:46] PROBLEM - mw9 Current Load on mw9 is CRITICAL: CRITICAL - load average: 8.66, 7.33, 6.31 [18:57:04] !log sudo -u www-data php /srv/mediawiki/w/extensions/ManageWiki/maintenance/populateGroupPermissionsWithDefaults.php --wiki lshwiki --overwrite [18:57:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:57:55] !log cp12: apt-get dist-upgrade && reboot [18:57:59] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:58:06] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 5.33, 6.38, 6.09 [18:58:16] PROBLEM - mw10 Current Load on mw10 is CRITICAL: CRITICAL - load average: 8.07, 7.27, 6.57 [18:58:41] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 6.96, 7.06, 6.33 [19:00:16] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 6.71, 7.11, 6.60 [19:00:38] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 6.10, 6.79, 6.32 [19:01:29] [02miraheze/dns] 07paladox pushed 032 commits to 03master [+0/-0/±2] 13https://git.io/JYtqY [19:01:31] [02miraheze/dns] 07paladox 03f00b664 - Revert "depool ca" [19:01:32] [02miraheze/dns] 07paladox 03cbcef5b - Depool sg [19:02:16] RECOVERY - mw10 Current Load on mw10 is OK: 
OK - load average: 5.66, 6.53, 6.44 [19:03:53] PROBLEM - mw8 Current Load on mw8 is CRITICAL: CRITICAL - load average: 8.76, 7.76, 6.73 [19:04:32] PROBLEM - mw9 Current Load on mw9 is CRITICAL: CRITICAL - load average: 9.25, 8.45, 7.08 [19:05:48] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 5.99, 7.10, 6.62 [19:06:28] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 5.77, 7.34, 6.84 [19:06:47] !log reboot cp3 [19:06:54] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:07:50] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 4.80, 6.30, 6.38 [19:08:40] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 5.70, 6.56, 6.60 [19:09:11] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 7.54, 6.95, 6.34 [19:09:30] PROBLEM - cp3 PowerDNS Recursor on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:09:52] PROBLEM - ping6 on cp3 is CRITICAL: PING CRITICAL - Packet loss = 100% [19:09:58] PROBLEM - cp3 Stunnel Http for mon2 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:10:03] PROBLEM - cp3 Current Load on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:10:07] PROBLEM - cp3 APT on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:10:16] cp3 doesn't come back apparently [19:10:18] PROBLEM - cp3 HTTP 4xx/5xx ERROR Rate on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:10:28] PROBLEM - cp3 HTTPS on cp3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:10:30] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [19:10:32] PROBLEM - cp3 Puppet on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:10:43] PROBLEM - dreamsit.com.br - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:10:51] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 128.199.139.216/cpweb, 2400:6180:0:d0::403:f001/cpweb [19:10:52] PROBLEM - cp3 Stunnel Http for mw11 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:11:01] PROBLEM - cp3 NTP time on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:11:09] PROBLEM - cp3 Stunnel Http for mw10 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:11:11] PROBLEM - cp3 Stunnel Http for mw9 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:11:23] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 6.33, 6.73, 6.35 [19:11:31] PROBLEM - cp3 Stunnel Http for test3 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. [19:11:38] PROBLEM - cp3 Stunnel Http for mw8 on cp3 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 10 seconds. 
[19:11:51] PROBLEM - Host cp3 is DOWN: PING CRITICAL - Packet loss = 100% [19:14:08] afk [19:15:01] PROBLEM - mw10 Current Load on mw10 is CRITICAL: CRITICAL - load average: 9.19, 7.33, 6.74 [19:15:05] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 7.74, 6.70, 6.55 [19:17:03] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 6.67, 7.17, 6.77 [19:17:05] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 6.41, 6.67, 6.56 [19:19:36] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 6.35, 7.19, 6.75 [19:21:00] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 5.58, 6.40, 6.56 [19:21:39] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 4.97, 6.47, 6.54 [19:25:48] !log depool and repool mw 8,9,10,11 and also dist-upgrade [19:25:55] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:27:10] PROBLEM - mw10 Current Load on mw10 is WARNING: WARNING - load average: 7.32, 7.09, 6.78 [19:29:26] PROBLEM - mw10 Current Load on mw10 is CRITICAL: connect to address 51.195.236.254 port 5666: Connection refusedconnect to host 51.195.236.254 port 5666: Connection refused [19:29:53] PROBLEM - cp12 Varnish Backends on cp12 is CRITICAL: 1 backends are down. mw10 [19:30:04] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 7.42, 6.80, 6.59 [19:31:16] PROBLEM - cp11 Varnish Backends on cp11 is CRITICAL: 1 backends are down. mw11 [19:31:17] PROBLEM - mw9 Current Load on mw9 is CRITICAL: CRITICAL - load average: 10.45, 5.43, 2.20 [19:31:26] RECOVERY - mw10 Current Load on mw10 is OK: OK - load average: 3.50, 1.46, 0.54 [19:31:57] PROBLEM - mw8 Current Load on mw8 is CRITICAL: CRITICAL - load average: 8.88, 5.17, 2.18 [19:32:09] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 1.10, 0.30, 0.10 [19:33:00] !log jobrunner[34]: reboot [19:33:10] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:33:20] RECOVERY - cp11 Varnish Backends on cp11 is OK: All 7 backends are healthy [19:33:39] !log gluster[34]: reboot [19:34:15] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 4.62, 5.05, 2.58 [19:34:16] !log gluster[34]: dist-upgrade [19:34:17] RECOVERY - cp12 Varnish Backends on cp12 is OK: All 7 backends are healthy [19:34:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:34:54] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:35:48] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 5.40, 5.30, 2.99 [19:37:06] !log phab2: dist-upgrade & reboot [19:37:13] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:37:44] !log ldap2: reboot [19:37:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:38:16] !log mail2: dist-upgrade & reboot [19:38:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:40:02] !log services[34]: dist-upgrade & reboot [19:40:12] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:44:12] !log graylog2: dist-upgrade & reboot [19:44:17] !log puppet2: dist-upgrade & reboot [19:44:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:44:35] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:46:17] PROBLEM - cp11 Stunnel Http for test3 on cp11 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Connection refused - 517 bytes in 0.005 second response time [19:46:37] PROBLEM - test3 
MediaWiki Rendering on test3 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Connection refused - 517 bytes in 0.006 second response time [19:47:24] PROBLEM - test3 Check Gluster Clients on test3 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [19:47:25] PROBLEM - cp12 Stunnel Http for test3 on cp12 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Connection refused - 517 bytes in 0.238 second response time [19:47:49] PROBLEM - cp10 Stunnel Http for test3 on cp10 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Connection refused - 517 bytes in 0.009 second response time [19:48:01] !log restart syslog-ng on cloud*, mon2 and ns* [19:48:10] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:48:13] !log also restarted syslog-ng on db* [19:48:20] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:48:27] PROBLEM - test3 Puppet on test3 is CRITICAL: CRITICAL: Puppet has 336 failures. Last run 2 minutes ago with 336 failures. Failed resources (up to 3 shown) [19:48:49] PROBLEM - graylog2 HTTPS on graylog2 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 311 bytes in 0.012 second response time [19:49:25] RECOVERY - cp12 Stunnel Http for test3 on cp12 is OK: HTTP OK: HTTP/1.1 200 OK - 15216 bytes in 0.355 second response time [19:49:48] RECOVERY - cp10 Stunnel Http for test3 on cp10 is OK: HTTP OK: HTTP/1.1 200 OK - 15202 bytes in 0.008 second response time [19:50:17] RECOVERY - cp11 Stunnel Http for test3 on cp11 is OK: HTTP OK: HTTP/1.1 200 OK - 15210 bytes in 0.006 second response time [19:50:29] !log restart ircrcbot, ircecho and logbot on mon2 [19:50:32] RECOVERY - test3 Puppet on test3 is OK: OK: Puppet is currently enabled, last run 45 seconds ago with 0 failures [19:50:36] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:50:45] RECOVERY - test3 MediaWiki Rendering on test3 is OK: HTTP OK: HTTP/1.1 200 OK - 20742 bytes in 8.524 second response time [19:50:55] RECOVERY - graylog2 HTTPS on graylog2 is OK: HTTP OK: HTTP/1.1 200 OK - 1670 bytes in 0.020 second response time [19:51:22] !log reboot bacula2 [19:51:29] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:51:40] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 7.39, 7.55, 5.64 [19:53:37] PROBLEM - graylog2 Puppet on graylog2 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[elasticsearch] [19:53:43] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 4.70, 6.56, 5.52 [19:56:24] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JYtnU [19:56:26] [02miraheze/puppet] 07paladox 034993886 - graylog: upgrade elasticsearch to 7.12.0 [19:57:45] RECOVERY - graylog2 Puppet on graylog2 is OK: OK: Puppet is currently enabled, last run 15 seconds ago with 0 failures [20:12:15] [02mw-config] 07Universal-Omega closed pull request 03#3810: Use firejail for PdfHandler - 13https://git.io/JYLxn [20:12:17] [02miraheze/mw-config] 07Universal-Omega pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JYtCU [20:12:18] [02miraheze/mw-config] 07Universal-Omega 033ce748a - Use firejail for PdfHandler (#3810) [20:12:20] [02miraheze/mw-config] 07Universal-Omega deleted branch 03Universal-Omega-patch-1 [20:12:21] [02mw-config] 07Universal-Omega deleted branch 03Universal-Omega-patch-1 - 13https://git.io/vbvb3 [20:13:38] miraheze/mw-config - Universal-Omega the build passed. 
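The maintenance window logged above follows the same cycle per backend: depool it, apply the dist-upgrade, reboot, wait for it to come back, then repool. An illustrative sketch of that loop; the ssh invocations and the pool/depool helpers are assumptions, not the actual Miraheze tooling:

```python
# Illustrative depool -> dist-upgrade -> reboot -> repool loop, per the !log
# entries above. Hostnames are from the log; the helper commands are assumptions.
import subprocess
import time

MW_BACKENDS = ["mw8", "mw9", "mw10", "mw11"]

def ssh(host, command, check=True):
    return subprocess.run(["ssh", host, command], check=check)

def wait_for_ssh(host, timeout=600):
    """Poll until the host answers over SSH again after its reboot."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if ssh(host, "true", check=False).returncode == 0:
            return
        time.sleep(10)
    raise TimeoutError(f"{host} did not come back within {timeout}s")

for host in MW_BACKENDS:
    ssh(host, "sudo depool")                  # hypothetical helper to drop the backend
    ssh(host, "sudo apt-get -y dist-upgrade")
    ssh(host, "sudo reboot", check=False)     # the connection drops, so don't check
    time.sleep(30)                            # give the host time to go down
    wait_for_ssh(host)
    ssh(host, "sudo pool")                    # hypothetical helper to re-add it
```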
[20:16:09] RECOVERY - wiki.mlpwiki.net - reverse DNS on sslhost is OK: rDNS OK - wiki.mlpwiki.net reverse DNS resolves to cp10.miraheze.org [21:13:23] PROBLEM - celeste.ink - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'celeste.ink' expires in 15 day(s) (Sat 10 Apr 2021 21:06:41 GMT +0000). [21:20:39] [02miraheze/ssl] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JYtE1 [21:20:40] [02miraheze/ssl] 07MirahezeSSLBot 033556b32 - Bot: Update SSL cert for celeste.ink [21:27:09] PROBLEM - test3 Puppet on test3 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 24 minutes ago with 0 failures [21:40:35] RECOVERY - celeste.ink - LetsEncrypt on sslhost is OK: OK - Certificate 'celeste.ink' will expire on Wed 23 Jun 2021 20:20:33 GMT +0000. [21:44:41] PROBLEM - cp12 Stunnel Http for test3 on cp12 is CRITICAL: HTTP CRITICAL - No data received from host [21:44:48] PROBLEM - cp11 Stunnel Http for test3 on cp11 is CRITICAL: HTTP CRITICAL - No data received from host [21:45:50] PROBLEM - test3 MediaWiki Rendering on test3 is CRITICAL: connect to address 51.195.236.247 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [21:45:56] PROBLEM - cp10 Stunnel Http for test3 on cp10 is CRITICAL: HTTP CRITICAL - No data received from host [21:45:59] PROBLEM - test3 HTTPS on test3 is CRITICAL: connect to address 51.195.236.247 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [22:05:40] JohnLewis: did we stop using tracking tasks? [22:06:02] Goals for the sole purpose of tracking, yes. [22:06:06] *tasks [22:06:21] plus there is already a task open for ATS [22:06:31] I was in a conversation, would like to know your thoughts on managing tasks [22:06:52] right, then that task should be the main task [22:08:39] https://phabricator.miraheze.org/T5877#139273 [22:08:40] [ ⚓ T5877 Revise MariaDB backup strategy ] - phabricator.miraheze.org [22:11:03] also, about 580 GB space in use per gluster node (total close to 1.2 TB?), but bacula2 only has 980 GB. do you want more space on bacula2? [22:11:21] SPF|Cloud also regarding the goals [22:11:34] JohnLewis wasn't brought in for that. [22:12:07] ATS is a proposal for a goal of course [22:12:20] actual goal planning requires more resources than that [22:13:41] I'm going to re-use the main ATS task and write a check list of things to do. [22:13:55] Also regarding goals. Wouldn't JohnLewis set it as he's a EM? [22:14:32] yes, he is involved, especially as the goal owner [22:15:55] If bacula is being used as the sole backups server, then more space is materially required, as we’re backing up way more than we have space available [22:16:44] 350 GB extra costs +$5/mo ex VAT, I have no objections [22:17:14] but whether to upgrade or not is up to you [22:17:26] Whether 350GB is enough, you’ll have to look at the bacula config file which tells you how much space is needed [22:18:42] SPF|Cloud that means we would get rid of the dbbackup* servers, right? 
[22:18:43] if the database backups are being moved, 350 GB works [22:19:28] perhaps, it's a tricky question, because T5877#139273 proposes a different way of creating backups [22:21:18] using replicas for backups is a best practice among mysql users, but requires faster storage than the storage we have now at RN [22:21:32] I feel we need clarity over what the system will be before additional resources can be approved, as otherwise we’re guessing at resource estimates [22:21:44] SPF|Cloud updated https://phabricator.miraheze.org/T7037 [22:21:45] [ ⚓ T7037 [New] Server Resource Request for ats ] - phabricator.miraheze.org [22:22:42] creating logical files from masters only requires a server that can store compressed SQL files, these files can end up in bacula, but it is not mandatory [22:24:31] JohnLewis or RhinosF1, can we re-voice MirahezeRC in #miraheze-feed? Not sure if it was restarted or not, or if it just needs to be revoiced when it rejoined the channel following the Freenode server reboot [22:27:16] a logical dump of c3 (/srv/mariadb is 189 GB on the master) is 23 GB [22:31:49] JohnLewis: 1350 GB is enough to store gluster backups and our other misc backups, but without the database backups - regardless of the database backups, the extra space is necessary for gluster backups [22:32:38] the next plan after 1350 GB is 2650 GB, that's way too much for database backups, so I'd like to use a separate server for database backups [22:33:16] Okay, that’s fine for me [22:34:39] whether to store logical database backups in a bacula3 or as plain sql.gz files on a separate server requires discussion with you (as the bacula expert) [22:34:48] [02miraheze/dns] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JYto8 [22:34:49] [02miraheze/dns] 07paladox 03e0a4d5c - Add test4 to dns [22:35:00] I'll create a task for the upgrade [22:39:27] https://phabricator.miraheze.org/T7038 [22:39:28] [ ⚓ T7038 Existing Server Resource Request for bacula2 ] - phabricator.miraheze.org [22:42:47] deposited $25 in the RamNode account [22:44:11] Warning: instance resize will cause downtime. The instance will shutdown and the disk image will be copied to a new disk. This will take a while, depending on the disk size. [22:45:15] so as not to leave you with an offline server without notification prior to heading off for today, is this OK? [22:45:24] PROBLEM - bacula2 Bacula Databases db13 on bacula2 is WARNING: WARNING: Full, 238729 files, 84.61GB, 2021-03-10 22:42:00 (2.1 weeks ago) [22:46:08] That’s fine [22:47:02] !log downtime bacula2 for the coming five hours, upgrade in progress: https://phabricator.miraheze.org/T7038 [22:47:03] [ ⚓ T7038 Existing Server Resource Request for bacula2 ] - phabricator.miraheze.org [22:47:07] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:48:01] "Unable to resize instance bacula2.miraheze.org" [22:50:48] I'll open a ticket with RN [22:53:36] Is MirahezeRC bot code public? [22:58:56] !log remove downtime from bacula2, upgrade issues (ticket opened with RamNode) [22:59:02] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:59:03] night [23:01:12] Sario yes [23:02:03] paladox: do you know offhand if the bot uses sasl? 
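The bacula2 sizing discussion above comes down to simple arithmetic; a small back-of-the-envelope check using only figures quoted in the conversation (per-node gluster usage, the current and next RamNode plan sizes, and the c3 logical dump):

```python
# Back-of-the-envelope capacity check; every figure is quoted in the chat above.
gluster_per_node_gb = 580                    # "about 580 GB space in use per gluster node"
gluster_total_gb = 2 * gluster_per_node_gb   # gluster3 + gluster4, "close to 1.2 TB"
bacula2_now_gb = 980
bacula2_next_gb = 1350                       # next RamNode plan mentioned

c3_on_disk_gb = 189                          # /srv/mariadb on the c3 master
c3_dump_gb = 23                              # compressed logical dump of c3

print(f"gluster data vs current bacula2 volume: {gluster_total_gb} GB vs {bacula2_now_gb} GB")
print(f"headroom on the 1350 GB plan, excluding DB backups: {bacula2_next_gb - gluster_total_gb} GB")
print(f"c3 logical dump is {c3_dump_gb / c3_on_disk_gb:.0%} of its on-disk size")
```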
[23:03:03] https://github.com/miraheze/puppet/blob/master/modules/irc/templates/ircrcbot.py#L50 [23:03:04] [ puppet/ircrcbot.py at master · miraheze/puppet · GitHub ] - github.com [23:03:13] Doesn't really tell me there what type it uses [23:07:51] RECOVERY - test3 HTTPS on test3 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 669 bytes in 0.008 second response time [23:08:11] RECOVERY - cp11 Stunnel Http for test3 on cp11 is OK: HTTP OK: HTTP/1.1 200 OK - 15281 bytes in 0.010 second response time [23:08:17] RECOVERY - cp12 Stunnel Http for test3 on cp12 is OK: HTTP OK: HTTP/1.1 200 OK - 15289 bytes in 0.324 second response time [23:08:19] RECOVERY - cp10 Stunnel Http for test3 on cp10 is OK: HTTP OK: HTTP/1.1 200 OK - 15281 bytes in 0.006 second response time [23:08:36] RECOVERY - test3 MediaWiki Rendering on test3 is OK: HTTP OK: HTTP/1.1 200 OK - 20820 bytes in 0.186 second response time [23:10:02] RECOVERY - test3 Puppet on test3 is OK: OK: Puppet is currently enabled, last run 1 second ago with 0 failures [23:42:03] RECOVERY - test3 Check Gluster Clients on test3 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [23:44:29] [02miraheze/ManageWiki] 07Universal-Omega pushed 031 commit to 03Universal-Omega-patch-1 [+0/-0/±1] 13https://git.io/JYtMD [23:44:31] [02miraheze/ManageWiki] 07Universal-Omega 037dc0e50 - Fix for Form field disable attribute on ManageWikiNamespaces [23:44:32] [02ManageWiki] 07Universal-Omega created branch 03Universal-Omega-patch-1 - 13https://git.io/vpSns [23:44:34] [02ManageWiki] 07Universal-Omega opened pull request 03#260: Fix for Form field disable attribute on ManageWikiNamespaces - 13https://git.io/JYtMy [23:44:42] [02ManageWiki] 07Universal-Omega edited pull request 03#260: Fix for form field disable attribute on ManageWikiNamespaces - 13https://git.io/JYtMy [23:45:56] miraheze/ManageWiki - Universal-Omega the build passed. [23:46:04] [02miraheze/ManageWiki] 07Universal-Omega pushed 031 commit to 03Universal-Omega-patch-1 [+0/-0/±1] 13https://git.io/JYtMh [23:46:06] [02miraheze/ManageWiki] 07Universal-Omega 037f90be0 - Update ManageWikiTypes.php [23:46:07] [02ManageWiki] 07Universal-Omega synchronize pull request 03#260: Fix for form field disable attribute on ManageWikiNamespaces - 13https://git.io/JYtMy [23:46:32] @Lake, dmehus: ^ [23:46:51] Universal_Omega, thanks [23:46:53] alright, good job :D [23:46:59] want me to close the task as resolved for you then? [23:47:11] miraheze/ManageWiki - Universal-Omega the build passed. [23:47:28] dmehus: no problem, but what task? [23:48:53] Universal_Omega, T7040 [23:49:44] [02ManageWiki] 07Universal-Omega closed pull request 03#260: Fix for form field disable attribute on ManageWikiNamespaces - 13https://git.io/JYtMy [23:49:49] [02miraheze/ManageWiki] 07Universal-Omega pushed 031 commit to 03master [+0/-0/±2] 13https://github.com/miraheze/ManageWiki/compare/62f92a7cc70e...7b63cef6f45d [23:49:49] [ Comparing 62f92a7cc70e...7b63cef6f45d · miraheze/ManageWiki · GitHub ] - github.com [23:49:50] [02miraheze/ManageWiki] 07Universal-Omega 037b63cef - Fix for form field disable attribute on ManageWikiNamespaces (#260) [23:49:53] [02ManageWiki] 07Universal-Omega deleted branch 03Universal-Omega-patch-1 - 13https://github.com/miraheze/ManageWiki [23:50:01] 👍 [23:50:08] dmehus: thanks! [23:50:16] @Lake thanks for the report. [23:50:36] no problem [23:50:39] Universal_Omega, np [23:50:46] miraheze/ManageWiki - Universal-Omega the build passed. 
[23:50:47] thanks for the quick fix :) [23:53:43] T7040 [23:53:45] https://phabricator.miraheze.org/T7040 - Investigate issue with Special:ManageWiki/namespaces not being greyed out for users without the managewiki user right, authored by Dmehus, assigned to Universal_Omega, Priority: Normal, Status: Resolved [23:54:46] [02miraheze/ManageWiki] 07Universal-Omega pushed 031 commit to 03master [+0/-0/±1] 13https://git.io/JYtDM [23:54:47] [02miraheze/ManageWiki] 07Universal-Omega 0391e062d - ManageWikiTypes: remove unused disabled variable from MWN function [23:55:42] miraheze/ManageWiki - Universal-Omega the build passed.
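Returning to the 23:02 SASL question: the linked ircrcbot.py source doesn't make the authentication method obvious, so for reference this is roughly what SASL PLAIN looks like at the IRC protocol level. Purely illustrative, not ircrcbot's implementation; host, nick and password are placeholders, and a real client must wait for the server's CAP ACK, "AUTHENTICATE +" and 903 replies between steps (omitted here for brevity):

```python
# Illustrative sketch of IRC SASL PLAIN authentication; not ircrcbot's code.
import base64
import socket

HOST, PORT = "irc.example.org", 6667                            # placeholder network
NICK, ACCOUNT, PASSWORD = "ExampleBot", "ExampleBot", "secret"  # placeholders

sock = socket.create_connection((HOST, PORT))

def send(line):
    sock.sendall((line + "\r\n").encode())

send("CAP REQ :sasl")            # ask the server for the SASL capability
send(f"NICK {NICK}")
send(f"USER {NICK} 0 * :{NICK}")
send("AUTHENTICATE PLAIN")       # after the server ACKs the capability
token = base64.b64encode(f"\0{ACCOUNT}\0{PASSWORD}".encode()).decode()
send(f"AUTHENTICATE {token}")    # authzid NUL authcid NUL password, base64-encoded
send("CAP END")                  # finish capability negotiation
```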