[00:01:48] 10Operations, 10Traffic, 10Wikidata, 10wikiba.se, and 2 others: [Task] move wikiba.se webhosting to wikimedia misc-cluster - https://phabricator.wikimedia.org/T99531 (10Dzahn) >>! In T99531#4511979, @BBlack wrote: > 1) Create a wikiba.se microsite in WMF infra (already done by @Dzahn I believe, sourcing fr...
[00:20:52] (03CR) 10Legoktm: [C: 031] Use core default for Parser preprocessor class [mediawiki-config] - 10https://gerrit.wikimedia.org/r/460202 (owner: 10C. Scott Ananian)
[01:16:51] heads up - 10 min old Phab account just created this nonsense ticket: https://phabricator.wikimedia.org/T205174
[01:17:09] not sure if that's another front in the spam onslaught or an isolated incident
[01:17:50] hi foks! Are you involved in anti-spam on phab too, or just IRC?
[01:18:13] Just IRC, I'm afraid
[01:18:21] OK, cool
[01:19:09] I saw a new phab account create a nonsense ticket and was afraid it was part of a wave of spam
[01:20:16] but a new users query shows mostly legit accounts, and no big surge
[01:20:53] guess you've got your hands full with just the IRC stuff
[01:20:54] a
[01:21:01] ah*
[01:21:01] yeah
[01:21:21] I gotta run, actually
[01:21:22] o/
[01:21:25] bye!
[01:22:51] twentyafterfour: ^^
[02:16:42] disabled the user
[02:18:38] (03PS1) 10MaxSem: Introduce new ArticleCreationWrokflow permissions [mediawiki-config] - 10https://gerrit.wikimedia.org/r/462040 (https://phabricator.wikimedia.org/T204016)
[02:18:41] (03PS1) 10MaxSem: Remove old ArticleCreationWorkflows config [mediawiki-config] - 10https://gerrit.wikimedia.org/r/462041 (https://phabricator.wikimedia.org/T204016)
[02:26:22] 10Operations, 10Community-Tech, 10MediaWiki-Parser, 10Traffic: Show SVGs in wiki language if available - https://phabricator.wikimedia.org/T205040 (10MaxSem)
[02:41:55] (03PS1) 10HaeB: Increase sampling ratio for ReadingDepth [mediawiki-config] - 10https://gerrit.wikimedia.org/r/462042 (https://phabricator.wikimedia.org/T205176)
[03:35:54] PROBLEM - MariaDB Slave Lag: s1 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 865.55 seconds
[03:59:43] (03PS1) 10Jayprakash12345: Enable Extension:NewUserMessage on kn.wikisource [mediawiki-config] - 10https://gerrit.wikimedia.org/r/462045
[04:01:05] PROBLEM - Device not healthy -SMART- on db1069 is CRITICAL: cluster=mysql device=megaraid,7 instance=db1069:9100 job=node site=eqiad https://grafana.wikimedia.org/dashboard/db/host-overview?var-server=db1069&var-datasource=eqiad%2520prometheus%252Fops
[04:05:34] RECOVERY - MariaDB Slave Lag: s1 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 274.53 seconds
[04:11:14] (03PS2) 10Jayprakash12345: Enable Extension:NewUserMessage on kn.wikisource [mediawiki-config] - 10https://gerrit.wikimedia.org/r/462045 (https://phabricator.wikimedia.org/T204405)
[06:28:55] PROBLEM - puppet last run on mw1305 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/etc/profile.d/bash_autologout.sh]
[06:32:54] PROBLEM - puppet last run on phab1002 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 7 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/usr/local/bin/puppet-enabled]
[06:58:14] RECOVERY - puppet last run on phab1002 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[06:59:24] RECOVERY - puppet last run on mw1305 is OK: OK: Puppet is currently enabled, last run 4 minutes ago with 0 failures
[07:38:15] PROBLEM - Varnish traffic drop between 30min ago and now at eqiad on einsteinium is CRITICAL: 57.65 le 60 https://grafana.wikimedia.org/dashboard/db/varnish-http-requests?panelId=6&fullscreen&orgId=1
[07:41:34] RECOVERY - Varnish traffic drop between 30min ago and now at eqiad on einsteinium is OK: (C)60 le (W)70 le 83.56 https://grafana.wikimedia.org/dashboard/db/varnish-http-requests?panelId=6&fullscreen&orgId=1
[07:57:35] PROBLEM - Filesystem available is greater than filesystem size on ms-be1041 is CRITICAL: cluster=swift device=/dev/sdn1 fstype=xfs instance=ms-be1041:9100 job=node mountpoint=/srv/swift-storage/sdn1 site=eqiad https://grafana.wikimedia.org/dashboard/db/host-overview?orgId=1&var-server=ms-be1041&var-datasource=eqiad%2520prometheus%252Fops
[09:25:26] PROBLEM - High load average on labstore1007 is CRITICAL: CRITICAL: 80.00% of data above the critical threshold [20.0] https://grafana.wikimedia.org/dashboard/db/labs-monitoring
[09:31:56] RECOVERY - High load average on labstore1007 is OK: OK: Less than 50.00% above the threshold [12.0] https://grafana.wikimedia.org/dashboard/db/labs-monitoring
[10:08:34] PROBLEM - kubelet operational latencies on kubernetes2003 is CRITICAL: instance=kubernetes2003.codfw.wmnet operation_type={create_container,start_container} https://grafana.wikimedia.org/dashboard/db/kubernetes-kubelets?orgId=1
[10:09:35] RECOVERY - kubelet operational latencies on kubernetes2003 is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/kubernetes-kubelets?orgId=1
[12:16:00] 10Operations, 10Wiki-Loves-Love, 10Wikimedia-Mailing-lists: Create a mailling list for Wiki Loves Love - https://phabricator.wikimedia.org/T203792 (10Psychoslave) Hi @Aklapper, is there something more I should do to make this ticket go forward?
[12:58:55] PROBLEM - Device not healthy -SMART- on helium is CRITICAL: cluster=misc device=megaraid,10 instance=helium:9100 job=node site=eqiad https://grafana.wikimedia.org/dashboard/db/host-overview?var-server=helium&var-datasource=eqiad%2520prometheus%252Fops
[13:05:04] PROBLEM - HTTP availability for Nginx -SSL terminators- at ulsfo on einsteinium is CRITICAL: cluster=cache_text site=ulsfo https://grafana.wikimedia.org/dashboard/db/frontend-traffic?panelId=4&fullscreen&refresh=1m&orgId=1
[13:05:04] PROBLEM - MediaWiki exceptions and fatals per minute on graphite1001 is CRITICAL: CRITICAL: 90.00% of data above the critical threshold [50.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=2&fullscreen
[13:14:54] RECOVERY - HTTP availability for Nginx -SSL terminators- at ulsfo on einsteinium is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/frontend-traffic?panelId=4&fullscreen&refresh=1m&orgId=1
[13:15:55] RECOVERY - MediaWiki exceptions and fatals per minute on graphite1001 is OK: OK: Less than 70.00% above the threshold [25.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=2&fullscreen
[13:26:46] RECOVERY - Memory correctable errors -EDAC- on wtp2011 is OK: (C)4 ge (W)2 ge 1 https://grafana.wikimedia.org/dashboard/db/host-overview?orgId=1&var-server=wtp2011&var-datasource=codfw%2520prometheus%252Fops
[13:39:45] PROBLEM - HTTP availability for Nginx -SSL terminators- at ulsfo on einsteinium is CRITICAL: cluster=cache_text site=ulsfo https://grafana.wikimedia.org/dashboard/db/frontend-traffic?panelId=4&fullscreen&refresh=1m&orgId=1
[13:39:55] PROBLEM - Varnish traffic drop between 30min ago and now at eqiad on einsteinium is CRITICAL: 58.75 le 60 https://grafana.wikimedia.org/dashboard/db/varnish-http-requests?panelId=6&fullscreen&orgId=1
[13:42:04] PROBLEM - MediaWiki exceptions and fatals per minute on graphite1001 is CRITICAL: CRITICAL: 90.00% of data above the critical threshold [50.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=2&fullscreen
[13:45:25] RECOVERY - Varnish traffic drop between 30min ago and now at eqiad on einsteinium is OK: (C)60 le (W)70 le 75.83 https://grafana.wikimedia.org/dashboard/db/varnish-http-requests?panelId=6&fullscreen&orgId=1
[13:50:45] RECOVERY - HTTP availability for Nginx -SSL terminators- at ulsfo on einsteinium is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/frontend-traffic?panelId=4&fullscreen&refresh=1m&orgId=1
[13:50:45] RECOVERY - MediaWiki exceptions and fatals per minute on graphite1001 is OK: OK: Less than 70.00% above the threshold [25.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=2&fullscreen
[14:00:44] PROBLEM - HTTP availability for Nginx -SSL terminators- at ulsfo on einsteinium is CRITICAL: cluster=cache_text site=ulsfo https://grafana.wikimedia.org/dashboard/db/frontend-traffic?panelId=4&fullscreen&refresh=1m&orgId=1
[14:06:14] PROBLEM - MediaWiki exceptions and fatals per minute on graphite1001 is CRITICAL: CRITICAL: 90.00% of data above the critical threshold [50.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=2&fullscreen
[14:16:24] 10Operations, 10Wiki-Loves-Love, 10Wikimedia-Mailing-lists: Create a mailling list for Wiki Loves Love - https://phabricator.wikimedia.org/T203792 (10Aklapper) Whoever is "on duty" in the SRE team (`#wikimedia-operations` on [[ https://www.mediawiki.org/wiki/MediaWiki_on_IRC | Freenode IRC ]]) is supposed to...
[14:21:25] RECOVERY - HTTP availability for Nginx -SSL terminators- at ulsfo on einsteinium is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/frontend-traffic?panelId=4&fullscreen&refresh=1m&orgId=1
[14:25:40] <_joe_> uh what's going on
[14:43:24] RECOVERY - MediaWiki exceptions and fatals per minute on graphite1001 is OK: OK: Less than 70.00% above the threshold [25.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=2&fullscreen
[15:08:54] PROBLEM - MediaWiki memcached error rate on graphite1001 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [5000.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=1&fullscreen
[15:17:45] RECOVERY - MediaWiki memcached error rate on graphite1001 is OK: OK: Less than 40.00% above the threshold [1000.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=1&fullscreen
[15:28:45] PROBLEM - puppet last run on labtestcontrol2001 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 6 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/usr/share/diamond/collectors/RabbitMQ/RabbitMQ.py]
[15:29:35] PROBLEM - Varnish traffic drop between 30min ago and now at eqiad on einsteinium is CRITICAL: 58.47 le 60 https://grafana.wikimedia.org/dashboard/db/varnish-http-requests?panelId=6&fullscreen&orgId=1
[15:30:44] RECOVERY - Varnish traffic drop between 30min ago and now at eqiad on einsteinium is OK: (C)60 le (W)70 le 72.74 https://grafana.wikimedia.org/dashboard/db/varnish-http-requests?panelId=6&fullscreen&orgId=1
[15:50:46] 10Operations, 10Beta-Cluster-Infrastructure, 10Wikidata, 10wikidata-tech-focus, and 3 others: Run mediawiki::maintenance scripts in Beta Cluster - https://phabricator.wikimedia.org/T125976 (10Krenair) >>! In T125976#4607109, @thcipriani wrote: > and `openldap::maintenance` isn't probably needed on this mac...
[15:54:15] RECOVERY - puppet last run on labtestcontrol2001 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[16:13:24] PROBLEM - MediaWiki exceptions and fatals per minute on graphite1001 is CRITICAL: CRITICAL: 90.00% of data above the critical threshold [50.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=2&fullscreen
[16:33:52] (03PS7) 10Alex Monk: prometheus: make ferm DNS record type configurable [puppet] - 10https://gerrit.wikimedia.org/r/381073 (https://phabricator.wikimedia.org/T153468) (owner: 10Hashar)
[16:44:04] RECOVERY - MediaWiki exceptions and fatals per minute on graphite1001 is OK: OK: Less than 70.00% above the threshold [25.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=2&fullscreen
[16:45:26] (03PS8) 10Alex Monk: prometheus: make ferm DNS record type configurable [puppet] - 10https://gerrit.wikimedia.org/r/381073 (https://phabricator.wikimedia.org/T153468) (owner: 10Hashar)
[16:48:40] (03PS9) 10Alex Monk: prometheus: make ferm DNS record type configurable [puppet] - 10https://gerrit.wikimedia.org/r/381073 (https://phabricator.wikimedia.org/T153468) (owner: 10Hashar)
[16:59:16] (03PS10) 10Alex Monk: prometheus: make ferm DNS record type configurable [puppet] - 10https://gerrit.wikimedia.org/r/381073 (https://phabricator.wikimedia.org/T153468) (owner: 10Hashar)
[17:12:25] PROBLEM - MediaWiki exceptions and fatals per minute on graphite1001 is CRITICAL: CRITICAL: 90.00% of data above the critical threshold [50.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=2&fullscreen
[17:12:30] 10Operations, 10Beta-Cluster-Infrastructure, 10Wikidata, 10wikidata-tech-focus, and 3 others: Run mediawiki::maintenance scripts in Beta Cluster - https://phabricator.wikimedia.org/T125976 (10Reedy) >>! In T125976#4608070, @Krenair wrote: >>>! In T125976#4607399, @Dzahn wrote: >> It seems like adding the m...
[17:14:10] 10Operations, 10Beta-Cluster-Infrastructure, 10Wikidata, 10wikidata-tech-focus, and 3 others: Run mediawiki::maintenance scripts in Beta Cluster - https://phabricator.wikimedia.org/T125976 (10Krenair) >>! In T125976#4608137, @Reedy wrote: >>>! In T125976#4608070, @Krenair wrote: >>>>! In T125976#4607399, @...
[17:14:14] PROBLEM - Host mr1-eqsin.oob is DOWN: PING CRITICAL - Packet loss = 100%
[17:14:14] PROBLEM - Host mr1-eqsin.oob IPv6 is DOWN: PING CRITICAL - Packet loss = 100%
[17:16:04] 10Operations, 10Beta-Cluster-Infrastructure, 10Wikidata, 10wikidata-tech-focus, and 3 others: Run mediawiki::maintenance scripts in Beta Cluster - https://phabricator.wikimedia.org/T125976 (10Reedy) Sure, but it's more effort to do so. Plus then storing it somewhere, chances of it not being noticed by some...
[17:36:35] RECOVERY - MediaWiki exceptions and fatals per minute on graphite1001 is OK: OK: Less than 70.00% above the threshold [25.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=2&fullscreen
[17:48:13] (03PS11) 10Alex Monk: prometheus: make ferm DNS record type configurable [puppet] - 10https://gerrit.wikimedia.org/r/381073 (https://phabricator.wikimedia.org/T153468) (owner: 10Hashar)
[17:48:53] (03CR) 10jerkins-bot: [V: 04-1] prometheus: make ferm DNS record type configurable [puppet] - 10https://gerrit.wikimedia.org/r/381073 (https://phabricator.wikimedia.org/T153468) (owner: 10Hashar)
[17:49:06] 10Operations, 10Beta-Cluster-Infrastructure, 10Wikidata, 10wikidata-tech-focus, and 3 others: Run mediawiki::maintenance scripts in Beta Cluster - https://phabricator.wikimedia.org/T125976 (10Krenair) It doesn't particularly matter how much effort it takes, it is possible.
[18:18:43] * Krinkle staging a patch on mwdebug2001.codfw
[18:19:15] RECOVERY - Host mr1-eqsin.oob IPv6 is UP: PING OK - Packet loss = 0%, RTA = 223.38 ms
[18:19:15] RECOVERY - Host mr1-eqsin.oob is UP: PING OK - Packet loss = 0%, RTA = 220.14 ms
[18:28:59] !log krinkle@deploy1001 Synchronized php-1.32.0-wmf.22/extensions/MultimediaViewer/resources/: I0954c42a37668b0, T205162 (duration: 00m 56s)
[18:29:06] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[18:29:07] T205162: MediaViewer shows only Black Screen in IE 11 - https://phabricator.wikimedia.org/T205162
[19:52:08] 10Operations, 10Cleanup, 10GitHub-Mirrors, 10OCG-General, and 7 others: Archive mediawiki/extensions/Collection/OfflineContentGenerator and all OCG-related repos - https://phabricator.wikimedia.org/T183891 (10MarcoAurelio)
[19:53:35] PROBLEM - Restbase root url on restbase2003 is CRITICAL: HTTP CRITICAL - No data received from host
[19:54:35] RECOVERY - Restbase root url on restbase2003 is OK: HTTP OK: HTTP/1.1 200 - 16081 bytes in 0.117 second response time
[20:40:15] PROBLEM - Varnish traffic drop between 30min ago and now at eqiad on einsteinium is CRITICAL: 58.28 le 60 https://grafana.wikimedia.org/dashboard/db/varnish-http-requests?panelId=6&fullscreen&orgId=1
[20:42:34] RECOVERY - Varnish traffic drop between 30min ago and now at eqiad on einsteinium is OK: (C)60 le (W)70 le 83.43 https://grafana.wikimedia.org/dashboard/db/varnish-http-requests?panelId=6&fullscreen&orgId=1
[20:55:16] 10Operations, 10Cleanup, 10GitHub-Mirrors, 10OCG-General, and 7 others: Archive mediawiki/extensions/Collection/OfflineContentGenerator and all OCG-related repos - https://phabricator.wikimedia.org/T183891 (10MarcoAurelio)
[21:07:09] 10Operations, 10Cleanup, 10GitHub-Mirrors, 10OCG-General, and 7 others: Archive mediawiki/extensions/Collection/OfflineContentGenerator and all OCG-related repos - https://phabricator.wikimedia.org/T183891 (10MarcoAurelio)
[21:08:31] 10Operations, 10OCG-General, 10Readers-Community-Engagement, 10Epic, and 3 others: [EPIC] (Proposal) Replicate core OCG features and sunset OCG service - https://phabricator.wikimedia.org/T150871 (10MarcoAurelio)
[21:08:39] 10Operations, 10Cleanup, 10GitHub-Mirrors, 10OCG-General, and 6 others: Archive mediawiki/extensions/Collection/OfflineContentGenerator and all OCG-related repos - https://phabricator.wikimedia.org/T183891 (10MarcoAurelio) 05Open>03Resolved Done. All OCG-related repos have been emptied and archived on...
[21:09:49] wheee
[21:12:16] \o/
[21:13:24] I'm sending my bill to the Office, this one was quite some work :P
[21:21:45] (03PS1) 10MarcoAurelio: Disable CongressLookup everywhere [mediawiki-config] - 10https://gerrit.wikimedia.org/r/462173 (https://phabricator.wikimedia.org/T205049)
[21:37:35] PROBLEM - MediaWiki memcached error rate on graphite1001 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [5000.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=1&fullscreen
[21:39:44] RECOVERY - MediaWiki memcached error rate on graphite1001 is OK: OK: Less than 40.00% above the threshold [1000.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=1&fullscreen
[21:48:15] PROBLEM - MediaWiki memcached error rate on graphite1001 is CRITICAL: CRITICAL: 40.00% of data above the critical threshold [5000.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=1&fullscreen
[21:50:34] RECOVERY - MediaWiki memcached error rate on graphite1001 is OK: OK: Less than 40.00% above the threshold [1000.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=1&fullscreen
[22:04:08] 10Operations, 10Beta-Cluster-Infrastructure, 10Wikidata, 10wikidata-tech-focus, and 3 others: Run mediawiki::maintenance scripts in Beta Cluster - https://phabricator.wikimedia.org/T125976 (10Reedy) >>! In T125976#4608196, @Krenair wrote: > It doesn't particularly matter how much effort it takes, it is pos...
[22:50:05] PROBLEM - Check systemd state on ms-be1037 is CRITICAL: CRITICAL - degraded: The system is operational but one or more units failed.
[23:19:35] PROBLEM - HTTP availability for Nginx -SSL terminators- at ulsfo on einsteinium is CRITICAL: cluster=cache_text site=ulsfo https://grafana.wikimedia.org/dashboard/db/frontend-traffic?panelId=4&fullscreen&refresh=1m&orgId=1
[23:19:54] PROBLEM - MediaWiki exceptions and fatals per minute on graphite1001 is CRITICAL: CRITICAL: 100.00% of data above the critical threshold [50.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=2&fullscreen
[23:22:35] PROBLEM - HTTP availability for Varnish at ulsfo on einsteinium is CRITICAL: job=varnish-text site=ulsfo https://grafana.wikimedia.org/dashboard/db/frontend-traffic?panelId=3&fullscreen&refresh=1m&orgId=1
[23:28:17] 10Operations, 10Core-Platform-Team, 10WMF-JobQueue, 10User-ArielGlenn: Use PHP7 for RPC requests on jobrunner web servers - https://phabricator.wikimedia.org/T195392 (10Krinkle)
[23:28:31] 10Operations, 10Core-Platform-Team, 10WMF-JobQueue, 10User-ArielGlenn: Use PHP7 for web requests on jobrunner servers - https://phabricator.wikimedia.org/T195392 (10Krinkle)
[23:29:15] RECOVERY - HTTP availability for Varnish at ulsfo on einsteinium is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/frontend-traffic?panelId=3&fullscreen&refresh=1m&orgId=1
[23:30:38] 10Operations, 10Core-Platform-Team, 10WMF-JobQueue, 10User-ArielGlenn: Use PHP7 for web requests on jobrunner servers - https://phabricator.wikimedia.org/T195392 (10Krinkle) @Jdforrester-WMF Can you confirm that this task is about cron jobs, as opposed to JobQueue jobs? Based on the sub tasks, I think I mi...
[23:38:55] PROBLEM - HHVM rendering on mw2283 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:39:25] RECOVERY - HTTP availability for Nginx -SSL terminators- at ulsfo on einsteinium is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/frontend-traffic?panelId=4&fullscreen&refresh=1m&orgId=1
[23:39:54] RECOVERY - HHVM rendering on mw2283 is OK: HTTP OK: HTTP/1.1 200 OK - 75010 bytes in 0.263 second response time
[23:41:55] RECOVERY - MediaWiki exceptions and fatals per minute on graphite1001 is OK: OK: Less than 70.00% above the threshold [25.0] https://grafana.wikimedia.org/dashboard/db/mediawiki-graphite-alerts?orgId=1&panelId=2&fullscreen
[23:45:00] 10Operations, 10Core-Platform-Team, 10HHVM, 10TechCom-RFC (TechCom-Approved), 10User-ArielGlenn: Migrate to PHP 7 in WMF production - https://phabricator.wikimedia.org/T176370 (10Krinkle)
[23:45:17] 10Operations, 10Core-Platform-Team, 10HHVM, 10TechCom-RFC (TechCom-Approved), 10User-ArielGlenn: Migrate to PHP 7 in WMF production - https://phabricator.wikimedia.org/T176370 (10Krinkle)
[23:46:30] 10Operations, 10Core-Platform-Team, 10HHVM, 10TechCom-RFC (TechCom-Approved), 10User-ArielGlenn: Migrate to PHP 7 in WMF production - https://phabricator.wikimedia.org/T176370 (10Krinkle) Added a high-level checklist, and ticked off three items based on the snapshot/xml-dump infrastructure having switche...
[23:53:44] (03PS1) 10Krinkle: profiler: Include MediaWiki post-send in XHGui profiles [mediawiki-config] - 10https://gerrit.wikimedia.org/r/462189