[00:03:02] (PS12) Ayounsi: [WIP] Puppetize Netbox [puppet] - https://gerrit.wikimedia.org/r/387880 (https://phabricator.wikimedia.org/T170144)
[00:18:29] !log Finished whisper-mass-resize for frontend.navtiming on graphite2001 (T179622)
[00:18:36] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[00:18:37] T179622: Update our Graphite metrics for current retention rules - https://phabricator.wikimedia.org/T179622
[00:20:19] (PS1) Ayounsi: uwsgi: fix dependency for stretch [puppet] - https://gerrit.wikimedia.org/r/388750
[00:37:31] Operations, Incident-20150423-Commons, RESTBase, Availability, and 6 others: RFC: Request timeouts and retries - https://phabricator.wikimedia.org/T97204#3734884 (Krinkle) Open>Resolved See also {T97206}
[01:04:22] PROBLEM - Check health of redis instance on 6381 on rdb2003 is CRITICAL: CRITICAL ERROR - Redis Library - can not ping 127.0.0.1 on port 6381
[01:04:41] PROBLEM - Check health of redis instance on 6379 on rdb2001 is CRITICAL: CRITICAL ERROR - Redis Library - can not ping 127.0.0.1 on port 6379
[01:04:51] PROBLEM - Check health of redis instance on 6480 on rdb2005 is CRITICAL: CRITICAL: replication_delay is 1509757476 600 - REDIS 2.8.17 on 127.0.0.1:6480 has 1 databases (db0) with 3808687 keys, up 4 minutes 33 seconds - replication_delay is 1509757476
[01:05:31] RECOVERY - Check health of redis instance on 6381 on rdb2003 is OK: OK: REDIS 2.8.17 on 127.0.0.1:6381 has 1 databases (db0) with 8405217 keys, up 5 minutes 20 seconds - replication_delay is 0
[01:05:32] RECOVERY - Check health of redis instance on 6379 on rdb2001 is OK: OK: REDIS 2.8.17 on 127.0.0.1:6379 has 1 databases (db0) with 8511616 keys, up 5 minutes 28 seconds - replication_delay is 0
[01:06:01] PROBLEM - Check health of redis instance on 6481 on rdb2005 is CRITICAL: CRITICAL: replication_delay is 1509757554 600 - REDIS 2.8.17 on 127.0.0.1:6481 has 1 databases (db0) with 3804211 keys, up 5 minutes 51 seconds - replication_delay is 1509757554
[01:06:01] PROBLEM - Check health of redis instance on 6479 on rdb2005 is CRITICAL: CRITICAL: replication_delay is 1509757554 600 - REDIS 2.8.17 on 127.0.0.1:6479 has 1 databases (db0) with 3806338 keys, up 5 minutes 52 seconds - replication_delay is 1509757554
[01:08:01] RECOVERY - Check health of redis instance on 6481 on rdb2005 is OK: OK: REDIS 2.8.17 on 127.0.0.1:6481 has 1 databases (db0) with 3801305 keys, up 7 minutes 51 seconds - replication_delay is 0
[01:08:01] RECOVERY - Check health of redis instance on 6479 on rdb2005 is OK: OK: REDIS 2.8.17 on 127.0.0.1:6479 has 1 databases (db0) with 3803170 keys, up 7 minutes 52 seconds - replication_delay is 0
[01:08:42] RECOVERY - Check health of redis instance on 6480 on rdb2005 is OK: OK: REDIS 2.8.17 on 127.0.0.1:6480 has 1 databases (db0) with 3804669 keys, up 8 minutes 36 seconds - replication_delay is 0
[02:26:21] RECOVERY - MegaRAID on analytics1029 is OK: OK: optimal, 13 logical, 14 physical, WriteBack policy
[02:56:22] PROBLEM - MegaRAID on analytics1029 is CRITICAL: CRITICAL: 13 LD(s) must have write cache policy WriteBack, currently using: WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough
[03:03:32] PROBLEM - Work requests waiting in Zuul Gearman server https://grafana.wikimedia.org/dashboard/db/zuul-gearman on contint1001 is CRITICAL: CRITICAL: 50.00% of data above the critical threshold [140.0]
[03:28:21] PROBLEM - MariaDB Slave Lag: s1 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 814.03 seconds
[03:36:21] RECOVERY - MegaRAID on analytics1029 is OK: OK: optimal, 13 logical, 14 physical, WriteBack policy
[04:02:31] RECOVERY - MariaDB Slave Lag: s1 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 234.85 seconds
[04:06:21] PROBLEM - MegaRAID on analytics1029 is CRITICAL: CRITICAL: 13 LD(s) must have write cache policy WriteBack, currently using: WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough
[04:36:21] RECOVERY - MegaRAID on analytics1029 is OK: OK: optimal, 13 logical, 14 physical, WriteBack policy
[05:06:21] PROBLEM - MegaRAID on analytics1029 is CRITICAL: CRITICAL: 13 LD(s) must have write cache policy WriteBack, currently using: WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough
[05:37:42] Operations, ops-eqiad, DBA: Degraded RAID on db1059 - https://phabricator.wikimedia.org/T179727#3735001 (Marostegui) a: Cmjohnson @cmjohnson, can we get the disk replaced? Thanks!
[05:41:02] RECOVERY - Work requests waiting in Zuul Gearman server https://grafana.wikimedia.org/dashboard/db/zuul-gearman on contint1001 is OK: OK: Less than 30.00% above the threshold [90.0]
[06:10:31] PROBLEM - HHVM rendering on mw2136 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:11:22] RECOVERY - HHVM rendering on mw2136 is OK: HTTP OK: HTTP/1.1 200 OK - 74537 bytes in 0.381 second response time
[06:36:31] PROBLEM - HHVM rendering on mw2129 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:37:21] RECOVERY - HHVM rendering on mw2129 is OK: HTTP OK: HTTP/1.1 200 OK - 74537 bytes in 0.407 second response time
[09:19:21] PROBLEM - eventstreams on scb1002 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:21:11] RECOVERY - eventstreams on scb1002 is OK: HTTP OK: HTTP/1.1 200 OK - 929 bytes in 0.045 second response time
[09:27:21] PROBLEM - Cxserver LVS eqiad on cxserver.svc.eqiad.wmnet is CRITICAL: /v1/page/{language}/{title}{/revision} (Fetch enwiki Oxygen page) timed out before a response was received: / (spec from root) timed out before a response was received: /_info/version (retrieve service version) timed out before a response was received
[09:27:22] PROBLEM - Mobileapps LVS eqiad on mobileapps.svc.eqiad.wmnet is CRITICAL: /{domain}/v1/page/media/{title} (retrieve images and videos of en.wp Cat page via media route) timed out before a response was received: /{domain}/v1/feed/onthisday/{type}/{mm}/{dd} (retrieve all events on January 15) timed out before a response was received: /{domain}/v1/page/most-read/{yyyy}/{mm}/{dd} (retrieve the most-read articles for January 1, 2016
[09:27:22] a response was received
[09:28:12] RECOVERY - Cxserver LVS eqiad on cxserver.svc.eqiad.wmnet is OK: All endpoints are healthy
[09:28:21] RECOVERY - Mobileapps LVS eqiad on mobileapps.svc.eqiad.wmnet is OK: All endpoints are healthy
[09:28:22] PROBLEM - cxserver endpoints health on scb1002 is CRITICAL: /v1/mt/{from}/{to}{/provider} (Machine translate an HTML fragment using Apertium.) timed out before a response was received: /v1/translate/{from}/{to}{/provider} (Machine translate an HTML fragment using Apertium, adapt the links to target language wiki.) timed out before a response was received
[09:29:21] RECOVERY - cxserver endpoints health on scb1002 is OK: All endpoints are healthy
[09:29:51] PROBLEM - pdfrender on scb1002 is CRITICAL: connect to address 10.64.16.21 and port 5252: Connection refused
[09:30:33] (PS6) EddieGP: [DNM] Add cron job for expired userrights maintenance script [puppet] - https://gerrit.wikimedia.org/r/382631 (https://phabricator.wikimedia.org/T176754)
[09:30:51] RECOVERY - pdfrender on scb1002 is OK: HTTP OK: HTTP/1.1 200 OK - 275 bytes in 0.003 second response time
[09:31:09] (CR) EddieGP: [C: -1] "Right, we still need to wait for https://gerrit.wikimedia.org/r/#/c/384429/ to be merged." [puppet] - https://gerrit.wikimedia.org/r/382631 (https://phabricator.wikimedia.org/T176754) (owner: EddieGP)
[09:32:31] PROBLEM - mobileapps endpoints health on scb1002 is CRITICAL: /{domain}/v1/page/media/{title} (retrieve images and videos of en.wp Cat page via media route) timed out before a response was received: /{domain}/v1/feed/onthisday/{type}/{mm}/{dd} (retrieve all events on January 15) timed out before a response was received
[09:36:31] PROBLEM - mobileapps endpoints health on scb1002 is CRITICAL: /{domain}/v1/feed/onthisday/{type}/{mm}/{dd} (retrieve all events on January 15) timed out before a response was received
[09:36:31] PROBLEM - cxserver endpoints health on scb1002 is CRITICAL: /v1/page/{language}/{title}{/revision} (Fetch enwiki Oxygen page) timed out before a response was received
[09:37:22] RECOVERY - cxserver endpoints health on scb1002 is OK: All endpoints are healthy
[09:37:22] RECOVERY - mobileapps endpoints health on scb1002 is OK: All endpoints are healthy
[10:15:01] PROBLEM - cxserver endpoints health on scb1002 is CRITICAL: /v1/page/{language}/{title}{/revision} (Fetch enwiki Oxygen page) timed out before a response was received: /v1/mt/{from}/{to}{/provider} (Machine translate an HTML fragment using Apertium.) timed out before a response was received
[10:17:51] RECOVERY - cxserver endpoints health on scb1002 is OK: All endpoints are healthy
[10:46:55] (PS1) ArielGlenn: rsync xml/sql dumps on an ongoing basis to fallback nfs server [puppet] - https://gerrit.wikimedia.org/r/389025 (https://phabricator.wikimedia.org/T178893)
[10:58:22] PROBLEM - Apache HTTP on mw2253 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[10:59:22] RECOVERY - Apache HTTP on mw2253 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 616 bytes in 0.110 second response time
[11:06:41] PROBLEM - cxserver endpoints health on scb1002 is CRITICAL: /v1/mt/{from}/{to}{/provider} (Machine translate an HTML fragment using Apertium.) timed out before a response was received: /v1/translate/{from}/{to}{/provider} (Machine translate an HTML fragment using Apertium, adapt the links to target language wiki.) timed out before a response was received
[11:14:41] RECOVERY - cxserver endpoints health on scb1002 is OK: All endpoints are healthy
[11:18:14] (PS17) Mobrovac: Improve the checking procedure and emit better messages; v0.1.4 [software/service-checker] - https://gerrit.wikimedia.org/r/386116 (https://phabricator.wikimedia.org/T150560)
[11:19:14] (CR) jerkins-bot: [V: -1] Improve the checking procedure and emit better messages; v0.1.4 [software/service-checker] - https://gerrit.wikimedia.org/r/386116 (https://phabricator.wikimedia.org/T150560) (owner: Mobrovac)
[11:20:12] mobrovac: o/ - cxserver/mobileapps seems flapping today due to timeouts, is it all ok or do we need to investigate?
[11:20:29] (PS18) Mobrovac: Improve the checking procedure and emit better messages; v0.1.4 [software/service-checker] - https://gerrit.wikimedia.org/r/386116 (https://phabricator.wikimedia.org/T150560)
[11:20:39] ciao elukey, just seen it, will take a look
[11:20:51] mobrovac: let me know if you need help, I am around
[11:20:57] (CR) Mobrovac: Improve the checking procedure and emit better messages; v0.1.4 (3 comments) [software/service-checker] - https://gerrit.wikimedia.org/r/386116 (https://phabricator.wikimedia.org/T150560) (owner: Mobrovac)
[11:21:17] kk cool, thnx elukey, will keep you posted
[11:22:51] PROBLEM - cxserver endpoints health on scb1002 is CRITICAL: /v1/mt/{from}/{to}{/provider} (Machine translate an HTML fragment using Apertium.) timed out before a response was received: /v1/translate/{from}/{to}{/provider} (Machine translate an HTML fragment using Apertium, adapt the links to target language wiki.) timed out before a response was received
[11:23:17] (CR) Mobrovac: "> Overall LGTM, but I have a couple doubts. Also, reviewing the code" [software/service-checker] - https://gerrit.wikimedia.org/r/386116 (https://phabricator.wikimedia.org/T150560) (owner: Mobrovac)
[11:23:42] RECOVERY - cxserver endpoints health on scb1002 is OK: All endpoints are healthy
[11:33:45] elukey: it looks like a parsoid transient issue, there were quite a few worker restarts there, which caused cxserver and mobileapps to time-out and be killed by service-runner because of it
[11:34:26] so nothing to worry about
[11:36:00] mobrovac: thanks!
[11:59:41] PROBLEM - trendingedits endpoints health on scb1002 is CRITICAL: /_info/name (retrieve service name) timed out before a response was received
[12:01:32] RECOVERY - trendingedits endpoints health on scb1002 is OK: All endpoints are healthy
[12:10:41] PROBLEM - mobileapps endpoints health on scb1002 is CRITICAL: /{domain}/v1/page/media/{title} (retrieve images and videos of en.wp Cat page via media route) timed out before a response was received: /{domain}/v1/feed/onthisday/{type}/{mm}/{dd} (retrieve all events on January 15) timed out before a response was received
[12:13:32] RECOVERY - mobileapps endpoints health on scb1002 is OK: All endpoints are healthy
[12:13:41] PROBLEM - cxserver endpoints health on scb1002 is CRITICAL: /v1/translate/{from}/{to}{/provider} (Machine translate an HTML fragment using Apertium, adapt the links to target language wiki.) timed out before a response was received
[12:14:32] RECOVERY - cxserver endpoints health on scb1002 is OK: All endpoints are healthy
[12:19:01] PROBLEM - trendingedits endpoints health on scb1002 is CRITICAL: /robots.txt (robots.txt check) timed out before a response was received: / (root with no query params) timed out before a response was received: / (spec from root) timed out before a response was received: / (root with wrong query param) timed out before a response was received: /_info/home (redirect to the home page) timed out before a response was received: /_in
[12:19:01] e info) timed out before a response was received: /_info/version (retrieve service version) timed out before a response was received: /{domain}/v1/feed/trending-edits{/period} (retrieve trending articles within the last hour) timed out before a response was received
[12:20:52] RECOVERY - trendingedits endpoints health on scb1002 is OK: All endpoints are healthy
[12:24:01] PROBLEM - trendingedits endpoints health on scb1002 is CRITICAL: /_info/name (retrieve service name) timed out before a response was received: / (root with wrong query param) timed out before a response was received: /_info/home (redirect to the home page) timed out before a response was received: /_info (retrieve service info) timed out before a response was received: /_info/version (retrieve service version) timed out before
[12:24:01] ived: /{domain}/v1/feed/trending-edits{/period} (retrieve trending articles within the last hour) timed out before a response was received
[12:25:51] RECOVERY - trendingedits endpoints health on scb1002 is OK: All endpoints are healthy
[12:36:02] PROBLEM - mobileapps endpoints health on scb1002 is CRITICAL: /{domain}/v1/page/mobile-sections-lead/{title} (retrieve lead section of en.wp Altrincham page via mobile-sections-lead) timed out before a response was received: /{domain}/v1/page/media/{title} (retrieve images and videos of en.wp Cat page via media route) timed out before a response was received: /{domain}/v1/media/image/featured/{yyyy}/{mm}/{dd} (retrieve featured
[12:36:02] il 29, 2016) timed out before a response was received: /{domain}/v1/feed/onthisday/{type}/{mm}/{dd} (retrieve all events on January 15) timed out before a response was received: /{domain}/v1/page/definition/{title} (retrieve en-wiktionary definitions for cat) timed out before a response was received: /{domain}/v1/page/most-read/{yyyy}/{mm}/{dd} (retrieve the most-read articles for January 1, 2016 (with aggregated=true)) timed o
[12:36:02] e was received
[12:36:19] !log trending-edits depool it from scb1002 to investigate
[12:36:26] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[12:38:01] PROBLEM - Mobileapps LVS eqiad on mobileapps.svc.eqiad.wmnet is CRITICAL: /{domain}/v1/page/media/{title} (retrieve images and videos of en.wp Cat page via media route) timed out before a response was received: /{domain}/v1/page/mobile-sections/{title} (retrieve en.wp main page via mobile-sections) timed out before a response was received
[12:40:01] RECOVERY - Mobileapps LVS eqiad on mobileapps.svc.eqiad.wmnet is OK: All endpoints are healthy
[12:40:01] RECOVERY - mobileapps endpoints health on scb1002 is OK: All endpoints are healthy
[12:43:41] PROBLEM - HHVM rendering on mw2207 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:44:31] RECOVERY - HHVM rendering on mw2207 is OK: HTTP OK: HTTP/1.1 200 OK - 74491 bytes in 0.295 second response time
[12:45:41] PROBLEM - Check systemd state on scb1002 is CRITICAL: CRITICAL - degraded: The system is operational but one or more units failed.
[13:01:52] PROBLEM - puppet last run on db2016 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[13:19:45] !log elukey@puppetmaster1001 conftool action : set/pooled=no; selector: name=scb1002.eqiad.wmnet
[13:19:50] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[13:25:41] RECOVERY - Check systemd state on scb1002 is OK: OK - running: The system is fully operational
[13:26:11] PROBLEM - Text HTTP 5xx reqs/min on graphite1001 is CRITICAL: CRITICAL: 11.11% of data above the critical threshold [1000.0]
[13:26:21] RECOVERY - MegaRAID on analytics1029 is OK: OK: optimal, 13 logical, 14 physical, WriteBack policy
[13:26:51] RECOVERY - puppet last run on db2016 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[13:34:49] !log elukey@puppetmaster1001 conftool action : set/pooled=yes; selector: name=scb1002.eqiad.wmnet
[13:34:56] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[13:35:44] ok scb should now behave correctly, the trending edit service was not healthy and it kept dying (fixed by Marko)
[13:36:14] the 503s seem to be due to silly requests to commons, only a few spikes
[13:41:13] RECOVERY - Text HTTP 5xx reqs/min on graphite1001 is OK: OK: Less than 1.00% above the threshold [250.0]
[13:41:21] nice :)
[13:41:40] * elukey afk!
[13:46:22] PROBLEM - puppet last run on db1054 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[13:56:21] PROBLEM - MegaRAID on analytics1029 is CRITICAL: CRITICAL: 13 LD(s) must have write cache policy WriteBack, currently using: WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough
[14:01:22] PROBLEM - HHVM rendering on mw2200 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:02:21] RECOVERY - HHVM rendering on mw2200 is OK: HTTP OK: HTTP/1.1 200 OK - 74171 bytes in 0.289 second response time
[14:11:21] RECOVERY - puppet last run on db1054 is OK: OK: Puppet is currently enabled, last run 1 second ago with 0 failures
[14:46:21] RECOVERY - MegaRAID on analytics1029 is OK: OK: optimal, 13 logical, 14 physical, WriteBack policy
[15:16:21] PROBLEM - MegaRAID on analytics1029 is CRITICAL: CRITICAL: 13 LD(s) must have write cache policy WriteBack, currently using: WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough
[15:56:22] RECOVERY - MegaRAID on analytics1029 is OK: OK: optimal, 13 logical, 14 physical, WriteBack policy
[16:11:11] PROBLEM - cxserver endpoints health on scb1002 is CRITICAL: /v1/mt/{from}/{to}{/provider} (Machine translate an HTML fragment using Apertium.) timed out before a response was received: /v1/translate/{from}/{to}{/provider} (Machine translate an HTML fragment using Apertium, adapt the links to target language wiki.) timed out before a response was received
[16:12:02] RECOVERY - cxserver endpoints health on scb1002 is OK: All endpoints are healthy
[16:26:21] PROBLEM - MegaRAID on analytics1029 is CRITICAL: CRITICAL: 13 LD(s) must have write cache policy WriteBack, currently using: WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough, WriteThrough
[16:55:47] (PS3) MarcoAurelio: Extension:Translate default permissions for Wikimedia wikis [mediawiki-config] - https://gerrit.wikimedia.org/r/385953 (https://phabricator.wikimedia.org/T178793)
[17:30:21] PROBLEM - puppet last run on mw1315 is CRITICAL: CRITICAL: Catalog fetch fail. Either compilation failed or puppetmaster has issues
[17:31:32] PROBLEM - Nginx local proxy to apache on mw2210 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:31:32] PROBLEM - HHVM rendering on mw2210 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:32:22] RECOVERY - Nginx local proxy to apache on mw2210 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 617 bytes in 0.201 second response time
[17:32:22] RECOVERY - HHVM rendering on mw2210 is OK: HTTP OK: HTTP/1.1 200 OK - 74077 bytes in 0.360 second response time
[17:34:17] (scheduled downtime for analytics1029)
[18:00:21] RECOVERY - puppet last run on mw1315 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[19:06:21] RECOVERY - MegaRAID on analytics1029 is OK: OK: optimal, 13 logical, 14 physical, WriteBack policy
[21:36:58] (Abandoned) Tpt: Disables TwoColConflict waiting for compatibility with ProofreadPage [mediawiki-config] - https://gerrit.wikimedia.org/r/386582 (https://phabricator.wikimedia.org/T179056) (owner: Tpt)
[22:17:25] Operations, Wikimedia-SVG-rendering: Incorrect text positioning in SVG rasterization (scale/transform; font-size; kerning) - https://phabricator.wikimedia.org/T36947#3735627 (kaldari) Here's another example of how our SVG text rendering has significantly degraded in the past year: Old thumbnail: {F10614...
[22:50:49] (CR) Zoranzoki21: [C: 1] Enable draftquality model in ORES extension for enwiki [mediawiki-config] - https://gerrit.wikimedia.org/r/388092 (https://phabricator.wikimedia.org/T179596) (owner: Ladsgroup)
[23:30:52] PROBLEM - Check Varnish expiry mailbox lag on cp4026 is CRITICAL: CRITICAL: expiry mailbox lag is 2011044
[23:40:10] (PS4) Zoranzoki21: Enable the ArticlePlaceholder for Northern Sami (sewiki) [mediawiki-config] - https://gerrit.wikimedia.org/r/387077 (https://phabricator.wikimedia.org/T179241)