[00:00:15] (PS2) Alex Monk: cumin: Allow Puppet DB backend to be used within Labs projects that use it [puppet] - https://gerrit.wikimedia.org/r/437052
[00:20:29] PROBLEM - toolschecker: check mtime mod from tools cron job on checker.tools.wmflabs.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 SERVICE UNAVAILABLE - string OK not found on http://checker.tools.wmflabs.org:80/toolscron - 185 bytes in 0.005 second response time
[00:36:29] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1968 bytes in 0.062 second response time
[00:43:30] RECOVERY - toolschecker: check mtime mod from tools cron job on checker.tools.wmflabs.org is OK: HTTP OK: HTTP/1.1 200 OK - 166 bytes in 0.008 second response time
[00:46:23] Operations, ops-codfw, fundraising-tech-ops: frdb2001 RAID disk failure - https://phabricator.wikimedia.org/T196251#4251134 (Jgreen)
[00:51:50] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1971 bytes in 0.079 second response time
[00:55:39] PROBLEM - toolschecker: check mtime mod from tools cron job on checker.tools.wmflabs.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 SERVICE UNAVAILABLE - string OK not found on http://checker.tools.wmflabs.org:80/toolscron - 185 bytes in 0.004 second response time
[01:01:40] PROBLEM - etcd request latencies on neon is CRITICAL: instance=10.64.0.40:6443 operation=compareAndSwap https://grafana.wikimedia.org/dashboard/db/kubernetes-api
[01:03:20] PROBLEM - etcd request latencies on argon is CRITICAL: instance=10.64.32.133:6443 operation=compareAndSwap https://grafana.wikimedia.org/dashboard/db/kubernetes-api
[01:03:30] PROBLEM - etcd request latencies on chlorine is CRITICAL: instance=10.64.0.45:6443 operation=compareAndSwap https://grafana.wikimedia.org/dashboard/db/kubernetes-api
[01:03:30] PROBLEM - Request latencies on neon is CRITICAL: instance=10.64.0.40:6443 verb={PATCH,PUT} https://grafana.wikimedia.org/dashboard/db/kubernetes-api
[01:04:30] PROBLEM - Request latencies on argon is CRITICAL: instance=10.64.32.133:6443 verb=PATCH https://grafana.wikimedia.org/dashboard/db/kubernetes-api
[01:05:00] PROBLEM - Request latencies on chlorine is CRITICAL: instance=10.64.0.45:6443 verb={PATCH,PUT} https://grafana.wikimedia.org/dashboard/db/kubernetes-api
[01:30:39] RECOVERY - toolschecker: check mtime mod from tools cron job on checker.tools.wmflabs.org is OK: HTTP OK: HTTP/1.1 200 OK - 166 bytes in 0.005 second response time
[01:42:00] PROBLEM - toolschecker: check mtime mod from tools cron job on checker.tools.wmflabs.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 SERVICE UNAVAILABLE - string OK not found on http://checker.tools.wmflabs.org:80/toolscron - 185 bytes in 0.005 second response time
[01:44:29] PROBLEM - Host labservices1001 is DOWN: PING CRITICAL - Packet loss = 100%
[01:55:50] PROBLEM - toolschecker: Start a job and verify on Trusty on checker.tools.wmflabs.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 504 Gateway Time-out - string OK not found on http://checker.tools.wmflabs.org:80/grid/start/trusty - 356 bytes in 60.014 second response time
[02:00:50] RECOVERY - Device not healthy -SMART- on mw1230 is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/host-overview?var-server=mw1230&var-datasource=eqiad%2520prometheus%252Fops
[02:05:50] RECOVERY - Request latencies on argon is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/kubernetes-api
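The flapping toolschecker entries above are an HTTP string-match probe: the checker fetches an endpoint and raises CRITICAL when the response is not a 200 containing the string OK. A minimal sketch of a check of that shape, assuming the `requests` library and an illustrative timeout (this is not the production checker code):

```python
# Minimal sketch of an HTTP string-match probe like the toolschecker alerts
# above; the endpoint is taken from the log, the timeout is assumed.
import requests

def check_http_string(url: str, expected: str = "OK", timeout: float = 60.0) -> tuple[int, str]:
    """Return a Nagios-style (exit code, message) pair: 0 = OK, 2 = CRITICAL."""
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException as exc:
        return 2, f"CRITICAL: {exc}"
    if resp.status_code == 200 and expected in resp.text:
        return 0, f"OK: HTTP/1.1 200 OK - {len(resp.content)} bytes"
    return 2, f"CRITICAL: string {expected} not found on {url}"

code, message = check_http_string("http://checker.tools.wmflabs.org:80/toolscron")
```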
[02:07:00] RECOVERY - etcd request latencies on argon is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/kubernetes-api
[02:07:09] RECOVERY - etcd request latencies on chlorine is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/kubernetes-api
[02:07:10] RECOVERY - Request latencies on neon is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/kubernetes-api
[02:07:30] RECOVERY - Request latencies on chlorine is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/kubernetes-api
[02:07:30] RECOVERY - etcd request latencies on neon is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/kubernetes-api
[02:18:00] !log rebooting labservices1001; it seems to have crashed
[02:18:12] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[02:20:29] RECOVERY - Host labservices1001 is UP: PING OK - Packet loss = 0%, RTA = 0.20 ms
[02:27:08] Operations, ops-eqiad, cloud-services-team: Labservices1001 crashed - https://phabricator.wikimedia.org/T196252#4251148 (Andrew)
[02:32:39] RECOVERY - toolschecker: check mtime mod from tools cron job on checker.tools.wmflabs.org is OK: HTTP OK: HTTP/1.1 200 OK - 166 bytes in 0.005 second response time
[02:44:49] PROBLEM - toolschecker: check mtime mod from tools cron job on checker.tools.wmflabs.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 SERVICE UNAVAILABLE - string OK not found on http://checker.tools.wmflabs.org:80/toolscron - 185 bytes in 0.016 second response time
[03:06:29] RECOVERY - toolschecker: Start a job and verify on Trusty on checker.tools.wmflabs.org is OK: HTTP OK: HTTP/1.1 200 OK - 166 bytes in 0.373 second response time
[03:25:19] PROBLEM - MariaDB Slave Lag: s1 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 710.09 seconds
[03:32:29] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1981 bytes in 0.077 second response time
[03:34:40] PROBLEM - puppet last run on mw1346 is CRITICAL: CRITICAL: Puppet has 2 failures. Last run 3 minutes ago with 2 failures. Failed resources (up to 3 shown): File[/usr/share/GeoIP/GeoIP2-City.mmdb.gz],File[/usr/share/GeoIP/GeoIP2-City.mmdb.test]
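The dbstore1002 alerts above compare the reported replication lag against a threshold. A hedged sketch of such a probe, assuming `pymysql` and illustrative credentials and a 600-second threshold (the production check may measure lag via a heartbeat table rather than `SHOW SLAVE STATUS`):

```python
# Hedged sketch of a replication-lag probe like "MariaDB Slave Lag" above.
# pymysql, the credentials, and the 600 s threshold are assumptions.
import pymysql

def slave_lag_status(host: str, user: str, password: str, crit: float = 600.0) -> tuple[int, str]:
    conn = pymysql.connect(host=host, user=user, password=password,
                           cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW SLAVE STATUS")
            row = cur.fetchone()
    finally:
        conn.close()
    lag = row and row.get("Seconds_Behind_Master")
    if lag is None:
        return 2, "CRITICAL: replication not running"
    if lag >= crit:
        return 2, f"CRITICAL slave_sql_lag Replication lag: {lag} seconds"
    return 0, f"OK slave_sql_lag Replication lag: {lag} seconds"
```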
[03:47:49] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1969 bytes in 0.073 second response time
[04:02:50] RECOVERY - MariaDB Slave Lag: s1 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 149.92 seconds
[04:05:19] RECOVERY - puppet last run on mw1346 is OK: OK: Puppet is currently enabled, last run 5 minutes ago with 0 failures
[04:07:49] RECOVERY - toolschecker: check mtime mod from tools cron job on checker.tools.wmflabs.org is OK: HTTP OK: HTTP/1.1 200 OK - 166 bytes in 0.012 second response time
[04:11:29] PROBLEM - Device not healthy -SMART- on mw1230 is CRITICAL: cluster=api_appserver device=sda instance=mw1230:9100 job=node site=eqiad https://grafana.wikimedia.org/dashboard/db/host-overview?var-server=mw1230&var-datasource=eqiad%2520prometheus%252Fops
[04:20:00] PROBLEM - toolschecker: check mtime mod from tools cron job on checker.tools.wmflabs.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 SERVICE UNAVAILABLE - string OK not found on http://checker.tools.wmflabs.org:80/toolscron - 185 bytes in 0.005 second response time
[04:38:59] (PS1) Nehajha: Man page for webservice Bug: T95097 [software/tools-webservice] - https://gerrit.wikimedia.org/r/437054 (https://phabricator.wikimedia.org/T95097)
[04:43:28] (PS2) Nehajha: Man page for webservice Bug: T95097 Change-Id: Ia6b6eb81e36a8bcb0815d8849413daf0f2e77616 [software/tools-webservice] - https://gerrit.wikimedia.org/r/437054 (https://phabricator.wikimedia.org/T95097)
[04:48:38] (PS3) Nehajha: Man page for webservice [software/tools-webservice] - https://gerrit.wikimedia.org/r/437054 (https://phabricator.wikimedia.org/T95097)
[04:53:49] (PS3) Nehajha: Read command line arguments from a config file [software/tools-webservice] - https://gerrit.wikimedia.org/r/435691 (https://phabricator.wikimedia.org/T148872)
[05:07:16] Operations, ops-codfw: Degraded RAID on db2047 - https://phabricator.wikimedia.org/T196246#4251199 (Marostegui) p:Triage>Normal a:Papaul
[05:08:26] Operations, ops-codfw, DBA: Degraded RAID on db2047 - https://phabricator.wikimedia.org/T196246#4251009 (Marostegui) Can we get a new disk for this host?
[05:26:50] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1969 bytes in 0.066 second response time
[05:49:10] RECOVERY - Check systemd state on kubernetes2003 is OK: OK - running: The system is fully operational
[05:52:30] PROBLEM - Check systemd state on kubernetes2003 is CRITICAL: CRITICAL - degraded: The system is operational but one or more units failed.
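The flapping "Check systemd state" alerts on kubernetes2003 reduce to asking systemd for its overall state. A sketch of that reduction, using the real `systemctl is-system-running` interface; the exit-code mapping mirrors the alert text above and is otherwise an assumption, not the deployed plugin:

```python
# Sketch of a "Check systemd state" probe; `systemctl is-system-running`
# prints "running", "degraded", etc. The OK/CRITICAL mapping is assumed.
import subprocess

def systemd_state() -> tuple[int, str]:
    state = subprocess.run(["systemctl", "is-system-running"],
                           capture_output=True, text=True).stdout.strip()
    if state == "running":
        return 0, "OK - running: The system is fully operational"
    if state == "degraded":
        return 2, "CRITICAL - degraded: The system is operational but one or more units failed."
    return 3, f"UNKNOWN - state: {state}"
```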
[05:55:07] (PS1) ArielGlenn: use default installer for francium [puppet] - https://gerrit.wikimedia.org/r/437055
[05:56:21] (CR) ArielGlenn: [C: 2] use default installer for francium [puppet] - https://gerrit.wikimedia.org/r/437055 (owner: ArielGlenn)
[06:05:04] Operations, cloud-services-team, Epic: replace all Ubuntu (trusty) hosts in production with Debian - https://phabricator.wikimedia.org/T186288#4251207 (ArielGlenn)
[06:05:15] Operations, cloud-services-team, Epic: replace all Ubuntu (trusty) hosts in production with Debian - https://phabricator.wikimedia.org/T186288#3939825 (ops-monitoring-bot) Script wmf-auto-reimage was launched by ariel on neodymium.eqiad.wmnet for hosts: ``` francium.eqiad.wmnet ``` The log can be fou...
[06:05:43] francium is being reimaged by script, please ignore any whines you might see
[06:09:40] PROBLEM - DPKG on francium is CRITICAL: Return code of 255 is out of bounds
[06:09:40] PROBLEM - Check whether ferm is active by checking the default input chain on francium is CRITICAL: Return code of 255 is out of bounds
[06:09:40] PROBLEM - Disk space on francium is CRITICAL: Return code of 255 is out of bounds
[06:09:49] PROBLEM - dhclient process on francium is CRITICAL: Return code of 255 is out of bounds
[06:10:10] PROBLEM - configured eth on francium is CRITICAL: Return code of 255 is out of bounds
[06:10:19] PROBLEM - MD RAID on francium is CRITICAL: Return code of 255 is out of bounds
[06:10:29] PROBLEM - Check size of conntrack table on francium is CRITICAL: Return code of 255 is out of bounds
[06:14:09] PROBLEM - puppet last run on francium is CRITICAL: Return code of 255 is out of bounds
[06:32:33] PROBLEM - puppet last run on labstore1003 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 7 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/usr/local/lib/nagios/plugins/check_raid]
[06:32:44] PROBLEM - puppet last run on mw1323 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 7 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/etc/ImageMagick-6/policy.xml]
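The burst of "Return code of 255 is out of bounds" alerts is the expected noise of the reimage announced at 06:05:43: while francium reinstalls, its remote checks cannot execute, and the resulting exit status 255 falls outside the 0-3 range that the Nagios plugin convention allows. A small illustration of that convention (the exit codes are the standard convention; the interpreting function is hypothetical):

```python
# The Nagios plugin exit-code convention; anything outside 0-3 is reported
# as "out of bounds", which is what a host mid-reimage produces.
NAGIOS_STATES = {0: "OK", 1: "WARNING", 2: "CRITICAL", 3: "UNKNOWN"}

def interpret_exit_code(code: int) -> str:
    return NAGIOS_STATES.get(code, f"Return code of {code} is out of bounds")

assert interpret_exit_code(255) == "Return code of 255 is out of bounds"
```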
[06:47:20] Critical Alert for device asw2-b-eqiad.mgmt.eqiad.wmnet - Critical syslog messages
[06:52:03] RECOVERY - toolschecker: check mtime mod from tools cron job on checker.tools.wmflabs.org is OK: HTTP OK: HTTP/1.1 200 OK - 166 bytes in 0.005 second response time
[06:57:18] C̶r̶i̶t̶i̶c̶a̶l Device asw2-b-eqiad.mgmt.eqiad.wmnet recovered from Critical syslog messages
[06:57:54] RECOVERY - puppet last run on labstore1003 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[06:58:04] RECOVERY - puppet last run on mw1323 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[07:04:04] PROBLEM - toolschecker: check mtime mod from tools cron job on checker.tools.wmflabs.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 SERVICE UNAVAILABLE - string OK not found on http://checker.tools.wmflabs.org:80/toolscron - 185 bytes in 0.006 second response time
[07:09:59] (PS1) Elukey: profile::analytics::refinery::job:data_purge: fix webrequest datasource [puppet] - https://gerrit.wikimedia.org/r/437056
[07:20:39] (PS1) Alex Monk: Tighten Puppet DB access control - check client certificates [puppet] - https://gerrit.wikimedia.org/r/437057 (https://phabricator.wikimedia.org/T194962)
[07:21:19] (CR) jerkins-bot: [V: -1] Tighten Puppet DB access control - check client certificates [puppet] - https://gerrit.wikimedia.org/r/437057 (https://phabricator.wikimedia.org/T194962) (owner: Alex Monk)
[07:23:52] Operations, cloud-services-team, Epic: replace all Ubuntu (trusty) hosts in production with Debian - https://phabricator.wikimedia.org/T186288#4251254 (ops-monitoring-bot) Completed auto-reimage of hosts: ``` ['francium.eqiad.wmnet'] ``` and were **ALL** successful.
[07:26:53] Operations, cloud-services-team, Epic: replace all Ubuntu (trusty) hosts in production with Debian - https://phabricator.wikimedia.org/T186288#4251256 (ArielGlenn)
[07:27:15] Operations, cloud-services-team, Epic: replace all Ubuntu (trusty) hosts in production with Debian - https://phabricator.wikimedia.org/T186288#3939825 (ArielGlenn)
[07:38:21] Operations, cloud-services-team, Epic: replace all Ubuntu (trusty) hosts in production with Debian - https://phabricator.wikimedia.org/T186288#4251280 (elukey)
[08:10:29] PROBLEM - Disk space on elastic1018 is CRITICAL: DISK CRITICAL - free space: /srv 59451 MB (12% inode=99%)
[08:18:19] RECOVERY - Disk space on elastic1018 is OK: DISK OK
[08:19:20] RECOVERY - Check systemd state on kubernetes2003 is OK: OK - running: The system is fully operational
[08:22:39] PROBLEM - Check systemd state on kubernetes2003 is CRITICAL: CRITICAL - degraded: The system is operational but one or more units failed.
[08:34:40] PROBLEM - High lag on wdqs1003 is CRITICAL: 3623 ge 3600 https://grafana.wikimedia.org/dashboard/db/wikidata-query-service?orgId=1&panelId=8&fullscreen
[09:03:30] Operations, cloud-services-team, Epic: replace all Ubuntu (trusty) hosts in production with Debian - https://phabricator.wikimedia.org/T186288#4251329 (TerraCodes)
[09:04:01] Operations, cloud-services-team, Epic: replace all Ubuntu (trusty) hosts in production with Debian - https://phabricator.wikimedia.org/T186288#3939825 (TerraCodes) Since the tasks are marked as done, I think these servers are able to be checked off.
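Change 437057 above tightens PuppetDB access by requiring TLS client certificates. A hedged sketch of what exercising such a setup looks like from a client, assuming the `requests` library; the certificate paths are hypothetical placeholders, and Krenair's actual curl test appears later in the channel at 16:42:30:

```python
# Hypothetical client-certificate request against a PuppetDB endpoint, in
# the spirit of change 437057. All file paths are placeholders; the URL is
# the one quoted in the channel.
import requests

resp = requests.get(
    "https://deployment-puppetdb02.deployment-prep.eqiad.wmflabs/pdb/cmd/v1",
    cert=("/path/to/client-cert.pem", "/path/to/client-key.pem"),  # hypothetical
    verify="/path/to/puppet-ca.pem",                               # hypothetical
)
# With certificate checking enforced, a request without a valid client
# certificate should be rejected rather than answered.
print(resp.status_code)
```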
[09:37:29] PROBLEM - MD RAID on wtp1043 is CRITICAL: CRITICAL: State: degraded, Active: 3, Working: 3, Failed: 1, Spare: 0
[09:37:35] Operations, ops-eqiad: Degraded RAID on wtp1043 - https://phabricator.wikimedia.org/T196260#4251333 (ops-monitoring-bot)
[09:57:37] Operations, cloud-services-team, Epic: replace all Ubuntu (trusty) hosts in production with Debian - https://phabricator.wikimedia.org/T186288#4251387 (TerraCodes)
[09:57:59] Operations, cloud-services-team, Epic: replace all Ubuntu (trusty) hosts in production with Debian - https://phabricator.wikimedia.org/T186288#3939825 (TerraCodes) whoops, not decommissioned yet
[10:13:10] RECOVERY - toolschecker: check mtime mod from tools cron job on checker.tools.wmflabs.org is OK: HTTP OK: HTTP/1.1 200 OK - 166 bytes in 0.010 second response time
[10:19:10] RECOVERY - Check systemd state on kubernetes2003 is OK: OK - running: The system is fully operational
[10:22:30] PROBLEM - Check systemd state on kubernetes2003 is CRITICAL: CRITICAL - degraded: The system is operational but one or more units failed.
[10:36:11] ACKNOWLEDGEMENT - MD RAID on wtp1043 is CRITICAL: CRITICAL: State: degraded, Active: 3, Working: 3, Failed: 1, Spare: 0 nagiosadmin RAID handler auto-ack: https://phabricator.wikimedia.org/T196260
[10:54:29] PROBLEM - Device not healthy -SMART- on wtp1043 is CRITICAL: cluster=parsoid device=sda instance=wtp1043:9100 job=node site=eqiad https://grafana.wikimedia.org/dashboard/db/host-overview?var-server=wtp1043&var-datasource=eqiad%2520prometheus%252Fops
[10:58:16] (CR) Zhuyifei1999: "I may be wrong (not a man-page writer here), but I don't see any information on the arguments the command supports" (1 comment) [software/tools-webservice] - https://gerrit.wikimedia.org/r/437054 (https://phabricator.wikimedia.org/T95097) (owner: Nehajha)
[13:24:59] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1955 bytes in 0.084 second response time
[13:39:26] Operations, Dumps-Generation, Wikimedia-log-errors: High rate of "Memcached error .. CONNECTION FAILURE" on snapshot hosts - https://phabricator.wikimedia.org/T196303#4251960 (Krinkle)
[13:58:49] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1966 bytes in 0.112 second response time
[14:19:19] RECOVERY - Check systemd state on kubernetes2003 is OK: OK - running: The system is fully operational
[14:22:30] PROBLEM - Check systemd state on kubernetes2003 is CRITICAL: CRITICAL - degraded: The system is operational but one or more units failed.
[14:34:39] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1960 bytes in 0.069 second response time
[14:41:50] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1964 bytes in 0.117 second response time
[15:02:19] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1957 bytes in 0.067 second response time
[15:46:19] PROBLEM - kubelet operational latencies on kubernetes1002 is CRITICAL: instance=kubernetes1002.eqiad.wmnet operation_type={create_container,start_container} https://grafana.wikimedia.org/dashboard/db/kubernetes-kubelets?orgId=1
[15:47:20] RECOVERY - kubelet operational latencies on kubernetes1002 is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/kubernetes-kubelets?orgId=1
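The wtp1043 MD RAID alert at 09:37:29 quotes the counters that `mdadm --detail` reports (State, Active, Working, Failed, Spare devices). A sketch of collecting those fields; shelling out to mdadm is real, but the device path and parsing are illustrative assumptions rather than the production RAID handler:

```python
# Sketch of summarising MD RAID health the way the wtp1043 alert above
# does; the device path and parsing are illustrative.
import re
import subprocess

def raid_summary(device: str = "/dev/md0") -> dict[str, str]:
    out = subprocess.run(["mdadm", "--detail", device],
                         capture_output=True, text=True).stdout
    wanted = {"State", "Active Devices", "Working Devices",
              "Failed Devices", "Spare Devices"}
    summary = {}
    for line in out.splitlines():
        m = re.match(r"\s*([A-Za-z ]+?)\s*:\s*(.+)", line)
        if m and m.group(1) in wanted:
            summary[m.group(1)] = m.group(2).strip()
    return summary

# A degraded array like wtp1043's would show a "degraded" State with
# Failed Devices: 1, matching the counters in the alert text.
```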
[16:04:39] PROBLEM - kubelet operational latencies on kubernetes1003 is CRITICAL: instance=kubernetes1003.eqiad.wmnet operation_type={container_status,create_container,image_status,podsandbox_status,remove_container,start_container} https://grafana.wikimedia.org/dashboard/db/kubernetes-kubelets?orgId=1
[16:05:49] RECOVERY - kubelet operational latencies on kubernetes1003 is OK: All metrics within thresholds. https://grafana.wikimedia.org/dashboard/db/kubernetes-kubelets?orgId=1
[16:39:11] Operations, Analytics, hardware-requests: Site: eqiad | hardware request for a dedicated stat analytics host for the Research team - https://phabricator.wikimedia.org/T196080#4252195 (RobH) >>! In T196080#4249584, @Ottomata wrote: > That is not a bad idea. Although moving folks between stat boxes is...
[16:42:30] (CR) Alex Monk: "krenair@deployment-puppetmaster03:~$ sudo curl "https://deployment-puppetdb02.deployment-prep.eqiad.wmflabs/pdb/cmd/v1" --cert /var/lib/pu" [puppet] - https://gerrit.wikimedia.org/r/437057 (https://phabricator.wikimedia.org/T194962) (owner: Alex Monk)
[16:47:47] Puppet, Beta-Cluster-Infrastructure, Patch-For-Review: Set up puppet exported resources to collect ssh host keys for beta - https://phabricator.wikimedia.org/T72792#4252230 (Krenair) >>! In T72792#4247163, @Krenair wrote: > Also probably isn't handling deployment-snapshot01 as that uses deployment-du...
[17:08:57] (CR) BryanDavis: "I would like to see the new documentation source files and makefiles placed in a subdirectory. "docs" would probably be an appropriate nam" (1 comment) [software/tools-webservice] - https://gerrit.wikimedia.org/r/437054 (https://phabricator.wikimedia.org/T95097) (owner: Nehajha)
[17:16:50] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1971 bytes in 0.072 second response time
[17:52:30] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1967 bytes in 0.062 second response time
[17:59:50] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1973 bytes in 0.093 second response time
[18:05:40] PROBLEM - MariaDB Slave Lag: s8 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 930.76 seconds
[18:30:00] RECOVERY - MariaDB Slave Lag: s8 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 204.26 seconds
[18:40:39] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1959 bytes in 0.091 second response time
[19:00:19] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1964 bytes in 0.083 second response time
[20:01:09] PROBLEM - High lag on wdqs1003 is CRITICAL: 3609 ge 3600 https://grafana.wikimedia.org/dashboard/db/wikidata-query-service?orgId=1&panelId=8&fullscreen
[20:57:29] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1952 bytes in 0.077 second response time
[21:14:50] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1967 bytes in 0.070 second response time
[21:19:59] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1953 bytes in 0.101 second response time
[21:27:19] PROBLEM - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is CRITICAL: HTTP CRITICAL: HTTP/1.1 200 OK - pattern not found - 1970 bytes in 0.073 second response time
[21:52:39] RECOVERY - wikidata.org dispatch lag is higher than 300s on www.wikidata.org is OK: HTTP OK: HTTP/1.1 200 OK - 1949 bytes in 0.081 second response time