[00:28:23] 10Operations, 10ops-eqiad, 10Cloud-VPS, 10cloud-services-team: Rack/cable/configure asw2-b-eqiad switch stack - https://phabricator.wikimedia.org/T183585 (10ayounsi) Aiming at doing the asw-b to asw2-b migration on July 31st (3pm UTC, 11am EDT, 8am PDT), 4h. due to people's vacations, we might have to do t...
[00:29:49] 10Operations, 10ops-eqiad, 10Cloud-VPS, 10cloud-services-team: Rack/cable/configure asw2-b-eqiad switch stack - https://phabricator.wikimedia.org/T183585 (10ayounsi)
[00:34:58] (03CR) 10Reedy: Add fluidsynth to wikimedia servers (032 comments) [puppet] - 10https://gerrit.wikimedia.org/r/445603 (https://phabricator.wikimedia.org/T184598) (owner: 10Reedy)
[00:36:18] 10Operations, 10ops-eqiad, 10netops, 10Patch-For-Review: Rack/cable/configure asw2-c-eqiad switch stack - https://phabricator.wikimedia.org/T187962 (10ayounsi)
[01:02:11] (03CR) 10Legoktm: Add fluidsynth to wikimedia servers (032 comments) [puppet] - 10https://gerrit.wikimedia.org/r/445603 (https://phabricator.wikimedia.org/T184598) (owner: 10Reedy)
[01:11:31] (03CR) 10Reedy: Add fluidsynth to wikimedia servers (031 comment) [puppet] - 10https://gerrit.wikimedia.org/r/445603 (https://phabricator.wikimedia.org/T184598) (owner: 10Reedy)
[01:57:08] 10Operations, 10Core-Platform-Team, 10WMF-JobQueue, 10monitoring, and 3 others: Collect error logs from jobchron/jobrunner services in Logstash - https://phabricator.wikimedia.org/T172479 (10Krinkle) 05Open>03declined Per {T198220}.
[02:19:23] (03PS2) 10Krinkle: services: Define dc-pairs of the same service together [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443872
[02:21:04] (03CR) 10jerkins-bot: [V: 04-1] services: Define dc-pairs of the same service together [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443872 (owner: 10Krinkle)
[02:43:58] (03CR) 10Ebe123: [C: 04-1] Add fluidsynth to wikimedia servers (031 comment) [puppet] - 10https://gerrit.wikimedia.org/r/445603 (https://phabricator.wikimedia.org/T184598) (owner: 10Reedy)
[03:23:23] (03CR) 10Krinkle: [C: 032] "To be sure, converted both the before and after state of the array to json, and ran a key-sorted diff over them. identical." [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443872 (owner: 10Krinkle)
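The key-sorted JSON diff mentioned in the review comment above is a quick way to prove that a config refactor is a pure no-op. The following is only a minimal sketch of that idea, not the actual commands used for the change: the file names are hypothetical and it assumes the before/after PHP service arrays have already been exported to JSON by some one-off step. An empty diff ("identical") means the refactor changed only the representation, not any values.

    import difflib
    import json
    import sys

    def normalized(path):
        # Load a JSON export of the config array and re-serialize it with keys
        # sorted recursively, so purely cosmetic key reordering does not show up.
        with open(path) as f:
            data = json.load(f)
        return json.dumps(data, indent=2, sort_keys=True).splitlines(keepends=True)

    # Hypothetical file names; the real arrays live in wmf-config/*.php and would
    # need to be dumped to JSON first.
    before = normalized("services-before.json")
    after = normalized("services-after.json")

    diff = list(difflib.unified_diff(before, after, "before", "after"))
    sys.stdout.writelines(diff)
    print("identical" if not diff else "%d differing lines" % len(diff))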
[03:23:29] (03PS2) 10Krinkle: services: Convert LabsServices.php to static array file [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443873
[03:23:54] (03PS3) 10Krinkle: services: Convert LabsServices.php to static array file [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443873
[03:24:11] (03PS2) 10Krinkle: services: Convert ProductionServices.php to static array file [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443874
[03:24:38] (03PS3) 10Krinkle: services: Convert ProductionServices.php to static array file [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443874
[03:24:40] 10Operations, 10Wiki-Setup (Rename): Move the Moldovan Wikipedia - https://phabricator.wikimedia.org/T25217 (10Liuxinyu970226) p:05Low>03Lowest
[03:24:50] (03CR) 10jerkins-bot: [V: 04-1] services: Define dc-pairs of the same service together [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443872 (owner: 10Krinkle)
[03:24:52] (03CR) 10jerkins-bot: [V: 04-1] services: Convert LabsServices.php to static array file [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443873 (owner: 10Krinkle)
[03:24:55] * Krinkle staging on mwdebug1002 and deploy1001
[03:25:39] (03CR) 10jerkins-bot: [V: 04-1] services: Convert LabsServices.php to static array file [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443873 (owner: 10Krinkle)
[03:25:53] (03CR) 10jerkins-bot: [V: 04-1] services: Convert ProductionServices.php to static array file [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443874 (owner: 10Krinkle)
[03:26:02] PROBLEM - MariaDB Slave Lag: s1 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 797.29 seconds
[03:28:28] (03CR) 10Krinkle: "recheck" [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443874 (owner: 10Krinkle)
[03:28:37] (03CR) 10Krinkle: [C: 032] services: Define dc-pairs of the same service together [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443872 (owner: 10Krinkle)
[03:28:40] (03CR) 10Krinkle: "recheck" [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443873 (owner: 10Krinkle)
[03:29:53] (03Merged) 10jenkins-bot: services: Define dc-pairs of the same service together [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443872 (owner: 10Krinkle)
[03:30:09] (03CR) 10jenkins-bot: services: Define dc-pairs of the same service together [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443872 (owner: 10Krinkle)
[03:38:25] (03CR) 10Krinkle: [C: 032] services: Convert LabsServices.php to static array file [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443873 (owner: 10Krinkle)
[03:39:00] !log krinkle@deploy1001 Synchronized wmf-config/ProductionServices.php: Ib079ec90ae515 - clean up (duration: 00m 49s)
[03:39:02] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[03:40:10] (03Merged) 10jenkins-bot: services: Convert LabsServices.php to static array file [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443873 (owner: 10Krinkle)
[03:40:23] (03CR) 10jenkins-bot: services: Convert LabsServices.php to static array file [mediawiki-config] - 10https://gerrit.wikimedia.org/r/443873 (owner: 10Krinkle)
[03:41:49] !log krinkle@deploy1001 Synchronized wmf-config/: I2bff4eff4eb33b176454 - beta only (duration: 00m 51s)
[03:41:52] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log
[04:10:02] RECOVERY - MariaDB Slave Lag: s1 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 289.47 seconds
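The recurring "MariaDB Slave Lag" alerts above (for example the s1 lag on dbstore1002) come from a replication-lag probe that prints an OK/WARNING/CRITICAL line plus the measured lag. The sketch below is only a rough approximation of what such a check does, with assumed thresholds and credentials that are not the production values; the real plugin may measure lag via pt-heartbeat rather than Seconds_Behind_Master, and dbstore1002 uses multi-source replication, which needs per-channel status queries.

    import sys
    import pymysql  # assumption: the PyMySQL client library is available

    WARN_SECONDS = 300   # assumed thresholds, not the production ones
    CRIT_SECONDS = 600

    def check_slave_lag(host, user, password):
        conn = pymysql.connect(host=host, user=user, password=password,
                               cursorclass=pymysql.cursors.DictCursor)
        try:
            with conn.cursor() as cur:
                # On a multi-source replica like dbstore1002 a real check would
                # run "SHOW SLAVE 's1' STATUS" (or SHOW ALL SLAVES STATUS) per channel.
                cur.execute("SHOW SLAVE STATUS")
                status = cur.fetchone()
        finally:
            conn.close()

        if not status:
            return 0, "OK slave_sql_lag not a slave"
        lag = status.get("Seconds_Behind_Master")
        if lag is None:
            return 2, "CRITICAL slave_sql_lag replication stopped or not running"
        if lag >= CRIT_SECONDS:
            return 2, "CRITICAL slave_sql_lag Replication lag: %.2f seconds" % lag
        if lag >= WARN_SECONDS:
            return 1, "WARNING slave_sql_lag Replication lag: %.2f seconds" % lag
        return 0, "OK slave_sql_lag Replication lag: %.2f seconds" % lag

    if __name__ == "__main__":
        # Hypothetical credentials; a real deployment would read these from config.
        code, message = check_slave_lag("dbstore1002", "nagios", "secret")
        print(message)
        sys.exit(code)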
[04:15:12] PROBLEM - Device not healthy -SMART- on db1069 is CRITICAL: cluster=mysql device=megaraid,0 instance=db1069:9100 job=node site=eqiad https://grafana.wikimedia.org/dashboard/db/host-overview?var-server=db1069&var-datasource=eqiad%2520prometheus%252Fops
[04:51:11] PROBLEM - Device not healthy -SMART- on labstore1003 is CRITICAL: cluster=labsnfs device=megaraid,11 instance=labstore1003:9100 job=node site=eqiad https://grafana.wikimedia.org/dashboard/db/host-overview?var-server=labstore1003&var-datasource=eqiad%2520prometheus%252Fops
[05:42:25] (03CR) 10Ebe123: [C: 04-1] Add fluidsynth to wikimedia servers (031 comment) [puppet] - 10https://gerrit.wikimedia.org/r/445603 (https://phabricator.wikimedia.org/T184598) (owner: 10Reedy)
[06:28:22] PROBLEM - puppet last run on sodium is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/etc/profile.d/bash_autologout.sh]
[06:58:51] RECOVERY - puppet last run on sodium is OK: OK: Puppet is currently enabled, last run 3 minutes ago with 0 failures
[09:20:01] PROBLEM - Host mc2025 is DOWN: PING CRITICAL - Packet loss = 100%
[09:21:41] RECOVERY - Host mc2025 is UP: PING OK - Packet loss = 0%, RTA = 36.10 ms
[10:17:50] (03PS1) 10Urbanecm: Initial configuration for zhwikiversity [mediawiki-config] - 10https://gerrit.wikimedia.org/r/445764 (https://phabricator.wikimedia.org/T199577)
[10:19:36] (03CR) 10jerkins-bot: [V: 04-1] Initial configuration for zhwikiversity [mediawiki-config] - 10https://gerrit.wikimedia.org/r/445764 (https://phabricator.wikimedia.org/T199577) (owner: 10Urbanecm)
[10:20:28] (03PS2) 10Urbanecm: Initial configuration for zhwikiversity [mediawiki-config] - 10https://gerrit.wikimedia.org/r/445764 (https://phabricator.wikimedia.org/T199577)
[10:28:11] (03PS1) 10Urbanecm: Initial configuration for wikimania2019wiki [mediawiki-config] - 10https://gerrit.wikimedia.org/r/445765 (https://phabricator.wikimedia.org/T199509)
[10:30:51] (03PS1) 10Urbanecm: Add wikimania2019wiki [puppet] - 10https://gerrit.wikimedia.org/r/445766 (https://phabricator.wikimedia.org/T199509)
[10:49:12] (03CR) 10Fomafix: "Since https://gerrit.wikimedia.org/r/443687 the values sr-cyrl and sr-latn are already supported by the parameter variant even without I75" [puppet] - 10https://gerrit.wikimedia.org/r/368248 (https://phabricator.wikimedia.org/T117845) (owner: 10Fomafix)
[11:13:28] 10Operations, 10Stewards-and-global-tools, 10Wikimedia-Site-requests, 10Patch-For-Review, 10User-notice: Apply editing rate limits for all users - https://phabricator.wikimedia.org/T56515 (10Bugreporter) 05Open>03Resolved a:03Bawolff See rMWcefdcefdb8f15ffdec8345b93aff2036db92d1f7
[11:25:02] PROBLEM - MariaDB Slave Lag: s1 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 897.14 seconds
[11:28:31] PROBLEM - MariaDB Slave SQL: s8 on dbstore1002 is CRITICAL: CRITICAL slave_sql_state could not connect
[11:28:31] PROBLEM - MariaDB Slave SQL: x1 on dbstore1002 is CRITICAL: CRITICAL slave_sql_state could not connect
[11:28:32] PROBLEM - MariaDB Slave IO: m3 on dbstore1002 is CRITICAL: CRITICAL slave_io_state could not connect
[11:28:32] PROBLEM - MariaDB Slave IO: s2 on dbstore1002 is CRITICAL: CRITICAL slave_io_state could not connect
[11:28:41] PROBLEM - MariaDB Slave SQL: s3 on dbstore1002 is CRITICAL: CRITICAL slave_sql_state could not connect
[11:28:41] PROBLEM - MariaDB Slave IO: s5 on dbstore1002 is CRITICAL: CRITICAL slave_io_state could not connect
[11:28:41] PROBLEM - MariaDB Slave SQL: m3 on
dbstore1002 is CRITICAL: CRITICAL slave_sql_state could not connect [11:28:42] PROBLEM - MariaDB Slave IO: x1 on dbstore1002 is CRITICAL: CRITICAL slave_io_state could not connect [11:28:42] PROBLEM - MariaDB Slave SQL: m2 on dbstore1002 is CRITICAL: CRITICAL slave_sql_state could not connect [11:28:42] PROBLEM - MariaDB Slave SQL: s4 on dbstore1002 is CRITICAL: CRITICAL slave_sql_state could not connect [11:28:51] PROBLEM - MariaDB Slave IO: m2 on dbstore1002 is CRITICAL: CRITICAL slave_io_state could not connect [11:28:51] PROBLEM - MariaDB Slave SQL: s6 on dbstore1002 is CRITICAL: CRITICAL slave_sql_state could not connect [11:28:52] PROBLEM - MariaDB Slave IO: s8 on dbstore1002 is CRITICAL: CRITICAL slave_io_state could not connect [11:28:52] PROBLEM - MariaDB Slave SQL: s7 on dbstore1002 is CRITICAL: CRITICAL slave_sql_state could not connect [11:29:02] PROBLEM - MariaDB Slave IO: s6 on dbstore1002 is CRITICAL: CRITICAL slave_io_state could not connect [11:29:02] PROBLEM - MariaDB Slave IO: s4 on dbstore1002 is CRITICAL: CRITICAL slave_io_state could not connect [11:29:12] PROBLEM - MariaDB Slave IO: s7 on dbstore1002 is CRITICAL: CRITICAL slave_io_state could not connect [11:29:12] PROBLEM - MariaDB Slave IO: s1 on dbstore1002 is CRITICAL: CRITICAL slave_io_state could not connect [11:29:12] PROBLEM - MariaDB Slave SQL: s2 on dbstore1002 is CRITICAL: CRITICAL slave_sql_state could not connect [11:29:21] PROBLEM - MariaDB Slave SQL: s5 on dbstore1002 is CRITICAL: CRITICAL slave_sql_state could not connect [11:29:21] PROBLEM - MariaDB Slave SQL: s1 on dbstore1002 is CRITICAL: CRITICAL slave_sql_state could not connect [11:29:21] PROBLEM - MariaDB Slave IO: s3 on dbstore1002 is CRITICAL: CRITICAL slave_io_state could not connect [11:30:31] dbstore1002 crashed, it is restarting now [11:31:45] Queried about 5144290000 rows [11:36:02] PROBLEM - MariaDB Slave Lag: s4 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag could not connect [11:36:11] PROBLEM - MariaDB Slave Lag: s2 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag could not connect [11:36:11] PROBLEM - MariaDB Slave Lag: s6 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag could not connect [11:36:12] PROBLEM - MariaDB Slave Lag: s7 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag could not connect [11:36:32] PROBLEM - MariaDB Slave Lag: s3 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag could not connect [11:36:51] PROBLEM - MariaDB Slave Lag: s5 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag could not connect [11:36:51] PROBLEM - MariaDB Slave Lag: s8 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag could not connect [11:37:01] PROBLEM - MariaDB Slave Lag: x1 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag could not connect [11:37:01] PROBLEM - MariaDB Slave Lag: m2 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag could not connect [11:37:02] PROBLEM - MariaDB Slave Lag: m3 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag could not connect [11:40:38] what [11:50:31] PROBLEM - MariaDB Slave Lag: s1 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 2420.77 seconds [11:50:31] PROBLEM - MariaDB Slave Lag: s4 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 1461.77 seconds [11:50:32] PROBLEM - MariaDB Slave Lag: s2 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 1465.79 seconds [11:50:32] PROBLEM - MariaDB Slave Lag: s6 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 1465.79 seconds [11:50:41] PROBLEM - MariaDB Slave Lag: s7 on 
dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 1472.03 seconds [11:50:41] RECOVERY - MariaDB Slave IO: m3 on dbstore1002 is OK: OK slave_io_state not a slave [11:50:42] RECOVERY - MariaDB Slave SQL: m3 on dbstore1002 is OK: OK slave_sql_state not a slave [11:50:51] RECOVERY - MariaDB Slave SQL: m2 on dbstore1002 is OK: OK slave_sql_state not a slave [11:50:51] RECOVERY - MariaDB Slave IO: m2 on dbstore1002 is OK: OK slave_io_state not a slave [11:50:52] PROBLEM - MariaDB Slave Lag: s3 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 1489.29 seconds [11:51:12] PROBLEM - MariaDB Slave Lag: s5 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 1505.31 seconds [11:51:12] PROBLEM - MariaDB Slave Lag: s8 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 1506.33 seconds [11:51:21] PROBLEM - MariaDB Slave Lag: x1 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 1512.96 seconds [11:51:22] RECOVERY - MariaDB Slave Lag: m3 on dbstore1002 is OK: OK slave_sql_lag not a slave [11:51:22] RECOVERY - MariaDB Slave Lag: m2 on dbstore1002 is OK: OK slave_sql_lag not a slave [11:53:03] jynus: need help? [11:54:20] Ah, it just came back [11:55:12] RECOVERY - MariaDB Slave SQL: s6 on dbstore1002 is OK: OK slave_sql_state Slave_SQL_Running: Yes [11:55:21] RECOVERY - MariaDB Slave IO: s8 on dbstore1002 is OK: OK slave_io_state Slave_IO_Running: Yes [11:55:21] RECOVERY - MariaDB Slave SQL: s7 on dbstore1002 is OK: OK slave_sql_state Slave_SQL_Running: Yes [11:55:31] RECOVERY - MariaDB Slave IO: s6 on dbstore1002 is OK: OK slave_io_state Slave_IO_Running: Yes [11:55:32] RECOVERY - MariaDB Slave IO: s4 on dbstore1002 is OK: OK slave_io_state Slave_IO_Running: Yes [11:55:41] RECOVERY - MariaDB Slave IO: s7 on dbstore1002 is OK: OK slave_io_state Slave_IO_Running: Yes [11:55:41] RECOVERY - MariaDB Slave IO: s1 on dbstore1002 is OK: OK slave_io_state Slave_IO_Running: Yes [11:55:42] RECOVERY - MariaDB Slave SQL: s2 on dbstore1002 is OK: OK slave_sql_state Slave_SQL_Running: Yes [11:55:42] RECOVERY - MariaDB Slave SQL: s5 on dbstore1002 is OK: OK slave_sql_state Slave_SQL_Running: Yes [11:55:42] RECOVERY - MariaDB Slave SQL: s1 on dbstore1002 is OK: OK slave_sql_state Slave_SQL_Running: Yes [11:55:51] RECOVERY - MariaDB Slave IO: s3 on dbstore1002 is OK: OK slave_io_state Slave_IO_Running: Yes [11:56:01] RECOVERY - MariaDB Slave SQL: x1 on dbstore1002 is OK: OK slave_sql_state Slave_SQL_Running: Yes [11:56:02] RECOVERY - MariaDB Slave SQL: s8 on dbstore1002 is OK: OK slave_sql_state Slave_SQL_Running: Yes [11:56:11] RECOVERY - MariaDB Slave IO: s2 on dbstore1002 is OK: OK slave_io_state Slave_IO_Running: Yes [11:56:11] RECOVERY - MariaDB Slave SQL: s3 on dbstore1002 is OK: OK slave_sql_state Slave_SQL_Running: Yes [11:56:12] RECOVERY - MariaDB Slave IO: s5 on dbstore1002 is OK: OK slave_io_state Slave_IO_Running: Yes [11:56:21] RECOVERY - MariaDB Slave SQL: s4 on dbstore1002 is OK: OK slave_sql_state Slave_SQL_Running: Yes [11:56:21] RECOVERY - MariaDB Slave IO: x1 on dbstore1002 is OK: OK slave_io_state Slave_IO_Running: Yes [11:56:51] RECOVERY - MariaDB Slave Lag: x1 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 0.14 seconds [11:58:52] RECOVERY - MariaDB Slave Lag: s5 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 212.77 seconds [12:00:22] I have created T199614 for the tracking [12:00:23] T199614: dbstore1002 MySQL crashed and got restarted - https://phabricator.wikimedia.org/T199614 [12:04:11] 
ACKNOWLEDGEMENT - MariaDB Slave Lag: s1 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 2607.68 seconds Jcrespo crashed, recovering [12:04:11] ACKNOWLEDGEMENT - MariaDB Slave Lag: s2 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 1028.55 seconds Jcrespo crashed, recovering [12:04:11] ACKNOWLEDGEMENT - MariaDB Slave Lag: s3 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 1469.97 seconds Jcrespo crashed, recovering [12:04:11] ACKNOWLEDGEMENT - MariaDB Slave Lag: s4 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 1648.68 seconds Jcrespo crashed, recovering [12:04:11] ACKNOWLEDGEMENT - MariaDB Slave Lag: s7 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 1589.72 seconds Jcrespo crashed, recovering [12:04:11] ACKNOWLEDGEMENT - MariaDB Slave Lag: s8 on dbstore1002 is CRITICAL: CRITICAL slave_sql_lag Replication lag: 1834.51 seconds Jcrespo crashed, recovering [12:04:51] RECOVERY - MariaDB Slave Lag: s6 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 0.08 seconds [12:07:01] RECOVERY - MariaDB Slave Lag: s2 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 188.54 seconds [12:10:24] (03CR) 10Liuxinyu970226: [C: 031] Initial configuration for zhwikiversity [mediawiki-config] - 10https://gerrit.wikimedia.org/r/445764 (https://phabricator.wikimedia.org/T199577) (owner: 10Urbanecm) [12:16:02] RECOVERY - MariaDB Slave Lag: s3 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 151.70 seconds [12:19:41] RECOVERY - MariaDB Slave Lag: s8 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 271.79 seconds [12:24:31] RECOVERY - MariaDB Slave Lag: s7 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 243.42 seconds [12:30:52] RECOVERY - MariaDB Slave Lag: s4 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 219.91 seconds [12:32:01] RECOVERY - MariaDB Slave Lag: s1 on dbstore1002 is OK: OK slave_sql_lag Replication lag: 238.11 seconds [14:47:21] PROBLEM - CirrusSearch eqiad 95th percentile latency on graphite1001 is CRITICAL: CRITICAL: 20.00% of data above the critical threshold [1000.0] https://grafana.wikimedia.org/dashboard/db/elasticsearch-percentiles?panelId=19&fullscreen&orgId=1&var-cluster=eqiad&var-smoothing=1 [14:51:31] PROBLEM - recommendation_api endpoints health on scb2002 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [14:52:42] RECOVERY - recommendation_api endpoints health on scb2002 is OK: All endpoints are healthy [14:53:31] PROBLEM - recommendation_api endpoints health on scb1004 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [14:54:41] PROBLEM - recommendation_api endpoints health on scb1001 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [14:55:41] RECOVERY - recommendation_api endpoints health on scb1004 is OK: All endpoints are healthy [14:56:01] PROBLEM - recommendation_api endpoints health on scb2004 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 
(expecting: 200) [14:56:51] PROBLEM - recommendation_api endpoints health on scb1002 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [14:56:51] RECOVERY - recommendation_api endpoints health on scb1001 is OK: All endpoints are healthy [14:57:21] PROBLEM - recommendation_api endpoints health on scb2006 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) timed out before a response was received [14:59:21] RECOVERY - recommendation_api endpoints health on scb2004 is OK: All endpoints are healthy [15:00:21] PROBLEM - recommendation_api endpoints health on scb2003 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:00:32] RECOVERY - recommendation_api endpoints health on scb2006 is OK: All endpoints are healthy [15:02:21] RECOVERY - recommendation_api endpoints health on scb1002 is OK: All endpoints are healthy [15:02:31] PROBLEM - recommendation_api endpoints health on scb1001 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) timed out before a response was received [15:02:41] PROBLEM - recommendation_api endpoints health on scb2002 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:03:32] RECOVERY - recommendation_api endpoints health on scb1001 is OK: All endpoints are healthy [15:03:42] PROBLEM - recommendation_api endpoints health on scb2004 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:03:51] PROBLEM - recommendation_api endpoints health on scb2005 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:03:52] PROBLEM - recommendation_api endpoints health on scb2001 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) timed out before a response was received [15:04:31] PROBLEM - recommendation_api endpoints health on scb1004 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:04:52] RECOVERY - recommendation_api endpoints health on scb2005 is OK: All endpoints are healthy [15:04:52] RECOVERY - recommendation_api endpoints health on scb2004 is OK: All endpoints are healthy [15:05:01] RECOVERY - recommendation_api endpoints health on scb2001 is OK: All endpoints are healthy [15:05:41] RECOVERY - recommendation_api endpoints health on scb1004 is OK: All endpoints are healthy [15:05:51] RECOVERY - recommendation_api endpoints health on scb2003 is OK: All endpoints are healthy [15:06:02] PROBLEM - recommendation_api endpoints health on scb2006 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 
404 (expecting: 200) [15:07:01] PROBLEM - recommendation_api endpoints health on scb1001 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) timed out before a response was received [15:08:01] RECOVERY - recommendation_api endpoints health on scb1001 is OK: All endpoints are healthy [15:08:12] PROBLEM - recommendation_api endpoints health on scb2005 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:08:12] PROBLEM - recommendation_api endpoints health on scb2004 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:08:12] PROBLEM - recommendation_api endpoints health on scb2001 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:08:21] RECOVERY - recommendation_api endpoints health on scb2002 is OK: All endpoints are healthy [15:08:21] RECOVERY - recommendation_api endpoints health on scb2006 is OK: All endpoints are healthy [15:09:11] PROBLEM - recommendation_api endpoints health on scb2003 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:10:21] RECOVERY - recommendation_api endpoints health on scb2003 is OK: All endpoints are healthy [15:10:32] RECOVERY - recommendation_api endpoints health on scb2001 is OK: All endpoints are healthy [15:11:11] PROBLEM - recommendation_api endpoints health on scb1004 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:11:21] PROBLEM - recommendation_api endpoints health on scb1002 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:11:21] PROBLEM - recommendation_api endpoints health on scb1001 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:11:32] RECOVERY - recommendation_api endpoints health on scb2005 is OK: All endpoints are healthy [15:11:41] PROBLEM - recommendation_api endpoints health on scb2006 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:12:12] PROBLEM - recommendation_api endpoints health on scb1003 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:13:32] PROBLEM - recommendation_api endpoints health on scb2003 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 
(expecting: 200) [15:13:51] PROBLEM - recommendation_api endpoints health on scb2001 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:13:51] PROBLEM - recommendation_api endpoints health on scb2002 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:13:52] RECOVERY - recommendation_api endpoints health on scb2006 is OK: All endpoints are healthy [15:14:32] RECOVERY - recommendation_api endpoints health on scb1003 is OK: All endpoints are healthy [15:14:42] RECOVERY - recommendation_api endpoints health on scb2003 is OK: All endpoints are healthy [15:16:02] RECOVERY - recommendation_api endpoints health on scb2002 is OK: All endpoints are healthy [15:16:51] RECOVERY - recommendation_api endpoints health on scb1004 is OK: All endpoints are healthy [15:16:52] RECOVERY - recommendation_api endpoints health on scb1001 is OK: All endpoints are healthy [15:17:11] PROBLEM - recommendation_api endpoints health on scb2005 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:17:12] RECOVERY - recommendation_api endpoints health on scb2001 is OK: All endpoints are healthy [15:17:21] PROBLEM - recommendation_api endpoints health on scb2006 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) timed out before a response was received [15:17:52] PROBLEM - recommendation_api endpoints health on scb1003 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:18:02] PROBLEM - recommendation_api endpoints health on scb2003 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:19:21] PROBLEM - recommendation_api endpoints health on scb2002 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:19:31] RECOVERY - recommendation_api endpoints health on scb2006 is OK: All endpoints are healthy [15:20:31] PROBLEM - recommendation_api endpoints health on scb2001 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:21:21] RECOVERY - recommendation_api endpoints health on scb1002 is OK: All endpoints are healthy [15:22:42] PROBLEM - recommendation_api endpoints health on scb2006 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:23:22] PROBLEM - recommendation_api endpoints health on scb1004 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed 
returned the unexpected status 404 (expecting: 200) [15:23:51] RECOVERY - recommendation_api endpoints health on scb2002 is OK: All endpoints are healthy [15:23:52] RECOVERY - recommendation_api endpoints health on scb2001 is OK: All endpoints are healthy [15:25:01] RECOVERY - recommendation_api endpoints health on scb2005 is OK: All endpoints are healthy [15:25:42] RECOVERY - recommendation_api endpoints health on scb1004 is OK: All endpoints are healthy [15:25:51] RECOVERY - recommendation_api endpoints health on scb2003 is OK: All endpoints are healthy [15:26:52] PROBLEM - recommendation_api endpoints health on scb1002 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:26:52] PROBLEM - recommendation_api endpoints health on scb1001 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:27:02] PROBLEM - recommendation_api endpoints health on scb2002 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:27:12] PROBLEM - recommendation_api endpoints health on scb2001 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:27:21] RECOVERY - recommendation_api endpoints health on scb2006 is OK: All endpoints are healthy [15:28:11] PROBLEM - recommendation_api endpoints health on scb2005 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:30:02] PROBLEM - recommendation_api endpoints health on scb1004 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:30:32] PROBLEM - recommendation_api endpoints health on scb2006 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:31:12] RECOVERY - recommendation_api endpoints health on scb1002 is OK: All endpoints are healthy [15:31:12] RECOVERY - recommendation_api endpoints health on scb1003 is OK: All endpoints are healthy [15:31:12] RECOVERY - recommendation_api endpoints health on scb1004 is OK: All endpoints are healthy [15:31:22] PROBLEM - recommendation_api endpoints health on scb2003 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:33:42] RECOVERY - recommendation_api endpoints health on scb2002 is OK: All endpoints are healthy [15:34:31] PROBLEM - recommendation_api endpoints health on scb1003 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:34:32] PROBLEM - 
recommendation_api endpoints health on scb1004 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:34:41] PROBLEM - recommendation_api endpoints health on scb1002 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200): /{domain}/v1/translation/articles/{source}{/seed} (bad seed) timed out before a response was received [15:35:51] RECOVERY - recommendation_api endpoints health on scb1001 is OK: All endpoints are healthy [15:36:02] RECOVERY - recommendation_api endpoints health on scb2005 is OK: All endpoints are healthy [15:37:02] PROBLEM - recommendation_api endpoints health on scb2002 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:39:11] PROBLEM - recommendation_api endpoints health on scb1001 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:39:21] PROBLEM - recommendation_api endpoints health on scb2005 is CRITICAL: /{domain}/v1/translation/articles/{source}{/seed} (normal source and target with seed) is CRITICAL: Test normal source and target with seed returned the unexpected status 404 (expecting: 200) [15:40:11] RECOVERY - recommendation_api endpoints health on scb1004 is OK: All endpoints are healthy [15:40:11] RECOVERY - recommendation_api endpoints health on scb1002 is OK: All endpoints are healthy [15:40:12] RECOVERY - recommendation_api endpoints health on scb2003 is OK: All endpoints are healthy [15:40:21] RECOVERY - recommendation_api endpoints health on scb1001 is OK: All endpoints are healthy [15:40:31] RECOVERY - recommendation_api endpoints health on scb2005 is OK: All endpoints are healthy [15:40:32] RECOVERY - recommendation_api endpoints health on scb2001 is OK: All endpoints are healthy [15:40:41] RECOVERY - recommendation_api endpoints health on scb2006 is OK: All endpoints are healthy [15:41:12] RECOVERY - recommendation_api endpoints health on scb1003 is OK: All endpoints are healthy [15:41:31] RECOVERY - recommendation_api endpoints health on scb2002 is OK: All endpoints are healthy [15:41:32] RECOVERY - recommendation_api endpoints health on scb2004 is OK: All endpoints are healthy [15:53:12] RECOVERY - CirrusSearch eqiad 95th percentile latency on graphite1001 is OK: OK: Less than 20.00% above the threshold [500.0] https://grafana.wikimedia.org/dashboard/db/elasticsearch-percentiles?panelId=19&fullscreen&orgId=1&var-cluster=eqiad&var-smoothing=1 [17:49:21] PROBLEM - Host cp5001 is DOWN: PING CRITICAL - Packet loss = 100% [17:55:21] PROBLEM - IPsec on kafka-jumbo1001 is CRITICAL: Strongswan CRITICAL - ok: 132 connecting: cp5001_v4, cp5001_v6 [17:55:21] PROBLEM - IPsec on cp2011 is CRITICAL: Strongswan CRITICAL - ok: 78 not-conn: cp5001_v4, cp5001_v6 [17:55:22] PROBLEM - IPsec on cp2008 is CRITICAL: Strongswan CRITICAL - ok: 78 not-conn: cp5001_v4, cp5001_v6 [17:55:22] PROBLEM - IPsec on cp1071 is CRITICAL: Strongswan CRITICAL - ok: 64 not-conn: cp5001_v4, cp5001_v6 [17:55:22] PROBLEM - IPsec on cp1073 is CRITICAL: Strongswan CRITICAL - ok: 64 
not-conn: cp5001_v4, cp5001_v6 [17:55:31] PROBLEM - IPsec on cp1050 is CRITICAL: Strongswan CRITICAL - ok: 64 not-conn: cp5001_v4, cp5001_v6 [17:55:32] PROBLEM - IPsec on cp1074 is CRITICAL: Strongswan CRITICAL - ok: 64 not-conn: cp5001_v4, cp5001_v6 [17:55:32] PROBLEM - IPsec on cp1048 is CRITICAL: Strongswan CRITICAL - ok: 64 connecting: cp5001_v4 not-conn: cp5001_v6 [17:55:41] PROBLEM - IPsec on cp2020 is CRITICAL: Strongswan CRITICAL - ok: 78 not-conn: cp5001_v4, cp5001_v6 [17:55:51] PROBLEM - IPsec on cp1062 is CRITICAL: Strongswan CRITICAL - ok: 64 not-conn: cp5001_v4, cp5001_v6 [17:55:51] PROBLEM - IPsec on cp1072 is CRITICAL: Strongswan CRITICAL - ok: 64 not-conn: cp5001_v4, cp5001_v6 [17:55:51] PROBLEM - IPsec on kafka-jumbo1004 is CRITICAL: Strongswan CRITICAL - ok: 132 connecting: cp5001_v4, cp5001_v6 [17:55:52] PROBLEM - IPsec on kafka-jumbo1002 is CRITICAL: Strongswan CRITICAL - ok: 132 connecting: cp5001_v4, cp5001_v6 [17:55:52] PROBLEM - IPsec on kafka-jumbo1005 is CRITICAL: Strongswan CRITICAL - ok: 132 connecting: cp5001_v4, cp5001_v6 [17:55:52] PROBLEM - IPsec on cp1099 is CRITICAL: Strongswan CRITICAL - ok: 64 not-conn: cp5001_v4, cp5001_v6 [17:55:52] PROBLEM - IPsec on cp1064 is CRITICAL: Strongswan CRITICAL - ok: 64 not-conn: cp5001_v4, cp5001_v6 [17:56:01] PROBLEM - IPsec on cp2002 is CRITICAL: Strongswan CRITICAL - ok: 78 not-conn: cp5001_v4, cp5001_v6 [17:56:01] PROBLEM - IPsec on cp2014 is CRITICAL: Strongswan CRITICAL - ok: 78 not-conn: cp5001_v4, cp5001_v6 [17:56:01] PROBLEM - IPsec on cp2024 is CRITICAL: Strongswan CRITICAL - ok: 78 not-conn: cp5001_v4, cp5001_v6 [17:56:02] PROBLEM - IPsec on cp2026 is CRITICAL: Strongswan CRITICAL - ok: 78 not-conn: cp5001_v4, cp5001_v6 [17:56:02] PROBLEM - IPsec on cp1049 is CRITICAL: Strongswan CRITICAL - ok: 64 not-conn: cp5001_v4, cp5001_v6 [17:56:11] PROBLEM - IPsec on kafka-jumbo1003 is CRITICAL: Strongswan CRITICAL - ok: 132 connecting: cp5001_v4, cp5001_v6 [17:56:11] PROBLEM - IPsec on cp2005 is CRITICAL: Strongswan CRITICAL - ok: 78 not-conn: cp5001_v4, cp5001_v6 [17:56:11] PROBLEM - IPsec on kafka-jumbo1006 is CRITICAL: Strongswan CRITICAL - ok: 132 connecting: cp5001_v4, cp5001_v6 [17:56:12] PROBLEM - IPsec on cp2017 is CRITICAL: Strongswan CRITICAL - ok: 78 not-conn: cp5001_v4, cp5001_v6 [17:56:12] PROBLEM - IPsec on cp2022 is CRITICAL: Strongswan CRITICAL - ok: 78 not-conn: cp5001_v4, cp5001_v6 [17:56:21] PROBLEM - IPsec on cp1063 is CRITICAL: Strongswan CRITICAL - ok: 64 not-conn: cp5001_v4, cp5001_v6 [18:33:01] PROBLEM - Check health of redis instance on 6382 on rdb1004 is CRITICAL: CRITICAL ERROR - Redis Library - can not ping 127.0.0.1 on port 6382 [18:34:01] RECOVERY - Check health of redis instance on 6382 on rdb1004 is OK: OK: REDIS 2.8.17 on 127.0.0.1:6382 has 1 databases (db0) with 8136442 keys, up 10 days 17 hours [18:48:50] (03PS4) 10Urbanecm: Initial configuration for satwiki [mediawiki-config] - 10https://gerrit.wikimedia.org/r/442871 (https://phabricator.wikimedia.org/T198400) [20:04:52] PROBLEM - puppet last run on scb2003 is CRITICAL: CRITICAL: Puppet has 29 failures. Last run 4 minutes ago with 29 failures. 
Failed resources (up to 3 shown): Exec[ip addr add 2620:0:860:101:10:192:0:33/64 dev eth0],Exec[absent_ensure_members],Exec[ops_ensure_members],Exec[wikidev_ensure_members]
[20:35:31] RECOVERY - puppet last run on scb2003 is OK: OK: Puppet is currently enabled, last run 4 minutes ago with 0 failures
[21:03:30] 10Operations, 10Cloud-Services, 10Cloud-VPS, 10IPv6: Enable ipv6 on labs - https://phabricator.wikimedia.org/T37947 (10Salvidrim) FWIW, UTRS could benefit from being able to handle the growing number of IPv6 blocks for similar reasons to ACC.
[21:24:35] 10Operations, 10Cloud-Services, 10Cloud-VPS, 10IPv6: Enable ipv6 on labs - https://phabricator.wikimedia.org/T37947 (10Krenair) It should probably be noted on this task that work to move to neutron has resumed at {T167293}, after which it is hoped that IPv6 should be doable without too much trouble.
[23:19:13] Krinkle, https://meta.wikimedia.org/wiki/Wikimedia_servers#Hosting says "the WMF 2010–2015 strategic plan reach target includes “additional caching centers in key locations to manage increased traffic from Latin America, Asia and the Middle East, as well as to ensure reasonable and consistent load times no matter where a reader is located”."
[23:19:27] I assume that's eqsin?
[23:19:34] uh, why did the bots quit
[23:19:49] It's possible.
[23:20:00] just a few years late I guess
[23:20:12] wtf why did shinken-wm quit at the same time as logmsgbot and icinga-wm?
[23:20:26] shinken-wm lives in labs and should be completely independent of those two
[23:21:06] some networking thing?
[23:41:06] either same networking outage from our side, or same freenode server having issues.
[23:48:10] Krinkle, what happened with eqdfw and eqord?
[23:48:29] Meh, decided they're not relevant to the Hosting section of that page
[23:48:33] those used to be networking, like knams
[23:48:35] Networking is already covered elsewhere
[23:48:46] oh knams also went
[23:48:47] nothing happened to them physically (afaik)
[23:50:18] https://wikitech.wikimedia.org/wiki/Clusters
[23:53:56] eqdfw is definitely still active
[23:55:16] yeah we were talking about https://meta.wikimedia.org/w/index.php?title=Wikimedia_servers&diff=prev&oldid=18212035&diffmode=source