[00:31:28] PROBLEM - search indices - check lucene status page on search1003 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[00:35:17] RECOVERY - search indices - check lucene status page on search1003 is OK: HTTP OK: HTTP/1.1 200 OK - 269 bytes in 0.004 second response time
[00:57:34] RECOVERY - udp2log log age for lucene on oxygen is OK: OK: all log files active
[01:00:38] PROBLEM - udp2log log age for lucene on oxygen is CRITICAL: CRITICAL: log files /a/log/lucene/lucene.log, have not been written in a critical amount of time. For most logs, this is 4 hours. For slow logs, this is 4 days.
[01:22:29] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:23:19] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.124 second response time
[01:32:20] RECOVERY - NTP on ssl3003 is OK: NTP OK: Offset 0.002796411514 secs
[01:32:59] RECOVERY - NTP on ssl3002 is OK: NTP OK: Offset 0.003968477249 secs
[01:56:29] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:57:18] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.126 second response time
[02:03:52] !log LocalisationUpdate completed (1.22wmf4) at Mon May 27 02:03:50 UTC 2013
[02:06:26] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:07:15] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.128 second response time
[02:15:05] !log LocalisationUpdate ResourceLoader cache refresh completed at Mon May 27 02:15:05 UTC 2013
[02:15:15] PROBLEM - Puppet freshness on db1032 is CRITICAL: No successful Puppet run in the last 10 hours
[02:21:17] PROBLEM - Puppet freshness on virt1 is CRITICAL: No successful Puppet run in the last 10 hours
[02:21:17] PROBLEM - Puppet freshness on virt3 is CRITICAL: No successful Puppet run in the last 10 hours
[02:21:17] PROBLEM - Puppet freshness on virt4 is CRITICAL: No successful Puppet run in the last 10 hours
[02:22:25] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:23:16] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.123 second response time
[02:23:16] morebots is missing.
[02:24:15] PROBLEM - Puppet freshness on pdf1 is CRITICAL: No successful Puppet run in the last 10 hours
[02:24:17] PROBLEM - Puppet freshness on pdf2 is CRITICAL: No successful Puppet run in the last 10 hours
[02:27:25] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:28:18] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.125 second response time
[02:52:27] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:53:17] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.127 second response time
[03:09:15] PROBLEM - Puppet freshness on db45 is CRITICAL: No successful Puppet run in the last 10 hours
[03:17:05] PROBLEM - Host mw27 is DOWN: PING CRITICAL - Packet loss = 100%
[03:18:05] RECOVERY - Host mw27 is UP: PING OK - Packet loss = 0%, RTA = 26.52 ms
[03:22:26] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:23:08] Susan: morebots ran out on you again
[03:23:15] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.167 second response time
[03:23:38] I re-opened the bug.
[03:23:40] Poor morebots.
[03:30:27] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:32:15] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.203 second response time
[03:41:19] more poorbots
[03:52:29] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:53:19] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.126 second response time
[04:16:16] PROBLEM - Puppet freshness on ms-fe3001 is CRITICAL: No successful Puppet run in the last 10 hours
[04:17:16] PROBLEM - Puppet freshness on erzurumi is CRITICAL: No successful Puppet run in the last 10 hours
[04:20:16] PROBLEM - Puppet freshness on cp1029 is CRITICAL: No successful Puppet run in the last 10 hours
[04:49:14] PROBLEM - Puppet freshness on stat1002 is CRITICAL: No successful Puppet run in the last 10 hours
[04:54:24] PROBLEM - NTP on ssl3002 is CRITICAL: NTP CRITICAL: No response from NTP server
[05:00:44] PROBLEM - NTP on ssl3003 is CRITICAL: NTP CRITICAL: No response from NTP server
[05:12:32] PROBLEM - SSH on lvs6 is CRITICAL: Server answer:
[05:13:31] RECOVERY - SSH on lvs6 is OK: SSH OK - OpenSSH_5.9p1 Debian-5ubuntu1.1 (protocol 2.0)
[05:26:31] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:27:21] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.137 second response time
[06:35:31] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[06:36:31] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 2.477 second response time
[08:01:35] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:02:25] RECOVERY - NTP on ssl3002 is OK: NTP OK: Offset 0.0001685619354 secs
[08:02:26] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.129 second response time
[08:11:04] PROBLEM - Puppet freshness on stat1 is CRITICAL: No successful Puppet run in the last 10 hours
[08:16:03] PROBLEM - Puppet freshness on lvs1004 is CRITICAL: No successful Puppet run in the last 10 hours
[08:16:03] PROBLEM - Puppet freshness on lvs1005 is CRITICAL: No successful Puppet run in the last 10 hours
[08:16:03] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: No successful Puppet run in the last 10 hours
[08:18:03] PROBLEM - Puppet freshness on mc15 is CRITICAL: No successful Puppet run in the last 10 hours
[08:31:33] RECOVERY - NTP on ssl3003 is OK: NTP OK: Offset 0.002451181412 secs
[12:05:42] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[12:06:31] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.124 second response time
[12:15:19] PROBLEM - Puppet freshness on db1032 is CRITICAL: No successful Puppet run in the last 10 hours
[12:21:17] Hello ori-l. I prepared an improvement to throttle.php needed for the NIH editing session tomorrow. https://gerrit.wikimedia.org/r/#/c/65644/ Could you look at this?
[12:21:19] PROBLEM - Puppet freshness on virt1 is CRITICAL: No successful Puppet run in the last 10 hours
[12:21:19] PROBLEM - Puppet freshness on virt3 is CRITICAL: No successful Puppet run in the last 10 hours
[12:21:19] PROBLEM - Puppet freshness on virt4 is CRITICAL: No successful Puppet run in the last 10 hours
[12:24:19] PROBLEM - Puppet freshness on pdf1 is CRITICAL: No successful Puppet run in the last 10 hours
[12:24:19] PROBLEM - Puppet freshness on pdf2 is CRITICAL: No successful Puppet run in the last 10 hours
[12:32:33] New patchset: GWicke; "New Parsoid Varnish puppetization" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/63890
[12:34:44] 97
[12:50:33] New patchset: ArielGlenn; "Fix for compatibility with help2man and Debian Policy" [operations/dumps] (ariel) - https://gerrit.wikimedia.org/r/64343
[12:55:10] New review: ArielGlenn; "I moved the version string into the Makefile and moved the man pages to separate targets while I was..." [operations/dumps] (ariel) - https://gerrit.wikimedia.org/r/64343
[13:09:29] PROBLEM - Puppet freshness on db45 is CRITICAL: No successful Puppet run in the last 10 hours
[13:35:25] paravoid_, around?
[13:37:30] i'm monitoring zero log, and it seems that there are a few entries that get X-ORIG-CLIENT-IP, but still many entries that don't and they still have 3 values in XFF. Could this be because they are cached, or because not all varnish server were updated?
[14:02:29] RECOVERY - udp2log log age for lucene on oxygen is OK: OK: all log files active
[14:16:19] PROBLEM - Puppet freshness on ms-fe3001 is CRITICAL: No successful Puppet run in the last 10 hours
[14:17:19] PROBLEM - Puppet freshness on erzurumi is CRITICAL: No successful Puppet run in the last 10 hours
[14:20:19] PROBLEM - Puppet freshness on cp1029 is CRITICAL: No successful Puppet run in the last 10 hours
[14:25:10] New review: ArielGlenn; "Kent signed off on my tweaks so in it goes." [operations/dumps] (ariel); V: 2 C: 2; - https://gerrit.wikimedia.org/r/64343
[14:25:10] Change merged: ArielGlenn; [operations/dumps] (ariel) - https://gerrit.wikimedia.org/r/64343
[14:49:53] PROBLEM - Puppet freshness on stat1002 is CRITICAL: No successful Puppet run in the last 10 hours
[15:35:28] New review: Tim Landscheidt; "petan apparently installed something local with the same name, but in /usr/bin and only on tools-log..." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/64847
[16:40:35] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:41:25] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.127 second response time
[16:51:56] New review: ArielGlenn; "Let's hold off on this. We already have projects in beta using wgInstantCommons (set in CommonSettin..." [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/62606
[17:03:01] !log reedy synchronized php-1.22wmf5 'Initial file sync out'
[17:04:27] !log reedy synchronized w
[17:04:57] !log reedy synchronized docroot
[17:11:54] !log reedy synchronized php-1.22wmf5/extensions/Diff/
[17:12:30] !log reedy synchronized php-1.22wmf5/extensions/DataValues/
[17:13:38] !log reedy synchronized php-1.22wmf5/extensions/Wikibase/
[17:14:04] New patchset: Reedy; "1.22wmf5 stuffs" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/65683
[17:14:18] Change merged: Reedy; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/65683
[17:15:17] !log reedy Started syncing Wikimedia installation... : test2wiki to 1.22wmf5 and build localisation cache
[17:17:27] apergos: What's needed to get the wikidata related tables dumped as part of the usual dumps process?
[17:17:51] any private data in 'em?
[17:17:59] none at all
[17:18:05] they are just tracking tables
[17:18:13] guess I need to see how big they are compared to everything else
[17:18:29] this is on wikidatawiki or on the rest?
[17:18:35] just wikidata
[17:18:46] items per site is probably most interesting to folks
[17:18:48] well that wiki dump process is a mess right now
[17:18:57] the terms one might be a bit larger
[17:19:01] which table is items per site?
[17:19:10] wb_items_per_site
[17:19:11] which reminds me Reedy :-P
[17:19:40] https://gerrit.wikimedia.org/r/#/c/65236/
[17:21:56] uh oh
[17:22:13] really. "only" 30 million rows
[17:22:34] apergos: yes
[17:22:35] well it's small enough to mysqldump so I could do that,
[17:22:52] it's like all the langlink tables from wikipedia combined into one
[17:22:56] gotta think about how to add that table only to the wikidata dump without having a bunch of ugly special cases in there
[17:23:10] I can hardly wait for all the infobox data to make it over
[17:23:16] that'll kill us for sure
[17:23:24] :o
[17:24:09] good think this project hasn't been around 10 years already...
[17:24:12] *thing
[17:24:26] * aude nods
[17:24:46] well, the data itself is like regular wiki text data
[17:25:05] deleting 45k cronspam messages... wonder if evolution will keel over and die horribly in the middle
[17:25:15] and we will have more indexing in places, maybe like solr soonish
[17:28:38] it would be nice for folks to be able to find the q-thing they want easily
[17:28:50] by typing in the translation in their language
[17:29:09] without having to have a redirect or something, I mean, and without having to look through a bunch of other results, if "go" just went there
[17:30:24] !log reedy synchronized wmf-config/squid.php 'Repush'
[17:31:32] apergos: actually that's coming quite soon
[17:32:02] fancy redirects based on page title (e.g. en.wikidata.org/wiki/New_York_City -> www.wikidata.org/wiki/Q60)
[17:32:08] yep
[17:32:20] where you don't have to have an actual page in there with the redirect
[17:32:22] the site link table would help tool and bot folks with that
[17:32:25] that wil be a life saver
[17:32:26] if they want to do that
[17:35:47] wow this is really lame
[17:35:59] how do we dump flagged revs only for the wikis that have it:
[17:36:06] return self.dbName in self.config.flaggedRevsList
[17:36:08] bummer...
[17:36:14] hmmmm
[17:36:23] we can do better than that
[17:38:34] hahaha
[17:38:39] but why do better when we can be lazy
[17:38:46] ariel@fenari:/home/w/common$ ls *dblist
[17:38:51] :)
[17:38:54] ...wikidata.dblist
[17:39:15] that's hilarious. I was about to write a little routine to check for tables and stuff, all nice
[17:39:17] meh :-D
[17:41:05] apergos: must be new since yesterday :)
[17:41:23] 3a4b9fa8 (Reedy 2013-05-26 12:28:40 +0000 1) wikidatawiki
[17:41:48] I did it so we can list all wikidata repos as one config group ;)
[17:42:01] * aude nods
[17:42:44] please give me a one line summary of what's in the table. it will appear on the dump page as the description of the tble contents. e.g. for flagged revs we have
[17:43:08] flaggedpages
[17:43:17] This contains a row for each flagged article, containing th\
[17:43:17] e stable revision ID, if the lastest edit was flagged, and how long edits have been pending.
[17:43:18] apergos: for the items per site one?
[17:43:20] yes
[17:43:23] ok
[17:43:59] see what we need, human-readable description for people who just want to know what data they are getting
[17:44:15] tracking table of wikipedia sites links (by page title, site id) and wikidata item ids
[17:44:33] although technically we could have wikivoyage and other clients at some point
[17:44:40] the site id = global site id like enwiki
[17:44:49] so can easily accommodate non wikipedias
[17:45:17] so instead of putting wikipedia I can put 'wikimedia project' ?
[17:45:26] sure
[17:47:27] is the site id the db name or some other thing?
[17:47:43] for the particular wiki I mean
[17:47:44] it's almost always or always the same for wikimedia
[17:47:53] ok
[17:47:54] it's stored in the site table of mediawiki
[17:48:05] could be different
[17:48:16] it comes from sitematrix
[17:48:50] the 'site global key' from the site table eh?
[17:48:56] yes
[17:48:59] ok great
[17:49:47] the wikidata ids are numeric ids
[17:50:07] * apergos goes to find a sample row (it wasn't in the scrollback)
[17:50:10] the pages (don't ask me why) are in non db key format (e.g. with spaces instead of underscores)
[17:51:05] i think there is a primary key (id) column, useful for maintenance operations like schema changes
[17:51:28] heh as soon as people get the data they will start complaining
[17:51:31] ah well
[17:52:02] heh
[17:52:49] so this link
[17:52:55] scuse me for being such a b00b
[17:52:58] *noob!
[17:52:59] grr
[17:53:01] anyways,
[17:53:03] lol
[17:53:16] it's not a link on the site's page including the item
[17:53:20] ?
[17:53:43] * aude confused
[17:53:55] | 55 | 3596065 | abwiki | Џьгьарда |
[17:53:57] ok there's a row
[17:54:00] right
[17:54:03] http://ab.wikipedia.org/w/index.php?title=%D0%8F%D1%8C%D0%B3%D1%8C%D0%B0%D1%80%D0%B4%D0%B0&action=edit
[17:54:06] here's the page on that site
[17:54:14] so it's Q3596065
[17:54:45] it's a bit hard to see if there is some ref to it in that page
[17:54:50] cause templates etc
[17:54:51] http://ab.wikipedia.org/w/index.php?title=%D0%8F%D1%8C%D0%B3%D1%8C%D0%B0%D1%80%D0%B4%D0%B0
[17:54:55] it's in the sidebar
[17:55:13] "edit links" and how we know which interwiki links to put there
[17:55:43] and then which properties (available in the connected item) can be used on that page
[17:55:46] ok so what I'm asking is
[17:55:49] the link in the table
[17:55:58] is due to
[17:56:18] http://www.wikidata.org/wiki/Q3596065#sitelinks
[17:56:21] because it's listed there
[17:56:25] yep
[17:56:31] just took me a minute to get to the page
[17:56:35] ok
[17:57:07] it's obviously much better to have the tracking table to look this up in these cases
[17:57:57] oh no it's got the bleeping
[17:58:17] the other tables that would be interesting are wb_entity_per_page (should be smaller, 13 million or however items we have)
[17:58:24] wb_terms might be more challenging
[17:58:48] grrrr
[17:59:21] i don't see much use for wb_changes and wb_changes_dispatch (those are better for the labs / toolserver), although those would be smallish....especially dispatch table
[17:59:21] I want to ne able to see the raw wikitext for the item
[17:59:34] I mean, there is a page Qblah
[17:59:41] ok
[17:59:42] and it has wikitext, I've seen it in the dumps >_<
[17:59:47] it does
[17:59:56] it's the same place all the other wikitext is
[17:59:59] but I can't just click 'edit' or even 'view source'
[18:00:03] no
[18:00:23] http://www.wikidata.org/wiki/Special:EntityData/Q3596065 is closest to raw
[18:00:41] although it's not exactly the same as the internal blob
[18:00:50] no it sure isn't
[18:00:54] I'll rant about that later
[18:00:56] * aude can't see the internal blob without downloading the dump
[18:01:20] and neither can anyone else unless they have access to the cluster
[18:01:33] the json would be "encoded" and then stuffed into xml
[18:01:41] so when people add an item
[18:01:53] they add 'links' which basically amount to
[18:02:04] in this languag hre is the name of that thing
[18:02:21] apergos: yes
[18:02:38] okey dokey I can turn this into a one line summary, sorry to eat so much of your time
[18:02:46] not a problem
[18:02:47] link means a zilion different things so...
[18:02:54] denny and lots of people will be super happy
[18:02:59] * aude too
[18:07:16] For each WikiData item, this contains rows with the corresnponding page name on a given wiki project.
[18:07:25] yes
[18:07:29] sold.
[18:07:37] plain english is hard.
[18:07:47] * aude nods
[18:07:55] although it's "Wikidata"
[18:08:10] woops
[18:08:24] fixed
[18:08:30] denny might be slightly annoyed with "WikiData"
[18:08:36] I have to test this and then deployment will take a little while too
[18:08:38] it's like Wikisource and not WikiSource
[18:08:49] ok
[18:08:51] except it isn't
[18:09:13] because everything is all on www.wikidata.org
[18:09:19] it's more like *cough*
[18:09:22] wikispecies :-P
[18:09:28] heh
[18:09:30] * apergos ducks...
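For readers following the wb_items_per_site discussion above: the sample row (| 55 | 3596065 | abwiki | Џьгьарда |) is the same (site id, page title) mapping that shows up under "sitelinks" in the Special:EntityData view apergos and aude mention. A minimal Python sketch of reading that mapping from the public endpoint follows; the ".json" flavour of Special:EntityData and the "entities" -> "sitelinks" response layout are assumptions made for illustration here, not something the log confirms.

    import json
    import urllib.request

    # Fetch the "closest to raw" view of the item discussed above and print its
    # sitelinks -- the same (site id, page title) pairs that wb_items_per_site tracks.
    url = "https://www.wikidata.org/wiki/Special:EntityData/Q3596065.json"
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read().decode("utf-8"))

    # Assumed response shape: {"entities": {"Q3596065": {"sitelinks": {...}}}}
    for item_id, item in data.get("entities", {}).items():
        for site_id, link in item.get("sitelinks", {}).items():
            # e.g. Q3596065  abwiki  Џьгьарда  (titles with spaces, not underscores)
            print(item_id, site_id, link["title"])

As noted in the chat, this per-item view is handy for spot checks, but bulk consumers are better served by the dumped tracking table than by hitting the site one item at a time.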
[18:09:55] I wonder if my local test copy of wikidata has anything in that table
[18:09:56] * apergos looks
[18:11:09] heh
[18:11:15] mysql> select count(*) from wb_items_per_site;
[18:11:22] | 0 |
[18:11:32] what maintenance script to I run to populate that sucker?
[18:11:48] PROBLEM - Puppet freshness on stat1 is CRITICAL: No successful Puppet run in the last 10 hours
[18:11:55] there is one for entity per page speciifcally
[18:12:09] but not items per site?
[18:12:18] not sure we have one specific for site links but we have one for "rebuild all data" :)
[18:12:30] Don't we have some other scripts that we were going to run?
[18:12:31] it'd be like 2 lines of code
[18:12:40] Reedy: that's entity per page
[18:12:44] rebuld all data works for me
[18:12:51] ok
[18:13:13] we use it on our test machine so it works
[18:13:40] oh ffs
[18:14:14] found it
[18:14:25] ah in the extension dir, right
[18:14:29] * aude assumes it does site links too
[18:14:37] yes in Wikibase/lib/maintenance
[18:15:07] doo dee doo
[18:15:10] yep, running it now
[18:15:17] !log reedy synchronized php-1.22wmf5/cache/
[18:15:25] hahahaha
[18:15:27] foreach ( $pages as $pageRow ) {
[18:15:28] $page->doEditUpdates( $revision, $GLOBALS['wgUser'] );
[18:15:31] in a nutshell
[18:16:01] I'm not sure what the ascii art is though :-(
[18:16:08] hah
[18:16:45] !log reedy Started syncing Wikimedia installation... : Take 2
[18:16:48] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: No successful Puppet run in the last 10 hours
[18:16:48] PROBLEM - Puppet freshness on lvs1004 is CRITICAL: No successful Puppet run in the last 10 hours
[18:16:48] PROBLEM - Puppet freshness on lvs1005 is CRITICAL: No successful Puppet run in the last 10 hours
[18:16:52] * aude thinks it's a kitten
[18:17:09] or some animal
[18:17:11] it's definitely not a kitten
[18:17:40] mysql> select count(*) from wb_items_per_site;
[18:17:46] | 1484 |
[18:17:53] nice
[18:18:06] will do a little test tomorrow, now that it's set up
[18:18:12] k
[18:18:16] (after 9 pm, pretty done for the day)
[18:18:21] thanks for the assist!
[18:18:26] thank you!
[18:18:48] PROBLEM - Puppet freshness on mc15 is CRITICAL: No successful Puppet run in the last 10 hours
[18:18:52] apergos: Can you run chown l10nupdate:l10nupdate /a/common/php-1.22wmf5/cache/l10n on tin for me please?
[18:18:59] sec
[18:19:44] just the dir right?
[18:19:51] yeah
[18:19:51] (done)
[18:20:01] scap just carried on when it couldn't write the cdb files
[18:20:02] thanks
[18:20:04] ok, anything else you need? cause I"m about to disappear
[18:20:54] I don't think so
[18:21:27] k
[18:21:30] good luck!
[18:21:39] I might swing by again later just to peek in
[18:21:50] * aude grants reedy root access
[18:21:53] if i could
[18:24:49] !log reedy Started syncing Wikimedia installation... : Take 3
[18:31:28] PROBLEM - Apache HTTP on mw1070 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:32:18] RECOVERY - Apache HTTP on mw1070 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 747 bytes in 0.599 second response time
[18:39:00] !log reedy Finished syncing Wikimedia installation... : Take 3
[18:40:22] Yay
[18:41:50] !log reedy rebuilt wikiversions.cdb and synchronized wikiversions files: testwiki and mediawikiwiki to 1.22wmf5
[18:42:21] New patchset: Reedy; "testwiki, test2wiki and mediawikiwiki to 1.22wmf5" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/65685
[18:42:31] Change merged: Reedy; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/65685
[18:43:13] test2 works fine with wikidata stuff
[19:55:52] SAL not updating today?
[19:56:13] stuff hasn't been logged... where's morebots?
[19:57:04] eek 21.50 -!- morebots [~morebots@wikitech-static.wikimedia.org] has quit [Ping timeout: 256 seconds]
[19:57:08] 04.23 < Susan> morebots is missing.
[19:57:10] 05.23 < p858snake|l> Susan: morebots ran out on you again
[19:57:13] 05.23 < Susan> Poor morebots.
[20:18:43] !log restarted morebots
[20:18:52] Logged the message, Master
[20:29:47] !log reedy synchronized php-1.22wmf5/extensions/UniversalLanguageSelector 'Revert back to 1.22wmf4 version of ULS'
[20:29:56] Logged the message, Master
[20:41:22] New patchset: ArielGlenn; "include dump of (one of the) wikidata tables" [operations/dumps] (ariel) - https://gerrit.wikimedia.org/r/65689
[22:16:12] PROBLEM - Puppet freshness on db1032 is CRITICAL: No successful Puppet run in the last 10 hours
[22:22:12] PROBLEM - Puppet freshness on virt1 is CRITICAL: No successful Puppet run in the last 10 hours
[22:22:12] PROBLEM - Puppet freshness on virt4 is CRITICAL: No successful Puppet run in the last 10 hours
[22:22:12] PROBLEM - Puppet freshness on virt3 is CRITICAL: No successful Puppet run in the last 10 hours
[22:25:12] PROBLEM - Puppet freshness on pdf1 is CRITICAL: No successful Puppet run in the last 10 hours
[22:25:12] PROBLEM - Puppet freshness on pdf2 is CRITICAL: No successful Puppet run in the last 10 hours
[22:43:28] PROBLEM - NTP on ssl3003 is CRITICAL: NTP CRITICAL: No response from NTP server
[22:55:16] PROBLEM - NTP on ssl3002 is CRITICAL: NTP CRITICAL: No response from NTP server
[23:09:49] PROBLEM - Puppet freshness on db45 is CRITICAL: No successful Puppet run in the last 10 hours
[23:13:50] New patchset: Tim Landscheidt; "Tool Labs: Add more user requested packages to exec_environ." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/65705
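The patchset at 20:41 ("include dump of (one of the) wikidata tables", gerrit 65689) grows out of the earlier exchange about dumping wb_items_per_site only for wikis in wikidata.dblist, mirroring the existing flagged revs check (self.dbName in self.config.flaggedRevsList). A minimal Python sketch of that dblist-gated pattern follows; the class name, the dump_table shelling-out, and the /home/w/common/wikidata.dblist path are illustrative assumptions, not the actual code merged in the dumps repo.

    import subprocess

    def read_dblist(path):
        """Return the set of wiki db names listed in a *.dblist file (one per line)."""
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    class ItemsPerSiteJob:
        """Hypothetical per-wiki dump job gated on membership in a dblist."""

        def __init__(self, db_name, dblist_path="/home/w/common/wikidata.dblist"):
            self.db_name = db_name
            self.wikidata_wikis = read_dblist(dblist_path)

        def enabled(self):
            # Same shape as the flagged revs check quoted at 17:36:
            # only run this table job for wikis listed in the dblist.
            return self.db_name in self.wikidata_wikis

        def run(self, out_path="wb_items_per_site.sql.gz"):
            if not self.enabled():
                return
            # Illustrative only: mysqldump just this one table and gzip it.
            with open(out_path, "wb") as out:
                dump = subprocess.Popen(
                    ["mysqldump", self.db_name, "wb_items_per_site"],
                    stdout=subprocess.PIPE,
                )
                subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
                dump.wait()

The point of the dblist lookup is the one made in the chat: a per-group config file avoids hard-coding wiki names as special cases inside the dump code, and adding another Wikidata client later only means appending a line to the dblist.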