[00:00:12] New patchset: Lcarr; "Make test puppet repo act like production (pull from git)" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2096
[00:17:24] New patchset: Bhartshorne; "put in a default so things don't break on new servers" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2103
[00:17:44] New review: Bhartshorne; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2103
[00:17:45] Change merged: Bhartshorne; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2103
[00:19:31] New patchset: Asher; "snapshot db26" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2104
[00:19:54] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2104
[00:19:54] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2104
[00:21:02] is something wrong in Europe?
[00:21:20] shouldn't be pir^2 see anything ?
[00:21:35] in #wikipedia-en :
[00:21:42] Is it just me or is Wikipedia having some outages in Europe?
[00:21:51] It's downloading rather slowly and without styles?
[00:21:59] pir^2: Thanks
[00:22:24] is it still bad?
[00:24:05] ?
[00:24:35] about 2 hours ago we had some p-loss between the DC in the states and in europe but should be fixed now
[00:26:02] Qcoder00: you're still getting those issues ?
[00:26:12] Yeah
[00:26:14] :(
[00:26:24] can you give me your ip and a traceroute ?
[00:26:35] O_O
[00:27:03] Leslie Carr: Not easily
[00:27:13] it does seem to be clearing now so panic over
[00:28:45] !log asher synchronized wmf-config/db.php 'temporarily pulling db18'
[00:28:47] Logged the message, Master
[00:33:49] !log asher synchronized wmf-config/db.php 'returning db18, now replicating heartbeat db'
[00:33:51] Logged the message, Master
[00:49:49] New patchset: Lcarr; "fixing software repo for puppetmaster in prod" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2105
[00:50:34] New review: Lcarr; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2105
[00:50:35] Change merged: Lcarr; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2105
[00:53:44] PROBLEM - DPKG on ms-be1 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[00:55:35] PROBLEM - Disk space on ms-be1 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[00:58:18] New patchset: Lcarr; "trying moving the git clone to last" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2106
[00:58:41] New review: Lcarr; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2106
[00:58:41] Change merged: Lcarr; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2106
[01:00:14] PROBLEM - RAID on ms-be1 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
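An aside on the packet-loss triage above: when LeslieCarr asks for an IP and a traceroute, a report like the one below is what would help. This is only a sketch of how a reporter might gather it, assuming a Unix-like machine with mtr or traceroute on the PATH; text.wikimedia.org is a placeholder target, not something named in the log.

    import shutil
    import subprocess
    import sys

    def collect_path_report(target: str) -> str:
        """Run mtr (preferred) or traceroute against `target` and return the text
        so it can be pasted into a ticket or IRC."""
        if shutil.which("mtr"):
            # --report runs a fixed number of cycles and prints a summary table
            cmd = ["mtr", "--report", "--report-cycles", "10", target]
        elif shutil.which("traceroute"):
            cmd = ["traceroute", "-n", target]  # -n skips slow reverse DNS lookups
        else:
            sys.exit("neither mtr nor traceroute is installed")
        return subprocess.run(cmd, capture_output=True, text=True, check=False).stdout

    if __name__ == "__main__":
        # placeholder endpoint; use whichever hostname was actually slow for you
        print(collect_path_report("text.wikimedia.org"))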
[01:01:02] !log asher synchronized wmf-config/db.php 'adding db32 to s1 at low weight, new enwiki snapshot host' [01:01:03] Logged the message, Master [01:35:16] New patchset: Pyoungmeister; "copy/paste error" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2107 [01:35:49] New review: Pyoungmeister; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2107 [01:35:49] Change merged: Pyoungmeister; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2107 [01:57:48] New patchset: Asher; "snapshot db32" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2108 [01:58:14] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2108 [01:58:14] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2108 [02:03:52] PROBLEM - Frontend Squid HTTP on cp1002 is CRITICAL: Connection refused [02:04:41] replag on s1 is now over 90,000 seconds and over 24 hours. :( [02:06:22] !log LocalisationUpdate completed (1.18) at Thu Jan 26 02:06:22 UTC 2012 [02:06:24] Logged the message, Master [02:06:52] PROBLEM - Backend Squid HTTP on cp1002 is CRITICAL: Connection refused [02:09:32] PROBLEM - Memcached on ms-fe1 is CRITICAL: Connection refused [02:10:52] PROBLEM - Memcached on ms-fe2 is CRITICAL: Connection refused [02:23:38] PROBLEM - Misc_Db_Lag on storage3 is CRITICAL: CHECK MySQL REPLICATION - lag - CRITICAL - Seconds_Behind_Master : 1710s [02:32:58] PROBLEM - MySQL replication status on storage3 is CRITICAL: CHECK MySQL REPLICATION - lag - CRITICAL - Seconds_Behind_Master : 2316s [02:35:59] New patchset: Catrope; "Fix puppet restart for udp2log-aft" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2109 [02:37:00] !log Started the udp2log process for the AFT logger manually on emery [02:37:02] Logged the message, Mr. Obvious [02:43:08] RECOVERY - MySQL replication status on storage3 is OK: CHECK MySQL REPLICATION - lag - OK - Seconds_Behind_Master : 0s [02:44:18] RECOVERY - Misc_Db_Lag on storage3 is OK: CHECK MySQL REPLICATION - lag - OK - Seconds_Behind_Master : 0s [02:52:58] RECOVERY - Apache HTTP on srv197 is OK: HTTP OK - HTTP/1.1 301 Moved Permanently - 0.206 second response time [03:53:56] New patchset: Catrope; "Fix puppet restart for udp2log-aft" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2109 [03:54:13] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2109 [04:17:48] RECOVERY - MySQL disk space on es1004 is OK: DISK OK [04:24:18] RECOVERY - Disk space on es1004 is OK: DISK OK [04:40:18] PROBLEM - MySQL slave status on es1004 is CRITICAL: CRITICAL: Slave running: expected Yes, got No [06:04:17] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: Puppet has not run in the last 10 hours [06:20:06] PROBLEM - Puppet freshness on lvs1003 is CRITICAL: Puppet has not run in the last 10 hours [08:13:15] PROBLEM - Puppet freshness on knsq9 is CRITICAL: Puppet has not run in the last 10 hours [10:01:50] PROBLEM - Disk space on es1004 is CRITICAL: DISK CRITICAL - free space: /a 442696 MB (3% inode=99%): [10:06:47] PROBLEM - MySQL disk space on es1004 is CRITICAL: DISK CRITICAL - free space: /a 412796 MB (3% inode=99%): [10:28:30] New patchset: Hashar; "integration site now mobile aware" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2110 [10:28:48] New review: gerrit2; "Lint check passed." 
[operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2110
[11:13:14] RECOVERY - MySQL slave status on es1004 is OK: OK:
[12:43:56] Hai. I've confirmation all these are CC-BY: http://www.youtube.com/user/WorldEconomicForum/videos Converting most, and retaining decent quality, is resulting in multiple 100MB+ files (see https://commons.wikimedia.org/wiki/File:WEF-_Davos_2012_-_TIME_Davos_Debate_on_Capitalism_p1.ogv as **1 of 3**). Any way round the limit for files which, in some cases, are going to be 1GB+?
[12:47:03] PROBLEM - Puppet freshness on virt3 is CRITICAL: Puppet has not run in the last 10 hours
[12:47:03] PROBLEM - Puppet freshness on sodium is CRITICAL: Puppet has not run in the last 10 hours
[13:13:28] gmaxwell, you've been suggested as the person most likely able to help with *very* large media files
[13:15:26] I'll have one ~300MB .ogv ready to go in about 15 minutes, and about another 2GB (across 3 files) by late afternoon
[13:18:20] brianmc, do you have to include them somewhere?
[13:19:02] because if not, you might want to be lazy like me, upload to archive.org and let them do all the processing and such
[13:19:18] if you upload some dozens GB they don't care, e.g. http://www.archive.org/details/evodevo
[13:19:53] brianmc, you might want to take a look at https://commons.wikimedia.org/wiki/Help:Server-side_upload too
[13:20:08] I'll be editing from the originals into more manageable chunks for Wikinews use - OpenShot just doesn't like ogv so working with MP4 and DV
[13:20:30] Will check the server-side stuff guillom, thanks
[13:20:36] np
[13:21:09] * brianmc remembers his 1st hard drive,… 10MB
[13:24:36] Attention please, we have a very weird problem at the german WP.
[13:25:29] Due to unknown reasons, logged i users see profane words in the Article [[:de:Hebräisches_Alphabet]].
[13:25:43] http://de.wikipedia.org/wiki/Hebr%C3%A4isches_Alphabet
[13:25:51] It is in the "Buchstaben"-Section
[13:26:22] The strange thing is: These words are not in the editable text.
[13:26:34] They come out of nowhere, other users see them too.
[13:27:35] probably template vandalism?
[13:35:28] guillom, yes https://de.wikipedia.org/w/index.php?title=Vorlage:He&action=history
[14:30:14] I'm doing a batch of file renaming on enwiki. One particular file seems to be resistant to renaming - it keeps returning "the target filename is invalid"
[14:30:34] I've tried various alternate versions of the new name
[14:30:45] and I've asked two other admins to try it also, and they got the same error message
[14:31:16] http://en.wikipedia.org/wiki/File:NMS.PNG (which I'm trying to rename to "file:Northeast Middle School (Midland, Michigan).png" or some variant thereon
[14:31:39] (I've also tried with and without 'file' prepended)
[15:02:30] database locked
[15:08:19] Romaine: would be helpful to say which one
[15:09:14] nl-wiki, was for regular database maintenance
[15:09:26] is over already
[15:41:29] Dragonfly6-7: Im trying to take a look at your question, can you throw +filemover on my account?
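On the rename that keeps failing with "the target filename is invalid": one way to narrow it down is to ask the API whether the proposed title even parses before worrying about permissions. This is only a sketch, assuming Python with the requests library against the public enwiki API; the title is the one quoted above, and the exact error fields can vary between MediaWiki versions.

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def check_title(title: str) -> None:
        """Ask the MediaWiki API whether `title` is a well-formed title."""
        params = {
            "action": "query",
            "titles": title,
            "format": "json",
            "formatversion": 2,
        }
        page = requests.get(API, params=params, timeout=10).json()["query"]["pages"][0]
        if page.get("invalid"):
            # newer MediaWiki also returns an "invalidreason" explaining the rejection
            print("invalid title:", page.get("invalidreason", "(no reason given)"))
        else:
            print("title parses fine as far as the API is concerned")

    check_title("File:Northeast Middle School (Midland, Michigan).png")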
[15:55:31] PROBLEM - check_minfraud3 on payments1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [16:00:31] RECOVERY - check_minfraud3 on payments1 is OK: HTTP OK: HTTP/1.1 200 OK - 8644 bytes in 0.223 second response time [16:07:32] zzz =_= [16:14:21] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: Puppet has not run in the last 10 hours [16:30:21] PROBLEM - Puppet freshness on lvs1003 is CRITICAL: Puppet has not run in the last 10 hours [16:40:31] PROBLEM - check_minfraud3 on payments2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [16:45:31] RECOVERY - check_minfraud3 on payments2 is OK: HTTP OK: HTTP/1.1 200 OK - 8644 bytes in 0.225 second response time [17:49:41] PROBLEM - Host srv199 is DOWN: PING CRITICAL - Packet loss = 100% [17:56:44] RECOVERY - Host srv199 is UP: PING OK - Packet loss = 0%, RTA = 0.41 ms [18:03:34] PROBLEM - Apache HTTP on srv199 is CRITICAL: Connection refused [18:13:54] RECOVERY - Apache HTTP on srv199 is OK: HTTP OK - HTTP/1.1 301 Moved Permanently - 0.030 second response time [18:22:54] PROBLEM - Puppet freshness on knsq9 is CRITICAL: Puppet has not run in the last 10 hours [18:54:20] New review: Lcarr; "good, looks like it will now actually look for the right pidfile :)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2109 [18:57:57] New patchset: Mark Bergsma; "Added eqiad service IPs for lvs realservers" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2112 [18:58:13] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2112 [18:58:20] New review: Mark Bergsma; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2112 [18:58:21] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2112 [19:06:46] New patchset: Mark Bergsma; "Copied text-squid role class into role/cache.pp, renamed to role::cache::squid::text" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2113 [19:07:23] New review: Mark Bergsma; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2113 [19:07:24] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2113 [19:11:12] New patchset: Mark Bergsma; "Fix variable name" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2114 [19:11:29] New review: Mark Bergsma; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2114 [19:11:29] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2114 [19:14:23] New patchset: Mark Bergsma; "Work around Puppet bug" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2115 [19:14:39] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2115 [19:15:01] New review: Mark Bergsma; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2115 [19:15:01] Change merged: Mark Bergsma; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2115 [19:21:00] New patchset: Asher; "path to support legacy mysql installs" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2116 [19:21:16] New review: gerrit2; "Lint check passed." 
[operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2116 [19:21:19] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2116 [19:21:19] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2116 [19:23:07] New patchset: RobH; "upated added new simple shell script for ipmi mgmt" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2084 [19:23:24] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2084 [19:27:03] New patchset: Asher; "provide socket path for older installs" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2117 [19:27:20] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2117 [19:27:23] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2117 [19:27:23] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2117 [19:32:55] New patchset: Asher; "upgrading mysql on db37" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2118 [19:33:12] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2118 [19:33:18] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2118 [19:33:18] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2118 [19:34:45] !log asher synchronized wmf-config/db.php 'pulling db37 from s7 for upgrades' [19:34:47] Logged the message, Master [19:35:56] busy binasher is busy [19:36:05] Anything new or exciting to expect from the upgrades? [19:43:17] Reedy: its mostly for the sake of standardization and a bunch of bug fixes but the new build has a better group commit implementation and the ability to control concurrency on a per user basis, which could be useful for preventing pileups crippling db's (https://www.facebook.com/notes/mysql-at-facebook/early-results-from-admission_control/415232480932). i'm also fixing configs and implementing pt-heartbeat for better replication monitoring and pseu [19:43:18] global transaction id's (http://www.mysqlperformanceblog.com/2011/11/04/emulating-global-transaction-id-with-pt-heartbeat/) [19:51:04] New patchset: RobH; "added in misc::mgmt to include ipmitool and ipmi script" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2119 [19:51:20] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2119 [19:57:40] binasher: pseu what? [19:58:46] New review: Mark Bergsma; "(no comment)" [operations/puppet] (production); V: 0 C: 0; - https://gerrit.wikimedia.org/r/2119 [19:58:52] New review: Mark Bergsma; "(no comment)" [operations/puppet] (production); V: 0 C: 0; - https://gerrit.wikimedia.org/r/2084 [20:07:36] New patchset: RobH; "upated added new simple shell script for ipmi mgmt updated Change-Id: I33e6afa9b9d34e8bead610f7a2d4cb713065b88b" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2084 [20:07:53] New review: gerrit2; "Lint check passed." 
[operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2084
[20:09:22] New patchset: RobH; "added in misc::mgmt to include ipmitool and ipmi script" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2119
[20:09:54] Change abandoned: RobH; "combined to another patch by mistake, abandoning this one" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2119
[20:10:37] !log Created "spoofuser" AntiSpoof table in the central auth database
[20:10:39] Logged the message, Master
[20:12:35] mark: I forget what you said about the "» system_role { "misc::ipmimgmthost": description => "IPMI Management Host" }" line
[20:12:43] sorry ;/
[20:12:52] ack, wrong channel, eant to be in ops
[20:17:43] !log asher synchronized wmf-config/db.php 'db37 back in s7'
[20:17:44] Logged the message, Master
[20:17:46] New review: Demon; "(no comment)" [test/mediawiki/core] (master); V: 1 C: 2; - https://gerrit.wikimedia.org/r/1841
[20:17:47] Change merged: Demon; [test/mediawiki/core] (master) - https://gerrit.wikimedia.org/r/1841
[20:21:06] New patchset: Jgreen; "adding mysql::packages to storage3's config" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2120
[20:21:24] New review: gerrit2; "Change did not pass lint check. You will need to send an amended patchset for this (see: https://lab..." [operations/puppet] (production); V: -1 - https://gerrit.wikimedia.org/r/2120
[20:23:30] New patchset: Jgreen; "adding mysql::packages to storage3's config, plus a comma so puppet stops chundering" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2120
[20:23:47] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2120
[20:23:49] New review: Jgreen; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2120
[20:23:50] Change merged: Jgreen; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2120
[20:25:03] Could someone tell me why I always get duplicate versions on importing a page? http://commons.wikimedia.org/w/index.php?title=Institution_talk:Datei:Weltreligionen.png&action=history
[20:25:30] import is from dewikipedia. I definitely hit the import only once.
[20:26:30] import finished with this error - but all versions are imported. " Request: POST http://commons.wikimedia.org/w/index.php?title=Special:Import&action=submit, from 208.80.152.17 via sq61.wikimedia.org (squid/2.7.STABLE9) to 208.80.152.74 (208.80.152.74)
[20:26:30] (21:22:58) Saibo: Error: ERR_READ_TIMEOUT, errno [No Error] at Thu, 26 Jan 2012 20:21:54 GMT "
[20:28:44] since when is this chanel logged? bah
[20:28:47] *don't like*
[20:30:36] Saibo, why did you import it in the first place?
[20:30:52] wrong answer ;)
[20:31:06] Nemo_bis: because I want to import the file to commons
[20:31:10] that is the desc page
[20:31:42] and what's the need to import the history
[20:31:57] because the file page is like an article
[20:32:02] look at it..
[20:32:21] and question back to you: why not?
[20:32:50] yes, and article don't need to be imported
[20:33:10] huh? why?
[20:33:22] that's only de.wiki idiosyncrasy
[20:33:28] however, just answer my question
[20:33:35] have you ever read the terms of use?
[20:33:39] I'm looking into it
[20:33:48] no, why should I?
[20:34:31] Saibo, because everyone is supposed to
[20:34:38] especially sysops
[20:35:00] anyway, are you sure you didn't click twice?
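One way to answer that "did you click twice?" question with data is to list the revision timestamps of the imported page and look for repeats - essentially the prop=revisions query that comes up later in this log. A minimal sketch, assuming Python with the requests library against the public Commons API:

    import collections
    import requests

    API = "https://commons.wikimedia.org/w/api.php"
    TITLE = "Institution talk:Datei:Weltreligionen.png"

    params = {
        "action": "query",
        "prop": "revisions",
        "titles": TITLE,
        "rvprop": "timestamp|ids",
        "rvlimit": 500,
        "format": "json",
        "formatversion": 2,
    }
    page = requests.get(API, params=params, timeout=10).json()["query"]["pages"][0]
    counts = collections.Counter(rev["timestamp"] for rev in page.get("revisions", []))
    dupes = {ts: n for ts, n in counts.items() if n > 1}
    print(sum(counts.values()), "revisions;", len(dupes), "timestamps appear more than once")
    for ts, n in sorted(dupes.items()):
        print(" ", ts, "x", n)

Identical timestamps alone do not prove the revisions are byte-identical copies, but they would match what the participants see in the page history as the conversation continues below.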
[20:35:15] 99,5% sure - that is the fifth import like this [20:35:16] ;D [20:35:29] it's not strange for import to fail, but I never heard of it autorepeating [20:35:49] maybe it is because the source page is a file page - I don't know [20:36:22] nah [20:36:34] re. terms of use: of topic here [20:37:27] your question is offtopic too :-p [20:37:33] wtf? [20:37:36] :D [20:37:53] the stupid server kitties are not doing what I want!1111 [20:38:03] that is the channel for that kind of problems [20:38:05] i'm less offtopic than you because you kill server kittens for stupid german idiosyncrasies [20:38:29] anyway, did the source page always have the same number of revisions? [20:38:38] I see it imported a different number each time [20:39:47] everytime a different numer, yes [20:41:45] and in different order even [20:42:09] no pattern at all [20:42:23] perhaps you should try and use importupload, just to see [20:42:35] I think I do not have the rights for othis [20:42:50] no one in commons has - if I understand correctly [20:43:10] sure, ask them to a steward [20:43:20] https://commons.wikimedia.org/wiki/Special:ListGroupRights → only "import"ers have [20:43:27] nobody needs them because imports are mostly useless :-p [20:43:46] well, anyway - this stupid function should simply work what it was made for! :D [20:44:21] naah [20:44:24] you do not buy a ferrari to go shopping - a fiat should bring you to the shopping center, too [20:44:28] :D [20:44:29] transwiki import has never worked [20:44:41] wrong example [20:44:47] lol, why? [20:44:55] have a better? ;) [20:45:23] transwiki import is a monocycle: it's hard to get up the mountain, but by design [20:46:35] or at least that's what the vendor says to clients who don't live in plain cities like mine [20:46:36] not mentioned in the advert that it is just a monocycle! [20:46:46] tsk, how do you know [20:46:52] you didn't read the terms of use [20:47:10] I just read https://commons.wikimedia.org/wiki/Special:Import - it doesn't tell me that it is BS [20:47:32] what about https://meta.wikimedia.org/wiki/Help:Import [20:47:34] well, yes, the help page in dewp says that many many versions do not work [20:47:43] enwp's help page is nearly empty [20:47:49] and meta... well.. meta [20:47:55] who cares for meta? [20:47:56] :D [20:48:00] thx [20:48:23] pff [20:48:38] where else can you look for info about weird features like import [20:48:45] or parser weirdness [20:49:11] or why the string parse functions are off... [20:49:13] grml [20:49:14] hmm, that page used to say "select source and click import repeatedly until it succeeds at last" IIRC [20:50:11] and don't forget the nice page I created recently https://meta.wikimedia.org/wiki/Importer [20:52:00] Nemo_bis: that page is not really useful to me - but, yes, very nice page! :P [20:53:06] it would if you wanted to use importupload [20:53:07] why is the "MediaWiki Handbook" on meta, btw? [20:53:16] shouldn't it be on mediawikiwiki or something? [20:55:41] Nemo_bis: oh, Nemo_bis, in fact I read the meta page yesterday... 
no useful info [20:55:41] because Meta existed way before mww [20:55:53] indeed [20:56:04] I hoped it still contained that nice useful suggestion [20:56:06] not even mentioned that it is more than buggy and surely fails for many versions [20:58:14] apparently someone liked to think it was usable [20:58:25] dunno, I see some recent changes like https://www.mediawiki.org/wiki/Special:Code/MediaWiki/108232 [20:58:31] ialex, ^ [21:00:11] anyway, Saibo, you should probably ask de.wiki importers whether they noticed any difference lately; nobody abuses import like them and could know better [21:00:29] Nemo_bis: what is there with that rev? [21:00:56] ialex, could that or something else cause an import to happen multiple times creating duplicates? [21:01:01] * Saibo sends some German tanks over Nemo_bis [21:01:12] Nemo_bis: that rev is not live yet [21:02:01] ialex, or perhaps some previous one; see https://commons.wikimedia.org/w/index.php?title=Special:Log&page=Institution+talk%3ADatei%3AWeltreligionen.png [21:02:54] Nemo_bis: no idea [21:04:45] New patchset: Bhartshorne; "moving tampa swift cluster from test to prod configuration" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2121 [21:05:01] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2121 [21:06:03] New review: Bhartshorne; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2121 [21:06:04] Change merged: Bhartshorne; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2121 [21:07:41] Saibo, this piece is useful though https://meta.wikimedia.org/w/index.php?title=Help:Import&diff=616192&oldid=616183 [21:09:21] you should check the timestamp via API [21:09:22] hm.. not imported.. well [21:09:29] but they have the same stamps [21:10:18] how do you know [21:10:28] hm? you see in the history [21:10:57] "same time up to the second" - up to the second is shown in the history [21:12:29] I will try to catch DerHexer if he is around ... the Master of Imports ;) [21:13:17] no, seconds are shown here https://commons.wikimedia.org/w/api.php?action=query&prop=revisions&titles=Institution_talk:Datei:Weltreligionen.png&rvprop=timestamp&rvlimit=500 [21:14:19] that is the same like in history [21:14:50] (if you enabled a date/time format with seconds, of course) [21:15:22] however, versions have the same seconds - which means they shouldn't have been imported.. 
hmm [21:20:25] anyway, Saibo, https://bugzilla.wikimedia.org/show_bug.cgi?id=33975 [21:20:44] not that someone should care to fix it IMHO [21:22:19] Nemo_bis: thanks, will watch [21:22:41] * Nemo_bis unhappy, can't blame de.wiki [21:32:57] New patchset: Lcarr; "moving all of the misc:: and generic:: webserver classes to own class" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2122 [21:33:25] <^demon> !log gallium: cleared a bunch of junk from /tmp [21:33:27] Logged the message, Master [21:48:06] New review: RobH; "Self review is the best kind of review" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2084 [21:48:07] Change merged: RobH; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2084 [21:51:57] New patchset: RobH; "tagged sockpuppet into misc::ipmimgmthost role" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2123 [21:52:42] New review: RobH; "seems fine, just adding a misc role to sockpuppet" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2123 [21:56:22] Change abandoned: RobH; "(no reason)" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2123 [22:09:40] New patchset: RobH; "added in ipmi mgmt host misc to sockpuppet" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2124 [22:09:57] New patchset: Asher; "new mysql monitoring, test on two dbs" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2125 [22:10:17] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2124 [22:10:17] New review: RobH; "added ipmi mgmt host entry for sockpuppet" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2124 [22:10:18] Change merged: RobH; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2124 [22:12:39] New review: Lcarr; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2122 [22:12:40] Change merged: Lcarr; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2122 [22:14:43] New patchset: Asher; "new mysql monitoring, test on two dbs" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2125 [22:15:01] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2125 [22:15:02] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2125 [22:15:03] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2125 [22:17:49] New patchset: RobH; "removing my change to site.pp" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2126 [22:18:07] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2126 [22:18:08] New review: RobH; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2126 [22:18:15] New review: RobH; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2126 [22:18:16] Change merged: RobH; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2126 [22:20:21] New patchset: Asher; "fix typo" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2127 [22:20:37] New review: gerrit2; "Lint check passed." 
[operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2127 [22:20:42] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2127 [22:20:42] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2127 [22:23:03] New patchset: RobH; "renaming to more easily read misc::management::ipmi" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2128 [22:25:14] New patchset: Asher; "update class name" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2129 [22:26:05] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2129 [22:26:05] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2129 [22:27:04] New patchset: RobH; "renaming to more easily read misc::management::ipmi" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2128 [22:27:21] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2128 [22:28:35] New patchset: Asher; "typo" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2130 [22:28:53] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2130 [22:28:53] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2130 [22:28:58] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2130 [22:28:59] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2130 [22:33:02] New patchset: RobH; "renaming to more easily read misc::management::ipmi" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2128 [22:33:19] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2128 [22:36:11] New patchset: RobH; "renaming to more easily read misc::management::ipmi added in sockpuppet to role of ipmi mgmt Change-Id: I828cf708396493e413580839bc6fc1fde5314d4f" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2128 [22:36:27] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2128 [22:41:52] New review: Catrope; "(no comment)" [operations/puppet] (production) C: 1; - https://gerrit.wikimedia.org/r/2128 [22:41:59] PROBLEM - RAID on srv193 is CRITICAL: Connection refused by host [22:42:10] New review: RobH; "self review is like self help, doomed to fail" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2128 [22:42:11] Change merged: RobH; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2128 [22:42:29] PROBLEM - mobile traffic loggers on cp1043 is CRITICAL: Connection refused by host [22:42:29] PROBLEM - Disk space on cp1043 is CRITICAL: Connection refused by host [22:42:49] PROBLEM - Disk space on es2 is CRITICAL: Connection refused by host [22:42:49] PROBLEM - RAID on es2 is CRITICAL: Connection refused by host [22:43:29] PROBLEM - Disk space on snapshot3 is CRITICAL: Connection refused by host [22:43:39] PROBLEM - DPKG on srv193 is CRITICAL: Connection refused by host [22:44:09] PROBLEM - RAID on cp1043 is CRITICAL: Connection refused by host [22:44:31] New patchset: Bhartshorne; "adding LVS addresses to ms-fe boxen. Removing ms-be as ganglia aggregators." 
[operations/puppet] (production) - https://gerrit.wikimedia.org/r/2131 [22:44:39] PROBLEM - MySQL disk space on es2 is CRITICAL: Connection refused by host [22:44:48] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2131 [22:45:06] New review: Bhartshorne; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2131 [22:45:07] Change merged: Bhartshorne; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2131 [22:45:49] PROBLEM - Disk space on srv193 is CRITICAL: Connection refused by host [22:46:10] PROBLEM - DPKG on cp1041 is CRITICAL: Connection refused by host [22:46:19] PROBLEM - RAID on bast1001 is CRITICAL: Connection refused by host [22:46:19] PROBLEM - MySQL disk space on db1018 is CRITICAL: Connection refused by host [22:46:39] PROBLEM - DPKG on ms5 is CRITICAL: Connection refused by host [22:46:49] PROBLEM - RAID on ganglia1001 is CRITICAL: Connection refused by host [22:47:19] PROBLEM - RAID on virt2 is CRITICAL: Connection refused by host [22:47:39] PROBLEM - MySQL disk space on db22 is CRITICAL: Connection refused by host [22:47:59] PROBLEM - RAID on ms5 is CRITICAL: Connection refused by host [22:48:09] PROBLEM - jenkins_service_running on aluminium is CRITICAL: Connection refused by host [22:48:39] PROBLEM - DPKG on db1007 is CRITICAL: Connection refused by host [22:48:39] PROBLEM - DPKG on cp1043 is CRITICAL: Connection refused by host [22:48:49] PROBLEM - Disk space on cp1041 is CRITICAL: Connection refused by host [22:48:49] PROBLEM - RAID on db22 is CRITICAL: Connection refused by host [22:49:09] PROBLEM - RAID on snapshot3 is CRITICAL: Connection refused by host [22:49:49] PROBLEM - Disk space on ms5 is CRITICAL: Connection refused by host [22:50:39] PROBLEM - Disk space on es3 is CRITICAL: Connection refused by host [22:50:39] PROBLEM - DPKG on virt2 is CRITICAL: Connection refused by host [22:50:39] PROBLEM - DPKG on db1008 is CRITICAL: Connection refused by host [22:50:49] PROBLEM - RAID on db1018 is CRITICAL: Connection refused by host [22:50:49] PROBLEM - RAID on db1008 is CRITICAL: Connection refused by host [22:50:59] PROBLEM - DPKG on ganglia1001 is CRITICAL: Connection refused by host [22:50:59] PROBLEM - MySQL disk space on db1020 is CRITICAL: Connection refused by host [22:50:59] PROBLEM - RAID on es1003 is CRITICAL: Connection refused by host [22:50:59] PROBLEM - Disk space on es1003 is CRITICAL: Connection refused by host [22:50:59] PROBLEM - DPKG on es2 is CRITICAL: Connection refused by host [22:51:00] PROBLEM - MySQL disk space on es3 is CRITICAL: Connection refused by host [22:51:49] PROBLEM - DPKG on snapshot3 is CRITICAL: Connection refused by host [22:51:49] PROBLEM - DPKG on snapshot1 is CRITICAL: Connection refused by host [22:51:59] RECOVERY - RAID on srv193 is OK: OK: no RAID installed [22:52:19] PROBLEM - DPKG on bast1001 is CRITICAL: Connection refused by host [22:52:29] PROBLEM - mobile traffic loggers on cp1041 is CRITICAL: Connection refused by host [22:52:29] PROBLEM - RAID on cp1041 is CRITICAL: Connection refused by host [22:52:29] RECOVERY - mobile traffic loggers on cp1043 is OK: PROCS OK: 2 processes with command name varnishncsa [22:52:39] PROBLEM - Disk space on virt2 is CRITICAL: Connection refused by host [22:52:39] PROBLEM - Disk space on virt4 is CRITICAL: Connection refused by host [22:52:39] PROBLEM - Disk space on db1008 is CRITICAL: Connection refused by host [22:52:39] PROBLEM - DPKG on db22 is CRITICAL: 
Connection refused by host [22:52:39] RECOVERY - Disk space on cp1043 is OK: DISK OK [22:52:49] PROBLEM - Disk space on db22 is CRITICAL: Connection refused by host [22:52:49] PROBLEM - MySQL disk space on db1007 is CRITICAL: Connection refused by host [22:52:49] PROBLEM - Disk space on srv223 is CRITICAL: Connection refused by host [22:52:49] RECOVERY - Disk space on es2 is OK: DISK OK [22:52:49] RECOVERY - RAID on es2 is OK: OK: State is Optimal, checked 2 logical device(s) [22:52:59] PROBLEM - Disk space on db26 is CRITICAL: Connection refused by host [22:52:59] PROBLEM - Disk space on ganglia1001 is CRITICAL: Connection refused by host [22:52:59] PROBLEM - Disk space on es4 is CRITICAL: Connection refused by host [22:53:09] PROBLEM - RAID on es4 is CRITICAL: Connection refused by host [22:53:09] PROBLEM - Disk space on db1020 is CRITICAL: Connection refused by host [22:53:29] PROBLEM - Disk space on db1007 is CRITICAL: Connection refused by host [22:53:49] RECOVERY - DPKG on srv193 is OK: All packages OK [22:53:49] RECOVERY - Disk space on snapshot3 is OK: DISK OK [22:53:49] PROBLEM - Disk space on snapshot1 is CRITICAL: Connection refused by host [22:53:49] PROBLEM - Disk space on bast1001 is CRITICAL: Connection refused by host [22:53:59] PROBLEM - DPKG on srv238 is CRITICAL: Connection refused by host [22:53:59] RECOVERY - Disk space on ms-fe2 is OK: DISK OK [22:53:59] PROBLEM - DPKG on srv190 is CRITICAL: Connection refused by host [22:54:09] PROBLEM - RAID on srv223 is CRITICAL: Connection refused by host [22:54:19] PROBLEM - Disk space on cp1042 is CRITICAL: Connection refused by host [22:54:19] PROBLEM - MySQL disk space on db1008 is CRITICAL: Connection refused by host [22:54:19] RECOVERY - RAID on cp1043 is OK: OK: Active: 4, Working: 4, Failed: 0, Spare: 0 [22:54:29] PROBLEM - DPKG on aluminium is CRITICAL: Connection refused by host [22:54:29] PROBLEM - RAID on aluminium is CRITICAL: Connection refused by host [22:54:29] PROBLEM - Disk space on srv276 is CRITICAL: Connection refused by host [22:54:39] PROBLEM - RAID on db1020 is CRITICAL: Connection refused by host [22:54:39] PROBLEM - Disk space on db1018 is CRITICAL: Connection refused by host [22:54:39] PROBLEM - RAID on srv276 is CRITICAL: Connection refused by host [22:54:49] RECOVERY - MySQL disk space on es2 is OK: DISK OK [22:54:59] PROBLEM - DPKG on srv276 is CRITICAL: Connection refused by host [22:55:09] RECOVERY - Memcached on ms-fe2 is OK: TCP OK - 0.001 second response time on port 11211 [22:55:09] PROBLEM - MySQL disk space on db26 is CRITICAL: Connection refused by host [22:55:49] PROBLEM - DPKG on db1018 is CRITICAL: Connection refused by host [22:55:59] RECOVERY - Disk space on srv193 is OK: DISK OK [22:56:09] PROBLEM - Disk space on srv238 is CRITICAL: Connection refused by host [22:56:19] PROBLEM - mobile traffic loggers on cp1042 is CRITICAL: Connection refused by host [22:56:19] PROBLEM - Disk space on srv190 is CRITICAL: Connection refused by host [22:56:19] RECOVERY - DPKG on cp1041 is OK: All packages OK [22:56:19] PROBLEM - Puppet freshness on sodium is CRITICAL: Puppet has not run in the last 10 hours [22:56:19] PROBLEM - Puppet freshness on virt3 is CRITICAL: Puppet has not run in the last 10 hours [22:56:29] PROBLEM - Disk space on aluminium is CRITICAL: Connection refused by host [22:56:29] RECOVERY - RAID on bast1001 is OK: OK: no RAID installed [22:56:39] PROBLEM - RAID on db1007 is CRITICAL: Connection refused by host [22:56:39] PROBLEM - RAID on db1002 is CRITICAL: Connection refused by host 
[22:56:39] RECOVERY - MySQL disk space on db1018 is OK: DISK OK [22:56:39] PROBLEM - DPKG on db1020 is CRITICAL: Connection refused by host [22:56:49] PROBLEM - DPKG on db25 is CRITICAL: Connection refused by host [22:56:49] PROBLEM - RAID on db26 is CRITICAL: Connection refused by host [22:56:49] RECOVERY - DPKG on ms5 is OK: All packages OK [22:56:59] RECOVERY - RAID on ganglia1001 is OK: OK: Active: 2, Working: 2, Failed: 0, Spare: 0 [22:57:19] PROBLEM - RAID on db1034 is CRITICAL: Connection refused by host [22:57:39] PROBLEM - RAID on virt4 is CRITICAL: Connection refused by host [22:57:39] RECOVERY - RAID on virt2 is OK: OK: State is Optimal, checked 2 logical device(s) [22:57:59] PROBLEM - DPKG on srv223 is CRITICAL: Connection refused by host [22:57:59] RECOVERY - MySQL disk space on db22 is OK: DISK OK [22:57:59] PROBLEM - Disk space on db13 is CRITICAL: Connection refused by host [22:58:09] PROBLEM - DPKG on srv239 is CRITICAL: Connection refused by host [22:58:09] RECOVERY - RAID on ms5 is OK: OK: Active: 50, Working: 50, Failed: 0, Spare: 0 [22:58:19] RECOVERY - jenkins_service_running on aluminium is OK: PROCS OK: 3 processes with args jenkins [22:58:19] PROBLEM - Disk space on srv272 is CRITICAL: Connection refused by host [22:58:39] PROBLEM - Disk space on db53 is CRITICAL: Connection refused by host [22:58:39] PROBLEM - MySQL disk space on es4 is CRITICAL: Connection refused by host [22:58:49] PROBLEM - DPKG on db1002 is CRITICAL: Connection refused by host [22:58:49] RECOVERY - DPKG on cp1043 is OK: All packages OK [22:58:49] RECOVERY - DPKG on db1007 is OK: All packages OK [22:58:59] PROBLEM - DPKG on db26 is CRITICAL: Connection refused by host [22:58:59] RECOVERY - Disk space on cp1041 is OK: DISK OK [22:58:59] PROBLEM - RAID on snapshot1 is CRITICAL: Connection refused by host [22:58:59] PROBLEM - Disk space on db25 is CRITICAL: Connection refused by host [22:58:59] RECOVERY - RAID on db22 is OK: OK: 1 logical device(s) checked [22:59:09] PROBLEM - DPKG on db52 is CRITICAL: Connection refused by host [22:59:09] PROBLEM - MySQL disk space on db13 is CRITICAL: Connection refused by host [22:59:09] PROBLEM - RAID on db53 is CRITICAL: Connection refused by host [22:59:19] RECOVERY - RAID on snapshot3 is OK: OK: no RAID installed [22:59:19] PROBLEM - Disk space on db1001 is CRITICAL: Connection refused by host [22:59:19] PROBLEM - DPKG on es3 is CRITICAL: Connection refused by host [22:59:29] PROBLEM - RAID on es3 is CRITICAL: Connection refused by host [22:59:29] PROBLEM - RAID on snapshot2 is CRITICAL: Connection refused by host [22:59:29] PROBLEM - Disk space on srv201 is CRITICAL: Connection refused by host [22:59:49] PROBLEM - RAID on srv238 is CRITICAL: Connection refused by host [22:59:59] RECOVERY - RAID on ms-fe2 is OK: OK: Active: 2, Working: 2, Failed: 0, Spare: 0 [22:59:59] PROBLEM - RAID on cp1042 is CRITICAL: Connection refused by host [23:00:09] RECOVERY - Disk space on ms5 is OK: DISK OK [23:00:29] PROBLEM - DPKG on mw3 is CRITICAL: Connection refused by host [23:00:39] PROBLEM - Disk space on srv239 is CRITICAL: Connection refused by host [23:00:49] RECOVERY - Disk space on es3 is OK: DISK OK [23:00:59] RECOVERY - DPKG on db1008 is OK: All packages OK [23:00:59] PROBLEM - DPKG on virt4 is CRITICAL: Connection refused by host [23:00:59] RECOVERY - RAID on db1008 is OK: OK: State is Optimal, checked 2 logical device(s) [23:00:59] RECOVERY - RAID on db1018 is OK: OK: State is Optimal, checked 2 logical device(s) [23:01:09] RECOVERY - DPKG on virt2 is OK: All 
packages OK [23:01:09] PROBLEM - DPKG on es4 is CRITICAL: Connection refused by host [23:01:09] PROBLEM - Disk space on db47 is CRITICAL: Connection refused by host [23:01:09] RECOVERY - DPKG on ganglia1001 is OK: All packages OK [23:01:09] PROBLEM - DPKG on db53 is CRITICAL: Connection refused by host [23:01:10] RECOVERY - MySQL disk space on db1020 is OK: DISK OK [23:01:10] RECOVERY - Disk space on es1003 is OK: DISK OK [23:01:11] RECOVERY - RAID on es1003 is OK: OK: State is Optimal, checked 2 logical device(s) [23:01:19] PROBLEM - Disk space on srv192 is CRITICAL: Connection refused by host [23:01:19] PROBLEM - MySQL disk space on db1034 is CRITICAL: Connection refused by host [23:01:29] RECOVERY - DPKG on es2 is OK: All packages OK [23:01:29] PROBLEM - Disk space on db52 is CRITICAL: Connection refused by host [23:01:29] RECOVERY - MySQL disk space on es3 is OK: DISK OK [23:01:29] PROBLEM - DPKG on db1017 is CRITICAL: Connection refused by host [23:01:29] PROBLEM - RAID on searchidx2 is CRITICAL: Connection refused by host [23:01:39] RECOVERY - DPKG on ms-fe2 is OK: All packages OK [23:01:59] PROBLEM - Disk space on mw3 is CRITICAL: Connection refused by host [23:01:59] RECOVERY - DPKG on snapshot3 is OK: All packages OK [23:01:59] RECOVERY - DPKG on snapshot1 is OK: All packages OK [23:02:09] PROBLEM - RAID on mw1115 is CRITICAL: Connection refused by host [23:02:10] PROBLEM - DPKG on snapshot2 is CRITICAL: Connection refused by host [23:02:19] PROBLEM - DPKG on srv192 is CRITICAL: Connection refused by host [23:02:29] PROBLEM - RAID on srv190 is CRITICAL: Connection refused by host [23:02:29] .oO(spam...) [23:02:39] RECOVERY - mobile traffic loggers on cp1041 is OK: PROCS OK: 2 processes with command name varnishncsa [23:02:49] PROBLEM - DPKG on cp1042 is CRITICAL: Connection refused by host [23:02:49] PROBLEM - MySQL disk space on db1002 is CRITICAL: Connection refused by host [23:02:49] RECOVERY - Disk space on virt2 is OK: DISK OK [23:02:49] RECOVERY - Disk space on virt4 is OK: DISK OK [23:02:49] RECOVERY - Disk space on db1008 is OK: DISK OK [23:02:59] RECOVERY - DPKG on bast1001 is OK: All packages OK [23:02:59] PROBLEM - Disk space on db1002 is CRITICAL: Connection refused by host [23:02:59] RECOVERY - Disk space on db22 is OK: DISK OK [23:02:59] RECOVERY - MySQL disk space on db1007 is OK: DISK OK [23:03:09] RECOVERY - Disk space on srv223 is OK: DISK OK [23:03:09] RECOVERY - DPKG on db22 is OK: All packages OK [23:03:09] RECOVERY - RAID on cp1041 is OK: OK: Active: 4, Working: 4, Failed: 0, Spare: 0 [23:03:09] PROBLEM - MySQL disk space on db1001 is CRITICAL: Connection refused by host [23:03:19] RECOVERY - Disk space on db26 is OK: DISK OK [23:03:19] PROBLEM - MySQL disk space on db25 is CRITICAL: Connection refused by host [23:03:19] RECOVERY - Disk space on ganglia1001 is OK: DISK OK [23:03:19] RECOVERY - RAID on es4 is OK: OK: State is Optimal, checked 2 logical device(s) [23:03:29] RECOVERY - Disk space on es4 is OK: DISK OK [23:03:39] RECOVERY - Disk space on db1020 is OK: DISK OK [23:03:39] RECOVERY - Disk space on db1007 is OK: DISK OK [23:03:49] PROBLEM - Disk space on mw56 is CRITICAL: Connection refused by host [23:03:49] PROBLEM - DPKG on searchidx2 is CRITICAL: Connection refused by host [23:03:49] PROBLEM - Disk space on snapshot2 is CRITICAL: Connection refused by host [23:03:59] RECOVERY - Disk space on snapshot1 is OK: DISK OK [23:03:59] RECOVERY - Disk space on bast1001 is OK: DISK OK [23:03:59] PROBLEM - RAID on srv201 is CRITICAL: Connection refused by 
host [23:04:09] RECOVERY - DPKG on srv238 is OK: All packages OK [23:04:29] RECOVERY - DPKG on srv190 is OK: All packages OK [23:04:29] PROBLEM - RAID on srv272 is CRITICAL: Connection refused by host [23:04:39] RECOVERY - MySQL disk space on db1008 is OK: DISK OK [23:04:39] PROBLEM - RAID on db1001 is CRITICAL: Connection refused by host [23:04:39] RECOVERY - DPKG on aluminium is OK: All packages OK [23:04:39] RECOVERY - Disk space on srv276 is OK: DISK OK [23:04:39] RECOVERY - RAID on aluminium is OK: OK: Active: 2, Working: 2, Failed: 0, Spare: 0 [23:04:49] PROBLEM - Disk space on db1004 is CRITICAL: Connection refused by host [23:04:49] PROBLEM - MySQL disk space on db1017 is CRITICAL: Connection refused by host [23:04:49] RECOVERY - Disk space on db1018 is OK: DISK OK [23:04:49] RECOVERY - RAID on db1020 is OK: OK: State is Optimal, checked 2 logical device(s) [23:04:59] PROBLEM - RAID on db25 is CRITICAL: Connection refused by host [23:04:59] RECOVERY - RAID on srv223 is OK: OK: no RAID installed [23:04:59] RECOVERY - RAID on srv276 is OK: OK: no RAID installed [23:04:59] PROBLEM - DPKG on db13 is CRITICAL: Connection refused by host [23:04:59] PROBLEM - DPKG on db1034 is CRITICAL: Connection refused by host [23:05:00] PROBLEM - MySQL disk space on db53 is CRITICAL: Connection refused by host [23:05:09] PROBLEM - Disk space on db1033 is CRITICAL: Connection refused by host [23:05:09] RECOVERY - DPKG on srv276 is OK: All packages OK [23:05:30] RECOVERY - MySQL disk space on db26 is OK: DISK OK [23:05:59] PROBLEM - DPKG on mw1115 is CRITICAL: Connection refused by host [23:05:59] PROBLEM - DPKG on srv201 is CRITICAL: Connection refused by host [23:05:59] PROBLEM - Disk space on searchidx2 is CRITICAL: Connection refused by host [23:05:59] RECOVERY - DPKG on db1018 is OK: All packages OK [23:06:09] PROBLEM - RAID on srv191 is CRITICAL: Connection refused by host [23:06:19] PROBLEM - DPKG on srv272 is CRITICAL: Connection refused by host [23:06:29] RECOVERY - Disk space on srv190 is OK: DISK OK [23:06:29] RECOVERY - mobile traffic loggers on cp1042 is OK: PROCS OK: 2 processes with command name varnishncsa [23:06:29] PROBLEM - MySQL disk space on db52 is CRITICAL: Connection refused by host [23:06:39] PROBLEM - DPKG on db1001 is CRITICAL: Connection refused by host [23:06:39] PROBLEM - DPKG on db1033 is CRITICAL: Connection refused by host [23:06:39] RECOVERY - Disk space on aluminium is OK: DISK OK [23:06:39] PROBLEM - Disk space on db1034 is CRITICAL: Connection refused by host [23:06:49] PROBLEM - MySQL disk space on db1004 is CRITICAL: Connection refused by host [23:06:49] PROBLEM - MySQL disk space on db1033 is CRITICAL: Connection refused by host [23:06:49] RECOVERY - DPKG on db1020 is OK: All packages OK [23:06:49] RECOVERY - RAID on db1007 is OK: OK: State is Optimal, checked 2 logical device(s) [23:06:49] RECOVERY - RAID on db1002 is OK: OK: State is Optimal, checked 2 logical device(s) [23:06:59] RECOVERY - DPKG on db25 is OK: All packages OK [23:06:59] RECOVERY - RAID on db26 is OK: OK: 1 logical device(s) checked [23:07:09] PROBLEM - RAID on db52 is CRITICAL: Connection refused by host [23:07:29] PROBLEM - RAID on mw3 is CRITICAL: Connection refused by host [23:07:29] PROBLEM - Disk space on mw1115 is CRITICAL: Connection refused by host [23:07:29] RECOVERY - RAID on db1034 is OK: OK: State is Optimal, checked 2 logical device(s) [23:07:39] PROBLEM - MySQL disk space on db47 is CRITICAL: Connection refused by host [23:07:49] RECOVERY - RAID on virt4 is OK: OK: State is 
Optimal, checked 2 logical device(s) [23:08:09] RECOVERY - DPKG on srv223 is OK: All packages OK [23:08:09] RECOVERY - Disk space on db13 is OK: DISK OK [23:08:19] RECOVERY - DPKG on srv239 is OK: All packages OK [23:08:29] RECOVERY - Disk space on srv272 is OK: DISK OK [23:08:49] RECOVERY - Disk space on db53 is OK: DISK OK [23:08:49] RECOVERY - MySQL disk space on es4 is OK: DISK OK [23:08:59] RECOVERY - DPKG on db1002 is OK: All packages OK [23:08:59] PROBLEM - Disk space on db1017 is CRITICAL: Connection refused by host [23:09:09] RECOVERY - DPKG on db26 is OK: All packages OK [23:09:09] PROBLEM - DPKG on db47 is CRITICAL: Connection refused by host [23:09:09] RECOVERY - RAID on snapshot1 is OK: OK: no RAID installed [23:09:09] RECOVERY - Disk space on db25 is OK: DISK OK [23:09:19] RECOVERY - DPKG on db52 is OK: All packages OK [23:09:19] RECOVERY - MySQL disk space on db13 is OK: DISK OK [23:09:19] RECOVERY - RAID on db53 is OK: OK: State is Optimal, checked 12 logical device(s) [23:09:29] RECOVERY - Disk space on db1001 is OK: DISK OK [23:09:29] RECOVERY - DPKG on es3 is OK: All packages OK [23:09:29] PROBLEM - RAID on db16 is CRITICAL: Connection refused by host [23:09:40] RECOVERY - Disk space on srv201 is OK: DISK OK [23:09:40] RECOVERY - RAID on snapshot2 is OK: OK: no RAID installed [23:09:40] RECOVERY - RAID on es3 is OK: OK: State is Optimal, checked 2 logical device(s) [23:09:40] RECOVERY - Disk space on srv238 is OK: DISK OK [23:09:49] RECOVERY - RAID on srv238 is OK: OK: no RAID installed [23:09:59] PROBLEM - DPKG on srv191 is CRITICAL: Connection refused by host [23:10:09] PROBLEM - mobile traffic loggers on cp1044 is CRITICAL: Connection refused by host [23:10:09] RECOVERY - RAID on cp1042 is OK: OK: Active: 4, Working: 4, Failed: 0, Spare: 0 [23:10:20] PROBLEM - RAID on mw56 is CRITICAL: Connection refused by host [23:10:22] PROBLEM - RAID on srv192 is CRITICAL: Connection refused by host [23:10:39] PROBLEM - Disk space on srv191 is CRITICAL: Connection refused by host [23:10:49] RECOVERY - DPKG on mw3 is OK: All packages OK [23:10:59] PROBLEM - RAID on db1017 is CRITICAL: Connection refused by host [23:11:00] RECOVERY - Disk space on srv239 is OK: DISK OK [23:11:11] PROBLEM - MySQL disk space on db1019 is CRITICAL: Connection refused by host [23:11:11] PROBLEM - RAID on es1001 is CRITICAL: Connection refused by host [23:11:19] RECOVERY - Disk space on db47 is OK: DISK OK [23:11:19] PROBLEM - DPKG on db11 is CRITICAL: Connection refused by host [23:11:19] RECOVERY - DPKG on es4 is OK: All packages OK [23:11:19] RECOVERY - DPKG on db53 is OK: All packages OK [23:11:19] RECOVERY - DPKG on virt4 is OK: All packages OK [23:11:20] PROBLEM - Disk space on es1001 is CRITICAL: Connection refused by host [23:11:29] PROBLEM - RAID on db1033 is CRITICAL: Connection refused by host [23:11:39] RECOVERY - Disk space on srv192 is OK: DISK OK [23:11:39] RECOVERY - MySQL disk space on db1034 is OK: DISK OK [23:11:39] RECOVERY - RAID on searchidx2 is OK: OK: State is Optimal, checked 4 logical device(s) [23:11:52] PROBLEM - DPKG on mw56 is CRITICAL: Connection refused by host [23:11:52] RECOVERY - Disk space on db52 is OK: DISK OK [23:11:52] RECOVERY - DPKG on db1017 is OK: All packages OK [23:11:59] PROBLEM - Disk space on db1035 is CRITICAL: Connection refused by host [23:12:19] RECOVERY - DPKG on snapshot2 is OK: All packages OK [23:12:19] PROBLEM - RAID on nfs1 is CRITICAL: Connection refused by host [23:12:19] RECOVERY - Disk space on mw3 is OK: DISK OK [23:12:19] RECOVERY - RAID on 
mw1115 is OK: OK: no RAID installed [23:12:29] PROBLEM - Disk space on nfs1 is CRITICAL: Connection refused by host [23:12:29] PROBLEM - DPKG on db1003 is CRITICAL: Connection refused by host [23:12:29] RECOVERY - RAID on srv190 is OK: OK: no RAID installed [23:12:39] PROBLEM - DPKG on nfs1 is CRITICAL: Connection refused by host [23:12:39] RECOVERY - DPKG on srv192 is OK: All packages OK [23:12:59] PROBLEM - DPKG on db1004 is CRITICAL: Connection refused by host [23:12:59] RECOVERY - DPKG on cp1042 is OK: All packages OK [23:13:16] PROBLEM - Disk space on db16 is CRITICAL: Connection refused by host [23:13:16] PROBLEM - Disk space on db1003 is CRITICAL: Connection refused by host [23:13:16] PROBLEM - RAID on db1004 is CRITICAL: Connection refused by host [23:13:16] RECOVERY - MySQL disk space on db1002 is OK: DISK OK [23:13:16] PROBLEM - DPKG on es1001 is CRITICAL: Connection refused by host [23:13:26] PROBLEM - RAID on db1003 is CRITICAL: Connection refused by host [23:13:37] PROBLEM - RAID on db11 is CRITICAL: Connection refused by host [23:14:06] RECOVERY - DPKG on mw56 is OK: All packages OK [23:14:56] PROBLEM - MySQL disk space on db1035 is CRITICAL: Connection refused by host [23:14:56] RECOVERY - RAID on db1033 is OK: OK: State is Optimal, checked 2 logical device(s) [23:15:06] RECOVERY - MySQL disk space on db47 is OK: DISK OK [23:15:06] RECOVERY - MySQL disk space on db52 is OK: DISK OK [23:15:06] PROBLEM - MySQL disk space on es1001 is CRITICAL: Connection refused by host [23:15:46] RECOVERY - Disk space on mw56 is OK: DISK OK [23:15:46] RECOVERY - MySQL disk space on db1001 is OK: DISK OK [23:15:46] RECOVERY - RAID on nfs1 is OK: OK: Active: 4, Working: 4, Failed: 0, Spare: 0 [23:16:06] RECOVERY - RAID on srv201 is OK: OK: no RAID installed [23:16:26] RECOVERY - Disk space on db1002 is OK: DISK OK [23:16:36] PROBLEM - RAID on cp1044 is CRITICAL: Connection refused by host [23:16:36] RECOVERY - DPKG on db1004 is OK: All packages OK [23:16:46] PROBLEM - RAID on db1019 is CRITICAL: Connection refused by host [23:16:46] PROBLEM - Disk space on db11 is CRITICAL: Connection refused by host [23:16:46] RECOVERY - Disk space on db1017 is OK: DISK OK [23:16:46] RECOVERY - Disk space on db16 is OK: DISK OK [23:16:46] RECOVERY - DPKG on db1033 is OK: All packages OK [23:16:56] RECOVERY - MySQL disk space on db53 is OK: DISK OK [23:16:56] RECOVERY - RAID on db1004 is OK: OK: State is Optimal, checked 2 logical device(s) [23:17:06] RECOVERY - MySQL disk space on db25 is OK: DISK OK [23:17:26] RECOVERY - DPKG on mw1115 is OK: All packages OK [23:17:46] RECOVERY - Disk space on nfs1 is OK: DISK OK [23:17:56] RECOVERY - Disk space on searchidx2 is OK: DISK OK [23:18:06] RECOVERY - RAID on srv191 is OK: OK: no RAID installed [23:18:06] RECOVERY - DPKG on srv201 is OK: All packages OK [23:18:06] RECOVERY - RAID on srv272 is OK: OK: no RAID installed [23:18:36] RECOVERY - Disk space on cp1042 is OK: DISK OK [23:18:36] RECOVERY - Disk space on snapshot2 is OK: DISK OK [23:18:36] PROBLEM - MySQL disk space on db1003 is CRITICAL: Connection refused by host [23:18:36] RECOVERY - Disk space on db1004 is OK: DISK OK [23:18:36] RECOVERY - RAID on db1001 is OK: OK: State is Optimal, checked 2 logical device(s) [23:18:46] PROBLEM - MySQL disk space on db11 is CRITICAL: Connection refused by host [23:18:46] RECOVERY - DPKG on db13 is OK: All packages OK [23:19:16] RECOVERY - RAID on db1017 is OK: OK: State is Optimal, checked 2 logical device(s) [23:19:26] RECOVERY - DPKG on es1001 is OK: All packages OK 
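The flood of PROBLEM/RECOVERY notices above (mostly NRPE "Connection refused by host", most of which then clear) is easier to audit once the events are paired up per host and service. The sketch below is not anything the channel used; it is just one way to read a saved copy of a log like this and report which checks never recovered. The filename is whatever you saved the log as.

    import re
    import sys

    # matches events such as:
    #   [22:52:39] PROBLEM - Disk space on db22 is CRITICAL: Connection refused by host
    #   [23:03:09] RECOVERY - DPKG on db22 is OK: All packages OK
    LINE = re.compile(r"\[(\d\d:\d\d:\d\d)\] (PROBLEM|RECOVERY) - (.+?) on (\S+) is")

    def outstanding(path: str) -> dict:
        """Return {(host, service): time} for checks whose last event was a PROBLEM."""
        state = {}
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                # findall copes with several events mashed onto one physical line
                for when, kind, service, host in LINE.findall(line):
                    key = (host, service)
                    if kind == "PROBLEM":
                        state[key] = when
                    else:
                        state.pop(key, None)  # a RECOVERY clears the open problem
        return state

    if __name__ == "__main__":
        for (host, service), when in sorted(outstanding(sys.argv[1]).items()):
            print(f"{host}: '{service}' still critical (last PROBLEM at {when})")

Run it as, for example, python triage.py saved-log.txt.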
[23:19:36] RECOVERY - RAID on mw3 is OK: OK: no RAID installed [23:19:56] RECOVERY - MySQL disk space on db1017 is OK: DISK OK [23:20:06] RECOVERY - DPKG on srv191 is OK: All packages OK [23:20:06] RECOVERY - Disk space on db1033 is OK: DISK OK [23:20:06] RECOVERY - RAID on db25 is OK: OK: 1 logical device(s) checked [23:20:16] RECOVERY - DPKG on srv272 is OK: All packages OK [23:20:26] RECOVERY - RAID on db52 is OK: OK: State is Optimal, checked 2 logical device(s) [23:20:46] RECOVERY - Disk space on db1034 is OK: DISK OK [23:20:46] RECOVERY - MySQL disk space on db1033 is OK: DISK OK [23:20:56] RECOVERY - DPKG on db47 is OK: All packages OK [23:20:56] RECOVERY - RAID on es1001 is OK: OK: State is Optimal, checked 2 logical device(s) [23:21:36] RECOVERY - Disk space on mw1115 is OK: DISK OK [23:21:36] RECOVERY - DPKG on db1034 is OK: All packages OK [23:21:46] RECOVERY - DPKG on nfs1 is OK: All packages OK [23:22:06] RECOVERY - Disk space on srv191 is OK: DISK OK [23:22:06] RECOVERY - RAID on srv192 is OK: OK: no RAID installed [23:22:16] RECOVERY - DPKG on searchidx2 is OK: All packages OK [23:22:36] RECOVERY - RAID on db1003 is OK: OK: State is Optimal, checked 2 logical device(s) [23:23:06] RECOVERY - MySQL disk space on db1004 is OK: DISK OK [23:23:06] RECOVERY - RAID on db16 is OK: OK: 1 logical device(s) checked [23:23:06] RECOVERY - RAID on db11 is OK: OK: 1 logical device(s) checked [23:23:16] RECOVERY - RAID on mw56 is OK: OK: no RAID installed [23:23:26] RECOVERY - mobile traffic loggers on cp1044 is OK: PROCS OK: 2 processes with command name varnishncsa [23:24:06] RECOVERY - DPKG on db1001 is OK: All packages OK [23:24:56] RECOVERY - MySQL disk space on db1019 is OK: DISK OK [23:25:06] RECOVERY - Disk space on db1035 is OK: DISK OK [23:25:06] RECOVERY - Disk space on es1001 is OK: DISK OK [23:25:06] RECOVERY - DPKG on db11 is OK: All packages OK [23:25:06] RECOVERY - MySQL disk space on db1035 is OK: DISK OK [23:25:16] PROBLEM - DPKG on es1002 is CRITICAL: Connection refused by host [23:25:26] RECOVERY - MySQL disk space on es1001 is OK: DISK OK [23:26:06] RECOVERY - DPKG on db1003 is OK: All packages OK [23:26:56] RECOVERY - Disk space on db11 is OK: DISK OK [23:26:56] RECOVERY - RAID on cp1044 is OK: OK: Active: 4, Working: 4, Failed: 0, Spare: 0 [23:26:56] RECOVERY - RAID on db1019 is OK: OK: State is Optimal, checked 2 logical device(s) [23:27:06] RECOVERY - Disk space on db1003 is OK: DISK OK [23:27:16] PROBLEM - RAID on es1002 is CRITICAL: Connection refused by host [23:27:26] PROBLEM - Disk space on es1002 is CRITICAL: Connection refused by host [23:28:56] RECOVERY - MySQL disk space on db11 is OK: DISK OK [23:29:06] RECOVERY - MySQL disk space on db1003 is OK: DISK OK [23:29:06] PROBLEM - MySQL disk space on es1002 is CRITICAL: Connection refused by host [23:34:20] New patchset: Bhartshorne; "correcting container name for eqiad test swift cluster" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2132 [23:34:35] can someone please have a look on an api issue? [23:34:46] New review: Bhartshorne; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/2132 [23:34:46] Change merged: Bhartshorne; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2132 [23:34:47] New review: gerrit2; "Lint check passed." 
[operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2132
[23:35:36] RECOVERY - DPKG on es1002 is OK: All packages OK
[23:37:16] PROBLEM - mysqld processes on db32 is CRITICAL: PROCS CRITICAL: 1 process with command name mysqld
[23:37:26] RECOVERY - RAID on es1002 is OK: OK: State is Optimal, checked 2 logical device(s)
[23:37:35] matanya: what is the issue ?
[23:37:36] RECOVERY - Disk space on es1002 is OK: DISK OK
[23:38:15] does deletedrevs give you the history of a deleted page?
[23:39:26] RECOVERY - MySQL disk space on es1002 is OK: DISK OK
[23:40:44] New patchset: Asher; "fix mysqld check, since doesn't match mysqld_safe" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2133
[23:41:01] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2133
[23:41:06] PROBLEM - mysqld processes on db36 is CRITICAL: PROCS CRITICAL: 1 process with command name mysqld
[23:41:35] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2133
[23:41:36] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2133
[23:47:36] RECOVERY - mysqld processes on db32 is OK: PROCS OK: 1 process with command name mysqld
[23:51:14] New patchset: Asher; "override the incorrect pid_file def in the packaged config" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2134
[23:51:30] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2134
[23:51:30] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/2134
[23:51:36] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/2134
[23:51:37] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2134
[23:52:26] PROBLEM - RAID on ms-fe2 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[23:54:16] RECOVERY - Frontend Squid HTTP on cp1002 is OK: HTTP OK HTTP/1.0 200 OK - 27535 bytes in 0.200 seconds
[23:54:34] !log reedy synchronized php-1.18/includes/specials/SpecialBlockList.php 'r110095'
[23:54:36] Logged the message, Master
[23:54:36] PROBLEM - DPKG on ms-fe2 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[23:56:15] PROBLEM - MySQL disk space on db30 is CRITICAL: Connection refused by host
[23:56:39] !log catrope synchronized wmf-config/CommonSettings.php 'Adding $wgMoodbarConfig["feedbackDashboardUrl"]'
[23:56:40] Logged the message, Master
[23:57:50] binasher: did you changed the access(-tables) on s7 short time ago?
[23:58:04] !log catrope synchronized php-1.18/extensions/MoodBar/ 'Update MoodBar'
[23:58:05] PROBLEM - Disk space on ms-fe2 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[23:58:05] Logged the message, Master
[23:58:43] DaBPunkt: nagios user grants for the heartbeat database
[23:59:07] ok, that's the reason our nagios has no access anymore on our s7 :-)
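A closing note on the heartbeat exchange just above: earlier in the day (19:43) binasher described rolling out pt-heartbeat, which writes a timestamp row on the master so that lag can be read on each replica as "now minus the replicated timestamp", independently of Seconds_Behind_Master; the new nagios grants are for reading that table. The sketch below shows only the replica-side read. It assumes the pymysql driver, pt-heartbeat's conventional heartbeat.heartbeat table with a ts column, UTC timestamps, and placeholder host and credentials - all assumptions to verify against the actual deployment before trusting the number.

    import datetime as dt
    import pymysql  # assumption: any MySQL driver works the same way

    def heartbeat_lag_seconds(host: str, user: str, password: str) -> float:
        """Read the newest pt-heartbeat timestamp on a replica and compare it to now (UTC)."""
        conn = pymysql.connect(host=host, user=user, password=password, database="heartbeat")
        try:
            with conn.cursor() as cur:
                # pt-heartbeat --update rewrites this row on the master;
                # replication carries it to the replica we are connected to
                cur.execute("SELECT MAX(ts) FROM heartbeat")
                (ts,) = cur.fetchone()
        finally:
            conn.close()
        written = dt.datetime.fromisoformat(str(ts).replace("T", " "))
        return (dt.datetime.utcnow() - written).total_seconds()

    if __name__ == "__main__":
        lag = heartbeat_lag_seconds("db1001.example", "nagios", "secret")  # placeholders
        print(f"replication lag ~ {lag:.1f}s")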