[00:05:13] good night folks
[01:50:33] PROBLEM - Puppet freshness on lvs1003 is CRITICAL: Puppet has not run in the last 10 hours
[01:50:33] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: Puppet has not run in the last 10 hours
[01:51:03] PROBLEM - MySQL Replication Heartbeat on db42 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[02:06:43] !log LocalisationUpdate completed (1.18) at Mon Jan 30 02:06:42 UTC 2012
[02:06:46] Logged the message, Master
[02:19:03] PROBLEM - MySQL replication status on storage3 is CRITICAL: CHECK MySQL REPLICATION - lag - CRITICAL - Seconds_Behind_Master : 1571s
[02:26:03] PROBLEM - Misc_Db_Lag on storage3 is CRITICAL: CHECK MySQL REPLICATION - lag - CRITICAL - Seconds_Behind_Master : 1991s
[02:37:13] RECOVERY - Misc_Db_Lag on storage3 is OK: CHECK MySQL REPLICATION - lag - OK - Seconds_Behind_Master : 19s
[02:41:23] RECOVERY - MySQL replication status on storage3 is OK: CHECK MySQL REPLICATION - lag - OK - Seconds_Behind_Master : 31s
[02:45:13] PROBLEM - Puppet freshness on brewster is CRITICAL: Puppet has not run in the last 10 hours
[03:43:01] Hi! I've been looking at a few dumps, and it seems like http://dumps.wikimedia.org/zhwiktionary/latest/zhwiktionary-latest-all-titles-in-ns0.gz is incomplete
[03:43:48] At least 瑞典語 is missing
[03:48:14] (while the article does exist: https://zh.wiktionary.org/wiki/%E7%91%9E%E5%85%B8%E8%AF%AD)
[03:50:36] PROBLEM - Puppet freshness on knsq9 is CRITICAL: Puppet has not run in the last 10 hours
[03:50:51] anyway.. I just wanted to let you know - I've got to go now
[04:17:56] RECOVERY - Disk space on es1004 is OK: DISK OK
[04:22:46] RECOVERY - MySQL disk space on es1004 is OK: DISK OK
[04:37:22] PROBLEM - MySQL slave status on es1004 is CRITICAL: CRITICAL: Slave running: expected Yes, got No
[05:48:45] PROBLEM - MySQL Replication Heartbeat on db42 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[09:53:56] PROBLEM - Disk space on es1004 is CRITICAL: DISK CRITICAL - free space: /a 440397 MB (3% inode=99%):
[10:02:36] PROBLEM - MySQL disk space on es1004 is CRITICAL: DISK CRITICAL - free space: /a 390001 MB (3% inode=99%):
[10:29:26] PROBLEM - RAID on searchidx2 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[10:40:26] RECOVERY - RAID on searchidx2 is OK: OK: State is Optimal, checked 4 logical device(s)
[10:50:40] PROBLEM - MySQL Replication Heartbeat on db42 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[11:15:50] RECOVERY - MySQL slave status on es1004 is OK: OK:
[11:21:37] hi - the zh-wikt title dump seems to be incorrect
[11:21:59] at least one word is missing: 瑞典語
[11:24:05] is that common?
[11:43:24] Hello. Which version of rsvg do we have on the cluster?
[12:01:41] PROBLEM - Puppet freshness on lvs1003 is CRITICAL: Puppet has not run in the last 10 hours
[12:01:41] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: Puppet has not run in the last 10 hours
[12:55:36] PROBLEM - Puppet freshness on brewster is CRITICAL: Puppet has not run in the last 10 hours
[13:11:36] New patchset: Dzahn; "REVIEW REQUESTED major cleanup and refactoring and some parameterization of udp2log class, but should be no substantive changes" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/2083
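A quick way to verify the incomplete-dump report above (03:43 and 11:21) is to scan the gzipped title list for the page in question. A minimal sketch, assuming the dump URL from the channel; these title dumps are small enough compressed that loading the whole list into memory is fine:

```python
# Sketch: check whether a title appears in an all-titles-in-ns0 dump.
import gzip
import urllib.request

URL = ("http://dumps.wikimedia.org/zhwiktionary/latest/"
       "zhwiktionary-latest-all-titles-in-ns0.gz")

def title_in_dump(url: str, title: str) -> bool:
    raw = urllib.request.urlopen(url).read()
    titles = gzip.decompress(raw).decode("utf-8").splitlines()
    # Assumption: dump titles use underscores where page names have spaces,
    # as page_title is stored; irrelevant for this particular CJK title.
    return title.replace(" ", "_") in titles

print(title_in_dump(URL, "瑞典語"))  # the title reported missing
```

A False result here would confirm the report; the fix would then be on the dump-generation side, not in the check.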
[13:24:55] New review: Dzahn; "looks good. just fixed wrapped line 48 and cosmetic changes (whitespace), but is it missing class ud..." [operations/puppet] (production); V: 1 C: 1; - https://gerrit.wikimedia.org/r/2083
[13:45:51] RECOVERY - MySQL Slave Delay on db42 is OK: OK replication delay 0 seconds
[13:50:31] RECOVERY - MySQL Replication Heartbeat on db42 is OK: OK replication delay 0 seconds
[14:00:51] PROBLEM - Puppet freshness on knsq9 is CRITICAL: Puppet has not run in the last 10 hours
[15:14:01] hexmode, should a bug like this https://bugzilla.wikimedia.org/show_bug.cgi?id=13462 be moved under wikidiff2, given that things like https://bugzilla.wikimedia.org/show_bug.cgi?id=13462#c6 could be applied to it?
[15:14:30] Nemo_bis: let me look 1s
[15:17:36] Nemo_bis: Yeah, I think so. Diffs are horrible except when they aren't.
[15:17:58] heh :)
[15:18:21] the docs say wikidiff2 is already supposedly doing word-level diff, so dwdiff might be a useful comparison
[15:18:22] But at least we just got wikidiff to highlight single char differences instead of the whole word
[15:18:49] yes, that's helpful
[15:18:56] Nemo_bis: does dwdiff do single char diffs?
[15:19:06] apparently not
[15:19:12] but I didn't test the last version
[15:19:19] Nikerabbit knows more
[15:20:20] Nikerabbit pointed me to a similar horrible diff
[15:20:53] yep
[15:20:57] there are many of those
[15:21:29] on many wikis there are guidelines which tell people to split their edits to get usable diffs
[15:21:42] and even warning templates for ugly diffs :-O
[15:21:44] dwdiff can use delimiters, that's not quite character-level diff
[15:24:28] Nikerabbit, so can that approach somehow be reused in wikidiff2 without losing functionality, or is it an alternative?
[15:24:57] I have no idea, the algorithms are probably very different
[15:34:04] hexmode, has the new wikidiff2 been deployed after this comment? https://bugzilla.wikimedia.org/show_bug.cgi?id=16935#c1
[15:37:23] Nemo_bis: yep
[15:38:43] thanks
[17:06:42] Hello, anyone there to help me.. Actually I am confused..
[17:06:48] see http://or.wikipedia.org/wiki/?diff=1
[17:07:08] is this the 1st edit on or.wikipedia
[17:07:10] ?
[17:14:13] woosters: hi, any idea how to push https://bugzilla.wikimedia.org/show_bug.cgi?id=33509 forward? it's more an office bureaucratic task than a technical one...
[17:15:09] let me look into it and will get back to u, saper
[17:16:44] thanks
[17:17:07] I'm cc'd on the bug in any case
[17:18:20] currently who owns those 2 dns entries?
[17:20:58] woosters: they point to some czech provider we use
[17:21:07] we == Wikimedia Polska (Polish chapter)
[17:21:21] so it would require us to change registrar
[17:21:22] the domain seems to be owned by the Foundation, though
[17:21:31] why?
[17:21:49] you need to log in to the registrar and change NS entries
[17:22:12] we'd like to get all under one umbrella ... easier to maintain
[17:22:20] renewal etc
[17:22:28] would that be a problem?
[17:22:32] for u?
[17:22:54] given the time to service this is 4 weeks now, maybe transferring the domain to WMPL would be a good idea :)
[17:23:08] I'd love to have the NS's changed *first* if possible
[17:23:26] since we are now limited by the capabilities of the czech provider's form
[17:23:54] changing registrars won't normally impact NS entries
[17:26:22] ok, let us work on your request first then. Will keep u posted
[17:26:44] thanks!
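For context on the word-level versus character-level distinction discussed above (15:18-15:24): wikidiff2 itself is a C++ PHP extension with its own implementation, but the granularity difference is easy to illustrate. A minimal sketch using Python's difflib, not the actual wikidiff2 or dwdiff algorithm:

```python
# Word-level diff with character-level refinement of replaced words.
import difflib

def word_diff(old: str, new: str):
    """Yield (tag, old_fragment, new_fragment) at word granularity."""
    old_words, new_words = old.split(), new.split()
    sm = difflib.SequenceMatcher(None, old_words, new_words)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        yield tag, " ".join(old_words[i1:i2]), " ".join(new_words[j1:j2])

for tag, a, b in word_diff("the quick brown fox", "the quiet brown fox"):
    if tag == "replace":
        # Refine a replaced word pair down to single characters, the
        # behaviour the channel says wikidiff2 recently gained.
        for ctag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
            if ctag != "equal":
                print(ctag, repr(a[i1:i2]), "->", repr(b[j1:j2]))
    else:
        print(tag, repr(a))
```

Here "quick" vs "quiet" is first found as a word-level replace, then narrowed to the "ck" -> "et" character change, rather than highlighting the whole word.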
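On the NS-entry request above (17:17-17:26): updating NS records at the registrar changes only the delegation, not the registration itself, which is why changing registrars normally leaves NS entries alone. A minimal way to watch the delegation switch over, shelling out to the standard dig tool; example.org is a placeholder, since the actual domains behind bug 33509 aren't named in the channel:

```python
# Sketch: list the nameservers a domain currently delegates to.
import subprocess

def nameservers(domain: str) -> list:
    out = subprocess.run(
        ["dig", "+short", "NS", domain],
        capture_output=True, text=True, check=True,
    ).stdout
    return sorted(line.rstrip(".") for line in out.splitlines() if line)

print(nameservers("example.org"))  # placeholder domain
```

Running this before and after the registrar-side change (and once the TTL has expired) shows whether the new NS entries have propagated.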
[18:07:46] PROBLEM - MySQL Slave Delay on db42 is CRITICAL: CRIT replication delay 441 seconds
[18:16:04] !log nikerabbit synchronized php-1.18/extensions/Translate/ 'I18ndeploy r110310 - Translate help links'
[18:16:07] Logged the message, Master
[18:17:49] !log nikerabbit synchronized php-1.18/extensions/WebFonts/resources/ext.webfonts.fontlist.js 'I18ndeploy r110311 - bug 33599'
[18:17:50] Logged the message, Master
[18:19:37] !log nikerabbit synchronized php-1.18/extensions/Narayam/resources/ext.narayam.rules.as.js 'I18ndeploy r110311 - bug 33924'
[18:19:38] Logged the message, Master
[18:30:06] RECOVERY - MySQL Slave Delay on db42 is OK: OK replication delay 23 seconds
[18:57:13] !log asher synchronized wmf-config/db.php 'adding db54 to s2'
[18:57:14] Logged the message, Master
[19:27:52] PROBLEM - MySQL Slave Delay on db42 is CRITICAL: CRIT replication delay 267 seconds
[19:28:00] !log asher synchronized wmf-config/db.php 'raising db54 weight'
[19:28:02] Logged the message, Master
[19:39:12] RECOVERY - MySQL Slave Delay on db42 is OK: OK replication delay 1 seconds
[19:48:29] !log awjrichards synchronizing Wikimedia installation... : Syncing CentralNotice to r110026 of trunk, includes important fix for 1.19 compatibility
[19:48:31] Logged the message, Master
[19:50:56] sync done.
[19:51:33] PROBLEM - ps1-d2-sdtpa-infeed-load-tower-A-phase-Z on ps1-d2-sdtpa is CRITICAL: ps1-d2-sdtpa-infeed-load-tower-A-phase-Z CRITICAL - *2513*
[22:12:22] PROBLEM - Puppet freshness on lvs1003 is CRITICAL: Puppet has not run in the last 10 hours
[22:12:22] PROBLEM - Puppet freshness on lvs1006 is CRITICAL: Puppet has not run in the last 10 hours
[22:58:34] PROBLEM - Disk space on srv219 is CRITICAL: DISK CRITICAL - free space: / 0 MB (0% inode=60%): /var/lib/ureadahead/debugfs 0 MB (0% inode=60%):
[22:58:44] PROBLEM - Disk space on srv223 is CRITICAL: DISK CRITICAL - free space: / 0 MB (0% inode=60%): /var/lib/ureadahead/debugfs 0 MB (0% inode=60%):
[23:06:34] PROBLEM - Puppet freshness on brewster is CRITICAL: Puppet has not run in the last 10 hours
[23:07:25] good night folks
[23:10:24] RECOVERY - Disk space on srv223 is OK: DISK OK
[23:43:44] RECOVERY - Disk space on srv219 is OK: DISK OK
[23:51:40] RECOVERY - Squid on brewster is OK: TCP OK - 0.003 second response time on port 8080
[23:56:44] !log synchronized i18n files for DonationInterface on payments cluster to r110342
[23:56:45] Logged the message, Master
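The MySQL Slave Delay alerts that recur through this log (e.g. 18:07, 19:27) come from checks built on Seconds_Behind_Master, which is NULL when the slave SQL thread is not running. A minimal sketch of such a check, assuming pymysql; the host argument, user, password, and thresholds are placeholders, not the production Nagios configuration:

```python
# Sketch of a Nagios-style replication lag check (exit 0/1/2 = OK/WARN/CRIT).
import sys
import pymysql

WARN, CRIT = 60, 300  # seconds; illustrative thresholds only

def check_lag(host: str) -> int:
    conn = pymysql.connect(host=host, user="nagios", password="...",
                           cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW SLAVE STATUS")
            row = cur.fetchone()
    finally:
        conn.close()
    lag = row["Seconds_Behind_Master"] if row else None
    if lag is None:
        print("CRITICAL: slave not running")  # matches the 04:37 alert style
        return 2
    if lag >= CRIT:
        print("CRITICAL: replication delay %d seconds" % lag)
        return 2
    if lag >= WARN:
        print("WARNING: replication delay %d seconds" % lag)
        return 1
    print("OK: replication delay %d seconds" % lag)
    return 0

if __name__ == "__main__":
    sys.exit(check_lag(sys.argv[1]))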