[00:03:50] !log preilly synchronized php-1.19/extensions/MobileFrontend/MobileFrontend.body.php 'fix beta logo'
[00:03:54] Logged the message, Master
[00:05:43] hi
[00:06:29] is there anything I can do to have my account deleted on labs.wiki?
[00:08:12] in case I don't agree with its terms of use, totally different from the rest of wiki projects and revealed only today
[00:09:18] Teles: Ask in #wikimedia-labs
[00:25:26] !log preilly synchronized php-1.19/extensions/MobileFrontend/MobileFrontend.body.php 'fix beta logo'
[00:25:29] Logged the message, Master
[00:50:29] PROBLEM - check_job_queue on spence is CRITICAL: JOBQUEUE CRITICAL - the following wikis have more than 9,999 jobs: , enwiki (36014)
[01:06:32] !log awjrichards synchronized php/extensions/MobileFrontend/MobileFrontend.body.php 'r114342'
[01:06:36] Logged the message, Master
[01:24:43] PROBLEM - Puppet freshness on aluminium is CRITICAL: Puppet has not run in the last 10 hours
[02:18:10] !log LocalisationUpdate completed (1.19) at Wed Mar 21 02:18:10 UTC 2012
[02:18:14] Logged the message, Master
[02:45:11] What does "GXHC_gx_session_id_FutureTenseContentServer" mean?
[03:03:43] PROBLEM - Puppet freshness on db59 is CRITICAL: Puppet has not run in the last 10 hours
[03:11:04] RECOVERY - Puppet freshness on aluminium is OK: puppet ran at Wed Mar 21 03:10:59 UTC 2012
[03:19:19] RECOVERY - Host magnesium is UP: PING OK - Packet loss = 0%, RTA = 27.70 ms
[03:32:22] PROBLEM - Host magnesium is DOWN: PING CRITICAL - Packet loss = 100%
[03:34:04] ACKNOWLEDGEMENT - Host magnesium is DOWN: PING CRITICAL - Packet loss = 100% daniel_zahn RT-2669
[03:44:49] PROBLEM - Puppet freshness on amslvs4 is CRITICAL: Puppet has not run in the last 10 hours
[05:22:40] !log tstarling synchronized php-1.19/tests/parser/parserTests.txt
[05:22:43] Logged the message, Master
[05:23:03] !log tstarling synchronized php-1.19/includes/parser/StripState.php
[05:23:07] Logged the message, Master
[05:23:26] !log tstarling synchronized php-1.19/includes/parser/Parser.php
[05:23:29] Logged the message, Master
[05:23:51] !log tstarling synchronized php-1.19/includes/parser/CoreParserFunctions.php
[05:23:55] Logged the message, Master
[05:24:37] RECOVERY - MySQL Replication Heartbeat on db1033 is OK: OK replication delay 0 seconds
[05:24:55] RECOVERY - MySQL Slave Delay on db1033 is OK: OK replication delay 0 seconds
[06:39:47] PROBLEM - MySQL Replication Heartbeat on db46 is CRITICAL: CRIT replication delay 321 seconds
[06:40:23] PROBLEM - MySQL Replication Heartbeat on db1040 is CRITICAL: CRIT replication delay 357 seconds
[06:41:53] RECOVERY - MySQL Replication Heartbeat on db46 is OK: OK replication delay 0 seconds
[06:42:11] PROBLEM - Disk space on search1016 is CRITICAL: DISK CRITICAL - free space: /a 4212 MB (3% inode=99%):
[06:42:20] PROBLEM - Disk space on search1015 is CRITICAL: DISK CRITICAL - free space: /a 3548 MB (3% inode=99%):
[06:43:48] PROBLEM - MySQL Slave Delay on db1040 is CRITICAL: CRIT replication delay 234 seconds
[06:45:36] RECOVERY - MySQL Replication Heartbeat on db1040 is OK: OK replication delay 0 seconds
[06:45:54] RECOVERY - MySQL Slave Delay on db1040 is OK: OK replication delay 0 seconds
[07:34:45] PROBLEM - Host ms-be4 is DOWN: PING CRITICAL - Packet loss = 100%
[07:41:48] PROBLEM - LVS Lucene on search-pool1.svc.pmtpa.wmnet is CRITICAL: Connection timed out
[07:48:40] RECOVERY - LVS Lucene on search-pool1.svc.pmtpa.wmnet is OK: TCP OK - 0.001 second response time on port 8123
[07:49:25] PROBLEM - Lucene on search9 is CRITICAL: Connection timed out
[07:49:52] PROBLEM - Lucene on search3 is CRITICAL: Connection timed out
[08:33:13] PROBLEM - LVS Lucene on search-pool1.svc.pmtpa.wmnet is CRITICAL: Connection refused
[08:37:25] RECOVERY - LVS Lucene on search-pool1.svc.pmtpa.wmnet is OK: TCP OK - 0.006 second response time on port 8123
[08:38:37] RECOVERY - Lucene on search9 is OK: TCP OK - 0.002 second response time on port 8123
[09:01:22] RECOVERY - Host ms-be4 is UP: PING OK - Packet loss = 0%, RTA = 0.63 ms
[09:05:07] PROBLEM - Lucene on search9 is CRITICAL: Connection timed out
[09:49:52] PROBLEM - Puppet freshness on owa3 is CRITICAL: Puppet has not run in the last 10 hours
[09:51:58] PROBLEM - Puppet freshness on amslvs2 is CRITICAL: Puppet has not run in the last 10 hours
[09:59:55] PROBLEM - Puppet freshness on owa1 is CRITICAL: Puppet has not run in the last 10 hours
[09:59:55] PROBLEM - Puppet freshness on owa2 is CRITICAL: Puppet has not run in the last 10 hours
[10:29:46] PROBLEM - LVS Lucene on search-pool1.svc.pmtpa.wmnet is CRITICAL: Connection timed out
[10:33:40] RECOVERY - LVS Lucene on search-pool1.svc.pmtpa.wmnet is OK: TCP OK - 0.001 second response time on port 8123
[10:41:10] RECOVERY - MySQL Slave Delay on db36 is OK: OK replication delay 27 seconds
[10:42:49] RECOVERY - MySQL Replication Heartbeat on db36 is OK: OK replication delay 0 seconds
[10:48:40] PROBLEM - LVS Lucene on search-pool1.svc.pmtpa.wmnet is CRITICAL: Connection timed out
[10:51:17] RECOVERY - LVS Lucene on search-pool1.svc.pmtpa.wmnet is OK: TCP OK - 0.005 second response time on port 8123
[11:14:41] PROBLEM - LVS Lucene on search-pool1.svc.pmtpa.wmnet is CRITICAL: Connection timed out
[11:16:29] RECOVERY - LVS Lucene on search-pool1.svc.pmtpa.wmnet is OK: TCP OK - 0.001 second response time on port 8123
[11:50:23] PROBLEM - RAID on searchidx2 is CRITICAL: CHECK_NRPE: Socket timeout after 10 seconds.
[11:52:29] RECOVERY - RAID on searchidx2 is OK: OK: State is Optimal, checked 4 logical device(s)
[12:01:47] PROBLEM - LVS Lucene on search-pool1.svc.pmtpa.wmnet is CRITICAL: Connection timed out
[12:03:44] RECOVERY - LVS Lucene on search-pool1.svc.pmtpa.wmnet is OK: TCP OK - 0.003 second response time on port 8123
[12:15:03] mmm i'm trying to use the wikimedia api to get langlinks, but there seems to be a bug, maybe i am doing something wrong ?
[12:15:06] http://fr.wikipedia.org/w/api.php?action=query&prop=iwlinks&titles=Anke%20Katrin%20Eissmann&iwlimit=500
[12:15:25] while there are three langlinks in the article
[12:15:27] [[de:Anke Eißmann]]
[12:15:27] [[en:Anke Katrin Eißmann]]
[12:15:28] [[es:Anke Eißmann]]
[12:15:48] cache issue ? or something else ? linked with the ß character ?
[12:16:12] mmm nvm
[12:17:36] i used the wrong api query :/
[12:23:50] PROBLEM - LVS Lucene on search-pool1.svc.pmtpa.wmnet is CRITICAL: Connection timed out
[12:38:41] PROBLEM - Puppet freshness on linne is CRITICAL: Puppet has not run in the last 10 hours
[12:38:41] PROBLEM - Puppet freshness on ms2 is CRITICAL: Puppet has not run in the last 10 hours
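The langlinks question asked at 12:15 was resolved at 12:17: prop=iwlinks returns interwiki links found in the page text, while interlanguage links come from prop=langlinks. A minimal sketch of the corrected request, assuming the requests library; the parameters are the standard MediaWiki API ones (lllimit is the langlinks counterpart of the iwlimit used in the original query):

    # Sketch: fetch interlanguage links for the article mentioned at 12:15,
    # using prop=langlinks instead of prop=iwlinks.
    import requests

    resp = requests.get(
        "https://fr.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "langlinks",
            "titles": "Anke Katrin Eissmann",
            "lllimit": 500,
            "format": "json",
        },
    )
    for page in resp.json()["query"]["pages"].values():
        for link in page.get("langlinks", []):
            print(link["lang"], link["*"])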
[13:04:52] Hi, it seems we have some problems with the ZIM file generation. It's impossible to generate a ZIM of the following book (~100 articles) http://fr.wikipedia.org/wiki/Utilisateur:Ludo29/Livres/Road_trip
[13:05:03] Une erreur est survenue sur le serveur de rendu [An error occurred on the rendering server] : RuntimeError: RuntimeError: command failed with returncode 9: ['mw-zip', '-o', u'cache/0b/0b5c12b4a2346240/collection.zip', '-m', u'cache/0b/0b5c12b4a2346240/metabook.json', '--status', u'qserve://localhost:14311/0b5c12b4a2346240:makezip', '--config', u'https://fr.wikipedia.org/w', '--template-blacklist', u'MediaWiki:PDF Template Blacklist', '--template-exclusion-category'
[13:05:04] , u"Exclure lors de l'impression", '--print-template-prefix', u'Imprimer', '--print-template-pattern', u'$1/Imprimer'] Last Output: 2012-03-21T10:34:29 mwlib.options.warn >> Both --print-template-pattern and --print-template-prefix (deprecated) specified. Using --print-template-pattern only. 1% creating nuwiki in u'cache/0b/0b5c12b4a2346240/tmpHdA7HG/nuwiki' /home/pp/local/lib/python2.6/site-packages/mwlib/metabook.py:225
[13:05:04] : DeprecationWarning: deprecated call get('mw_license_url') if l.get('mw_license_url'): /home/pp/local/lib/python2.6/site-packages/mwlib/metabook.py:240: DeprecationWarning: deprecated call get('mw_rights_text') if l.get('mw_rights_text'): /home/pp/local/lib/python2.6/site-packages/mwlib/metabook.py:241: DeprecationWarning: deprecated __getitem__ ['mw_rights_text'] wikitext = l['mw_rights_text'] /home/pp/local/lib/python2
[13:05:11] .6/site-packages/mwlib/metabook.py:242: DeprecationWarning: deprecated call get('mw_rights_page') if l.get('mw_rights_page'): /home/pp/local/lib/python2.6/site-packages/mwlib/metabook.py:244: DeprecationWarning: deprecated call get('mw_rights_url') if l.get('mw_rights_url'): /home/pp/local/lib/python2.6/site-packages/mwlib/metabook.py:245: DeprecationWarning: deprecated __getitem__ ['mw_rights_url'] wikitext += '\n\n' + l
[13:05:17] ['mw_rights_url'] /home/pp/local/lib/python2.6/site-packages/mwlib/metabook.py:250: DeprecationWarning: deprecated call get('name') retval.append(license(title=l.get('name', u'License'), in function system, file /home/pp/local/bin/nslave.py, line 63 in function qaddw, file /home/pp/local/lib/python2.6/site-packages/qs/slave.py, line 66
[13:07:03] PROBLEM - Puppet freshness on db59 is CRITICAL: Puppet has not run in the last 10 hours
[13:12:18] PROBLEM - Puppet freshness on aluminium is CRITICAL: Puppet has not run in the last 10 hours
[13:46:21] PROBLEM - Puppet freshness on amslvs4 is CRITICAL: Puppet has not run in the last 10 hours
[14:04:30] PROBLEM - Disk space on srv221 is CRITICAL: DISK CRITICAL - free space: / 205 MB (2% inode=61%): /var/lib/ureadahead/debugfs 205 MB (2% inode=61%):
[14:08:42] PROBLEM - Disk space on srv221 is CRITICAL: DISK CRITICAL - free space: / 105 MB (1% inode=61%): /var/lib/ureadahead/debugfs 105 MB (1% inode=61%):
[14:26:25] PROBLEM - MySQL Replication Heartbeat on db24 is CRITICAL: CRIT replication delay 181 seconds
[14:26:34] RECOVERY - Disk space on srv221 is OK: DISK OK
[14:26:43] PROBLEM - MySQL Slave Delay on db24 is CRITICAL: CRIT replication delay 185 seconds
[15:35:58] Is db24 broken?
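A sketch for reproducing the render failure reported at 13:04, replaying the mw-zip argument list quoted in the traceback. It assumes a local mwlib install providing the mw-zip command; the cache paths are specific to the PediaPress render host, and the --status and deprecated --print-template-prefix options from the original invocation are dropped here:

    # Sketch: re-run the mw-zip invocation quoted in the 13:05 traceback.
    # Substitute local output paths when reproducing outside the render host.
    import subprocess

    cmd = [
        "mw-zip",
        "-o", "cache/0b/0b5c12b4a2346240/collection.zip",
        "-m", "cache/0b/0b5c12b4a2346240/metabook.json",
        "--config", "https://fr.wikipedia.org/w",
        "--template-blacklist", "MediaWiki:PDF Template Blacklist",
        "--template-exclusion-category", "Exclure lors de l'impression",
        "--print-template-pattern", "$1/Imprimer",
    ]
    result = subprocess.run(cmd)
    print("mw-zip exited with", result.returncode)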
[15:38:04] PROBLEM - Disk space on srv219 is CRITICAL: DISK CRITICAL - free space: / 171 MB (2% inode=61%): /var/lib/ureadahead/debugfs 171 MB (2% inode=61%):
[15:41:49] RECOVERY - Disk space on search1016 is OK: DISK OK
[15:42:07] PROBLEM - Disk space on srv224 is CRITICAL: DISK CRITICAL - free space: / 0 MB (0% inode=61%): /var/lib/ureadahead/debugfs 0 MB (0% inode=61%):
[15:42:16] PROBLEM - Disk space on srv221 is CRITICAL: DISK CRITICAL - free space: / 0 MB (0% inode=61%): /var/lib/ureadahead/debugfs 0 MB (0% inode=61%):
[15:44:13] PROBLEM - Disk space on srv223 is CRITICAL: DISK CRITICAL - free space: / 215 MB (3% inode=61%): /var/lib/ureadahead/debugfs 215 MB (3% inode=61%):
[15:44:31] PROBLEM - Disk space on srv222 is CRITICAL: DISK CRITICAL - free space: / 0 MB (0% inode=61%): /var/lib/ureadahead/debugfs 0 MB (0% inode=61%):
[15:48:34] PROBLEM - Disk space on srv221 is CRITICAL: DISK CRITICAL - free space: / 199 MB (2% inode=61%): /var/lib/ureadahead/debugfs 199 MB (2% inode=61%):
[15:54:52] PROBLEM - Disk space on srv221 is CRITICAL: DISK CRITICAL - free space: / 199 MB (2% inode=61%): /var/lib/ureadahead/debugfs 199 MB (2% inode=61%):
[15:56:49] RECOVERY - Disk space on srv223 is OK: DISK OK
[15:56:58] RECOVERY - Disk space on srv221 is OK: DISK OK
[15:56:58] RECOVERY - Disk space on srv219 is OK: DISK OK
[15:59:13] PROBLEM - Disk space on srv222 is CRITICAL: DISK CRITICAL - free space: / 199 MB (2% inode=61%): /var/lib/ureadahead/debugfs 199 MB (2% inode=61%):
[16:01:01] RECOVERY - Disk space on srv224 is OK: DISK OK
[16:07:37] PROBLEM - Disk space on srv222 is CRITICAL: DISK CRITICAL - free space: / 119 MB (1% inode=61%): /var/lib/ureadahead/debugfs 119 MB (1% inode=61%):
[16:16:10] RECOVERY - Disk space on srv222 is OK: DISK OK
[16:18:07] PROBLEM - Disk space on srv219 is CRITICAL: DISK CRITICAL - free space: / 271 MB (3% inode=61%): /var/lib/ureadahead/debugfs 271 MB (3% inode=61%):
[16:22:28] PROBLEM - Disk space on srv222 is CRITICAL: DISK CRITICAL - free space: / 99 MB (1% inode=61%): /var/lib/ureadahead/debugfs 99 MB (1% inode=61%):
[16:26:49] PROBLEM - Disk space on srv219 is CRITICAL: DISK CRITICAL - free space: / 271 MB (3% inode=61%): /var/lib/ureadahead/debugfs 271 MB (3% inode=61%):
[16:37:19] RECOVERY - Disk space on srv222 is OK: DISK OK
[16:45:25] RECOVERY - Lucene on search9 is OK: TCP OK - 0.002 second response time on port 8123
[16:45:43] RECOVERY - Disk space on srv219 is OK: DISK OK
[16:57:22] PROBLEM - Lucene on search9 is CRITICAL: Connection timed out
[17:17:46] RECOVERY - Lucene on search3 is OK: TCP OK - 0.002 second response time on port 8123
[17:18:04] RECOVERY - Lucene on search9 is OK: TCP OK - 0.001 second response time on port 8123
[17:22:49] apergos: how many images are on Wikimedia's file servers? (ballpark)
[17:28:16] RECOVERY - Disk space on search1015 is OK: DISK OK
[17:33:30] 11 million? (originals)
[17:33:36] that number could be waaaay off
[17:33:52] it's in the double digits of millions though
[17:33:53] You could look at [[Special:Version]] on Commons
[17:34:02] That's not all of our files but definitely >90%
[17:47:22] oh cool... we have ~11 million also
[17:47:42] http://commons.wikimedia.org/wiki/Special:Statistics <-- ~12.4MM
[17:47:43] cool.
[17:47:53] RoanKattouw, I would think there's a hell of a lot more.
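The per-wiki file counts being compared at 17:47 can also be pulled programmatically: Special:Statistics is backed by the siteinfo API, whose statistics block includes an images counter for uploaded files. A minimal sketch, assuming the requests library; the parameters are the standard siteinfo ones:

    # Sketch: read the file count that Special:Statistics displays,
    # via action=query&meta=siteinfo&siprop=statistics.
    import requests

    resp = requests.get(
        "https://commons.wikimedia.org/w/api.php",
        params={
            "action": "query",
            "meta": "siteinfo",
            "siprop": "statistics",
            "format": "json",
        },
    )
    stats = resp.json()["query"]["statistics"]
    print("files on this wiki:", stats["images"])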
[17:47:54] !log catrope synchronized php-1.19/resources/startup.js 'touch'
[17:47:58] Logged the message, Master
[17:47:59] Commons can't be 90%
[17:48:14] are all of the images (across Commons and other sites) all on the same file server?
[17:48:25] Yes
[17:48:31] Hmm, yeah maybe not 90
[17:48:39] awesomesauce
[17:48:47] But I'm guessing that if Commons has 12.5M files, the cluster as a whole probably doesn't have more than 15M
[17:48:53] do you do any image optimization on the images?
[17:49:01] And that also excludes deleted files, BTW, those are also archived forever (on the same server)
[17:49:10] Not that I'm aware of
[17:49:11] hmm
[17:49:36] I'm rolling out some Image Optimization across our image servers today (if all goes well)
[17:49:46] if it goes well, you guys should prolly take my code ;)
[17:49:51] :)
[17:50:06] exactly, the deleted files and old versions would also be resident on the servers.
[17:50:19] Ah yes, old versions too
[17:50:41] 13% savings ends up being a lot less storage/bandwidth (and faster downloads esp. on bad connections)
[17:50:48] okey dokey... will let you know how it goes ^_^
[17:50:56] heh I once wrote a proposal for this.
[17:51:00] Like 3 years ago.
[17:51:03] So do you want to know about the # of images, or the aggregate size?
[17:51:11] Cause df -h will tell me the aggregate size no problem
[17:51:21] that'd be fun too :D
[17:51:26] Image optimization running on the server side would mean huge bandwidth savings.
[17:51:50] Alright, the image storage filesystem (includes archive of old versions and deleted files) is 20 TB
[17:51:53] Theo10011: yeah... been long overdue at both of our places
[17:52:02] 20TB? Snap :)
[17:52:07] Sweet.
[17:52:17] The thumbnail storage FS (thumbs that can be regenerated from the originals) is on a separate server and uses 8.5T
[17:53:55] I did a little du recently
[17:54:04] it turns out that commons really is that huge of a lion's share
[17:54:16] 80 to 90 %
[17:54:24] I forget exactly now
[17:55:01] Do we even know the # of files we have, BTW?
[17:55:18] ls -1 | wc -l would probably take a while, even on a 1/256 shard
[17:58:07] PROBLEM - MySQL disk space on db59 is CRITICAL: DISK CRITICAL - free space: /a 16116 MB (2% inode=99%):
[17:59:00] !log updated and synchronized payments cluster to r114382
[17:59:03] Logged the message, Master
[17:59:46] PROBLEM - Disk space on db59 is CRITICAL: DISK CRITICAL - free space: /a 16129 MB (2% inode=99%):
[18:00:17] I've done it, you use the "don't sort this" option and it's pretty quick
[18:00:25] I do not remember any of those numbers now
[18:00:33] ask me in a month, I will have a lot more current info
[18:00:36] maybe in two weeks
[18:01:02] !log catrope synchronized wmf-config/InitialiseSettings.php 'Temporarily disable ShortUrl on testwiki because we think it might conflict with ArticleFeedbackv5'
[18:01:06] Logged the message, Master
[18:02:09] which of your ops guys are in SF, these days?
[18:02:36] most I guess
[18:03:01] Let's see who's not
[18:03:02] !log awjrichards synchronized php/extensions/MobileFrontend/MobileFrontend.body.php
[18:03:06] Logged the message, Master
[18:03:25] aper gos, muta nte, Jeff_Green
[18:03:38] ma rk
[18:04:03] ah, i thought apergos was in Greece or something
[18:04:09] I am
[18:04:13] this is the list of not sf
[18:04:24] oh, missed that line! :[
[18:04:26] oopz
[18:04:39] not peter
[18:04:51] he's not in sf is he?
[18:04:56] That's right
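The "don't sort this" trick mentioned at 18:00 is ls -f (or ls -U), which skips both the sort and the per-file stat that make a plain ls -1 | wc -l slow on huge upload directories. A rough equivalent in Python, counting entries in one shard directory without sorting or stat-ing them; the shard path below is purely illustrative, not an actual Wikimedia path:

    # Sketch: count directory entries without sorting, in the spirit of
    # "ls -f | wc -l". os.scandir streams entries lazily and does not stat
    # each file, so it stays fast on very large directories.
    import os

    def count_entries(path):
        n = 0
        with os.scandir(path) as it:
            for _ in it:
                n += 1
        return n

    print(count_entries("/srv/uploads/originals/0/00"))  # illustrative path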
[18:31:00] Is high replag for db24 planned? It has reached almost 46 minutes. It is killing my bot...
[18:31:46] binasher: ---^^
[18:32:39] binasher: If you are indeed doing the massive ALTER on db24, could you comment it out in db.php like you did with the other server yesterday? It's affecting bots again
[18:34:22] uh, db24 is in s2
[18:34:51] Then why is it lagged to hell and back? :O
[18:35:12]
[18:35:26] when in doubt, blame a migration that isn't running
[18:35:27] [1866276.067772] EDAC MC0: CE page 0x1353f8, offset 0x210, grain 0, syndrome 0x28d8, row 7, channel 1, label "": amd64_edac
[18:35:58] !log asher synchronized wmf-config/db.php 'pulling db24, failing hw'
[18:36:01] Logged the message, Master
[18:37:05] Thanks
[18:37:34] damn, that's the snapshot db for that cluster
[18:42:21] bummer, was hoping to find the bandwidth costs in your annual report
[18:42:34] it just says "internet hosting: $1.8MM" but doesn't get more specific
[18:42:50] that vm to host wikitech is very expensive then
[18:43:18] Sean_Colombo: Talk to woosters privately, he probably knows
[18:43:26] RECOVERY - MySQL Replication Heartbeat on db24 is OK: OK replication delay seconds
[18:43:35] RECOVERY - MySQL Slave Delay on db24 is OK: OK replication delay seconds
[18:44:47] PROBLEM - Host ms-be3 is DOWN: PING CRITICAL - Packet loss = 100%
[18:46:07] !log asher synchronized wmf-config/db.php 'returning db36'
[18:46:10] Logged the message, Master
[18:48:05] PROBLEM - Host db24 is DOWN: PING CRITICAL - Packet loss = 100%
[18:48:59] RECOVERY - Host db24 is UP: PING OK - Packet loss = 0%, RTA = 1.05 ms
[18:51:12] !log catrope synchronizing Wikimedia installation... : Deploying ArticleFeedbackv5 update
[18:51:16] Logged the message, Master
[18:55:52] RoanKattouw: thanks
[18:56:02] PROBLEM - MySQL Replication Heartbeat on db24 is CRITICAL: CRIT replication delay 2957 seconds
[18:56:11] PROBLEM - MySQL Slave Delay on db24 is CRITICAL: CRIT replication delay 2938 seconds
[18:57:05] RECOVERY - Host ms-be3 is UP: PING OK - Packet loss = 0%, RTA = 0.39 ms
[19:04:13] sync done.
[19:18:14] nighty~
[19:31:00] RECOVERY - MySQL Replication Heartbeat on db24 is OK: OK replication delay 25 seconds
[19:31:18] RECOVERY - MySQL Slave Delay on db24 is OK: OK replication delay 0 seconds
[19:35:21] PROBLEM - Disk space on srv221 is CRITICAL: DISK CRITICAL - free space: / 206 MB (2% inode=61%): /var/lib/ureadahead/debugfs 206 MB (2% inode=61%):
[19:35:21] PROBLEM - Disk space on srv222 is CRITICAL: DISK CRITICAL - free space: / 221 MB (3% inode=61%): /var/lib/ureadahead/debugfs 221 MB (3% inode=61%):
[19:43:36] RECOVERY - Disk space on db59 is OK: DISK OK
[19:43:54] RECOVERY - MySQL disk space on db59 is OK: DISK OK
[19:45:51] RECOVERY - Disk space on srv221 is OK: DISK OK
[19:51:42] PROBLEM - Puppet freshness on owa3 is CRITICAL: Puppet has not run in the last 10 hours
[19:52:09] RECOVERY - Disk space on srv222 is OK: DISK OK
[19:53:39] PROBLEM - Puppet freshness on amslvs2 is CRITICAL: Puppet has not run in the last 10 hours
[19:57:51] hrmm, just missed RoanKattouw_away
[19:58:06] i was thinking you could do df -i to get a rough file count
[20:00:09] oh, nvm, it seems i have a lot of catching up to do ;P
[20:01:45] PROBLEM - Puppet freshness on owa1 is CRITICAL: Puppet has not run in the last 10 hours
[20:01:45] PROBLEM - Puppet freshness on owa2 is CRITICAL: Puppet has not run in the last 10 hours
[20:02:51] Sean_Colombo: re bandwidth costs: i think it's mostly unmetered links and also some free peering to some networks. so, essentially no incremental bandwidth cost. (but maybe cost some other way? electricity?). anyway, I'm not necessarily correct about any of that
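The df -i idea from 19:58 estimates the file count from the filesystem's inode usage, which is where the IUsed figure quoted a few lines below comes from. A minimal sketch of the same lookup from Python via os.statvfs; the mount point is an assumption, not the actual server path:

    # Sketch: approximate the number of files on a volume from inode usage,
    # the same figure "df -i" reports as IUsed.
    import os

    st = os.statvfs("/export/upload")  # illustrative mount point
    inodes_used = st.f_files - st.f_ffree
    print("inodes in use (~ files + directories):", inodes_used)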
[20:09:27] jeremyb: Back
[20:10:54] !log catrope synchronized wmf-config/CommonSettings.php 'Set $wgArticleFeedbackv5OversightEmails on enwiki'
[20:10:58] Logged the message, Master
[20:12:34] jeremyb: IUsed = 47717739
[20:12:41] That's 47.7M
[20:13:56] RoanKattouw: and that's the originals box?
[20:14:14] aye
[20:14:16] Thumbs is like 150M
[20:14:23] Sean_Colombo: ^
[20:14:43] Which is expected cause there's a directory per file that contains the thumbs for that file
[20:14:52] and then potentially multiple thumbs of different sizes
[20:15:08] sure, i was thinking mostly about the multiple sizes
[20:19:36] RECOVERY - Puppet freshness on aluminium is OK: puppet ran at Wed Mar 21 20:19:23 UTC 2012
[20:21:51] !log reedy synchronized wmf-config/InitialiseSettings.php 'Disable prefswitch'
[20:21:55] Logged the message, Master
[20:27:57] Can someone K-line derp from irc://irc.wikimedia.org (see #meta.wikimedia)
[20:27:59] ?
[20:28:55] All the joins/quits
[20:29:35] meh, he can only PM there and you can set umode _g
[20:34:22] hi all, can you explain this page to me? http://sv.wikisource.org/wiki/MediaWiki:Proofreadpage_index_namespace
[20:34:25] it has "show wikitext", but no "show history"
[20:35:21] or rather, it has "view source", but no "history" tab, here's the English user interface, sv.wikisource.org/wiki/MediaWiki:Proofreadpage_index_namespace?uselang=en
[20:38:03] LA2: there is no such page there, the content is loaded from extension message file
[20:38:41] exactly how is it "loaded from" that message file? does this mechanism have a name?
[20:38:45] Hey, I've got a guy onwiki complaining of severe slowness, do we know of any reasons that might be at the moment?
[20:39:07] LA2: like all other interface messages
[20:39:21] you can overwrite them by creating a page
[20:39:34] so "interface messages" is the term I should use?
[20:41:27] LA2: https://www.mediawiki.org/wiki/Manual:System_message
[20:42:18] thanks
[20:42:24] the message you asked about comes from https://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions/ProofreadPage/ProofreadPage.i18n.php?revision=114398&view=markup
[20:44:32] Beau_: how did you trace its origin?
[20:47:35] <^demon> !log /trunk/phase3 is now r/o in SVN
[20:47:38] Logged the message, Master
[20:47:40] LA2: messages are usually in files named i18n.php, so I had to find ProofreadPage extension in https://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions and then the i18n file
[20:49:40] so a grep in the source, rather than some online or API link?
[20:51:40] or, just go to the source yourself
[20:51:49] LA2: yes, I don't know if there is an on-line tool for checking where a particular message comes from, grep works fine for me
[20:52:08] the interface message name starts with the extension name (at least in this case) so it was pretty obvious
[20:52:54] Can't see much in the stats to explain a performance problem, although the sharp jump in http://bit.ly/GE8J3h is slightly odd.
[21:01:46] ok, thanks all!
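On the question at 20:49 about an online way to inspect an interface message: besides grepping the i18n files as described above, the allmessages API returns the text of a message as the wiki currently serves it, and it can filter on whether a message has been locally customised via a MediaWiki: page (amcustomised=modified). A minimal sketch, assuming the requests library; the parameters are the standard allmessages ones:

    # Sketch: look up an interface message through the allmessages API,
    # which reflects any local MediaWiki:-namespace override.
    import requests

    resp = requests.get(
        "https://sv.wikisource.org/w/api.php",
        params={
            "action": "query",
            "meta": "allmessages",
            "ammessages": "proofreadpage_index_namespace",
            "format": "json",
        },
    )
    for msg in resp.json()["query"]["allmessages"]:
        print(msg["name"], "=>", msg.get("*", "(missing)"))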
[21:04:22] PROBLEM - Host virt3 is DOWN: PING CRITICAL - Packet loss = 100%
[21:11:34] RECOVERY - Host virt3 is UP: PING OK - Packet loss = 0%, RTA = 0.59 ms
[21:52:29] !log awjrichards synchronized php/extensions/MobileFrontend/MobileFrontend.body.php
[21:52:33] Logged the message, Master
[22:27:06] <^demon|away> !log wmf-deployed extensions now r/o in SVN
[22:27:10] Logged the message, Master
[22:40:31] PROBLEM - Puppet freshness on linne is CRITICAL: Puppet has not run in the last 10 hours
[22:40:31] PROBLEM - Puppet freshness on ms2 is CRITICAL: Puppet has not run in the last 10 hours
[23:08:43] PROBLEM - Puppet freshness on db59 is CRITICAL: Puppet has not run in the last 10 hours
[23:34:31] PROBLEM - Puppet freshness on ms1002 is CRITICAL: Puppet has not run in the last 10 hours
[23:37:52] gn8 folks
[23:47:34] PROBLEM - Puppet freshness on amslvs4 is CRITICAL: Puppet has not run in the last 10 hours