[00:02:58] Reedy, why everywhere? how many wikis changed the count method?
[00:03:09] I don't know of any procedure
[00:05:51] Said bug asks for "Run updateArticleCount.php on all Wikisources and Wiktionaries"
[00:07:02] Reedy, yes, so what's the problem?
[00:07:19] You said why everywhere
[00:07:26] that's not everywhere
[00:07:26] that's quite a number of wikis
[00:07:29] the count method was changed because of incorrect results on those wikis
[00:07:44] but it's still a limited number, and for a specific (one-time) reason
[00:07:47] There's other bugs still open asking for the same thing to be done
[00:08:04] after a method change?
[00:08:54] https://bugzilla.wikimedia.org/buglist.cgi?title=Special%3ASearch&quicksearch=updatearticlecount&list_id=78653
[00:09:52] if you mean ksh, that wasn't a method change but a massive deletion
[00:10:59] and that bug doesn't even ask for the script to be run
[00:11:25] nor does https://bugzilla.wikimedia.org/show_bug.cgi?id=27256 , at least not very clearly
[00:39:22] !log neilk synchronized wmf-config/CommonSettings.php
[00:39:24] Logged the message, Master
[00:41:53] Sorry I forgot to add a commit message to the config change ...
[00:42:15] that was just a fix so we can do faster banner editing on testwiki, should have zero effect elsewhere.
[00:42:21] add it via !log
[00:44:23] !log neilk just added config change to set caching for banners on testwiki to 0. Should have no effect anywhere else.
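A run of updateArticleCount.php across many wikis, as the bug requests, would presumably just loop the maintenance script once per wiki database. This is a hypothetical sketch only: the script path, the `--update` flag, and the wiki names are assumptions based on common MediaWiki maintenance conventions, not the actual procedure used in production.

```python
# Hypothetical: build one maintenance command per wiki, ready to hand to
# subprocess.run(). Nothing here is the real deployment tooling.

def build_commands(wikis, script="maintenance/updateArticleCount.php"):
    """Return a php command line per wiki; --update stores the new count."""
    return [["php", script, "--wiki", wiki, "--update"] for wiki in wikis]

if __name__ == "__main__":
    # e.g. a couple of the Wikisources/Wiktionaries the bug mentions
    for cmd in build_commands(["enwiktionary", "frwikisource"]):
        print(" ".join(cmd))
```

In practice Wikimedia has wrapper tooling for "run X on this list of wikis"; the point of the sketch is only that the bug describes a one-time batch run, not an ongoing procedure.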
[00:44:24] Logged the message, Master
[01:31:38] !log powercycling hooper
[01:31:41] Logged the message, Master
[01:51:50] !log installing php-apc on hooper
[01:51:52] Logged the message, Master
[01:51:54] !log installing tidy on hooper
[01:51:56] Logged the message, Master
[01:52:39] !log installed w3 total cache in wordpress on hooper
[01:52:41] Logged the message, Master
[01:54:37] PROBLEM - Puppet freshness on spence is CRITICAL: Puppet has not run in the last 10 hours
[02:06:00] !log LocalisationUpdate completed (1.18) at Tue Jan 17 02:06:00 UTC 2012
[02:06:02] Logged the message, Master
[02:23:51] PROBLEM - MySQL replication status on storage3 is CRITICAL: CHECK MySQL REPLICATION - lag - CRITICAL - Seconds_Behind_Master : 1724s
[02:29:33] !log temporarily disabled puppet, since the apache configuration was manually modified
[02:29:35] Logged the message, Master
[02:29:48] !log that last message was in regards to hooper
[02:29:50] Logged the message, Master
[02:34:30] RECOVERY - MySQL replication status on storage3 is OK: CHECK MySQL REPLICATION - lag - OK - Seconds_Behind_Master : 0s
[03:10:58] how do I download wikipedia in an html-viewable format?
[03:11:11] this page has some links for that, but they are broken: https://en.wikipedia.org/wiki/Wikipedia:Database_download
[03:12:21] ah well I believe the answer is you can't, you need to download the database and import it into mediawiki
[03:12:51] okay hmm
[04:18:09] RECOVERY - Disk space on es1004 is OK: DISK OK
[04:18:41] !log installing varnish on hooper
[04:18:43] Logged the message, Master
[04:18:48] RECOVERY - MySQL disk space on es1004 is OK: DISK OK
[04:35:26] RECOVERY - Puppet freshness on spence is OK: puppet ran at Tue Jan 17 04:34:59 UTC 2012
[04:37:45] Is it an oversight that more than 50 pages of wikitext can be requested with a single API call?
[04:38:12] eh probably isn't a problem
[04:38:21] now that means I can do 5000, right?
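The limit being discussed is the action API's per-query title cap: 50 titles for normal clients, raised to 500 for accounts with the `apihighlimits` right. A client that wants more pages simply batches its title list, joining each batch with the API's `|` multi-value separator. A minimal illustrative sketch (not any particular bot's code):

```python
# Split a title list into API-sized batches: 50 titles per query for normal
# users, 500 with the "apihighlimits" right. Each batch is joined with "|",
# the action API's multi-value separator, ready for a titles= parameter.

def batch_titles(titles, highlimits=False):
    size = 500 if highlimits else 50
    return ["|".join(titles[i:i + size]) for i in range(0, len(titles), size)]
```

So 5000 pages is three requests for a normal user per 120 titles, and so on; it is never one request, it is just fewer round trips with the higher limit.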
[04:38:31] that might be a bit much
[04:38:43] only 500
[04:38:50] oh that's not as cool
[04:39:31] Might be different if you have the higher limits flag
[04:43:16] PROBLEM - MySQL slave status on es1004 is CRITICAL: CRITICAL: Slave running: expected Yes, got No
[04:58:38] New patchset: Ryan Lane; "Adding blog to marmontel and allowing .htaccess in blogs" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1927
[04:59:18] New review: Ryan Lane; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1927
[04:59:18] Change merged: Ryan Lane; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1927
[08:18:57] any Wikimedia sysadmins around? The monthly donation page doesn't seem to be working… Error: ERR_ACCESS_DENIED
[08:24:05] RECOVERY - Puppet freshness on brewster is OK: puppet ran at Tue Jan 17 08:23:40 UTC 2012
[09:12:58] New review: Dzahn; "about cron jobs running every minute. see:" [operations/puppet] (production); V: 0 C: 0; - https://gerrit.wikimedia.org/r/1926
[09:25:03] New patchset: Hashar; "testswarm: explicitly set cron schedule" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1928
[09:25:18] New patchset: Hashar; "testswarm: job to wipe clients idling" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1926
[09:25:41] New review: Hashar; "Change https://gerrit.wikimedia.org/r/1928 explicitly define the agenda :)" [operations/puppet] (production) C: 0; - https://gerrit.wikimedia.org/r/1926
[09:26:02] mutante: hi :) change 1928 adds the "minutes" to the testswarm cronjobs
[09:26:10] made a second change by mistake :/
[09:44:05] PROBLEM - Disk space on es1004 is CRITICAL: DISK CRITICAL - free space: /a 451161 MB (3% inode=99%):
[09:45:55] PROBLEM - MySQL disk space on es1004 is CRITICAL: DISK CRITICAL - free space: /a 445685 MB (3% inode=99%):
[10:06:09] hashar: hi
[10:06:23] mutante: hello :)
[10:06:50] New review: Dzahn; "yep, says "Done" after a few seconds
when opening that URL" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/1926
[10:06:50] Change merged: Dzahn; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1926
[10:07:45] New review: Dzahn; "yea, as the commit message says" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/1928
[10:07:46] Change merged: Dzahn; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1928
[10:08:28] hashar: done
[10:10:31] hashar: it's not like ?state=wipe is then just starting another external script, right
[10:10:53] \o/
[10:11:04] ?state=wipe just calls some PHP magic
[10:11:31] but inside the application, so we need to go through curl and the webserver
[10:11:42] k
[10:11:53] yes. There is no console script for that
[10:12:10] * mutante nods
[10:18:13] thanks for the merges mutante : )
[10:19:04] Why doesn't the recent changes api seem to go back further than a few months?
[10:19:16] I mean, the old revisions are still all there in the database, and public.
[10:19:32] 90 days I guess, we flush older changes
[10:19:41] see $wgRCMaxAge
[10:19:42] hashar: yw
[10:20:09] rc is meant to keep only the last so many
[10:20:58] I see
[10:21:09] and we keep them for 30 days ( http://noc.wikimedia.org/conf/highlight.php?file=CommonSettings.php = $wgRCMaxAge = 30 * 86400; )
[10:21:40] ah whoops, I wonder why I always think it's 90
[10:21:49] maybe that's the default setting or something
[10:21:56] the default is 13 weeks :-b
[10:22:07] close enough
[10:22:09] :-P
[10:25:54] !log neilk synchronized wmf-config/CommonSettings.php 'added CongressLookup require'
[10:25:56] Logged the message, Master
[10:26:30] Does the recent changes api find page deletions and page moves like the recent changes page does?
[10:26:49] I guess I should try...
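The two points above — the 30-day window set by `$wgRCMaxAge = 30 * 86400` and whether `list=recentchanges` covers deletions and moves — combine naturally in one query. Deletions and moves surface as log entries, so asking for `rctype=edit|new|log` includes them. A sketch of building such a request (parameter names follow the action API; the timestamp is the API's 14-digit format, and the cutoff is only meaningful on a wiki actually configured with this 30-day value):

```python
import time

# Build query parameters for list=recentchanges that stay inside the 30-day
# retention window quoted above and include log entries (deletions, moves).
RC_MAX_AGE = 30 * 86400  # seconds, per the CommonSettings.php value in the log

def rc_params(now=None):
    now = time.time() if now is None else now
    # rcend is the *older* bound: the API lists from newest backwards.
    cutoff = time.strftime("%Y%m%d%H%M%S", time.gmtime(now - RC_MAX_AGE))
    return {
        "action": "query",
        "list": "recentchanges",
        "rctype": "edit|new|log",  # "log" is how deletions/moves appear
        "rcend": cutoff,
        "rclimit": "max",
        "format": "json",
    }
```

Anything older than the cutoff is simply gone from the recentchanges table; for older history, the revision and logging data are still reachable through `prop=revisions` and `list=logevents`.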
[10:28:55] !log neilk synchronized wmf-config/InitialiseSettings.php 'added CongressLookup to InitialiseSettings'
[10:28:57] Logged the message, Master
[10:30:39] !log neilk synchronizing Wikimedia installation... : deploying CongressLookup. We are not deploying to any live wiki, just test, but this is to make i18n work
[10:30:41] Logged the message, Master
[10:32:40] PROBLEM - ps1-d2-sdtpa-infeed-load-tower-A-phase-Z on ps1-d2-sdtpa is CRITICAL: ps1-d2-sdtpa-infeed-load-tower-A-phase-Z CRITICAL - *2463*
[10:34:30] RECOVERY - MySQL slave status on es1004 is OK: OK:
[10:34:55] yep, it definitely shows deletions
[10:34:59] sync done.
[10:36:02] then I presume it would also show page renames, though I haven't tested that
[11:04:38] !log neilk synchronized wmf-config/extension-list 'added CongressLookup to extension-list for i18n'
[11:04:40] Logged the message, Master
[11:05:19] !log neilk synchronized wmf-config/ExtensionMessages-1.18.php 'added CongressLookup to ExtensionMessages-1.18 for i18n'
[11:05:20] Logged the message, Master
[12:15:40] PROBLEM - Puppet freshness on db1045 is CRITICAL: Puppet has not run in the last 10 hours
[13:04:40] hi, if this is a well known thing then just point me at it, but will the api be affected by the blackout, and if so in what way?
[13:05:13] no edits
[13:05:35] yep that makes sense, but will content be returned as normal?
[13:05:39] I *think* so
[13:05:44] :-)
[13:05:48] not sure though
[13:06:09] I didn't look at the code for that so I could be full of b.s.
[13:06:17] np
[13:13:24] I guess no one else has a definitive answer either...
[13:13:34] ask in um
[13:13:42] #wikimedia-sopa
[13:13:50] ah, cheers
[13:56:42] hello, I've just created bug 33769, I don't know if I did everything right...
[13:57:10] the link is wrong :o
[13:57:50] https://bugzilla.wikimedia.org/show_bug.cgi?id=33769
[14:09:35] comp1089: normally it's only stewards
[14:11:29] saper, do you mean I wrote in the wrong place?
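On the earlier blackout question ("no edits", but reads probably fine): a bot that wants to cope gracefully can't know in advance which error code a blacked-out wiki will return, but the action API wraps failures in a standard `{"error": {"code": ..., "info": ...}}` envelope. A defensive sketch — the set of code names here is an illustrative guess, not the actual blackout implementation:

```python
# Detect "editing is off" from an action API response. The error envelope
# shape is the API's standard one; which specific codes a blackout produces
# is an assumption, so the set below is deliberately broad and illustrative.

BLOCKING_CODES = {"permissiondenied", "readonly", "protectedpage"}

def edit_blocked(api_response):
    """True if the response carries an error code that suggests edits are disabled."""
    err = api_response.get("error")
    return bool(err) and err.get("code") in BLOCKING_CODES
```

The robust version of this is simply: treat any `error` key on an edit attempt as "stop and retry later", and only special-case codes you have actually observed.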
[14:11:53] no, normally it's done only by stewards, on the few wikis I know
[14:16:40] saper: so, what should I do?
[14:17:48] if the community wants it, that's fine
[14:20:25] yay it looks like it's working tentatively
[14:20:43] what're you up to?
[14:20:48] 9 digit zip
[14:21:05] ah
[14:22:51] and found a bug in the deployed one :-D
[14:23:00] (on test that is)
[14:23:14] just the one? :p
[14:23:21] for a minute yeah
[14:23:28] ignoring things like mising data
[14:23:31] *missing
[14:23:56] I guess I need to look at trimZip a bit
[14:28:38] hmm
[14:28:53] no, maybe I need to fix it in getSenators, that's better I guess
[14:29:49] grr indecision
[14:47:38] <^demon> apergos: Do we really need a 9-digit zip? Most people in the US don't know the last 4 digits, and the first 5 should be plenty sufficient for finding your representatives.
[14:48:13] yeah but they aren't for a pile of districts
[14:48:30] <^demon> Oh :(
[14:48:33] yep
[14:48:47] I think at this point I'm just finding weird stuff with the data
[14:49:57] <^demon> I tested it out on test.wp a little while ago. It gave me my correct representative :)
[14:50:02] yay
[14:51:26] <^demon> Robert Scott (everyone calls him Bobby, even his campaign signs iirc)
[14:53:33] where do i see the +4 form? http://test.wikipedia.org/wiki/Main_Page?banner=blackout gives me 5-digit
[14:54:08] anyway, if possible we should offer to geocode an address instead of sending them to the usps
[14:54:59] <^demon> I wonder if the USPS site could handle the traffic we might send their way ;-)
[14:56:04] um geocoding at this point, not even
[14:56:26] the specialpage can take 9 digits, the current version just throws away the end
[14:57:45] huh. well i just got the wrong rep back for me
[15:02:43] well I need to get my test finished up here and then I guess commit a pile of crap and see what folks think
[15:02:55] there's this one zero padding bug I'm having trouble with
[15:05:47] apergos: for e.g. 02631?
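The zero-padding failure mode under discussion is the classic one: a ZIP like 02631 loses its leading zero the moment it passes through an integer, so normalization has to stay in string space and left-pad. This is an illustrative sketch only, not the actual CongressLookup `trimZip`/`getSenators` code:

```python
# Illustrative ZIP normalization: keep everything as strings and zfill,
# so "2631" and 2631 both come back as "02631". Handles ZIP+4 input too.

def normalize_zip(raw):
    """Return (zip5, plus4) from inputs like '2631', '02631' or '02631-1234'."""
    digits = "".join(ch for ch in str(raw) if ch.isdigit())
    if len(digits) > 5:                      # treat the trailing 4 as the +4 part
        return digits[:-4].zfill(5), digits[-4:]
    return digits.zfill(5), ""
```

The same rule applies on the lookup side: if the table keys are zero-padded strings, every incoming value must be padded the same way before the comparison, or districts in New England and Puerto Rico quietly fail.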
[15:05:53] ah it looks ok now
[15:06:02] I couldn't get any senators for PR :-D
[15:06:49] that one came out ok for me
[15:06:58] with and without padding. yay
[15:07:08] I think my code is prolly ok
[15:07:29] huh wonder if I oughta check it in :-P
[15:09:17] Might be worth it to get some CR ;)
[15:10:32] yeah guess so
[15:11:01] and now, a message from Google about HTTP status code 503: https://plus.google.com/115984868678744352358/posts/Gas8vjZ5fmB
[15:55:41] PROBLEM - Auth DNS on ns2.wikimedia.org is CRITICAL: CRITICAL - Plugin timed out while executing system call
[15:59:16] RECOVERY - Host db43 is UP: PING OK - Packet loss = 0%, RTA = 0.26 ms
[16:07:47] RECOVERY - Auth DNS on ns2.wikimedia.org is OK: DNS OK: 5.599 seconds response time. www.wikipedia.org returns 208.80.152.201
[16:10:26] PROBLEM - Squid on brewster is CRITICAL: Connection refused
[16:15:08] http://en.planet.wikimedia.org/ hasn't updated in about 4 days
[16:33:05] New patchset: Jgreen; "adjusted notification recipient for offhost_backups script" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1930
[16:33:40] New review: Jgreen; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/1930
[16:33:40] Change merged: Jgreen; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1930
[16:34:09] RECOVERY - Squid on brewster is OK: TCP OK - 0.000 second response time on port 8080
[16:48:16] PROBLEM - HTTP on ekrem is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:01:46] RECOVERY - HTTP on ekrem is OK: HTTP OK HTTP/1.1 200 OK - 453 bytes in 6.174 seconds
[17:05:53] Hello! Hope all of you are doing great! :)
[17:07:46] I have a problem that I've been trying to solve. I would like to integrate the menu tab (together with the header) as a mediawiki logo. Can someone please advise how to do this?
[17:09:56] IvPetr, looks like a question for #mediawiki
[17:11:13] ok, thanks
[17:24:51] PROBLEM - Squid on brewster is CRITICAL: Connection refused
[17:58:29] Platonides, nice stats
[17:58:50] should they be put on https://meta.wikimedia.org/wiki/Mirroring_Wikimedia_project_XML_dumps despite the title perhaps?
[18:57:48] i'm firoz, interested in volunteering....
[19:00:14] hi
[19:02:18] kabir: well mediawiki could always use more developers
[19:03:01] zzz
[19:04:05] @prodego : could u please help me with that
[19:04:28] help you with what?
[19:05:51] could u help me find out about any opportunity in development...
[19:07:45] @prodego : could u help me find out about any opportunity in development...
[19:08:08] http://www.mediawiki.org/wiki/MediaWiki
[19:20:38] kabir: User:Sumanah is the volunteer coordinator
[19:22:20] @mutante : thanks...can u please tell me how to contact him...
[19:23:28] kabir: it's a she. you can find her on #mediawiki currently
[19:23:56] kabir: but exactly what you're asking for: "helping volunteers get started with development activities (coding, testing, documentation, wrangling) and match volunteers to opportunities"
[19:24:52] kabir: http://www.mediawiki.org/wiki/User:Sumanah/TechVolunteersCanDo
[19:26:05] mutante : thanks a lot for that help....
[19:26:19] kabir: your welcome
[19:26:41] you're
[19:27:23] mutante : sorry didn't get u..
[19:28:12] kabir: just "no problem" / "de rien"
[19:28:56] It appears en.planet.wikimedia.org hasn't been updated since the 13th
[19:29:09] topic^
[19:29:22] !log reedy synchronized php-1.18/extensions/ArticleFeedbackv5/modules/jquery.articleFeedbackv5/jquery.articleFeedbackv5.js 'r109186'
[19:29:24] Logged the message, Master
[19:30:06] ? that's not in the topic (or was that directed to someone other than me)
[19:31:25] "blog slowdowns"
[19:31:37] Brownout: i think that's irrelevant
[19:31:46] ok...
[19:31:57] bawolff: besides the update problem, i'll test replacing the outdated planet version with "planet-venus"
[19:32:24] :)
[19:32:29] http://intertwingly.net/code/venus/
[19:33:04] i was going to test that in labs.. hmm, not sure about the update issue right now
[19:33:13] Right now probably isn't the best time for planet not to be working, given all the current SOPA related stuff that we want to reach far and wide
[19:34:52] that's right. i'll check
[19:36:49] !log added Cite extension to labsconsole
[19:36:51] Logged the message, Master
[19:37:17] Does anyone know how the api disabling at enwp is going to be implemented? Is it going to give a clean error?
[19:37:27] (something my bot can parse)
[19:43:46] mark: you there? nosy is getting nervous, she needs access to the colo...
[19:44:04] or is mark en route to sf?
[19:44:32] have not heard from mark today
[19:45:09] mutante: do you know where mark has gotten to?
[19:45:23] LeslieCarr: thanks... so he may be traveling
[19:45:38] Daniel_WMDE: no, i talked to nosy earlier and yea, he may be travelling
[19:45:44] Daniel_WMDE: I can give it to myself, not others :S
[19:49:23] The whole SOPA thing is just a cover to do toolserver maintenance! :P
[19:50:08] hahaha
[19:50:41] hehe...
[19:50:50] good opportunity for replication to catch up :)
[19:51:24] hmhm
[19:51:44] multichill, mutante: thanks guys
[19:55:22] PROBLEM - Mobile WAP site on ekrem is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:56:50] Daniel_WMDE: Do you know when they're leaving? We're having a big event on Saturday in Haarlem
[19:57:26] multichill: who? leaving? where? huh?
[19:57:48] New patchset: Pyoungmeister; "cleanup. removing a nrpe.cfg that's no longer used." [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1931
[19:58:04] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/1931
[19:58:33] dab and Nosy are going to the datacenter right?
The datacenter is in Haarlem.
[19:59:19] !log reedy synchronized php-1.18/includes/Feed.php 'r109197'
[19:59:20] Logged the message, Master
[19:59:38] multichill: they are not going unless they can get mark to authorize access
[19:59:48] which is the problem
[20:00:08] Did you try calling him?
[20:00:29] i don't think i have his number
[20:00:48] could you try?
[20:00:49] Daniel_WMDE, if you ask nicely, people can get you it ;)
[20:01:04] ...or of course, give me his number :)
[20:01:56] Mark is listed as being in SF from Saturday
[20:02:03] Daniel_WMDE, see PM
[20:02:19] New review: Pyoungmeister; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1931
[20:02:20] Change merged: Pyoungmeister; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1931
[20:04:29] PROBLEM - Puppet freshness on gallium is CRITICAL: Puppet has not run in the last 10 hours
[20:07:22] RECOVERY - Mobile WAP site on ekrem is OK: HTTP OK HTTP/1.1 200 OK - 1642 bytes in 8.835 seconds
[20:12:48] multichill: nosy says she's leaving saturday morning. too bad.
[20:13:47] !log en.planet updates were stuck. reason was corrupted cache causing "bsddb.db.DBPageNotFoundError" which broke the update script. solution was to kill stuck updates, delete files in the cache dir and run the update manually
[20:13:48] Logged the message, Master
[20:14:00] http://en.planet.wikimedia.org/
[20:25:05] New patchset: Bhartshorne; "added new SOPA filter to emery" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1932
[20:25:20] New review: gerrit2; "Lint check passed."
[operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/1932
[20:25:58] New review: Bhartshorne; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/1932
[20:25:59] Change merged: Bhartshorne; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1932
[20:37:40] will the enwiki blackout be implemented with $wgReadOnly or in another way?
[20:37:46] Another
[20:37:52] Permission config
[20:38:00] +banner via JS
[20:38:04] [00:37:46] Another
[20:38:04] [00:37:53] Permission config
[20:38:21] heh
[20:38:55] I imagine it is not the first time you've heard that question today?
[20:39:51] Not sure
[20:39:55] This channel is silent
[20:40:07] Most are in #wikimedia-sopa
[20:42:53] vvv: FYI, /quit Changing host doesn't leave any notable opportunity for a missed message. it happens whenever you identify if you have a cloak to be set
[20:43:39] jeremyb: freenode's ircd requires quit for that?
[20:43:41] Didn't know
[20:43:43] Thanks
[20:54:42] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 33769 - Allow bureaucrats to remove sysop rights at Bashkir Wikipedia'
[20:54:44] Logged the message, Master
[21:00:59] i got a mail with content again from dewiki and lang de
[21:03:35] mail at 7:10 pm utc was ok, mail at 8:34 not
[21:06:50] PROBLEM - Disk space on srv223 is CRITICAL: DISK CRITICAL - free space: / 167 MB (2% inode=60%): /var/lib/ureadahead/debugfs 167 MB (2% inode=60%):
[21:07:50] Merlissimo: so they fix it in 1h? ;)
[21:07:59] fix → fixed
[21:09:27] DaBPunkt: there is no log message after the error, so i don't think so
[21:11:49] anyone here knows about this?
http://en.wikipedia.org/wiki/Wikipedia_talk:SOPA_initiative/Action#Impact_on_mobile_site.3F
[21:26:31] PROBLEM - Puppet freshness on spence is CRITICAL: Puppet has not run in the last 10 hours
[21:28:23] RECOVERY - Disk space on srv223 is OK: DISK OK
[21:54:58] quick question: will the enwp blackout take effect for users with javascript disabled?
[21:57:43] New patchset: Jgreen; "adding file_mover@emery to logmover account class (used on storage3)" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1933
[21:58:00] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/1933
[21:58:16] New review: Jgreen; "(no comment)" [operations/puppet] (production); V: 1 C: 2; - https://gerrit.wikimedia.org/r/1933
[21:58:16] Change merged: Jgreen; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1933
[22:02:19] anyone?
[22:04:33] I think it will, yes
[22:04:39] Ask in #wikimedia-sopa
[22:05:23] Well
[22:07:55] !log installing memcache on marmontel
[22:07:57] Logged the message, Master
[22:16:30] RECOVERY - Puppet freshness on spence is OK: puppet ran at Tue Jan 17 22:16:16 UTC 2012
[22:20:00] New patchset: Ryan Lane; "Adding support to modify memcached's bind ip, and adding memcached to marmontel" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1934
[22:20:16] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/1934
[22:21:15] New patchset: Asher; "prep for throwing varnish in front of single server blog" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1935
[22:21:30] New review: gerrit2; "Lint check passed."
[operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/1935
[22:22:21] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1935
[22:22:22] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1935
[22:24:40] PROBLEM - Puppet freshness on db1045 is CRITICAL: Puppet has not run in the last 10 hours
[22:25:08] New review: Bhartshorne; "(no comment)" [operations/puppet] (production); V: 0 C: 1; - https://gerrit.wikimedia.org/r/1934
[22:25:43] New review: Ryan Lane; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1934
[22:25:44] Change merged: Ryan Lane; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1934
[22:32:39] New patchset: Asher; "reorg probes to prevent error on unused bits probe" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1936
[22:32:53] New review: gerrit2; "Lint check passed." [operations/puppet] (production); V: 1 - https://gerrit.wikimedia.org/r/1936
[22:33:00] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1936
[22:33:01] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1936
[22:41:22] New patchset: Asher; "blog: swap varnish and apache between ports 80, 81" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1937
[22:43:48] New review: Asher; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1937
[22:43:49] Change merged: Asher; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/1937
[22:48:49] New review: Bhartshorne; "(no comment)" [operations/puppet] (production); V: 0 C: 1; - https://gerrit.wikimedia.org/r/1918
[22:49:31] New review: Lcarr; "(no comment)" [operations/puppet] (production); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1918
[22:49:32] Change merged: Lcarr;
[operations/puppet] (production) - https://gerrit.wikimedia.org/r/1918
[23:20:10] There is a cosmetic problem with the German SOPA banner: it is also shown on the site it links to.
[23:28:35] New review: Bhartshorne; "(no comment)" [operations/software] (master); V: 0 C: 2; - https://gerrit.wikimedia.org/r/1908
[23:28:35] Change merged: Bhartshorne; [operations/software] (master) - https://gerrit.wikimedia.org/r/1922
[23:28:36] Change merged: Bhartshorne; [operations/software] (master) - https://gerrit.wikimedia.org/r/1908
[23:35:28] Hello, I have a problem with a user who was renamed a month ago; most of his edits were transferred to the new account, but about 500 edits remained in the old account
[23:35:52] http://es.wikipedia.org/w/index.php?title=Especial%3ARegistro&page=Usuario%3AErick1984&uselang=en
[23:38:43] Carmilla: if you submit a report to bugzilla and assign it to me (ariel@wikimedia.org) I'll get to it in the next few days
[23:40:20] okay
[23:45:14] It seems there is a problem with a mail filter on lily. My email to a mailing list hung there for nearly 20 minutes
[23:47:19] sleep time for me
[23:47:28] gn8 apergos
[23:47:35] need to be awake to watch things fall over in the morning :-D
[23:47:36] tah!
[23:56:47] I'm not sure if this is the right channel for this, but has Wikimedia ever considered adding support for access over the I2P and Tor darknets without having to use the clearnet tunnels?