[00:00:37] UW on test2wiki seems fairly hosed in all IEs right now at least [00:49:28] bsitu: http://pastebin.com/ySs4NZVn [00:50:13] you can stick that in your MediaWiki:Common.js [02:19:03] And smoke it. [11:37:14] !log installed rsyncd on 6 apache servers for network-aware scap testing [11:37:29] Logged the message, Master [11:37:41] it saw me coming for it [11:38:31] network aware scap testing :) [11:38:32] really [11:40:16] that's not as useful in eqiad you know [11:40:22] it has 20 times the bandwidth the tampa racks have [11:40:56] but I suppose it offloads the deployment host [11:41:48] it's super awesome [11:42:16] 40 lines of pure perl power [11:43:03] find-nearest-rsync: http://paste.tstarling.com/p/yRNjYA.html [11:43:34] if the preferred rsync server goes down, the servers that were near it will just use the next closest [11:43:59] haha [11:44:31] soon we'll multicast updates [11:44:32] scap forever? [11:44:55] well, ryan was pretty glum about git-deploy not working [11:45:26] i was thinking about maybe trying to fix git [11:45:28] he was saying things along the lines of "oh well, a month wasted, time to replace it all with bittorrent" [11:45:42] please not [11:45:57] so I thought I'd just kick him while he's down by raising the bar for feature- and performance-parity with scap ;) [11:46:05] haha [11:46:15] this could be used for git-deploy as well [11:46:30] well, that and CT wants me to have scap ready for eqiad deployment ASAP [11:46:44] yeah except we were more thinking off just straight from tampa ;-) [11:46:49] -f [11:47:40] multicast updates would actually be nice for l10n, instead of bittorrent [11:48:08] bittorrent is multicast afair, just over TCP streams :( [11:48:30] no it has nothing to do with multicast [11:50:06] not with IP multicasting [11:50:25] not with any multicasting [11:53:18] TimStarling: setting both $wgReadOnly and readOnlyBySection in db.php should be sufficient, right? [11:54:05] either [11:54:28] readOnlyBySection causes $wgReadOnly to be set to true if the current section is the one specified [11:54:32] whether it's true already or not [11:54:42] ah [11:55:07] i thought $wgReadOnly means "no edits", but might still cause db/cache writes [11:55:31] what exactly does it mean [11:55:32] ? [11:55:39] no edits [11:55:59] to prevent DB/cache writes, use SET GLOBAL read_only=1; in MySQL [11:56:13] ok [11:56:16] or @@read_only or something, I'm rusty [11:56:44] well I guess we won't need that, as it's handled in the master switch [11:57:12] well, if you do need it, I was right the first time [11:57:23] then SELECT @@read_only; to see what the current status is [12:01:46] now I kinda want to write a multicast deployment system for l10n [12:01:50] "how hard can it be" [12:01:53] but no time ;-) [12:04:57] I had to switch over the method to ICMP instead of TCP SYN, to get more accuracy, so unfortunately it'll have to sudo to root [12:05:12] but now I have plausible results: http://paste.tstarling.com/p/CqBuKS.html [12:05:15] i hope it's not affected too much by system load during the deployment [12:05:18] the latency measures I mean [12:05:57] TCP SYN probably would have been heavily affected, hopefully ICMP is less affected [12:06:42] but at worst, it will send everything to one server, which will still be better than what it does now because that server will have more cores than nfs1 [12:07:33] hmm [12:07:42] how about a cron job that writes the RTT in regular intervals to an easily parsed file? 
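An aside on the $wgReadOnly / readOnlyBySection exchange above, for readers who don't have db.php in front of them: a minimal sketch of the two wiki-level switches being compared, plus the MySQL commands quoted in the conversation. The section name, the messages, and the exact variable and file layout here are assumptions for illustration only; the real wmf-config/db.php is considerably more involved.

```php
<?php
// Sketch of the wiki-level read-only switches discussed above (assumed layout, not the real db.php).

// $wgReadOnly blocks wiki edits outright, showing this message to users:
$wgReadOnly = 'The database is temporarily locked for maintenance.';

// Per the explanation above, a per-section map causes $wgReadOnly to be set
// automatically when the current wiki's DB section matches ('s2' is a made-up example):
$readOnlyBySection = array(
	's2' => 'This section is read-only while its master is being switched.',
);

// To stop DB/cache writes at the database server itself (as quoted above), run in MySQL:
//   SET GLOBAL read_only = 1;
//   SELECT @@read_only;   -- check the current status
```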
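On the find-nearest-rsync idea (the actual script is Tim's ~40-line Perl paste linked above, not reproduced here), a rough PHP sketch of the same approach: probe each candidate rsync server over ICMP, pick the lowest average round-trip time, and fall back to whatever else answers if the preferred host is down. The host names are hypothetical, and it relies on /bin/ping doing the privileged part, as comes up in the conversation.

```php
<?php
// Sketch only: pick the lowest-latency rsync server from a candidate list.
// Assumes Linux iputils ping output ("rtt min/avg/max/mdev = ...").
function findNearestRsync( array $candidates ) {
	$best = null;
	$bestRtt = INF;
	foreach ( $candidates as $host ) {
		// 3 quiet ICMP probes, 1-second timeout per probe.
		$out = shell_exec( 'ping -n -q -c 3 -W 1 ' . escapeshellarg( $host ) . ' 2>/dev/null' );
		// Final line looks like: rtt min/avg/max/mdev = 0.208/0.233/0.248/0.027 ms
		if ( $out && preg_match( '!rtt [^=]+= [\d.]+/([\d.]+)/!', $out, $m ) ) {
			$avg = (float)$m[1];
			if ( $avg < $bestRtt ) {
				$bestRtt = $avg;
				$best = $host;
			}
		}
	}
	return $best; // null if nothing answered; caller falls back to the deployment host
}

// Hypothetical host names:
$server = findNearestRsync( array( 'rsync1.example.wmnet', 'rsync2.example.wmnet' ) );
```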
[12:07:58] with a lock file you could exclude that from running during deployments [12:08:48] that sounds like more work [12:09:07] speaking of which, got to go wash some dishes [12:10:41] well, /bin/ping is setuid root [12:10:48] * ori-l thinks Tim-away should write another bash script [12:11:06] i'm going to go to bed before he sees that [13:06:24] hi all [13:06:56] need some help regarding IRC Channel for Assamese language Language code: as [13:07:46] which channel, what kind of help? [13:08:58] There is a need for Assamese Wikimedia project related discussion on a separate IRC Channel [13:09:48] which project? [13:10:49] It would be good to have one IRC Channel for all the wikimedia projects for Assamese [13:11:34] as the community is small and there are a lot of common editors who are active on multiple projects [13:17:43] Danny_B: am I asking this question in a wrong place? [13:18:57] not necessarily. [13:19:30] alright, could you please guide how to go about this? [13:20:12] if you want to have channel for asamese language for all projects at once just create something like #wikimedia-assamese-projects [13:20:35] but there is a habit to have channels per project&language [13:20:51] such as #wikipedia-as, #wiktionary-as etc [13:21:16] that's why i was asking which project you are talking about [13:22:45] i do understand your concern, but as I already told that the community is small at this moment and it is better to keep one channel: #wikimedia-as [13:22:52] Is that possible? [13:23:26] #wikimedia-assamese-projects is too long, if #wikimedia-as is not possible then #wikipedia-as would be the right one [13:23:58] i would discourage from #wikimedia-as since #wikimedia-xx is used for chapters where xx is country code [13:25:14] alright, then #wikipedia-as would be the right channel at this moment [13:26:00] please guide how to create it [13:26:25] type /join #wikipedia-as [13:26:32] tell me when you're there [13:26:58] done [13:28:51] ok, mmt pls [15:42:33] apergos: is dataset2 fast again? [15:44:43] I don't know [15:45:49] hmm [15:45:52] yes, how odd :-D [15:46:24] well that's just special [16:17:21] hello [16:17:33] anybody here who could deploy latest changes to the wikibugs bot? [16:17:56] issues with IRC output related to the bugzilla upgrade were fixed [16:56:12] MatmaRex: do you know where that bot lives ? [16:56:14] what project ? [17:02:48] LeslieCarr: as in? where on gerrit? [17:02:59] wikimedia/bugzilla/wikibugs [17:03:17] https://gerrit.wikimedia.org/r/gitweb?p=wikimedia/bugzilla/wikibugs.git;a=summary [18:25:05] Reedy, should this change be live now? https://gerrit.wikimedia.org/r/#/c/44429/1/wmf-config/InitialiseSettings.php,unified [18:25:18] I'm trying to disable the AFT on this page: https://pt.wikibooks.org/wiki/Wikilivros:Caixa_de_areia [18:25:26] but it is still there after I use the new category [18:26:33] Yup, should be [18:26:41] I've synced a few times.. [18:26:52] soooo... can anybody deploy changes to the wikibugs bot? [18:27:00] https://gerrit.wikimedia.org/r/gitweb?p=wikimedia/bugzilla/wikibugs.git;a=summary [18:27:15] damned internet connection [18:27:18] MatmaRex: try mutante [18:28:16] weird... Maybe it is bugged? [18:29:44] mutante: *poke* [18:31:26] never mind, it is working now (I used action=purge in the category and the page) [18:31:34] MatmaRex: eh..which change.. that looks like an overview [18:33:46] mutante: the last two from yesterday? 
[18:34:08] https://gerrit.wikimedia.org/r/#/q/status:merged+project:wikimedia/bugzilla/wikibugs,n,z [18:34:42] MatmaRex: ah, they are merged already..i thought you ask for merges [18:35:25] if this is the same bot then i asked myself how it is deployed in which place [18:35:31] but let me take a look [18:36:44] lol [18:36:53] no, i want to get them deployed [18:37:10] andre__ said that someone has to dorp the files on the mail server, or something like that [18:42:16] MatmaRex: there isnt even git installed there.. so i dont think it has ever been deployed from git before ...:/ [18:44:25] mutante: haha [18:49:42] MatmaRex: http://wikitech.wikimedia.org/view/Wikibugs where is that file "Wikibugs" itself though.. any idea? [18:49:55] ah, /usr/local/bin/ ignore me [18:50:12] hold on..i'll take care of it in a few.. [18:50:25] ori-l: yo! did you ever file a bug about that ResourceLoader issue you discovered where $resourceLoader->register() doesn't work for bottom scripts in ResourceLoaderRegisterModules hook? [18:50:49] we just ran into it again and i'd like to try and get to the bottom of it [18:50:57] pun intended? :) [18:51:01] mutante: thanks [18:51:18] lol [18:51:21] awjr: i didn't, because i wasn't sure what was intentional and what wasn't. but i remember the issue [18:51:22] no, but… yes :) [18:51:43] let me look at the file, hang on [18:52:35] ori-l thanks - this is the bug for what we ran into https://bugzilla.wikimedia.org/show_bug.cgi?id=44070 and i feel like we ought to file a bug for the RL issue (unless it's by design, i guess) [18:52:39] Reedy kaldari apropos of https://bugzilla.wikimedia.org/show_bug.cgi?id=43203, any ideas why the Curation Toolbar would be showing up on User pages but not on actual article pages? [18:52:49] lol. [18:53:06] lol indeed [18:53:31] Reedy! [18:53:37] nice glitch :) [18:53:39] it shows up on any new user or article pages [18:54:00] awjr: would you like me to simply file a bug instead of pasting notes in the channel? [18:54:03] although there is disagreement among the community on whether it make sense to have it available on user pages [18:54:13] ori-l: actually yeah, that would be great if you dont mind [18:54:20] thank you :) [18:54:23] np at all, give me a minute or three [18:54:31] no rush [18:55:14] kaldari: in the case of test2wiki, PCT doesn't seem to show up on article pages at all. (I'd like to get some basic browser tests for PageTriage, but maintaining it in beta labs is a drag) [18:56:41] hmm [18:58:50] chrismcmahon: I don't see it on either new article or new user pages [18:59:10] Do you see it on https://test2.wikipedia.org/wiki/User:AKlapper_%28WMF%29 ? [19:00:25] kaldari: I definitely see Toolbar at http://test2.wikipedia.org/wiki/User:AKlapper_(WMF) [19:01:14] lemme check my user rights on test2 [19:05:00] chrismcmahon: I gave myself some user right and now see it on user pages, but not article pages [19:05:17] I wonder if it's a conflict with pending changes or some other article-specific feature [19:05:19] kaldari: then we match now, same for me [19:05:26] awjr: https://bugzilla.wikimedia.org/show_bug.cgi?id=44072 [19:06:31] chrismcmahon: Is pending changes active on all articles on test2? [19:06:55] ori-l: ohho the problem is in MF! [19:07:05] awjr: yeah [19:07:08] that is good though, easier to fix [19:07:12] thanks ori-l :) [19:07:14] kaldari: no idea. I have only a vague notion of what "pending changes" means. 
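A quick sketch of the ResourceLoader registration awjr and ori-l are debugging above: registering a module from the ResourceLoaderRegisterModules hook rather than via $wgResourceModules, with an explicit bottom position. The extension, module name, and paths are invented for illustration; whether a module registered this way behaves correctly for bottom-queue scripts is precisely what the bug being filed questions, so treat this as the pattern under discussion rather than a guaranteed-working recipe.

```php
<?php
// Hypothetical extension code; names and paths are made up for illustration.
$wgHooks['ResourceLoaderRegisterModules'][] = 'ExampleHooks::onResourceLoaderRegisterModules';

class ExampleHooks {
	/**
	 * Register a module dynamically instead of declaring it in $wgResourceModules.
	 * @param ResourceLoader $resourceLoader
	 * @return bool
	 */
	public static function onResourceLoaderRegisterModules( ResourceLoader &$resourceLoader ) {
		$resourceLoader->register( 'ext.example.bottomScript', array(
			'scripts' => 'ext.example.bottomScript.js',
			'localBasePath' => __DIR__ . '/modules',
			'remoteExtPath' => 'Example/modules',
			'position' => 'bottom', // whether this is honoured when registered via the hook is the open question
		) );
		return true;
	}
}
```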
[19:07:39] It creates the "Unchecked" thing and the "Review this revision" box at the bottom [19:08:11] it's not normally used on en.wiki [19:08:30] apart from a few experimental uses [19:08:40] np [19:09:20] kaldari: apparently it is part normal use again [19:09:33] oh, maybe so, I haven't kept up with it [19:10:24] it's something of an ongoing saga :) [19:11:12] so is it for sure that there is a curation vs. pending-changes conflict? [19:11:49] no, just a wild guess [19:12:06] I have no basis for this suggestion :) [19:12:23] I would say it's probably a bug in PageTriage [19:13:36] chrismcmahon: I can try turn on page curation for a new article on enwiki and see if the curation toolbar disappears... [19:13:44] erg [19:13:51] I mean turn on pending changes [19:17:34] chrismcmahon: looks like my spidey-sense was correct. Curation Toolbar won't load on an article with Pending Changes turned on. [19:17:55] kaldari: is that a feature or a bug? [19:18:07] not sure [19:18:16] mostly a bug [19:19:00] although I don't think anyone has figured out how Page Curation and Pending Changes should work together [19:19:33] question for Ironholds maybe? [19:19:37] I think the reason is that pending changes is a form of page protection, so maybe PageCuration is thinking that there's no point in loading since the page is protected or something [19:20:05] wait, is pending changes the same thing as flaggedrevs? (just wondering) [19:20:07] chrismcmahon: definitely worth filing a bug fir [19:20:08] for [19:20:15] yeah [19:20:19] basically [19:21:10] kaldari: OK, I'll retire https://bugzilla.wikimedia.org/show_bug.cgi?id=43203 and replace it with a PCT vs. PC conflict bug report. [19:21:19] * MatmaRex is all for turning flaggedrevs on on enwiki, maybe it'll get someone to care about these bugs... https://bugzilla.wikimedia.org/buglist.cgi?title=Special%3ASearch&quicksearch=component%3AFlaggedRevs&list_id=174023 [19:23:52] MatmaRex: FR is surely not the worst maintained big extension we have [19:25:52] chrismcmahon: Thanks! [19:26:21] kaldari: thank you. it would be nice to have all of PageTriage working properly in test2wiki, for a number of reasons [19:27:49] Nemo_bis: it seems to be the one i run into most issues with ;) [19:29:30] MatmaRex: on which wiki? [19:32:21] Nemo_bis: pl.wikipedia [19:32:46] oh, right :) I always forget it has FR [19:32:54] okay, maybe Collection bests it [19:32:58] I've not edited it much after 2007 [19:33:03] ? [19:33:09] i run into bugs in that one even though i dont use it [19:33:17] ;) [19:37:16] MatmaRex: oh man.. so of course this turns into a big deal :/ [19:37:25] i installed git but the version is ancient [19:37:29] and " cmn> I don't think that version knows about smart HTTP" [19:37:39] and it needed curl.. and and ... [19:37:47] heh mutante [19:38:02] can't you like download this one file from somewhere? [19:38:48] wikibugs is just one file D: [19:39:19] yes, of course. but next time? and the time after that? [19:39:34] am i gonna be the "wikibugs guy" then? :p [19:39:56] also, i dont like its on the mailserver in the first place.. hmmmm [19:40:16] well, if deploying this was a single wget call [19:40:22] this could be documented somewhere, i guess [19:40:59] yea, almost just a single wget i guess, i can get the .tar.gz from gitweb [19:41:09] (until people add more files to that repo) [19:41:30] why would anyone add more files there [19:41:32] actually [19:41:43] why would ever touch that if not for simple bugfixing [19:42:15] ok, we'll see.. 
i hope you're right [19:43:23] (also, you're Dzahn on gerrit, right?) [19:44:08] MatmaRex: yes.. and i am merging this one too https://gerrit.wikimedia.org/r/#/c/35286/ [19:44:21] log for parsoid bugs [19:44:47] chrismcmahon: Any idea what's up with the weird Vector actions menu on test.wiki? [19:45:12] next to the watchlist star [19:46:18] kaldari: you mean http://test.wikipedia.org? I don't think I've ever done anything meaningful on test, I've always used test2. (and I think test is going to go away after the EQIAD migration) [19:46:35] oh [19:46:50] <^demon> It'll probably be a normal cluster wiki. I doubt we'll remove it entirely. [19:47:00] <^demon> Having two testwikis is nice. [19:47:18] what will be the normal deployment test wiki then? test2? [19:47:19] thanks ^demon. so it'll still be there, just not on a whole separate filesystem like today [19:47:31] <^demon> I imagine that's what we'll do. [19:50:58] will updating fenari automatically update test2 (after the migration)? [19:51:48] <^demon> test2 has never been automatic, that was test (since it was served straight from nfs). [19:52:01] yes, that's exactly my point :) [19:52:04] <^demon> After the migration, none of the wikis will be served direct from nfs. [19:52:30] <^demon> (And it won't be fenari, it'll be tin) [19:52:40] got it [19:52:59] <^demon> There's supposed to be a tech talk starting soon-ish about all of this :) [19:53:04] I guess this will all be explained at the session today [19:53:06] yep [19:54:04] in the meantime we're still deploying and still using test.wiki, so I guess I'll just ignore the weirdness [20:02:27] Dereckson: around? [20:22:42] Danny_B: ping [20:23:03] Dereckson: g44455 has wrong summary, is it fixable? [20:24:46] All is fixable, yes. [20:25:02] wikiquote and not wikipedia, seen [20:25:10] ok any ideas about this? [20:25:22] https://en.wikipedia.org/w/index.php?title=Special%3ALog&type=block&user=Prodego&page=User%3A127.0.0.1&year=&month=-1&tagfilter= [20:25:25] the most recent block [20:25:29] I achieve another config change first, then I fix that. [20:25:31] I typed '4 years' in the block box [20:25:43] instead I get a block that lasts 4 years, 43 minutes and 12 seconds [20:26:43] 1 year works [20:26:59] Dereckson: and sk not sv [20:27:07] Prodego: "year" isn't a precise measurement unit, y'know [20:27:12] 2 years is "1 year, 364 days, 18 hours, 10 minutes and 48 seconds" [20:27:22] MatmaRex: it may not be precise, but it should be consistant [20:27:30] Prodego: leap years? [20:27:58] no, because 2 years is low but 4 years is high [20:28:05] also, where do you get your measurements from? [20:28:09] and it is off by a few hours [20:28:20] I am looking at the block log entry [20:28:32] the blocklist shows expiration times, not lengths [20:28:40] Prodego: looking [20:31:00] Prodego: well, that's funny. let me test locally [20:31:17] Prodego: did you just input "2 years" into the text box, or chose from dropdown, or something? [20:31:25] I typed "2 years" [20:32:10] Locally it does exactly the same for me [20:32:25] well it didn't use to do that, so something changed [20:32:47] okay, confirming htis. http://users.v-lo.krakow.pl/~matmarex/testwiki/index.php?title=Specjalna:Rejestr/block&page=U%C5%BCytkownik%3AClawsonPuleo224 [20:32:49] now, whether the thing that changed is the length of the block or what the block log says... 
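A back-of-the-envelope check on the "4 years, 43 minutes and 12 seconds" puzzle: the stored expiry is a calendar date exactly N years ahead, while the duration shown in the log looks like it was computed against an average-length year. Assuming a Gregorian mean year of 31,556,952 seconds (365.2425 days) — an assumption about the formatter, not a trace of the actual MediaWiki code path — the arithmetic reproduces both quoted figures.

```php
<?php
// Rough check of the figures quoted above (timestamp taken from the log, ~2013-01-17).
date_default_timezone_set( 'UTC' );
$now = strtotime( '2013-01-17 20:32:08' );

// "2 years" / "4 years" as calendar offsets, which is what the block expiry itself stores:
$twoYears  = strtotime( '+2 years', $now ) - $now;  // 730 days: no Feb 29 in the span
$fourYears = strtotime( '+4 years', $now ) - $now;  // 1461 days: includes 2016-02-29

// A formatter counting a "year" as the Gregorian average of 31,556,952 s would report:
$avgYear = 31556952;
echo $twoYears - 1 * $avgYear, "\n";  // 31,515,048 s = 364 d 18 h 10 m 48 s past "1 year"
echo $fourYears - 4 * $avgYear, "\n"; // 2,592 s = 43 min 12 s past "4 years"
```

Two calendar years from 2013-01-17 contain no leap day (730 days) while four contain 2016-02-29 (1461 days), which is why the "2 years" block comes out slightly short of two average years and the "4 years" block slightly long of four.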
[20:33:21] Prodego: but it seems to just be more "correct" [20:33:29] the expiration date is Sat, 17 Jan 2015 20:32:08 GMT [20:33:47] so exactly two years from now [20:34:12] well how is that not 2 years [20:34:18] let me go see how the block log computes time [20:34:45] well i have no idea why it does that [20:34:49] but it's pretty funny [20:35:48] The expiry time shown on Special:BlockList is 4 years for me [20:35:56] But the block log shows 4 years, 43 minutes and 12 seconds [20:36:04] so where is that computed? [20:36:08] I assume that is just text [20:36:22] likely it is inserted in to the log somewhere in Special:Block? [20:38:09] Seems the expiry put into the logging table is '4 years' [20:38:19] so then it is probably due to this [20:38:24] /** [20:38:24] 967 * Convert a DB-encoded expiry into a real string that humans can read. [20:38:25] 968 * [20:38:25] 969 * @param $encoded_expiry String: Database encoded expiry time [20:38:25] 970 * @return Html-escaped String [20:38:26] 971 * @deprecated since 1.18; use $wgLang->formatExpiry() instead [20:38:26] 972 */ [20:38:37] Please don't paste like that into IRC [20:38:37] that "deprecated since 1.18; use $wgLang->formatExpiry() instead" is suspicious [20:38:44] oh its just 8 lines [20:38:49] This looks like someone summoned by Echo https://www.mediawiki.org/w/index.php?title=Talk:Possible_tarballs&diff=0&oldid=408828 [20:39:25] Is Echo sending emails even for links? [20:49:43] Krenair: MatmaRex I filed bug 44075 [20:49:56] I saw [21:01:06] Prodego, I can't find that method doc you posted... where was that from? [21:09:31] MatmaRex: done. it's deployed [21:10:14] thanks a lot mutante [21:10:26] let's hope that actually fixed the bugs ;) [21:10:44] yes please:) [21:11:13] as long as it still reports its own bugs:) [21:16:03] MatmaRex: and at least minimal docs update http://wikitech.wikimedia.org/index.php?title=Wikibugs&diff=55517&oldid=49661 [21:17:03] mutante: :) [21:17:18] MatmaRex: have you fixed the netsplit handling too? :D [21:17:19] also, why is wikitech wiki on 1.19.2? [21:17:27] Nemo_bis: that wasn't even me! [21:17:43] i was just poking poeple to get it deployed, and mutante came up :) [21:24:00] Krenair: that was from Block.php [21:24:17] really though SpecialBlockip.php is the relevent one for logging I'd guess [21:24:30] ok, what method was it for in block.php? [21:24:57] Hello guys: Would anyone mind lending a helping hand to help me with what I am assuming is an API problem? [21:25:38] I can try [21:25:48] JohnLewis: just ask. someone might be able to help. [21:26:16] (but you might prefer to do it in #mediawiki - this channel is mostly for issues on WMF wikis, that one is for eveyrthing mediawiki-related :) ) [21:26:21] (and there's more people there.) [21:26:39] MatmaRex: Its Wikipedia API :) [21:26:54] alright :) [21:27:11] One of the WMF employees said get there now (here). Right; [21:28:01] IN my operation of a bot, I am using the Post method to login to Wikipedia but failure is the top issue as the login fails on 'NeedTpken' despite a token being sent. [21:28:57] JohnLewis: show us the code :) [21:29:27] Any specific part? Such as where the login is done? [21:29:40] well, preferably all of it [21:29:53] Hold on. [21:34:10] https://gist.github.com/2ed1920cadca81a721dc [21:37:36] JohnLewis: this doesn't include the logging in part, does it? [21:38:09] Clarify 'logging in part' [21:38:59] Dereckson: are you gonna sync it? 
[21:39:13] JohnLewis: it only calls $wpapi->login($user,$pass); [21:39:19] JohnLewis: which is apparently defined in another file [21:39:42] Ill post the section that does that for you. [21:42:14] MatmaRex: https://gist.github.com/0bc48a3092eb8a8fc956 -- Hopefully the right section. [21:45:04] JohnLewis: have you checked that $x['login']['token'] is set at all? [21:45:51] JohnLewis: try $x['login']['lgtoken'] [21:45:59] Ok, One moment. [21:48:50] Same. [21:49:56] JohnLewis: where are you handling cookies? [21:50:20] JohnLewis: you have to save the cookies that you receive, then send tem with every subsequent request [21:51:36] JohnLewis: here's complete login code i did in Ruby some time ago: https://github.com/MatmaRex/Sunflower/blob/master/lib/sunflower/core.rb#L290-L314 [21:51:46] (not sure if this helps you any, but maybe) [21:53:31] Danny_B: I don't know when the next configuration window is, [21:53:44] now? ;-) [21:53:45] but I think it were today the last before datacenter migration. [21:54:37] If you're in a mood to annoy someone for more configuration deployment, wait 45 minutes, there are some other config changes pending on Bugzilla. [21:58:17] MatmaRex: That partially helps. If I had that in PHP (Since I am only a beginner at Ruby) it would help much better. [22:00:49] JohnLewis: @cookies is later used in all API calls, see https://github.com/MatmaRex/Sunflower/blob/master/lib/sunflower/core.rb#L213-L226 [22:01:09] JohnLewis: i'm sort of a beginner in PHP ;) [22:02:52] Ill figure it out in PHP, Just got an Wikipedia admin slightly on my back saying 'Is it working?'. So far Thanks for your help, If I need anything else I know who to come to :) [22:05:02] In PHP, I believe using $_COOKIES will be part of the solution. [22:05:12] *$_COOKIE [22:08:55] 18<Danny_B> Dereckson: are you gonna sync it? [22:09:17] ... I don't think Dereckson can deploy to WMF servers [22:14:52] MatmaRex: My knowledge (and usage of cookies in my ability) only allows the use of predefined cookies. Unless there would happen to be an API for getting the cookie. [22:18:15] JohnLewis: that api you're using returns them [22:18:27] JohnLewis: in response headers [22:18:32] Ah, Alright. Thanks. [22:43:39] mutante: you there? check #mediawiki, please [22:43:48] wikibugs seems dead after all :( [22:45:33] MatmaRex: great. i love it. am i allowed to quote "why would ever touch that if not for simple bugfixing" [22:46:16] worst case we have to fetch it from SVN ..somehow [22:47:07] mutante: just check if it crashed? [22:47:12] mutante: also, it might not be the wikibugs [22:47:18] but the thing that writes them to IRC [22:47:20] whatever that is [22:47:28] wikibugs just logs to a file, no? [22:48:11] the latest line in that file is Deeply nested templates not loaded [22:48:41] processes look like before.. and i killed and started it per wikitech page.. sigh [22:48:45] yo preilly, if you have a sec: https://github.com/wikimedia/wmf-vagrant/pull/20
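To close the loop on the login failure discussed above (result=NeedToken even though a token was sent): with the 2013-era action=login API the flow is two POSTs, and the token from the first response is only valid together with the session cookie that same response sets — which is the cookie-handling point MatmaRex makes. Below is a hedged PHP sketch using cURL's cookie jar; the credentials, user agent, and file path are placeholders, and this is not JohnLewis's actual bot code. (The field in the first response is $x['login']['token']; the parameter sent back is lgtoken.)

```php
<?php
// Minimal sketch of MediaWiki action=login (pre-clientlogin API), assuming the cURL extension.
function apiPost( $url, array $params, $cookieJar ) {
	$ch = curl_init( $url );
	curl_setopt_array( $ch, array(
		CURLOPT_POST           => true,
		CURLOPT_POSTFIELDS     => http_build_query( $params ),
		CURLOPT_RETURNTRANSFER => true,
		CURLOPT_COOKIEJAR      => $cookieJar,  // save cookies from the response
		CURLOPT_COOKIEFILE     => $cookieJar,  // ...and send them back on the next request
		CURLOPT_USERAGENT      => 'ExampleBot/0.1 (placeholder contact)',
	) );
	$body = curl_exec( $ch );
	curl_close( $ch );
	return json_decode( $body, true );
}

$api = 'https://en.wikipedia.org/w/api.php';
$jar = '/tmp/examplebot.cookies'; // placeholder path

// First request returns result=NeedToken plus the session cookie.
$r1 = apiPost( $api, array(
	'action' => 'login', 'format' => 'json',
	'lgname' => 'ExampleBot', 'lgpassword' => 'secret',
), $jar );

// Second request repeats the credentials with the token, over the same cookie jar.
$r2 = apiPost( $api, array(
	'action' => 'login', 'format' => 'json',
	'lgname' => 'ExampleBot', 'lgpassword' => 'secret',
	'lgtoken' => $r1['login']['token'],
), $jar );
// $r2['login']['result'] should now be 'Success'.
```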