[00:48:11] New patchset: Stefan.petrea; "Adding new pageview reports mobile (in progress)" [analytics/wikistats] (master) - https://gerrit.wikimedia.org/r/41979 [08:48:53] New patchset: Hashar; "(bug 42628) lint whitespaces using git diff --check" [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/37803 [08:49:22] New review: Hashar; "PS7: made extensions to check whitespaces before PHP lint." [integration/jenkins-job-builder-config] (master); V: 0 C: 0; - https://gerrit.wikimedia.org/r/37803 [09:06:56] Change merged: Hashar; [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/39850 [09:09:48] New review: Hashar; "deployed on server" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/39850 [09:10:41] New review: Hashar; "Deployed live." [integration/jenkins-job-builder-config] (master); V: 2 C: 2; - https://gerrit.wikimedia.org/r/37803 [09:10:42] Change merged: Hashar; [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/37803 [09:24:43] New patchset: Hashar; "(bug 43579) EventLogging pep8 job" [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/42517 [09:26:42] hi hashar [09:27:29] New patchset: Hashar; "(bug 43579) run pep8 on EventLogging extension" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42518 [09:41:17] Change merged: Hashar; [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42518 [09:43:34] Change merged: Hashar; [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/42517 [09:46:21] Nikerabbit: hello Niklas :-) [09:47:49] hashar: you're back refreshed? [09:47:59] New review: Hashar; "Hello Marius, could you seek out additional approvals from Wikimedia staffer? Will be happy to merge..." 
[integration/zuul-config] (master); V: 2 C: 1; - https://gerrit.wikimedia.org/r/39711
[09:48:47] Nikerabbit: kind of :-) The first week was all about being on the road, drinking and eating. Then I caught a flu or something and had to stay 2 days in bed :(
[09:49:06] finally managed to rest this weekend. So I am almost refreshed this morning :)
[09:50:36] hashar: aww too bad
[09:51:21] hashar: just FYI I don't know if you are aware: TranslationNotifications and CLDR extension tests seem to have some problems and always fail
[09:53:19] New review: Hashar; "Uploaded this version on commons https://commons.wikimedia.org/wiki/File:Wikimedia_CI_workflow.svg" [integration/doc] (master) - https://gerrit.wikimedia.org/r/39085
[09:54:02] Nikerabbit: have they ever been successful ?
[09:54:29] hashar: dunno, maybe not
[09:54:33] apparently not :( https://integration.mediawiki.org/ci/job/mwext-cldr-testextensions/
[09:56:14] for CLDR, the issue is that there are no PHPUnit tests to be found. So the testextension job is always failing
[09:56:21] though it should not be voting :/
[09:56:48] Nikerabbit: do you have bugs for both issues? If not I will file them
[10:06:46] hashar: I don't remember seeing those in bugzilla
[10:07:37] Nikerabbit: I found out the bug
[10:07:37] New patchset: Hashar; "non voting extension tests were voting" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42522
[10:07:47] Nikerabbit: probably an issue with Zuul code base. The above change fixes it
[10:08:00] Change merged: Hashar; [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42522
[10:11:20] New review: Thehelpfulone; "Whilst I'm not quite a WMF staffer, I'm highly trusted..
:-)" [integration/zuul-config] (master) C: 1; - https://gerrit.wikimedia.org/r/39711
[10:32:50] !g I1b00b00f5ddf34248f158303f176b3cc32bbcc3c
[10:32:50] https://gerrit.wikimedia.org/r/#q,I1b00b00f5ddf34248f158303f176b3cc32bbcc3c,n,z
[10:33:05] !g Id454bad62258c2f33c13f8fc43e45e69b0f5e8a7
[10:33:05] https://gerrit.wikimedia.org/r/#q,Id454bad62258c2f33c13f8fc43e45e69b0f5e8a7,n,z
[10:44:18] Reedy: please stop killing the 'static-master' symbolic link for bits docroot -:) https://gerrit.wikimedia.org/r/42526
[10:44:35] Reedy: that breaks beta which uses 'master' as a MediaWiki version
[10:44:43] New review: Hoo man; "IMO it's a bit weird that I can merge code to run in production, manage central notices and edit eve..." [integration/zuul-config] (master) C: 0; - https://gerrit.wikimedia.org/r/39711
[10:53:40] hashar: PFFFT
[10:53:45] ;)
[10:53:47] Sorry
[10:55:25] Reedy: tis ok :-]
[10:56:13] hashar: besides, I need to go break beta with git deploy stuff
[10:56:14] :D
[10:56:48] Reedy: yeah I have been tasked with the same thing apparently
[10:57:07] There's a couple of conversations in the backscroll
[10:57:18] Basically, we need to decide on a rough layout, then create lots of symlinks
[10:57:21] Or something like that...
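The symlink shuffling discussed above (recreating 'static-master', then "create lots of symlinks") is safest done atomically, so Apache never observes a missing link mid-swap. A minimal sketch in Python; the link and target names are hypothetical, not the actual bits docroot layout:

```python
import os
import tempfile

def update_symlink(target, linkpath):
    """Repoint linkpath at target atomically: build the new link under a
    temporary name, then rename() it over the old one, so readers never
    see the link missing."""
    directory = os.path.dirname(os.path.abspath(linkpath)) or "."
    tmp = tempfile.mktemp(prefix=".tmp-link-", dir=directory)
    os.symlink(target, tmp)
    os.replace(tmp, linkpath)  # rename(2) is atomic on POSIX

# e.g. update_symlink("php-master", "static-master")  # hypothetical names
```

A plain `rm` + `ln -s` has a window where the link is gone, which is exactly the "killed symlink" failure beta hit.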
[11:03:00] Reedy: at first: I have no idea what git deploy is :-]
[11:03:08] so will have to find out a 30,000-foot overview of it
[11:03:16] hashar: deploying from git
[11:03:18] would most probably read the source code which is on github somewhere
[11:03:19] duh :D
[11:03:25] and look at the recipes written by Roan for parsoid
[11:03:42] oh
[11:03:43] and salt
[11:03:45] oh my god
[11:05:02] Reedy: well I am going to get to lunch
[11:05:09] will have a look at the list of stuff at https://bugzilla.wikimedia.org/showdependencytree.cgi?id=43338&hide_resolved=1
[11:05:15] heh
[11:05:17] and find out how to deploy salt + git deploy from puppet
[11:05:23] hopefully we have some puppet classes to do that
[11:06:10] ah here is my bug https://bugzilla.wikimedia.org/show_bug.cgi?id=43339
[11:09:07] lunnnch time
[12:24:27] back
[12:42:13] hashar: I blocked your bug with my bug
[12:42:32] Reedy: noticed that
[12:42:39] heh
[12:42:40] Reedy: though I am not really sure what your bug is about :-]
[12:42:51] still finding out how to get git-deploy from puppet
[12:42:57] I can't find the perl script in our repo :(
[12:43:04] Changing the layout from /home/wikipedia/common layout
[12:43:32] moving mediawiki-config.git repo up to the same level as the code checkouts, not checking them out into it
[12:43:40] ah
[12:43:58] And then also tidying up /usr/local/apache and such
[12:44:05] To make it all nice and consistent
[12:44:20] do we have any design / architecture documentation about that ?
[12:44:34] http://wikitech.wikimedia.org/view/Git-deploy is not really helpful :-D
[12:44:59] Not really, as we've not designed it yet :D
[12:45:05] ahh
[12:45:25] something like /srv/deployment/mediawiki
[12:45:26] probably
[12:45:29] I forgot about the WMF project methodology : hack first, figure it out later :-D
[12:45:40] Need to find the discussions from Tim in backscroll and put them on the bug
[12:45:52] I don't have any backscroll myself :/
[12:46:12] Channel logged http://goo.gl/ckvIW
[12:47:28] Damn it
[12:48:16] Got the first convo, let me pull out the bits not needed
[12:49:00] I am updating the wikitech doc
[12:49:59] hashar: sam@reedyboy.net
[12:50:00] FAIL
[12:50:08] hashar: https://bugzilla.wikimedia.org/show_bug.cgi?id=43340#c3
[12:50:34] wtf, some of them display in browser
[12:50:35] some it downloads
[12:51:26] Ok, so what channel was the other part of the discussion
[12:52:46] that's probably the useful bit at least
[12:56:55] Reedy: looking
[12:57:56] "# Speculative fix: have the server copy the files using BitTorrent."
[12:58:20] grr
[12:58:26] Wasn't it just all git... Make them all pull, wait for that to work
[12:58:38] I thought that the MediaWiki servers could all peer together using BitTorrent and happily update their copies
[12:58:39] then do the checkout/whatever to get them all using the same code in a small amount of time
[12:59:00] 700 servers all hitting the same 10GB link to download files is a bit dumb
[12:59:08] ; WARNING : the issues below should be made bug reports in our Bugzilla}}
[12:59:40] removing the curly braces
[12:59:59] i believe there was a discussion about having a peer server in each rack or something
[13:00:05] I'm slightly out of the loop
[13:00:15] I don't buy the per rack based solution
[13:00:28] there is no way we will ever manage to properly track which server is in which rack
[13:00:41] but maybe i am underestimating ourselves :-]
[13:00:42] lol
[13:00:51] NFS for life.
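The scaling worry above has simple math behind it: with BitTorrent-style peering, every server that already has the payload can seed another one, so the copy count doubles each round and full distribution takes roughly log2(N) rounds instead of N pulls from one master. An illustrative sketch (not any actual deployment tool):

```python
import math

def p2p_rounds(n_servers):
    """Rounds to reach every server if each completed copy seeds one
    more per round (1 -> 2 -> 4 -> ...). A single distribution point
    instead has to serve all n_servers pulls itself."""
    if n_servers <= 1:
        return 0
    return math.ceil(math.log2(n_servers))
```

So ~700 app servers need only about 10 doubling rounds, which is why the single shared link is the bottleneck rather than the number of servers.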
[13:00:56] I would go for having the 700 servers grab the changes using BitTorrent
[13:01:04] 1 copy of the data would be so much easier
[13:01:14] then once all BitTorrent clients have completed the tasks, send a salt run to actually switch to the new MW copies
[13:01:34] New patchset: Siebrand; "Automatically merge UniversalLanguageSelector on +2/+2" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42540
[13:01:49] siebrand: :-]
[13:02:04] Change merged: Hashar; [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42540
[13:02:09] hashar: thanks.
[13:02:26] hashar: Jenkins had a fit on https://gerrit.wikimedia.org/r/#/c/37395/
[13:02:53] New review: Hashar; "deployed :-)" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42540
[13:03:08] hashar: I'm sure they'd have found a way to do it
[13:03:14] You'd have to ask Ryan (tm)
[13:03:35] Reedy: indeed
[13:04:05] ahh if only we had done all of that directly on the 'beta' cluster
[13:04:28] siebrand: I think I had zuul reloaded but I am not sure. It is probably still processing ongoing jobs.
[13:05:02] I should try and start creating lots of symlinks today
[13:05:19] siebrand: it was. UniversalLanguageSelector should now have its change merged by Jenkins after CR+2 :-] Thanks for the patch!
[13:05:44] Reedy: on my side I am figuring out the puppet stuff and will try to get classes for 'beta'
[13:06:02] Reedy: why do we need a symlink farm anyway?
[13:06:14] Reedy: can't we just update the apache conf to point to /srv/deployment/slot0 ?
[13:07:15] I think it's keeping it incremental
[13:07:25] rather than changing loads of things and then not knowing what actually broke
[13:07:38] And also it might take a while to catch up and fix other scripts etc
[13:08:39] sure :)
[13:08:56] * hashar coffee + cigarette
[13:09:01] and reading the puppet manifests
[13:09:49] Also, it's not 700
[13:09:54] it's currently just under 200 ;)
[13:10:21] Though, that's mostly just tampa?
[13:11:12] 95 more in eqiad
[13:12:19] hashar smoking is bad :P
[13:12:37] * petan says
[13:13:45] petan: yeah it is slowly killing me :/
[13:13:54] stop it?
[13:14:32] http://en.wikipedia.org/wiki/Electronic_cigarette
[13:14:44] It's usb powered!
[13:14:50] :D
[13:15:45] hashar you have mac air?
[13:15:53] or just some... eh, mac?
[13:16:02] I don't really know much about macs
[13:16:30] I just know they are all white and girls like them
[13:16:51] btw mac air has no usb for you :/
[13:16:54] the plastic ones are white
[13:16:58] aha
[13:17:04] there are some non-plastic?
[13:17:07] they have to be heavy
[13:17:07] nowadays most are in aluminum though and hence are gray
[13:17:14] mm
[13:18:01] I got the MacBookAir4,2 (13 inches screen from mid 2011)
[13:18:12] weighs 1.34kg
[13:18:14] doesn't it prevent wi-fi from working properly? I mean aluminium blocks the signal far more than a plastic case
[13:18:17] which is light by my standards
[13:18:36] na wifi works fine in all of my flat
[13:18:45] well, at some point it's cute and it has terminal, so it's not so bad for me :D
[13:18:48] better than my smartphone can handle
[13:19:25] I like the microkernel as well :D
[13:19:27] I highly recommend the 13 inches Air cause of the SSD and 1440 x 900 resolution.
[13:19:39] wish there was a microkernel linux project
[13:20:04] petan: such as GNU Hurd? http://en.wikipedia.org/wiki/GNU_Hurd
[13:20:36] weeeee is it linux compatible?
[13:21:25] (latest macbook pro 13inches from mid 2012 is 2.04kg (700g more), overpriced SSD and 1280x800 resolution :(
[13:21:30] though it is probably cheaper
[13:22:50] 13'' Air and basic 13'' Pro are both at 32 761 Kč
[13:22:51] doh
[13:23:31] 1300 euros doh
[13:23:40] vs 1050€ on the french apple store
[13:23:47] haha I was just looking at the price you found it for me :D
[13:24:13] yes in CZ all stuff is far more expensive
[13:24:14] err the Air 13'' is 1250€
[13:24:29] 1050€ is for the 11 inches air
[13:24:30] it's cheaper to buy it from US e-shop :P
[13:24:36] or in UK
[13:24:45] unless customs strike and make you pay the taxes :-D
[13:25:14] government here sucks hard
[13:26:10] It's useful having an academic proxy in the uk
[13:26:20] get education pricing as you don't have to give proof of academic status
[13:26:39] really?
[13:26:41] Yup
[13:26:50] Yay for having an internal linux cluster at my old uni ;D
[13:26:58] haha
[13:27:11] Also, buying it in the US can be cheaper still if you get it in states where there isn't sales tax...
[13:29:45] you can still have your country's customs strike in and charge you import taxes + VAT :(
[13:30:04] that's our case
[13:32:06] hashar: send it to the office and pick it up from SF ;)
[13:32:43] IIRC Krinkle bought one when he was in Portland for oscon
[13:35:31] * Reedy symlinks hashar
[13:36:23] yeah should do that next time I go to the states
[13:36:36] I need a DSLR camera btw :-]
[13:36:50] Amazon is quite good as they don't charge sales tax
[13:37:16] hence me buying multiple SSDs, ram, cpu, graphics card...
[13:38:35] Reedy: even in the US ?
[13:38:44] I mean how can Amazon skip tax ?
[13:38:47] Especially in the US [13:38:51] The same way they do everywhere else [13:39:08] Item(s) Subtotal: $1,115.96 [13:39:08] Shipping & Handling: $8.90 [13:39:08] ----- [13:39:08] Total before tax: $1,124.86 [13:39:08] Sales Tax: $0.00 [13:39:10] ----- [13:39:13] Total for This Shipment: $1,124.86 [13:39:14] ----- [13:39:19] ka-ching [13:41:12] :) [13:41:48] so bad gnu hurd doesn't support most of filesystems yet :/ [13:42:12] Reedy: that is crazy [13:42:18] The no tax? [13:42:24] Reedy: yup [13:42:25] Or my $1000+ Amazon US order? :P [13:42:27] btw hashar what FS is being used by apple [13:42:37] amazon charges sales tax only for states where it has offices [13:42:38] HFS? [13:42:45] Reedy: I know that US have different tax per states and hence most site show the prices without tax then apply the tax [13:42:57] can't charge residents of other states, because it's interstate taxes [13:43:01] hfs is kind of old, isn't it [13:43:06] Reedy: so when ordering for a delivery to the WMF office in SF, Amazon should apply the California tax I believe. [13:43:11] does it even support journaling [13:43:23] aude: so that is tax evasion ? :( [13:43:26] hashar: see what aude said [13:43:33] US are doomed huuh [13:44:16] It makes the savings quite a bit larger too [13:44:51] nah [13:56:29] New patchset: Siebrand; "Automatically merge translatewiki on +2/+2" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42546 [14:11:31] omg [14:11:33] lrwxrwxrwx 1 root root 30 May 18 2012 common -> /usr/local/apache/common-local [14:14:12] Reedy what's wrong? [14:14:17] it's a symlink... [14:14:44] target file permissions are used when you access it [14:15:34] ahaeharha git-deploy is a f*** package!! [14:15:46] I was looking for a perl script in our puppet repo :-] [14:16:11] Change merged: Hashar; [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42546 [14:16:49] New review: Hashar; "deployed." 
[integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42546 [14:17:03] siebrand: I have deployed the auto submit for Translatewiki [14:17:19] hashar: cool. Thanks. No hurry there. [14:17:41] siebrand: well once the patch is written, it just take a couple minutes to deploy it. So I might as well do that sooner than later. [14:17:52] siebrand: one less trivial issue in my work backlog :-] [14:18:14] siebrand: + it get frustrating to not have such trivial changes merged on sight. [14:18:50] Reedy: any idea how salt work ? [14:19:00] Reedy: do we need a client on the app servers? [14:27:40] hashar: "A salt module lives on every salt minion and can be called from the salt master or from any peer which is allowed access." [14:27:45] So yes, the clients need *something* [14:28:00] petan: It was more that it wasn't actually in /h/w/c ;) [14:28:16] ah [14:28:30] Reedy: guess I will figure it out :-] [14:28:32] long live NFS [14:28:40] Reedy: I did write the puppet class for beta :D [14:28:52] hashar: Surely Ryan would've written puppet modules to do the client installs? [14:28:59] I hope so [14:29:14] damn you timeouts [14:29:52] oh [14:30:04] a salt client is named a 'minion' [14:31:19] yeah, looks to be... [14:32:16] fun [14:32:26] I like discovering new technology [14:32:39] tbh, if we were planning to move to this and ryan hadn't written the puppet configs, I'd be scared! [14:33:38] Reedy: so role::salt::minion is installed by default via the 'base' class [14:33:42] so we have it already :-] [14:33:43] \O/ [14:33:46] sounds sensible [14:33:47] wheee [14:35:27] err: /Stage[main]/Accounts::L10nupdate/Ssh_authorized_key[l10nupdate@fenari]: Could not evaluate: can't find user for 996 [14:35:30] grmlblblb [14:42:56] If I start moving files on bastion I'm going to break shit, aren't I? [14:44:02] Reedy: on the beta bastion ? 
[14:44:03] yup
[14:44:08] the apaches configurations are not in puppet
[14:44:17] and have their document root pointing somewhere
[14:44:23] lol
[14:44:33] I wonder how much point there is actually doing it here first...
[14:44:52] maybe /usr/local/apache/common which symlink to /usr/local/apache/common-local which might symlink to /home/wikipedia/common which in turn link to /data/project/apache/conf or something
[14:44:57] well that is certainly a HUGE mess
[14:44:57] lol
[14:45:10] I think we should get git deploy / salt deployed on the deployment-bastion host
[14:45:29] make sure the salt minion/clients properly fetch the mediawiki content under /srv/deployment/something
[14:45:34] then once THAT works
[14:45:47] But they need somewhere to pull it from...
[14:45:47] alter the beta Apaches configuration to point to the new dir.
[14:46:13] apparently the minion pull using http on the deployment-bastion host
[14:46:13] It might make more sense just doing it on fenari/whatever that host in eqiad is
[14:46:26] and the role class install the files on that machine under /srv/deployment
[14:46:36] oh I am talking about beta )
[14:46:37] not fenari
[14:46:59] I think Tim was suggesting we create this base /srv
[14:47:01] get the files in it
[14:47:22] remove the old /usr/local/apache, and at the same time, symlink those back to /srv
[14:47:46] yup
[14:47:47] `
[14:47:54] sounds like an almost good idea :-]
[14:48:02] But the question is whether we should do it on labs first..
[14:48:19] why not ?
[14:48:30] if we screw something on labs, that is safer for production i guess
[14:48:41] then it does not closely match production
[14:48:44] lol
[14:49:03] and we still have shell scripts on fenari which are not in puppet / wikimediamaintenance nor the deb package that hold scripts
[14:49:09] lols
[14:49:16] yeah that is a mess
[14:49:48] anyway on beta I would like git-deploy to work / have the file pushed on apaches
[14:49:52] but not serving content yet
[14:50:05] once we are happy with git deploy we can switch the apaches
[14:50:15] * Reedy starts copying files around
[14:50:26] ie common-local to /srv/old
[14:50:38] Reedy: what for ? ;)D
[14:50:52] so they're in the new location
[14:51:12] then /usr/local/apache/common-local can symlink back to it
[14:51:35] This starts getting confusing if I overthink it
[14:51:50] Reedy: note that the / partition on beta apaches is very small
[14:52:04] so /srv should symlink to /data/project/apaches_srv or something
[14:52:13] lol
[14:52:17] (and that should be in Puppet) ;-]
[14:52:24] hmm
[14:52:27] as well as moving files around, we need to start making the new structure
[14:52:31] or we could use the /dev/vdb maybe
[14:52:40] let me check
[14:52:44] lol
[14:53:01] each instance has a /dev/vda1 mounted on /
[14:53:15] which is small (10G)
[14:53:35] and a /dev/vdb1 (300G on our apaches) which is mounted on /mnt
[14:54:38] so hmm
[14:55:13] lol
[14:55:34] I'm sure this just keeps getting more annoying
[14:55:52] I would make /srv/ a symbolic link to /mnt/srv which is a local disk to the instance
[14:56:40] /dev/vdb 43G 185M 40G 1% /mnt
[14:57:41] lrwxrwxrwx 1 root root 8 Jan 7 14:57 srv -> /mnt/srv
[14:58:00] except you need to have the symlink to be handled by puppet :-]
[14:58:28] not sure in which class that should belong to though
[14:58:55] heh
[14:59:16] But making puppet do everything while we're still organising seems daft
[15:07:16] hashar: so do the beta apaches serve directly from a remote FS mounted locally?
[15:08:18] ie gluster i guess [15:08:37] 30349 root 20 0 413m 204m 2760 S 49 5.2 1381:48 glusterfs [15:09:08] so apaches have /usr/local/apache symlink to /data/project/apache [15:09:18] Reedy: btw don't trust df , sizes are all wrong [15:09:35] I wasn't using it for sizes [15:09:40] then /data/project/apache/common symlink to /data/project/apache/common-local [15:09:54] better safe than sorry [15:10:23] on the deployment-bastion host, /home/wikipedia/common is also a symbolic link to /data/project/apache/common-local [15:10:27] hmm [15:10:48] lol [15:11:09] I think we should get rid of /data/project entirely (since it is a shared filesystem) [15:11:16] and use /srv/ pointing to /mnt/srv [15:11:16] Next you're going to tell me that they're actually served from fenaris /home via sftp [15:11:22] figuring out which class could host that [15:11:29] Reedy: lol [15:15:28] ah role::applicationserver something [15:16:29] I'm sure I'm over thinking the whole thing [15:18:14] bah [15:18:21] the role::applicationserver class no more exist [15:18:25] refactored [15:18:43] role::applicationserver::appserver [15:20:03] fun fact [15:20:05] I should've started this copy in a screen session [15:20:11] I can't add a new puppet class under labsconsole :/ [15:20:27] Reedy: I don't think you need to copy anything [15:20:41] Reedy: as I understand it git deploy will handle it for us [15:21:42] [00:07:37] maybe we should try to avoid updating these 700 /usr/local/apache references more than once [15:21:42] [00:08:04] we could change scap to push out to a new location, say /srv/old [15:21:42] [00:08:21] then replace the old /usr/local/apache with a symlink farm [15:22:17] I'm confused. [15:23:34] Very confused. 
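One way to un-confuse a chain like /usr/local/apache/common → common-local → /home/wikipedia/common → /data/project/... is to walk it one hop at a time and print every intermediate target. A small sketch (the directory names used for testing are made up, not the real beta layout):

```python
import os

def symlink_chain(path):
    """Follow a path one symlink hop at a time, returning every
    intermediate target until a real file/directory (or a loop)."""
    chain = [path]
    seen = set()
    while os.path.islink(path):
        if path in seen:           # guard against symlink loops
            chain.append("<loop>")
            break
        seen.add(path)
        target = os.readlink(path)
        if not os.path.isabs(target):
            # relative targets resolve against the link's directory
            target = os.path.join(os.path.dirname(path), target)
        path = os.path.normpath(target)
        chain.append(path)
    return chain
```

Unlike `os.path.realpath`, which jumps straight to the final destination, this keeps the whole hop list, which is the part worth auditing before ripping links out.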
[15:26:06] I'm giving up till later on
[15:31:39] I give up till later too
[15:31:46] can't add the required class :/
[15:31:51] :(
[15:32:09] https://labsconsole.wikimedia.org/wiki/Special:NovaPuppetGroup for deployment-prep does not let me add any class :D
[15:32:48] Can you not get to https://labsconsole.wikimedia.org/w/index.php?title=Special:NovaPuppetGroup&action=create&project=deployment-prep ?
[15:34:55] yeah that one creates a puppet group
[15:35:00] which groups classes
[15:35:07] I did create a new group named 'group'
[15:35:14] does not let me edit the classes anyway
[15:35:18] Ryan would know :)
[15:59:53] !g Icf3a73387bc0a855ebec3805747685b1e8cc71e0
[15:59:53] https://gerrit.wikimedia.org/r/#q,Icf3a73387bc0a855ebec3805747685b1e8cc71e0,n,z
[16:05:18] I am out
[16:05:36] Reedy: will most probably be back later this evening to sync with Ryan
[16:05:39] :)
[16:06:07] * hashar waves
[16:18:49] Nikerabbit, <-- why underscores?
[16:24:16] MaxSem: that's what solr 4 expects with the example configuration
[16:25:00] still, with these underscores it looked like some magic var
[16:25:14] maybe, content_version or something?
[16:25:37] anyway, you or siebrand need to answer my email about all this stuff
[16:26:27] MaxSem: It's on my list, but I may take about as much time as in the first roundtrip.
[16:26:44] lol
[16:37:52] New patchset: Yurik; "whitelist Yuri Astrakhan (yurik)" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42570
[16:51:40] hello
[16:52:10] is there something or somebody i should poke about fixed bugs that really need a point-release?
[16:52:34] right now i'm thinking about things described in second paragraph of https://bugzilla.wikimedia.org/show_bug.cgi?id=42452#c41
[17:43:39] Reedy: if you're around, any idea what's happening with the new version of this file?
try clicking the new file itself and the new thumb, get the old file and old thumb http://test2.wikipedia.org/wiki/File:0.8622626746865945.png [18:22:53] ori-l: hi :) so I think you get pep8 on the EventLogging extension :-) Not fully tested though [18:29:28] New patchset: Hashar; "mwext-EventLogging-pep8 is now blocking on failure" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42583 [18:32:54] New review: Ori.livneh; "Weeeee. Thanks." [integration/zuul-config] (master); V: 0 C: 1; - https://gerrit.wikimedia.org/r/42583 [18:59:31] valeriej: ping [19:13:27] hexmode: I believe "pong" is the correct response? :) [19:13:50] :) [19:14:20] valeriej: so, I don't know what sort of information you're looking for, but maybe I can help you in any case [19:15:24] * Reedy waves his hand at hexmode [19:15:35] This isn't the information you are looking for [19:15:54] hexmode, Well on the page you mentioned that the information could lead to the creation of a wizard. Do you think that's still a viable option? [19:16:12] Reedy: are you planning to go to the hack-thing in Amsterdam? [19:16:21] Whowhatwherewhenwhyhow? [19:16:37] Instead of the de one? [19:17:02] are they still doing the de one? [19:17:09] I don't see it on Events [19:17:42] valeriej: sorry, reading [19:18:00] hexmode: no worries. [19:18:47] valeriej: I think a wizard is one way to solve it. Maybe there is a better way. But first we have to agree on what "it" is. ;) [19:19:27] FWIW: I think it is the problem of where to report a problem that you run into on Wikipedia or its sister sites. [19:20:12] And that could be a software issue or a legal issue or just an education issue [19:20:34] I think the WMF is doing a lot of great things around the education front [19:20:41] (e.g. Tea Room) [19:21:04] hexmode: Ah, I see your response on your page. And yes, what is "it". I think that's a good "it" for me to start with. 
[19:21:05] and the resources are there for technical and legal issues
[19:21:17] csteipp: can you audit https://gerrit.wikimedia.org/r/#/c/37478/ (it's quite small)?
[19:21:32] It's something to focus on. I don't want to have too large of a scope.
[19:21:54] AaronSchulz: Sure
[19:21:56] valeriej: ok... let me know if I can help more :)
[19:22:24] hexmode: Thank you, I will!
[19:22:39] <^demon> csteipp, AaronSchulz: It's probably nicer now, since there's no shelling out ;-)
[19:23:13] No shelling out!?! Where's your spirit of adventure....
[19:23:36] <^demon> No nfs either!
[19:24:19] * AaronSchulz looks at fetchArchiveInfo
[19:24:34] ^demon: are / escaped?
[19:24:58] I'm just thinking of people triggering strange urls fetched server side
[19:25:20] in terms of $EXT/$REF
[19:25:33] <^demon> Ah, yeah, we should escape / in those.
[19:27:58] <^demon> I'm wondering if the whole thing should be urlencoded, or if it should just be the /.
[19:28:05] <^demon> I can't find any docs on the subject.
[19:30:09] <^demon> Eh, should probably just urlencode the whole thing to be safe.
[19:30:34] Yes please on the urlencode.
[19:30:38] ^demon: well double-encoding is not always "safe" ;)
[19:31:06] but yeah that's probably fine to encode in this case
[19:31:15] would be nice to know for sure what the api expects
[19:31:33] <^demon> Whoops.
[19:34:39] * AaronSchulz looks at the docs
[19:35:29] ^demon: yeah, just encode, since we don't want ?/& and such messing about
[19:36:01] <^demon> Yeah, I'm thinking the same thing
[19:36:36] <^demon> Done.
[19:39:01] ^demon: maybe just use rawurlencode?
[19:39:18] urlencode is somewhat legacy in its space handling
[19:40:05] <^demon> Done.
[19:45:50] MaxSem: is there some sort of bp that says the name should be changed?
[19:47:50] ^demon: what if the disposition changes to not have 'attachment; filename=', say if it's rfc 6266 format?
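The urlencode/rawurlencode distinction above has direct analogues in Python's urllib: `quote_plus` encodes spaces as '+' like PHP's legacy `urlencode`, while `quote` does RFC 3986 percent-encoding like `rawurlencode`. A quick sketch with a made-up $EXT/$REF-style value (not the actual API's input):

```python
from urllib.parse import quote, quote_plus

ref = "REL1.20/extension name"   # hypothetical $EXT/$REF-style value

# quote_plus ~ PHP urlencode: legacy form encoding, space -> '+'
legacy = quote_plus(ref)
# quote ~ PHP rawurlencode: RFC 3986 percent-encoding, space -> '%20';
# safe="" so '/' is escaped too and cannot alter the URL path
strict = quote(ref, safe="")

# legacy == "REL1.20%2Fextension+name"
# strict == "REL1.20%2Fextension%20name"
```

The path-safety point in the chat is the `safe=""` part: with the default `safe="/"`, a slash in the value would pass through and change which URL is fetched server-side.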
[19:47:50] Nikerabbit, none - I just thought if we're naming the schema, it could make sense to rename it when we change it [19:47:59] I intentionally didn't -1 it [19:48:20] <^demon> AaronSchulz: Then we'll fix the extension. [19:48:41] heh, maybe it could check if that exist first instead of assuming it does in the str_replace [19:49:10] <^demon> Nitpicking... [19:49:15] :) [19:49:29] it feels like screen scraping a little [19:49:49] MaxSem: does the schema name affect anything else? I might as well do it if not [19:51:13] ^demon: 'archive' is just used for that message? [19:51:36] I guess it doesn't matter much then [19:52:01] Nikerabbit, I don't think so [19:52:42] yeah, just used for the 'tar -xzf wikimedia-mediawiki-extensions-FlaggedRevs-928c562.tar.gz -C /var/www/mediawiki/extensions' bit [19:53:14] MaxSem: are you doing solr updates today? what are you updating? [19:53:34] ^demon: merged [19:53:54] Nikerabbit, enabling replication [19:54:07] (it will not affect Vanadium) [19:54:44] MaxSem: could I perhaps piggy back in your deployment window for my schema update? [19:54:57] Nikerabbit, are we on 4.0 already? [19:55:06] MaxSem: nope [19:55:31] is this schema 3.6 - compatible? [19:55:37] MaxSem: yes [19:55:54] ori-l: replied [19:56:19] Nikerabbit, I personally have no objections. is it something like https://wikitech.wikimedia.org/view/Solr#Upgrading_schema ? [19:57:06] MaxSem: that should do the trick indeed [19:58:31] New review: Hashar; "Good to me. Please poke over people on IRC to get more +1 :-]" [integration/zuul-config] (master); V: 0 C: 1; - https://gerrit.wikimedia.org/r/42570 [20:13:48] MatmaRex: could you test https://gerrit.wikimedia.org/r/#/c/38493/ ? [20:13:53] Apparently there's nobody to merge it [20:16:00] hashar: Hi, there you are [20:16:09] hashar: I'd like to make some progress on qunit. [20:16:55] hashar: However there's some dependencies. 
Things we need to migrate to grunt (or otherwise be able to perform) .Namely 1) installing mediawiki, 2) script to snapshot to a public dir and remove when done. [20:17:43] hashar: also, I see you don't seem to be on the grunt bandwagon, you prefer bash to do everything and zuul jobs. [20:18:23] Krinkle: looking [20:20:04] hashar: also, what's the status on vagrant/vm/secure testing? [20:20:48] oh man [20:20:59] so many questions :-] [20:21:34] so installation of mediawiki is still triggered via the good old ant script cause it simply work :-] [20:21:47] definitely want to migrate that to something else though (aka grunt I guess) [20:22:10] the snapshot to public dir could be a dedicated job that will install mediawiki directly there [20:22:29] I used to do a copy of the MediaWiki-Git-Fetching installation and then do some sed to fix the paths manually [20:22:54] right, the paths. Yeah, better install there directly. especially if that is the only purpose of that install. [20:23:03] it is way easier to snapshot mediawiki to the public dir then run maintenance/install.php all over again (that is fast, less errors) [20:23:05] right, so we only need the installer then. [20:23:10] Yeah [20:23:32] and to get them in a temporary semi-public/secure location. [20:23:34] Krinkle: tested https://gerrit.wikimedia.org/r/#/c/38493/ , works as expected [20:23:51] hashar: can we do it in a non-public way? [20:23:58] It needs only be accessible to the grunt shell. [20:24:00] (localhost) [20:24:12] or are all ports public by default on that machine? [20:26:09] Krinkle: yeah sure [20:26:25] create a virtual host matching 127.0.0.1 / restricted to localhost only [20:27:08] grr [20:27:33] I mean: create a virtual host matching 'localhost' and restricted to 127.0.0.1 / ::1 [20:28:00] Krinkle: as for me not using grunt, I haven't really looked at it. I attempted to write some JS to parse .yaml files but failed :-] [20:28:12] Krinkle: I then gave up and moved to something else. 
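The localhost-only restriction discussed here comes down to the same idea at any layer: make the test install reachable only via the loopback address, so other hosts cannot hit it regardless of name resolution or firewall rules. A minimal sketch of loopback-only binding (illustrative, not the actual Jenkins/Apache setup):

```python
import socket

def loopback_listener():
    """Open a listening socket bound to the loopback interface only.
    Port 0 asks the OS for a free ephemeral port, which suits a
    throwaway per-build test server."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    return srv
```

An Apache vhost achieves the equivalent with an allow-from-127.0.0.1 rule, but binding to loopback is stricter: the port simply is not open on any external interface.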
[20:28:31] hashar: could still be faked if someone puts "localhost" in their hosts file to point to that server, but atleast no mass xss abilities. [20:28:55] hashar: there is a standard module for that. [20:29:08] hashar: and the vm/secure testing? [20:29:15] MaxSem: btw thanks for making ApiSandbox -- https://www.mediawiki.org/wiki/Events/MediaWiki_Workshop_-_Kolkata#Report found it handy :-) [20:29:28] whee:) [20:29:48] (I really like to get rid of the unmaintainable node_modules situation we have going on and start putting the grunt file in the repositories instead so that it can be customised easily :) and used locally by developers as well. [20:29:56] Krinkle: then you can restrict that 'localhost' virtual host to only accept request from 127.0.0.1 and reject everything else. Something like: allow from 127.0.0.1, deny from all [20:30:21] hashar: ah, nice :) [20:30:25] Krinkle: haven't you merged the grunt tasks repository under integration/jenkins ? [20:30:36] I did, or atleast submitted a patch for it [20:30:44] hashar: why are you asking? [20:30:51] https://gerrit.wikimedia.org/r/#/c/38645/ [20:31:22] hashar: I don't self-merge, so I"m waiting for you. [20:31:34] Krinkle: ahh I knew I have seen something like that :) [20:32:37] Krinkle: lets break jenkins tonight [20:33:04] hashar: I don't know how to make it consume the xml from zuul/jenkins. 
I'd like to learn by example though
[20:33:13] (well I know it in jenkins, not in zuul)
[20:33:14] Nikerabbit, you can ask in #-ops regarding your stuff
[20:33:33] Krinkle: ah you get the result in tmp/checkstyle/jshint.xml
[20:34:31] hashar: yeah, to make cleaning easier I don't want to have files hanging in all kinds of root directories and files
[20:34:42] prefer everything in a common tmp/ that can be nuked with no warning
[20:35:13] Krinkle: well the workspace directories are nuked on each run
[20:35:18] so that is unneeded
[20:36:03] I know the xUnit jenkins plugin checks whether the .xml file has been updated by the build and the plugin will mark the build as a failure whenever the file did not get updated
[20:36:17] I guess the violations plugin (which handles checkstyle XML files) would do the same
[20:40:04] hashar: okay, so... what?
[20:40:12] Krinkle: also the reason for me hacking up stuff in shell, is that it was ten times easier to bootstrap something by the end of december
[20:40:22] Krinkle: but need to migrate that to grunt :-)
[20:41:00] so checkstyle results are handled with the violations plugin. I have set that up for the python linter (named pep8) already
[20:41:31] the repo ssh://gerrit.wikimedia.org:29418/integration/jenkins-job-builder-config.git has a yaml file named python-jobs.yaml
[20:41:39] I know which plugins do this.
[20:41:45] How do you control it from zuul?
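[Editor's note: the tmp/checkstyle/jshint.xml artifact mentioned above would come from running JSHint with a checkstyle-format reporter. A hedged sketch: the exact CLI flag has varied across JSHint versions, and the paths simply follow the layout described in the chat.]

```shell
# Sketch: run JSHint and emit checkstyle-format XML into tmp/,
# where the Jenkins violations plugin can pick it up.
mkdir -p tmp/checkstyle
# Newer JSHint CLIs accept a built-in "checkstyle" reporter name;
# 2013-era versions used a separate --checkstyle-reporter flag instead.
jshint --reporter=checkstyle . > tmp/checkstyle/jshint.xml
```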
[20:41:54] which is a basic wrapper using pep8 macros defined in macros.yaml
[20:42:11] you then get the job generated / loaded in our jenkins install
[20:42:35] zuul is configured using ssh://gerrit.wikimedia.org:29418/integration/zuul-config.git which contains a layout.yaml file describing the overall workflow
[20:43:07] i have described how to deploy a Zuul configuration change at https://www.mediawiki.org/wiki/Continuous_integration/Zuul
[20:43:15] but did not explain the layout.yaml file
[20:43:25] it has a few comments though
[20:44:28] there's so much stuff involved, it seems overkill.
[20:44:29] integration/jenkins
[20:44:52] integration/jenkins-job-builder
[20:44:52] integration/jenkins-job-builder-config
[20:44:52] integration/zuul
[20:44:53] integration/zuul-config
[20:45:53] I have split the software (which are basically forks from upstream) and the configuration
[20:46:20] one reason is that I eventually want to reload zuul whenever a change is made to the zuul-config
[20:46:50] by adding a jenkins job on post merge that will update the zuul configuration directory, trigger a reload, make sure it works and report success :)
[20:47:00] and roll back automatically if anything has gone wrong.
[20:47:06] same for jenkins job builder.
[20:47:25] merging a change in jenkins-job-builder-config should trigger a run of jenkins job builder and update Jenkins
[20:47:42] (then one day we might end up with an IRC bot to let us add jobs :)
[20:47:45] I know how it works :)
[20:48:17] anyway, basic overview is at https://www.mediawiki.org/wiki/Continuous_integration/Git_repositories
[20:48:25] btw, I'd like to get rid of or improve the failure message for -merge
[20:48:30] I think I wrote that after you mentioned we had too many repos :)
[20:48:37] It has no link to the job run, thus no info on what failed.
[20:48:44] The message is useless.
[20:48:54] MaxSem: ask what?
[20:49:10] I know, I said I find it too complicated.
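[Editor's note: a jenkins-job-builder job of the kind described above might look roughly like this. This is a minimal sketch with invented job and macro names, not the actual contents of python-jobs.yaml or macros.yaml.]

```yaml
# Hypothetical sketch of a jenkins-job-builder job that wraps a pep8
# builder macro (assumed to be defined in macros.yaml) and publishes
# the results through the violations plugin.
- job:
    name: 'example-extension-pep8'
    defaults: global
    builders:
      - run-pep8          # hypothetical macro name
    publishers:
      - violations:
          pep8:
            min: 0
            max: 10
            unstable: 10
            pattern: '**/pep8.txt'
```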
Not saying I don't know how it works or that you didn't document it.
[20:49:16] Krinkle: that message? :: -> failure-message: Change could not be automatically merged. Please rebase it and reupload.
[20:49:20] Nikerabbit, about deploying your stuff
[20:49:20] Yes
[20:49:42] Krinkle: feel free to amend it :)
[20:49:51] hashar: Why is it there?
[20:49:54] you might even be able to enter a multiline comment
[20:50:00] it blocks the default message that is actually useful.
[20:52:05] k
[20:52:13] Krinkle: so just remove it entirely so we just point to the -merge/console ?
[20:52:31] ideally the message should be appended after the default message
[20:52:32] yeah, that's the default right? Or is there an option I need to enable instead of this?
[20:53:01] I guess "-merge: FAILURE" is clear enough. It's not a blog post :)
[20:53:07] failure-message: overrides the default, which is to show a list of console URLs to each job + their status
[20:53:16] I guess so
[20:53:24] so simply remove the three lines and that will do it :-]
[20:53:50] but then I thought that having a short and simpler message instructing to rebase would be less cryptic to users than a list of URLs
[20:53:57] that makes it obvious you need to rebase.
[20:54:02] (for newbies)
[20:54:46] three lines?
[20:54:51] New patchset: Krinkle; "Remove the failure-message override from -merge jobs." [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42611
[20:55:21] hashar: I like user friendliness, pretty extreme at it. But I think that at this point, the message here is not going to help them.
[20:56:40] Krinkle: I told you three lines :-]]]]]]]]]] https://gerrit.wikimedia.org/r/#/c/42611/
[20:56:43] magic trick!
[20:56:54] Krinkle: by the way I have talked to my wife about your tricks.
[20:57:08] Krinkle: she instantly told me: "you must REALLY have enjoyed that night".
[20:57:19] Krinkle: to which I replied: "Oh hell I did".
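[Editor's note: in Zuul's layout.yaml the override they discuss removing is an attribute on a pipeline definition. A hedged sketch of the shape involved, using the quoted message; the pipeline name, manager, and trigger here are abbreviated placeholders, not the actual Wikimedia layout.]

```yaml
# Hypothetical sketch of a Zuul pipeline carrying the failure-message
# key discussed above; deleting it restores Zuul's default report,
# a list of per-job console URLs with their statuses.
pipelines:
  - name: example
    manager: IndependentPipelineManager
    trigger:
      - event: patchset-created
    failure-message: Change could not be automatically merged. Please rebase it and reupload.
```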
[20:57:27] Krinkle: thanks for your tricks :-] Was awesome
[20:58:30] The pleasure was all mine, I enjoy making people happy!
[20:58:44] awwww
[20:59:07] hashar: I'm in the office for 3 weeks january/feb. Are you scheduled for 2013 yet?
[20:59:16] Krinkle: not at all
[20:59:45] Krinkle: there is no real reason for me to get to SF yet.
[20:59:51] hashar: I admit though the first line "Krinkle: by the way I have talked to my wife about your tricks." kept me wondering (what tricks I do could possibly be interesting to your wife, assuming programmer context)
[21:00:02] Krinkle: still have to bump my roadmap for the next six months.
[21:00:24] But I knew what you referred to after the second line.
[21:00:29] okay
[21:00:50] working on some new tricks to put up my sleeve.
[21:01:01] \O/
[21:01:23] Not actually in my sleeve, though doing so isn't uncommon for magic.
[21:01:40] Alrighty, dinner's ready.
[21:01:41] ttyl
[21:01:50] * hashar loads french touch house music in his headphones and bumps up the volume tooououou toouou touou touoou toouuu
[21:01:56] * hashar waves at timo
[21:02:08] sumanah: I did not forget you :-] Still owe you a few replies.
[21:02:23] sumanah: good news is that today I almost cleaned up my 487 new emails mailbox :-]
[21:03:24] thank you hashar!
[21:03:29] and I hope your family is doing well
[21:03:34] also hashar re coworking I have news
[21:03:46] sumanah: yeah family is fine :-]
[21:04:04] hashar: I have a coworking space I can go to for free one day per week to cowork with people who also work on free software :D
[21:04:06] sumanah: ahh have you found a coworking place in your neighborhood?
[21:04:06] it's nice
[21:04:12] !!!!
[21:04:12] Stop using so many exclamation marks !
[21:04:14] No, it is an hour's train ride away :(
[21:04:20] * marktraceur snickers at wm-bot
[21:04:52] sumanah: guess you could find one around your place or bootstrap one from scratch :-]
[21:05:06] or we could ask for a WMF branch office in NYC hehe
[21:05:58] :-) I do enjoy working from home a lot of the time, there is a lot of freedom and convenience, but I just need to cowork, like, 2 days a week to be happier. So I'll just work at a cafe or a friend's place for 1 additional day per week and be happy
[21:05:59] I think
[21:06:25] I tend to stay at home in the morning
[21:06:37] then move to the coworking place to lunch with the people there
[21:07:00] if I stay home, I end up taking a two-hour nap in the afternoon which is not really productive :-]
[21:09:56] <^demon> AaronSchulz: https://gerrit.wikimedia.org/r/#/c/42670/ :)
[21:11:12] <^demon> http://noc.wikimedia.org/~demon/extdist.png - what it looks like now
[21:20:10] ^demon: heh, I forgot about the
[21:20:33] <^demon> Oh, CodeReview handles that on mw.org :p
[21:20:43] ...
[21:21:01] <^demon> We should probably move it to WikimediaMessages though.
[21:22:19] New patchset: Hashar; "split whitespace lint check to make it non voter" [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/42672
[21:23:45] New patchset: Krinkle; "Remove the failure-message override from -merge jobs." [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42611
[21:24:04] New review: Hashar; "Verified on Raymond change https://gerrit.wikimedia.org/r/#/c/42601/ which did introduce a legitimat..."
[integration/jenkins-job-builder-config] (master); V: 2 C: 2; - https://gerrit.wikimedia.org/r/42672
[21:24:04] Change merged: Hashar; [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/42672
[21:24:35] New review: Hashar; "Job updated:" [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/42672
[21:25:33] hashar: https://gerrit.wikimedia.org/r/42611
[21:26:40] * hashar counts
[21:26:40] :)
[21:26:54] Change merged: Hashar; [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42611
[21:26:58] Krinkle: deploying
[21:27:09] unless you wanna do it ?
[21:28:59] deployed :-D
[21:29:17] New review: Hashar; "deployed, thanks for the tweak." [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42611
[21:40:16] New patchset: Hashar; "whitespaces checking jobs are non voting" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42677
[21:40:16] New patchset: Hashar; "trailing whitespaces check for mediawiki/core" [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42678
[21:41:40] Change merged: Hashar; [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42677
[21:41:49] Change merged: Hashar; [integration/zuul-config] (master) - https://gerrit.wikimedia.org/r/42678
[22:00:03] Dereckson: submitting a TCL script, are you serious ?
:-]
[22:02:53] anyway commented :-)
[22:02:55] hashar: join the club
[22:02:59] https://gerrit.wikimedia.org/r/#/c/40295/
[22:03:10] Nikerabbit: yeah TCL is one of the first languages I learned
[22:03:19] loved it, then switched to the evil PHP
[22:03:35] haha
[22:03:38] http://www.eggheads.org
[22:03:44] and wrote an FTP server :-]
[22:06:28] awwwww, tcl :)
[22:07:10] lovely :)
[22:15:35] that is all for tonite
[22:15:37] * hashar waves
[22:17:55] Change abandoned: Jeroen De Dauw; "Need to make changes somewhere else now" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/32836
[22:18:03] Change abandoned: Jeroen De Dauw; "Need to make changes somewhere else now" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/35686
[22:19:44] New review: Hashar; "Please ping other people to get more +1 :)" [integration/zuul-config] (master); V: 0 C: 1; - https://gerrit.wikimedia.org/r/42574
[22:29:12] Greenpeace uses a full TCL website, with OpenACS as web framework.
[22:29:54] Of course, PHP should be favoured for serious stuff, it's our primary language.
[22:36:56] Doesn't XChat still use Tcl scripts?
[22:52:14] marktraceur, can do. Also does perl and python
[23:05:07] lwelling: Hey Luke! Hope you'll be able to join us in hosting the Echo IRC briefing for developers tomorrow morning at 11.
[23:06:00] especially for answering questions about the job queue aspects and such