[00:03:59] It doesn't [00:04:05] It maps onto "anything, just give me *anything*" [00:04:12] Which is usually memcached || APC || DB [00:14:04] aha, thanks [00:22:24] ori-l: It's also a way to get around $wgFooCacheType = CACHE_NONE; , because CACHE_ANYTHING is guaranteed to give you a CACHE_DB if all else fails [00:22:49] (CACHE_NONE maps to a fake caching class with an empty implementation) [00:39:07] RoanKattouw: thanks [01:46:37] AaronSchulz: is it OK to use $wgScoreFileBackend = 'global-NFS', shared with math? [01:47:40] going to nas1 alone? [01:48:38] 9 different backends to choose...decisions decisions ;) [01:48:58] ah right, math uses global-multiwrite [01:49:36] for some reason I thought you would have picked local-multiwrite [01:49:47] * AaronSchulz tries to remember [01:53:58] I think it will work to make it global [01:54:02] yep [01:54:08] I was looking at render() [01:54:23] you hash the lillypond text, code, and lang [01:54:42] yeah, there is that part of it, and there is also override_midi [01:54:50] and use that for dest_storage_path [01:54:58] which is basically an open timidity server [01:55:13] I remember the path handling in that extension being slightly confusing [01:55:38] override_midi uses the SHA-1 of the entire input file as a destination file location [01:55:52] $sha1 = $file->getSha1(); [01:55:53] $oggRelDir = "override-midi/{$sha1[0]}/{$sha1[1]}"; [01:55:53] $oggRel = "$oggRelDir/$sha1.ogg"; [01:59:13] probably doesn't need to be sharded [01:59:13] what controls the mapping of swift containers to public URLs? [01:59:50] files/swift/SwiftMedia/wmf/rewrite.py in puppet [01:59:53] you mean don't add it to shardViaHashLevels? [02:02:39] yes [02:03:38] grr, labs makes everything twice as complicate [02:03:39] d [02:04:36] I guess I can just let it be broken in labs and assume someone is going to fix it later [02:04:56] how many files might score have in the next 5 years? I doubt tens of millions [02:05:19] probably only tens of thousands [02:05:25] if you do shard, you also want to update the shard_container_list in manifests/role/swift.pp [02:05:40] but it seems premature so lets not bother [02:06:00] depends how many lilypond scores need to be localised, I guses [02:06:20] but musicians thought of that centuries ago and already write everything in italian [02:06:31] and if we switch to ceph, it really won't matter [02:06:34] match = re.match(r'^/(?Pmath)/(?P(?P[0-9a-f])/(?P[0-9a-f])/.+)$', req.path) [02:06:43] yeah so 1 line change there should do it in rewrite.py [02:06:49] I remember thinking about that before [02:07:08] apergos: around? [02:10:40] TimStarling: heh, there is also squid.conf.php [02:10:44] acl swift_math url_regex ^http://upload\.wikimedia\.org/math/ [02:11:03] actually you could just update both later at once, in which cases reads will just come from nas1 and work [02:11:57] or, hrm [02:12:11] apergos: is any webserver actually serving nas content? ;) [02:14:02] https://gerrit.wikimedia.org/r/#/c/34251/ [02:14:23] just a draft since there is more to do [02:17:43] hmm [02:17:49] score-render versus lilypond [02:18:41] maybe the URL should just be /score to save confusion [02:19:10] yeah, that would involve less rewrite.py changes [02:21:56] so if there is no sharding, how can self.shard_containers='all' work? [02:22:13] or is that option just there to throw me off? [02:22:47] where do you that? [02:25:50] # Add 2-digit shard to the container if it is supposed to be sharded. [02:25:51] # We may thus have an "actual" container name like "." 
[02:25:51] if ( (self.shard_containers == 'all') or \ [02:25:51] ((self.shard_containers == 'some') and (container in self.shard_container_list)) ): [02:25:51] container += ".%s" % shard [02:26:07] for timeline, shard is set to an empty string [02:26:16] so it'll be container += "." in that case [02:26:23] which I assume would break it [02:27:47] self.shard_containers is not "all" and timeline is not in the list [02:29:12] ok [02:29:43] it's just that usually, when I see a configuration variable, I try to make the code work regardless of what it is set to [02:31:32] well self.shard_containers == 'all' is bs anyway [02:31:48] maybe I should get faidon to delete that before anyone uses it ;) [02:32:39] anyway, for sure the is no webserver for the netapps [02:32:42] * AaronSchulz asked Ryan [02:32:58] I wonder if traffic that doesn't match acl regexes is already directed to swift [02:33:05] someone may have made that change already [02:33:11] * AaronSchulz looks at the squid conf [02:33:52] you know, it was complicated enough to make Score work that I've been thinking that we should provide a generic interface for Score/Math-like extensions in the core [02:34:34] something more like the MediaHandler hierarchy, just give me some data and I'll give you a file [02:35:18] and it looks like no one did that [02:35:55] TimStarling: and store everything by sha1 and stuff :) [02:36:05] * AaronSchulz sometimes fantasized about such a layer [02:37:12] yeah, it would be moderately complicated, but that's the point of doing it, so that you don't have to do a moderately complicated thing every time [02:38:36] do we still update wikimedia-task-appserver? [02:39:00] I guess I should have picked a channel with more ops people [02:39:06] heh, I guess you will need to update swift...then depool squid, add the acl to test it, then do the other squids [02:39:27] well there are like no ops people in the office anymore, that's for sure [02:39:42] ...moderately complicated :) [02:40:31] TimStarling: I remember thinking about adding a more simple URL => container mapping the acls and rewrite to at least cut down on the work in the future [02:41:14] the feature I really want is remote rendering [02:41:22] so we don't have to install all these packages on every server [02:41:28] with 404 handling too? [02:42:01] yeah, but then there's the question of where to store the input data [02:42:43] if you have, say, 2KB of input text, and a 404 comes in for some SHA-1, you have to look up the input text [02:43:24] I guess it's not a big deal [02:43:48] you can just have a registration interface which is called from the parser, which stores the input text somewhere, doesn't matter where [02:44:16] but it has to be permanent storage if you want to be able to expire the rendered images [02:47:48] unless they expire on there own [02:47:52] *their [02:47:54] * AaronSchulz sighs [02:48:15] ok, I'm going home and will be online later [05:58:43] ASchulz|away: no, the nas is there as a fallback but there is no web server ready to kick in [06:15:14] did you know that every time you connect to a git server with a "smart" protocol, it sends a complete list of available refs, before anything is even requested by the client? [06:17:18] that's "smart" is it? 
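A quick way to see what Tim means by the "smart" protocol's up-front chatter: git ls-remote prints the same ref advertisement the server sends on every connection, so its line count and byte size put a number on the per-fetch overhead of Gerrit's one-ref-per-change scheme. The anonymous https clone URL below is the usual Wikimedia Gerrit endpoint, shown here only as an example.

    # Rough sketch: capture the ref advertisement a smart git server sends
    # before the client has asked for anything; ls-remote prints that list.
    git ls-remote https://gerrit.wikimedia.org/r/mediawiki/core.git > refs.txt
    wc -l refs.txt    # one line per advertised ref (every Gerrit change adds refs)
    du -h refs.txt    # approximate size of the advertisement on the wire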
[06:17:29] (no I didn't) [06:18:24] it's probably not a problem if you use git in the way it was intended [06:18:45] but with gerrit, every change every considered is a ref [06:19:19] so git fetch on mediawiki/core.git will send 535KB before it even gets started [06:19:27] 8000 lines [06:19:41] and we've only been using it for half a year [06:20:05] uh oh [06:20:25] by the time wp gets to its 20 year anniversary we will be sad campers [06:22:03] yes [06:22:32] the next version of gerrit has happy camping support [06:22:42] mind you, by then, networks will probably be a bit faster [06:22:47] jgit will still suck though [09:33:15] hashar: hey! you awake? [09:33:25] DanielK_WMDE: I am [09:33:29] yay :) [09:33:36] for like 4 hours :-] [09:33:44] i poked a bit at our update.php hook and removed some stuff [09:34:01] could you check whether the memory problem is still there? [09:34:09] upgrading jenkins right now [09:34:17] but yeah will reenable the job and retrigger it [09:34:29] just let me finish the Jenkins plugins upgrade [09:34:30] ;] [09:34:52] ok, cool. if that didn't help, I suppose I'll have to sprincle the code with debug output, to see when this happens [09:34:55] DanielK_WMDE: btw, ContentHandler is really nice. [09:35:00] ori-l: thanks! [09:35:35] i was afraid people would hate me for that ;) [09:36:10] DanielK_WMDE: https://gerrit.wikimedia.org/r/#/c/34234/ [09:36:21] (not for review -- it's still WIP, just to show what i'm working on) [09:37:21] actually, don't look -- it's a pretty ugly patch at the moment :) but i'll get it in shape. [09:38:44] DanielK_WMDE: ideally we would enable debug log on builds and save the file somewhere for people to look at it [09:38:50] haven't figured out how to handle that [09:39:00] I guess that is all about setting up something in the global ExtraSettings.php file [09:39:12] and pointing $wgDebugLog to the build dir [09:42:35] New patchset: Hashar; "universal linter now use ANSI coloring of its console" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/34273 [09:42:55] Change merged: Hashar; [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/34273 [09:44:51] hm... i guess...# [10:09:57] New patchset: Hashar; "xunit configuration update" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/34276 [10:10:22] Change merged: Hashar; [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/34276 [11:55:37] I've just have moved an old MW 1.15.1 to a new Unix system (will upgrade MW itself later). The move required importing the (MyISAM) databases. After moving, the complete wiki appears to work except the search index. When performing a search the error is: .... error "145: Table './tuxedo/tuxedo_searchindex' is marked as crashed and should be repaired (localhost)". A similar error occurs when running the 'updateSearchIndex.php' maintinan [12:04:35] Tuxedo_: that's really a question for #mediawiki - or, from the sound of it, for #mysql. [12:05:07] Tuxedo_: you are seeing my MySQL error message. If MySQL things the table is damaged, let MySQL repair it. Or drop and re-create it. [12:05:17] *thinks [12:07:52] DanielK_WMDE_: Yes, you're right. I wasn't quite sure where to ask this, and so I asked in both channels. [12:10:06] Tuxedo_: #mediawiki is general mediawiki help (setup, usage, trouble shooting, extensing, etc). #wikimedia-dev is mostly the core developer team, for organisational questions but also core programming discussions. [12:13:16] Ok, thanks. 
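For the record, the repair DanielK is pointing at is a one-liner in the MySQL client, and the drop-and-recreate route needs the searchindex definition shipped with that MediaWiki version. Database and table names below are taken from the error message; the credentials and maintenance-script step are a sketch, not a confirmed recipe for 1.15.

    # Repair the crashed MyISAM table in place (names from the error above)
    mysql -u root -p -e "REPAIR TABLE tuxedo.tuxedo_searchindex;"
    # ...or, if the table has been dropped, recreate it from the searchindex
    # CREATE TABLE statement in maintenance/tables.sql (substituting the
    # tuxedo_ prefix), then repopulate it:
    php maintenance/rebuildtextindex.php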
[12:14:36] By the way, I just dropped the database table. MW does not recreate it by itself and nor does the updateSearchIndex.php maintinance script. [14:15:05] i'm trying to understand how cache control works... what cache control header(s) is mediawiki sending to the squids for logged in vs. anon users? [14:46:28] Cache-Control:private, must-revalidate, max-age=0 [14:46:30] that's what I see [15:16:03] New patchset: Hashar; ".gitreview file" [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/34312 [15:16:04] New patchset: Hashar; "Job templates for mediawiki/core" [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/34313 [15:16:21] Change merged: Hashar; [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/34312 [15:16:29] Change merged: Hashar; [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/34313 [15:27:31] New patchset: Hashar; "Zuul jobs as generated by Jenkins Job Builder" [integration/jenkins] (zuul) - https://gerrit.wikimedia.org/r/33580 [15:32:57] people discussing etherpad-> mediawiki export right now on #etherpad-lite-dev https://github.com/ether/etherpad-lite/pull/1161 [15:57:29] New patchset: Hashar; "use ANSI color wrapper by default" [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/34321 [15:58:18] New patchset: Hashar; "use ANSI color wrapper by default" [integration/jenkins] (zuul) - https://gerrit.wikimedia.org/r/34322 [15:58:51] Change merged: Hashar; [integration/jenkins-job-builder-config] (master) - https://gerrit.wikimedia.org/r/34321 [15:59:24] <^demon> Krinkle: I need some color advice :) [15:59:48] hashar: You forgot to gitsubmodule update -init, grunt still fails on ci [15:59:58] grunt-contrib-wikimedia is empty [16:00:05] ^demon: k, whats up? [16:00:38] <^demon> I've upstreamed our theme to Gerrit to make it the default. Someone wondered on CR if green/pink on diffs (eg: https://gerrit.wikimedia.org/r/#/c/27531/1/debian/control) still fit? [16:00:41] <^demon> Would you tweak them? [16:01:49] Krinkle: going to do it right now [16:02:04] Krinkle: would you mind writing a basic grunt task that will shell out to run phpunit ? ;-D [16:02:14] hashar: I will [16:02:32] Krinkle: something like: shell( 'php $WORKSPACE/tests/phpunit.php --exclude-groups Broken,Fuzz' ) [16:03:00] Krinkle: I ran submodule update --init [16:03:11] <^demon> hashar: That fuzz testing is junk :\ [16:03:18] ^demon: Well, we did change the diff colours in mediawiki, and we loosy coloured this theme after Vector/mediawiki [16:03:23] Krinkle: also mark should have added you as a sudoer on gallium and in the jenkins group. Poke him if there is any trouble [16:03:30] ^demon: but I think for source code gree/red makes perfect sense. [16:03:34] ^demon: can you link to that comment? [16:03:41] ^demon: we might want to factor out the fuzz system indeed. [16:03:47] hashar: the RT ticket says I was already in wmf? [16:03:51] I am out, going to catch my daughter [16:03:52] <^demon> Krinkle: https://gerrit-review.googlesource.com/#/c/39376/ last comment, from David. [16:03:57] Krinkle: ask in -operations :-] [16:04:03] I must leave, sorry [16:04:08] might connect later this evening [16:04:11] cya! [16:05:33] ^demon: nah, I disagree. I think the diff colours make perfect sense as they are. They weren't green/pink because gerrit was piss yellow, they were green/pink because that makes sense for code diffs. 
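Back on the earlier cache-control question: the difference is easy to confirm from the command line. The header quoted above is what a logged-in (cookied) request gets; an anonymous request to a Wikimedia wiki is normally answered with a public, s-maxage-style header so Squid can cache it, but that value isn't quoted in the log, so treat it as an expectation to verify. The hostname is just an example and the cookie is a placeholder.

    # Compare the Cache-Control header for anonymous vs. logged-in requests
    curl -sI http://en.wikipedia.org/wiki/Main_Page | grep -i '^Cache-Control'
    curl -sI -H 'Cookie: enwikiSession=PLACEHOLDER' \
        http://en.wikipedia.org/wiki/Main_Page | grep -i '^Cache-Control'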
[16:06:03] <^demon> That was my thought too. Thanks. [16:06:20] but if he has a better idea, bring it on :) Maybe he's on to somethign. [16:07:02] but at this point, I'd say theyre fine. The worst justification, but works well in practice: Everybody uses green/red for diff (even GitHub and better-than-grep) [16:07:36] btw, nice work on pushign stuff upstream to gerrit (not just this one) [16:08:01] <^demon> Upstreaming is best. I wouldn't want to maintain a hack. [16:08:07] <^demon> /fork [16:10:44] <^demon> Krinkle: One of the other volunteers is re-tooling the upper-right bar area (search + user links) too. [16:11:04] <^demon> Search will be larger and have more of a focus. And links for logout/settings will be hidden in a nicer dropdown. [16:11:08] <^demon> Way less cluttered overall. [18:42:35] marktraceur: Nikerabbit went ahead at full speed and etherpad is translatable now (stil beta), so big problems https://translatewiki.net/wiki/Translating:Etherpad_lite [18:42:46] Woo! [18:42:50] there are some small issues on talk page [18:42:52] Nemo_bis: "so big" or "no big"? [18:42:58] *no big [18:43:01] sorry [18:43:15] Nemo_bis: Don't be, good to meet another dvorak user :) [18:43:36] marktraceur: actually just a typing-impaired user I'm afraid [18:43:40] Heh, OK [18:44:29] marktraceur: apart from those small fix tasks, the main problem would be to convince them to give nikerabbit direct push access to their 'develop' branch [18:45:04] they are *almost* convinced, but not completely [18:45:12] Nemo_bis: That shouldn't be too hard, it would give them a lot less work [18:45:25] perhaps they only need some assurance that Niklas won't delete their repos suddenly or so [18:45:50] marktraceur: they'd like to be sure it doesn't touch files other than the l10n ones [18:47:15] [oh, it wasn't a typo, I completely skipped the word – *so no – poor me] [19:09:51] mlitn: Hey Matthias, do you know if Mjackson is still doing any development work on ArticleFeedback? [19:11:26] kaldari: no, he's not [19:13:09] Cool. I'm going to remove the afttest-hide permission from his account since it includes oversight permission. Do we still need that permission to exist or can we just rely on the regular oversight permission now? [19:14:15] kaldari: I just did that [19:14:22] Because Ironholds asked me [19:16:17] kaldari: regular oversight is fine [19:17:02] I would suggest killing it then so we don't have permission group cruft [19:20:47] hashar: it's not working [19:20:49] https://integration.mediawiki.org/ci/job/MediaWiki-GIT-Fetching%20(testing)/7/console [19:20:54] Permission denied (publickey). [19:21:18] kaldari: Ironholds says he's pursuing the aft rights issue through product [19:22:00] I can just remove the oversight part in the config since matthias says it's no longer necessary [19:23:28] RoanKattouw: but let me know if someone else wants to handle it [19:23:48] I'm making config changes for UploadWizard anyway [19:23:52] kaldari: Probably best you short-circuit it with Oliver directly [19:24:05] Krinkle: oh [19:24:15] doesn't he ever sleep :) [19:25:43] Krinkle: the repos usually just have an origin remote which uses the https:// URL. [19:26:01] Krinkle: that job workspace has two remote, one named "gerrit" has a ssh:// url. 
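The usual triage for that "Permission denied (publickey)" failure is to look at which remotes the job workspace actually has and point the fetch at the anonymous https URL instead of ssh. The remote name "gerrit" comes from the log; the https URL is the standard anonymous Gerrit endpoint, used here as an example.

    # Inspect the workspace's remotes, then either fix the URL or drop the
    # ssh-only remote so the job can fetch anonymously over https
    git remote -v
    git remote set-url gerrit https://gerrit.wikimedia.org/r/mediawiki/core.git
    # ...or simply remove it if the job only needs "origin":
    git remote rm gerrit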
[19:26:20] Krinkle: so I guess drop the remote and that will be fine [19:26:28] I didn't create any [19:26:42] I just copied git fetching as you told me to and adjusted the job config [19:26:44] oh no [19:26:45] hm [19:26:59] new job > based on MediaWiki-GIT-Fetching > remove stuff I don't need [19:27:12] sorry the workspace does not contain anything [19:28:41] so it is actually trying to update /var/lib/jenkins [19:32:13] hashar: doesn't the gerrit trigger or setup.sh script in bin/ do the clone? [19:32:21] na the ant file does [19:32:30] hashar: also, interesting.. it seems you can't submit a change on gerrit if jenkins does -2 [19:32:33] see setup-extension target [19:32:33] https://gerrit.wikimedia.org/r/#/c/34347/ [19:32:48] When I do V and CR+2 I still can't submit [19:33:00] I have to click [x] on the jenkins-bot vote to override it [19:33:16] I dont understand [19:33:17] that's good, I just didn't know this was enabled yet. [19:33:27] hashar: open https://gerrit.wikimedia.org/r/#/c/34347/, try to merge it. [19:33:29] I think that has always been the case [19:33:31] no [19:33:35] verified - 2 prevent you from submitting [19:33:52] even if you do verified +2 yourself? [19:34:00] anyway, dinner's ready [19:44:49] New patchset: Hashar; "fetch script failed on jobs containing whitespace" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/34354 [19:45:04] job with a space in the name was simply not working :-] [19:45:16] Change merged: Hashar; [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/34354 [19:47:04] Krinkle|detached: fixed :-] [19:47:10] --cwd "$PWD" [19:47:12] need double quotes [19:47:17] simply cause of the whitespaces in the path [19:48:30] hashar: Did you do anything to init the clone ? [19:49:02] Or was it just this? [19:49:06] (the whitespace) [19:50:14] I inited the clone manually [19:50:25] with zuul, we will simply use the Jenkins GIT plugin [19:50:30] instead of the lame shell script [19:50:32] anyway [19:50:36] rebooting my internet box [19:50:37] brb [19:53:11] ;à [19:53:12] grmblbl [20:24:34] hashar: So, now that it is working. What'dya say we give it a try on the real thang ? [20:26:26] hashar: what is the fingerprint stuff for? [20:27:51] hashar: when you have a minute, I'd like you to remove me from jenkins admin userlist, and see if the LDAP group is working. [20:28:00] Nikerabbit: expose your idea there #mediawiki is too crowed :-] [20:28:11] hashar: here? [20:28:29] Nikerabbit: yes please, about using branch/Tag for your bundled extension. [20:28:49] Krinkle: fingerprint is used by Jenkins to track which build triggered what other build. something like that. 
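The whitespace bug hashar fixed above is worth spelling out, since it bites any job whose name (and therefore workspace path) contains a space: without double quotes the shell word-splits the path into several arguments. A minimal illustration, with a made-up job name and the --cwd flag from the fix:

    WORKSPACE="/var/lib/jenkins/jobs/MediaWiki-GIT-Fetching (testing)/workspace"
    grunt --cwd $WORKSPACE      # unquoted: "(testing)/workspace" becomes a stray argument
    grunt --cwd "$WORKSPACE"    # quoted: the whole path is passed as one argument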
[20:28:57] Krinkle: will drop that whenever zuul is used [20:29:00] k [20:29:12] hashar: the other 2 things [20:29:39] Krinkle: if we ever enable jshint and that start failing builds, I guess that deserve a quick mail to wikitech-l [20:29:54] well I asked saper for comments about how to do it, but right now I'm creating a local branch, doing a commit, creating a tag from it and pushing the tag without gerrit review [20:30:01] Krinkle: definitely try sending a faulty js file to test/mediawiki/core2 to check how it works [20:30:45] for now it would be nice to use jenkings to run php/js unit tests for the tag or branch [20:30:51] ahh [20:31:08] Nikerabbit: so for core I am pretty sure it trigger tests on wmf branches [20:31:10] in the longer run we prolly run tests locally once they start working, to test the tag against MW 1.19 1.20 master and so on [20:31:20] Nikerabbit: for extension, I have no idea [20:31:24] hashar: I already did [20:31:38] (already did test a failure) [20:32:23] hashar: Yes, although it is just enforcing what was already a standard (the jshintrc file is already in mediawiki/core, and tests already failed locally when jshint failed), I will notify wikitech as soon as it is working (avoid false alarm) [20:32:35] hashar: okay, #3, jenkins admin/ldap [20:33:38] hashar: #2: I just enabled grunt and rebased a change to see if it works. [20:33:52] Yep, success: https://integration.mediawiki.org/ci/job/MediaWiki-GIT-Fetching/7718/console [20:34:54] nice [20:35:04] but then it will fail on the wmf branches :-] [20:35:10] REL1_20 or REL1_19 too [20:37:50] Krinkle: Jenkins does indeed trigger tests on core branches :/ https://gerrit.wikimedia.org/r/#/c/33821/ [20:37:58] so you want the script to be non failling [20:38:11] grunt …. --cwd="$WORKSPACE" | : [20:38:15] | : [20:38:18] or | /bin/tru [20:38:19] e [20:38:32] these last few lines don't make sense, what are you saying? [20:38:40] ahh damn irc client [20:38:46] Why would it fail on other branches? [20:39:02] I mean that jshint run should always exit 0 [20:39:03] and no, it would be rather pointless if it was non failing [20:39:09] what? [20:39:11] or the shell invocation will fail and the build marked as afailure [20:39:18] yes, that's the point. [20:39:44] it is triggered on ** branches, that's by design. [20:39:47] which mean whenever someone send a patch to REL1_19 REL1_20 or one of the current wmf branch, Jshint will fail [20:39:53] why ? [20:40:02] because the javascript updates are only in master ? [20:40:12] REL1_20 gives me 37415 errors right now :-] [20:40:24] I'll add .jshintignore to 19 and 20 [20:40:29] so jshint should only run on specific branches [20:40:36] Which is likely the source of 99% of problems [20:40:45] (files that shouldn't be linted, e.g. uptream libs) [20:41:00] No, just make them pass, like we do for unit tests [20:41:46] what do you mean ? [20:41:58] If an old branch is rotten and not up to our standards, we fix that. [20:41:59] are you going to fix the javascript in mw 1.19 and 1.20 ? [20:42:06] I disagree [20:42:14] There isn't much to fix, 99.9% of those lint errors are files we don't maintain [20:42:21] we didn't have a .jshint file yet back then. [20:42:27] so we add one [20:42:42] I would prefer we simply ignore REL1_19 / REL1_20 for now. [20:43:11] Why would you not want to lint them? [20:43:17] and we could even ignore it on wmf branches until we branch the new one that will have all the js fix [20:43:20] It is no extra work, it'll only take a minute for me to fix that. 
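What the ".jshintignore to 19 and 20" idea above amounts to is a small ignore file listing the bundled upstream libraries so only MediaWiki's own scripts are linted. A sketch of what that could look like on a release branch; the directory names are examples, not the actual list that was committed:

    # Seed a .jshintignore on REL1_19/REL1_20 so third-party libs are skipped
    printf '%s\n' resources/jquery/ resources/jquery.ui/ resources/jquery.effects/ \
        > .jshintignore
    jshint resources/ && echo "lint clean"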
[20:43:45] master was a mess because we allowed it to become a mess. until a few months ago we passed jshint [20:43:57] the files that don't are third party libs [20:44:20] so unless there is a reason we principly don't want to lint those branches, I'll go ahead and fix those [20:44:49] ask around to other people [20:44:55] but for me it is not worth it [20:47:07] I find that hard to believe. Why would we risk a minor release introducing syntax errors in javascript, potentially causing all kinds of interface breakage. [20:47:22] it is no different than the php lint tests and unit tests we have. [20:47:52] I understand your point of view [20:47:55] if it is about something else than "it takes work to make it pass", please say so. [20:48:01] but I just disagree for no specific reason :-] [20:48:21] yeah that takes work [20:48:41] and I am not sure what other people will say about it [21:04:37] New patchset: Hashar; "disable Ext-EducationProgram" [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/34419 [21:06:49] Change merged: Hashar; [integration/jenkins] (master) - https://gerrit.wikimedia.org/r/34419 [21:07:03] hashar, we've got bogus test failures https://gerrit.wikimedia.org/r/#/c/34410/ [21:07:54] oh [21:07:58] so you need a software developer :-] [21:08:14] can we disable jshint for now> [21:08:21] Krinkle: seriously [21:08:22] :-) [21:08:26] ^^^^ [21:08:28] cause it's blocking deployment [21:08:30] https://integration.mediawiki.org/ci/job/MediaWiki-GIT-Fetching/7721/console [21:08:56] Krinkle: please disable jshint or make it always a success. That is blocking wmf as I said :-] [21:09:17] hashar: Why? Just ignore it. We did it before. Just give me 20 minutes. [21:09:27] Or comment it out for now. [21:09:35] (for 20 minutes..) [21:09:36] whatever. [21:09:36] please do something [21:09:43] I need to deploy [21:09:43] MaxSem: click [x] and merge it. [21:09:45] it isn't hard. [21:10:07] I can't X someone othher than myself [21:10:18] you can [x] jenkins-bo [21:10:29] no [21:10:32] hashar: " || exit 0", right? [21:11:22] MaxSem: it is running now, it'll pass in a minute. [21:11:29] thanks [21:12:11] Krinkle: yup that would work [21:12:22] or || /bin/true [21:12:54] poor max :( [21:13:41] MaxSem: success :-) [21:13:43] MaxSem: https://integration.mediawiki.org/ci/job/MediaWiki-GIT-Fetching/7722/console [21:13:59] thanks [21:14:18] Krinkle: one way would be to check the $GERRIT_BRANCH env variable [21:14:31] and exit 0 whenever it is not master [21:14:44] gotta try that in test/mediawiki/core2 i guess [21:17:43] bed time for now :-) [21:17:50] Krinkle: congrats and adding jshint :-] [21:18:09] s/and/on/ [21:18:30] hashar: thx [21:20:21] I think I will add a /bin/wmfgrunt wrapper that would simply: /var/lib/jenkins/bin/grunt --gruntfile /var/lib/jenkins/jobs/_shared/gruntfile.js --cwd "$WORKSPACE" [21:22:10] or [21:22:11] hmm [21:22:17] use jenkins job builder to create a macro :-] [22:09:27] marktraceur, I shamelessly stole your EtherEditor code for PHP unit tests. Both extensions cause `cd core/tests/phpunit; make safe` to fail with "Fatal error: Class 'EtherEditorApiTestCase' not found in core/extensions/EtherEditor/tests/phpunit/api/GetEtherPadTextTest.php on line 17" [22:11:24] somehow Jenkins CI knows not to run these extension tests, I wonder how to set core/tests/phpunit to ignore them (or run successfully). 
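Putting hashar's two suggestions above together — the /bin/wmfgrunt wrapper and the $GERRIT_BRANCH guard — gives roughly the following. This is a sketch of the idea as discussed, not a script that was actually deployed.

    #!/bin/bash
    # /bin/wmfgrunt (sketch): only lint changes targeting master, so wmf/REL
    # branches are not failed by jshint before they have been cleaned up
    if [ -n "$GERRIT_BRANCH" ] && [ "$GERRIT_BRANCH" != "master" ]; then
        echo "Skipping jshint for branch $GERRIT_BRANCH"
        exit 0
    fi
    exec /var/lib/jenkins/bin/grunt \
        --gruntfile /var/lib/jenkins/jobs/_shared/gruntfile.js \
        --cwd "$WORKSPACE"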
[22:14:31] spagewmf: Whoa, that's odd [22:15:01] spagewmf: I haven't touched it in a while, so it might be that there's some problem with it that never got fixed [22:15:13] spagewmf: I can revisit it maybe this weekend [22:16:59] spagewmf: Did you not make it down to 3 in the move? [22:18:02] marktraceur, I'm circling a cold, I figured Americans would lynch me if I make 'em sick for their Thanksgiving. [22:18:10] Hah. [22:18:24] spagewmf: Real Thanksgiving is in October? Or September? [22:19:31] marktraceur I hope they find me a little space on the end of a row for my Ergo...TRON desk [22:20:44] Don't know if this is the right place to ask, but here goes: Would it be completely crazy to use the HTML from a WP-page for a CSS-contest? [22:21:01] error: insufficient permission for adding an object to repository database .git/objects [22:21:01] fatal: git-write-tree: error building trees [22:21:06] Let's play guess who broke Git [22:22:24] MaxSem: !! [22:23:16] wtf? [22:23:30] Stop hiding [22:23:39] drwxr-xr-x 2 maxsem wikidev 4096 2012-11-20 21:17 f2 [22:23:39] drwxr-xr-x 2 maxsem wikidev 4096 2012-11-20 21:17 fa [22:23:39] etc [22:23:40] $ cat ~/.bashrc [22:23:40] umask 002 [22:44:25] "Why can't one use a MediaWiki message for this, which would contain the name of the on-wiki page containing this blacklist?" [22:44:29] I know configuration-by-message is discouraged [22:44:34] But is it completely disallowed? [23:12:27] marktraceur turns out for E3Experiments I just had to add $wgAutoloadClasses for my test class (when you run phpunit locally it infers the class location). I'll attempt a patch for EtherEditor. [23:13:29] spagewmf: Did the bug manifest following my instructions (on the wiki page) for running the tests? Or only running them via the MW runner? [23:14:28] Not sure about the former, definitely the latter.
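Closing the loop on the earlier .git/objects permission error: the directory listing shows object directories owned by one user but not group-writable, so other wikidev members cannot add objects. The usual remedy is a group-shared repository plus the umask shown in the log; the commands below are the generic fix, not necessarily what was run that night.

    # Make the checkout writable by everyone in the wikidev group
    umask 002                                  # future files become group-writable
    git config core.sharedRepository group     # git keeps new objects group-writable
    sudo chgrp -R wikidev .git
    sudo chmod -R g+rwX .git                   # fix up what was already written as 0755/0644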