[00:00:37] Reedy: You shouldn't have told me that, 'cause now I want to test it... [11:05:51] hi [11:08:49] I have to do a little talk about dependable systems in university and wonder if there is documentation about how wikipedia deals with problems like disc failure or broken webservers... [11:09:29] https://wikitech.wikimedia.org/wiki/Hurricanes [11:09:41] We ALWAYS have documentation. [11:10:20] Hmm, now that I think of it, the RFP for the data centre doesn't include a requirement about hurricanes, does it [11:11:27] and some docs about the architecture? how many db servers, how the replication is done, load balancing, ... [11:12:27] It's all on that wiki, follow the links [11:12:44] Summarise your discoveries on https://meta.wikimedia.org/wiki/Wikimedia_servers [11:21:15] thanks a lot! [11:27:26] one question: is each "thing" in this graph https://upload.wikimedia.org/wikipedia/commons/d/d8/Wikimedia-servers-2010-12-28.svg a server? [11:32:38] lbenedix: i think that yes, every symbol is a single physical machine [11:32:55] lbenedix: but if that's really from 2010, then it's certainly hopelessly outdated [11:33:10] are there more recent graphs? [11:33:20] no idea [11:33:37] I searched in the commons categories... no luck [11:33:38] there's some data on the wikitech wiki [11:33:48] but it's all not very well arranged for casual reading [11:34:36] thank you for pointing me to the wikitech wiki [11:34:55] ganglia.wikimedia.org [11:35:28] lbenedix: there's https://wikitech.wikimedia.org/wiki/Category:Servers and https://wikitech.wikimedia.org/wiki/Category:Clusters [11:36:19] but that apparently only contains the machines interesting enough to be named :P [11:37:13] I think wikitech is the right starting point to understand how wikipedia performs so well [12:23:04] if only I could access my wikitech account.. [13:46:32] Nemo_bis: who did you talk to about it? ryan? i can poke him when i see him again (but he has been on since you had that problem) [13:47:30] Nemo_bis: there's an implicit no-hurricane clause because they say west of or at chicago. but then you can have a tsunami [13:47:42] Nemo_bis: anyway, ops isn't so worried about hurricanes [13:48:21] Nemo_bis: they just assume that datacenters will fall off the grid once in a while and hurricanes are only one reason, and we need multiple primaries with full copies of everything ready to take over [13:49:30] earthquake clause :) [13:53:21] jeremyb: bug is filed [13:53:30] and I was kidding about the hurricane [13:54:07] ok, but did you talk to him about it? [13:54:43] remind me the bug #? [13:54:54] https://bugzilla.wikimedia.org/show_bug.cgi?id=56114 [13:55:07] I suppose I should just create another account [13:56:05] ewww, no [15:54:14] edsu: wikipulse seems to be broken again, do you have an idea what the reason is? [15:56:29] lbenedix: agreed [16:07:58] hm https://wikipedia-edits.herokuapp.com/ [16:49:58] Nemo_bis: is there an api where I can get the number of edits in a given time? [16:50:18] lbenedix: you could process the rc feeds [16:50:24] I know [16:50:47] I did that a while ago: http://page.mi.fu-berlin.de/benedix/pediameter/
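(Aside: besides processing the live rc feeds, edits in a time window can be counted by paging the standard MediaWiki API's list=recentchanges. The PHP sketch below is illustrative only: countEdits is a made-up helper name, and the continuation handling assumes the modern "continue" response format.)
    <?php
    // Illustrative sketch: count edits on a wiki within a time window by
    // paging through the MediaWiki API (action=query&list=recentchanges).
    // countEdits() is a made-up helper; continuation handling assumes the
    // modern "continue" response format.
    function countEdits( $api, $newest, $oldest ) {
        $params = array(
            'action'  => 'query',
            'list'    => 'recentchanges',
            'rctype'  => 'edit',
            'rcstart' => $newest, // results run from newest to oldest
            'rcend'   => $oldest,
            'rclimit' => 500,
            'format'  => 'json',
        );
        $count = 0;
        while ( true ) {
            $url = $api . '?' . http_build_query( $params );
            $data = json_decode( file_get_contents( $url ), true );
            $count += count( $data['query']['recentchanges'] );
            if ( !isset( $data['continue'] ) ) {
                break;
            }
            $params = array_merge( $params, $data['continue'] ); // next page
        }
        return $count;
    }
    echo countEdits( 'https://www.wikidata.org/w/api.php', '2013-10-31T17:00:00Z', '2013-10-31T16:00:00Z' );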
[16:53:10] Reedy: PHP fatal error in /usr/local/apache/common-local/php-1.23wmf1/includes/PoolCounter.php line 91: [16:53:10] Class 'PoolCounter_Client' not found [16:53:14] on wikidata [16:53:16] editing my talk page. [16:53:37] but my edit went through [16:54:00] PoolCounter is enabled on wikidatawiki [16:54:40] reedy@tin:/a/common$ mwscript eval.php wikidatawiki [16:54:40] > echo class_exists( 'PoolCounter_Client' ); [16:54:40] 1 [16:54:42] wtf [16:54:59] heh. [16:55:10] There's a load of them [16:55:11] well, it went away. [16:55:13] :o [16:56:16] Numerous wikis [16:56:18] Numerous apaches [16:56:35] One Reedy [16:57:45] Scapping now [16:58:00] They're still happening [16:58:13] Numerous mw versions [16:59:13] of wtf [16:59:14] oh [16:59:21] there is a pool counter client extension [16:59:28] yeah [16:59:30] i assume wikidata has it enabled [16:59:33] for cirrus [16:59:35] everywhere does/should [16:59:40] k [16:59:44] For Lucene and article view [17:00:18] Ah [17:00:22] It's dying in Cirrus code [17:00:25] oh [17:00:30] ping ^d manybubbles [17:00:40] #0 /usr/local/apache/common-local/php-1.23wmf1/includes/PoolCounter.php(91): PoolCounter::factory() [17:00:40] #1 /usr/local/apache/common-local/php-1.23wmf1/includes/PoolCounter.php(148): PoolCounter::factory('CirrusSearch-Up...', '_elasticsearch') [17:00:40] #2 /usr/local/apache/common-local/php-1.23wmf1/includes/PoolCounter.php(290): PoolCounterWork->__construct('CirrusSearch-Up...', '_elasticsearch') [17:00:40] #3 /usr/local/apache/common-local/php-1.23wmf1/extensions/CirrusSearch/includes/CirrusSearchUpdater.php(197): PoolCounterWorkViaCallback->__construct('CirrusSearch-Up...', '_elasticsearch', Array) [17:00:47] reading [17:01:33] Reedy: which branch is this? [17:01:39] Any [17:01:49] 1.22wmf22 and 1.23wmf1 are showing it [17:02:50] maybe have to increase the pool counter limit? [17:02:54] Reedy: just so I'm clear, is everyone seeing this or just wikidata? [17:02:54] i know that was done already [17:03:16] manybubbles: i'm not sure we can tell, but very likely just wikidata [17:03:25] due to the size, etc. and it is indexing [17:03:28] <^d> Wikidata is only running as secondary. [17:03:33] <^d> Also, it's still indexing :\ [17:03:38] but it still updates on page save [17:03:43] <^d> Also, we do need to bump the maxqueue. [17:03:55] probably [17:04:18] <^d> Convenient....I wrote a patch last night for this ;-) [17:04:22] :) [17:04:25] <^d> https://gerrit.wikimedia.org/r/#/c/92808/ [17:04:28] looks familiar [17:04:47] <^d> Prolly way too high tho [17:05:09] 600 (lucene) is for all the wikis combined? [17:05:13] per host? [17:05:26] <^d> Per apache. [17:05:31] <^d> Number's too high [17:05:32] ok [17:05:40] <^d> I copied from lsearchd :p [17:05:44] yeah [17:07:32] ^d and aude: that error Reedy posted makes it look like pool counter isn't enabled in that wiki [17:07:55] <^d> Should be enabled for all wikis. [17:08:10] the line it is blowing up on is: new $class( $conf, $type, $key ); [17:08:16] looks like it is enabled everywhere [17:08:24] and it would be strange for the problem to just be appearing now [17:08:35] since we have been indexing / enabled for 2 days [17:08:48] certainly odd, Reedy [17:08:51] yeah [17:09:00] i would try upping the limits [17:09:05] see if it helps [17:13:51] ^d: do you want to merge that commit and see if that helps? [17:16:35] i am trying to get the error myself [17:16:47] if it's really a fatal error, that's odd [17:19:16] Reedy: are you sure that the pool counter extension is being properly put everywhere? this seems really odd [17:19:25] it is in Special:Version [17:19:44] in common settings it appears pool counter is enabled first [17:19:48] not that order should matter
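(The fatal above is a factory pattern blowing up: core reads a class name out of $wgPoolCounterConf and instantiates it by string, so the class only exists if the extension defining it was actually loaded on that particular Apache. An illustrative sketch of the failure mode, not the actual core code:)
    <?php
    // Illustrative sketch of the failure mode, not the actual core code.
    // The factory instantiates whatever class name the config entry gives it;
    // if the extension defining that class was never require'd on a given
    // Apache, the "new $class" line fatals with "Class ... not found".
    function poolCounterFactory( array $conf, $type, $key ) {
        $class = $conf['class']; // e.g. 'PoolCounter_Client' from the PoolCounter extension
        if ( !class_exists( $class ) ) {
            // A guard like this would turn the hard fatal into a loggable error.
            throw new Exception( "Pool counter class '$class' not loaded for pool '$type'" );
        }
        return new $class( $conf, $type, $key );
    }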
[17:20:29] <^d> We can merge. [17:20:36] <^d> I dunno if it'll fix things. [17:24:29] reproduced [17:25:10] Class 'PoolCounter_Client' not found [17:25:29] aude: did you get that by performing an update? [17:25:36] saving a page [17:25:46] but i saved a bunch without problems [17:25:59] it's like i'm hitting a certain apache or something [17:26:16] what else is pool counter client used for besides cirrus? [17:26:21] lucene? [17:27:09] <^d> aude: Lucene, Cirrus, page views [17:27:22] hmmmm [17:27:37] if somehow a scap went bad? [17:27:50] some files corrupt somewhere [17:27:56] * aude can't imagine and has no idea [17:30:18] I had a look at mw1201 which seems to be showing symptoms but I don't see anything wrong with the files [17:30:34] the timing seems to correspond with config changes deployed [17:30:44] yeah [17:30:53] 16:52 <+logmsgbot> !log reedy synchronized wmf-config/ [17:31:14] after https://gerrit.wikimedia.org/r/#/c/92897/ and https://gerrit.wikimedia.org/r/#/c/92898/ [17:31:25] but don't see how they can cause this [17:32:23] well, the version specific list of extensions is new, right? [17:32:31] yeah but [17:33:08] it looks like it should skip fine, if the file does not exist [17:33:20] and cirrus / pool are not in the specific lists [17:35:17] hmmmm [17:35:34] if ( $wmgUserBetaFeatures ) { require_once( "$IP/extensions/BetaFeatures/BetaFeatures.php" ); [17:35:46] :( [17:35:46] * marktraceur fixes [17:35:58] i suppose that is not a problem? [17:36:24] It seems like it is, but I'll confirm [17:36:29] k [17:37:22] looks like wmgUseBetaFeatures is true for only 1.23wmf2 wikis [17:37:33] which has that stuff, i think [17:39:15] oh, looks like it stopped? [17:39:51] nope, nevermind [17:41:45] reedy has lost internet access if that matters [17:42:09] :( [17:42:33] ub [17:44:27] wait [17:44:31] aude: Where do you see that? [17:44:53] Oh [17:45:00] Was "wmgUser" your typo? [17:45:21] oh, probably [17:45:27] 'kay [17:45:32] Gave me quite the scare there :) [17:45:40] aude: Enabling BetaFeatures is intentional [17:45:42] yeah [17:45:53] i think beta, etc. looks fine to me [17:46:00] and can't see how it's related [17:46:09] Huh? [17:47:08] oh [17:47:14] i got the error on test.wikidata [17:47:22] yeah, so, errors not going away, does Reedy still have connectivity or is his cell coverage gone? [17:47:32] test wikidata is still on 1.23-wmf1 [17:48:08] Yeah, AFAIK we haven't pushed wmf2 out [17:48:17] ok [17:48:24] right, but, those errors started as soon as that wmf-config change was pushed [17:48:39] actually, no, wmf2 has been pushed, just nothing should be pointing to it yet [17:48:43] i would revert them, but don't really know the implications [17:48:55] 16:29 logmsgbot: reedy synchronized php-1.23wmf2 'Staging php-1.23wmf2' [17:49:01] then try one by one to apply them again [17:49:08] ok [17:49:26] i'd think something scapped incomplete or something, but again no idea [17:49:49] Reedy: you debugging or...? [17:50:07] * aude tries wikisource [17:50:14] aude: How do you repro on wikidata? [17:50:31] edited a page [17:50:35] though i edited a bunch fine [17:50:45] first edit on test.wikidata, boom* [17:50:56] Ah, yeah [17:51:01] just my user page [17:51:15] Quoth the server, "Edit conflict." [17:51:45] well, i can edit wikisource fine [17:52:01] oh, I just saw that hashar said Reedy lost net access [17:52:05] yeah [17:52:05] alright, so, crap [17:52:17] ^d: what do you think?
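(A defensively-guarded version of the settings pattern being discussed could look like the sketch below. The variable and path are the ones quoted above, with the "wmgUse" spelling the later messages establish as correct; the file_exists guard and log call are a suggestion, not what production ran.)
    <?php
    // Sketch of a guarded extension load in a settings file. The variable and
    // path are from the discussion above; the guard itself is a suggestion.
    if ( $wmgUseBetaFeatures ) {
        $betaFeaturesFile = "$IP/extensions/BetaFeatures/BetaFeatures.php";
        if ( file_exists( $betaFeaturesFile ) ) {
            require_once( $betaFeaturesFile );
        } else {
            // wfDebugLog() is MediaWiki's standard debug-channel logger.
            wfDebugLog( 'config', "BetaFeatures enabled but $betaFeaturesFile is missing" );
        }
    }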
[17:52:21] he is looking for wifi [17:52:25] yeah [17:52:31] iirc I've seen it on other wikis [17:52:36] it only affects editing, which is bad [17:52:42] but not good obviously [17:53:04] fortunately the wikidata community is patient with us, though we did screw up the js last night for 30 minutes [17:53:12] unrelated incident [17:53:16] if you have an issue with something and you can do it: revert :D [17:53:22] and figure it out later on :] [17:53:23] * aude doesn't really know [17:53:29] symlinks etc [17:53:34] if it is not that important, you can always wait a bit more and give yourself some more time [17:53:36] hahh [17:53:40] that is some docroot issue ? [17:54:01] Are there wikibase changes going out that are causing this, or is it an issue with something else? [17:54:03] not something i am qualified to know [17:54:06] marktraceur: no [17:54:45] * aude tries wikisource more [17:54:59] hashar: https://gerrit.wikimedia.org/r/#/c/92898/ ? [17:55:12] yeah those are scary [17:55:22] mutante: think we can revert that? [17:55:23] Normal edits work fine (in the user namespace) so it's not like all editing is borked [17:55:27] ^d: is going to revert those wmf-config changes [17:55:28] nop [17:55:33] marktraceur: i am editing normal pages on wikidata [17:55:34] (doesn't mean i know, didn't merge docroot change on apache) [17:55:35] user / talk [17:55:36] that is for wmf1.23wmf2 deployment [17:55:39] Oh? [17:56:14] there is also https://gerrit.wikimedia.org/r/#/c/92897/ Add version specific extension-list [17:56:20] yeah [17:56:31] i think that affects only i18n update [17:56:41] unless that failed [17:56:49] and then stuff was incomplete [17:57:13] ahh fatal spam hmm [17:57:29] aude: eh, escalated to platform, there's already talk at office [17:57:38] lbenedix: i'm not sure, it worked fine for months and months ... hmm [17:57:48] lbenedix: let me take a look at the logs [17:57:54] [31-Oct-2013 18:57:38] Fatal error: Class 'PoolCounter_Client' not found at /usr/local/apache/common-local/php-1.22wmf22/includes/PoolCounter.php on line 91 [17:57:55] :( [17:58:16] though that one is in open search api query [17:58:38] ^d: manybubbles: the opensearch API queries yield a bunch of Fatal error: Class 'PoolCounter_Client' not found [17:59:03] yeah, I was just saying that on wikimedia-operations [17:59:04] might or not be related [17:59:07] mutante: ok [17:59:14] <^d> Tell me something I don't know. [17:59:21] * aude leaving a note for the community if i can save a page [17:59:21] hashar: it is the same thing, I'm pretty sure [17:59:28] I guess [17:59:32] which i can sometimes [17:59:36] aude: 11:02 <+logmsgbot> !log demon synchronized wmf-config/ 'Cluster to known good state' [17:59:46] good [17:59:49] can someone figure out when it started exactly ? [17:59:58] https://ganglia.wikimedia.org/latest/graph.php?r=hour&z=xlarge&title=MediaWiki+errors&vl=errors+%2F+sec&n=&hreg[]=vanadium.eqiad.wmnet&mreg[]=fatal|exception&gtype=stack&glegend=show&aggregate=1&embed=1 [18:00:04] started about 16:52 [18:00:09] same time as the config changes about [18:00:58] yep [18:01:01] PHP fatal error in /usr/local/apache/common-local/php-1.23wmf1/includes/PoolCounter.php line 91: [18:01:02] Class 'PoolCounter_Client' not found [18:01:26] brion: yep [18:01:38] ahh [18:01:56] this is editing on mediawiki.org [18:02:12] ah, so not specific to wikidata [18:02:16] as suspected [18:02:17] alright, chad just synced out the revert [18:02:24] let's see....
[18:02:31] same [18:02:34] yay my edit saved this time [18:03:09] brion: yeah, but it was intermittent [18:03:15] agh [18:03:17] still seeing fatals :/ [18:03:38] lbenedix: well, a restart got it going again, unfortunately the heroku log didn't have history long enough to see what the problem was [18:03:52] thx [18:04:24] http://page.mi.fu-berlin.de/benedix/wikidatameter/ is using the wikipulse-"api" right now [18:05:01] got the error again [18:05:10] but was able to save a bunch of times also [18:05:20] aude: aren't you missing the PoolCounter extension on wikidata maybe ? [18:05:30] and on mediawiki and test wikidata? [18:05:43] it's enabled everywhere and in Special:Version [18:06:30] hashar: it is coming up on cawiki too [18:06:33] it's pretty crazy [18:06:43] like something just decided it hates us [18:07:06] did we up the limits? [18:07:09] for pool counter? [18:07:16] so [18:07:16] i can't see how it's related, quite [18:07:38] Cirrus has limits, yeah [18:07:42] but they aren't super high [18:07:46] but that wouldn't cause this issue [18:07:50] agree [18:08:18] csteipp: the changes https://git.wikimedia.org/commitdiff/operations%2Fmediawiki-config/049259d18ddf17294bdbcf0a222d4d874b5a3acf and https://git.wikimedia.org/commitdiff/operations%2Fmediawiki-config/ecfa294d4b158aa5c44c166fa883d8ceef7d357d [18:08:22] eek, long urls [18:08:33] https://gerrit.wikimedia.org/r/92793 and https://gerrit.wikimedia.org/r/92897 [18:08:51] i can't see what/how but then i don't have shell access to look at anything [18:09:23] cirrus has been running fine the past few days on wikidata [18:11:48] reedy should be back in ~ half an hour [18:12:05] k [18:13:20] lbenedix: if you are finding yourself relying on it, it should be pretty easy to run [18:13:54] lbenedix: off of heroku, i mean :-) [18:14:23] can we just try syncing PoolCounter client again? [18:14:37] the extension you mean ? [18:14:45] yeah [18:14:57] wild guess but maybe it's corrupt somewhere [18:15:00] corrupt file [18:15:06] some apache [18:15:10] but not all [18:15:57] doing so [18:16:07] k [18:16:40] I was wondering whether it is an issue in the autoloader [18:17:05] still fataling [18:17:07] i don't think anything changed there [18:17:24] not in pool counter, not in mediawiki (as deployed) [18:19:15] argh [18:19:45] I can't reproduce from the command line with eval [18:19:45] :( [18:19:51] edsu: I think I will run a modified version on a little old netbook [18:20:50] hashar: yeah, we were trying that on a couple of the apaches to see if it was there... :/ [18:20:51] lbenedix: cool, how did you plan on modifying it? anything worth rolling back to git? [18:21:16] I only need the "api" and only for wikidata [18:21:29] hashar@tin:/a/common$ mwscript eval.php --wiki=wikidatawiki [18:21:29] > return $wgVersion; [18:21:30] 1.23wmf1 [18:21:31] > $e = new PoolCounter_Client(); [18:21:32] PHP Warning: Missing argument 1 for PoolCounter_Client::__construct(), [18:21:35] :( [18:21:44] no changes, just commenting out stuff I don't need [18:21:46] requires arguments [18:21:48] greg-g: do you have people at the office looking at it ? [18:21:52] hashar: yeah [18:21:56] chad's on it right now [18:21:58] aude: it is irrelevant, it managed to find the class :-] [18:22:05] lbenedix: oh i see [18:22:06] greg-g: good. He is smarter than me :-] [18:22:10] hah [18:22:13] Cha to the dth power.
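(The missing-argument warning there is expected: the constructor normally receives its arguments from PoolCounter::factory(). A repro inside eval.php would have to supply them explicitly, roughly as below. 'ArticleView' is the standard example pool name from the $wgPoolCounterConf documentation; the exact config layout on the cluster is an assumption.)
    > // Sketch: mimic what PoolCounter::factory() does, passing real arguments.
    > // 'ArticleView' is the documented example pool; cluster config may differ.
    > $conf = $wgPoolCounterConf['ArticleView'];
    > $class = $conf['class'];
    > $pc = new $class( $conf, 'ArticleView', 'enwiki:Main_Page' );
    > return get_class( $pc );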
[18:22:24] hashar: right :) [18:22:34] reedy incoming [18:22:38] yay [18:22:42] * marktraceur braces for impact [18:23:19] hashar: see -operations, btw [18:23:37] lbenedix: speaking of wikidata, when did denny go to google? [18:23:51] let's switch to operations so [18:23:52] Reedy: when you get back online, check in with chad, he's at the helm, as it were [18:23:53] I think 2 months ago [18:24:08] lbenedix: is he able to continue working on wikidata in his new position? [18:24:53] I'm not sure, Lydia is project manager now [18:25:41] daniel kinzler is still there? [18:25:47] yes [18:26:15] here is the team: http://wikimedia.de/wiki/Mitarbeiter#WIKIDATA [18:26:47] edsu: he's not [18:26:52] i mean not right now [18:28:02] aude: daniel? [18:28:16] edsu: i see you are asking generally [18:28:19] aude: oh you mean, denny can't work on wikidata right now? [18:28:26] thought you were asking if he was here right now [18:28:32] * edsu is asking too many questions [18:28:36] denny is a community member now :) [18:28:37] aude: oh heh, yeah [18:28:42] he can edit, etc [18:28:48] is an admin [18:28:49] mais oui :) [18:29:31] just curious to know if google is willing to support it as a project, esp given its role w/r/t freebase [18:30:00] which is something i guess he *is* working on [18:30:12] google spent a lot of money on wikidata... [18:30:22] lbenedix: yup [18:30:38] lbenedix: are they still? [18:30:58] I don't think so [18:31:03] edsu: no [18:31:47] is it just wikimedia de now? [18:32:31] pretty much [18:32:52] who else? [18:32:53] details are in the annual plan for wmde [18:32:59] we got a donation from yandex [18:33:09] no money from the foundation? [18:33:26] i think it's complicated [18:33:39] money is complicated in general ;) [18:33:55] with wmde doing fundraising, etc. [18:33:59] * aude doesn't understand 100% [18:34:20] oh, wmde is fundraising independently of the foundation? [18:34:29] * lbenedix thinks so [18:34:34] that does sound complicated :) [18:35:08] i could understand why some people might not want to have servers running in the US these days [18:35:29] edsu: servers are still in the us [18:35:36] oh, ok [18:35:57] i don't think that is changing anytime soon, since there are still favorable aspects to us law for wikipedia [18:36:18] although caching outside the us is okay [18:37:39] wikipedia and the foundation profit by wikidata, so in my opinion it's strange that the funding comes only from wmde [18:38:15] not only profit by it, but are somewhat dependent on it too [18:38:32] i don't pretend to understand the politics involved though [18:38:39] me neither [18:38:53] my little exposure to how the foundation works left a bad taste in my mouth [18:38:57] I think they host it [18:39:20] lbenedix: We're actually going to help get bits of it going, though I think the details of that are still getting hashed out. See https://www.mediawiki.org/wiki/Multimedia "Implement structured data on Commons and integrate it with Wikidata" [18:51:03] edsu: :( [18:53:08] edsu: well, all (or many) chapters strive to do a certain portion of stuff that benefits the wider community [18:53:16] e.g. hosting wikimania in dc [18:53:40] if wmde was more needy, i'm sure wmf would give the funding :) [18:54:15] and not 100% sure of all the details anyway [19:15:04] Damn it [19:15:32] welcome back! [19:15:38] you missed the fun [19:15:57] Found a brasserie with free internet [19:15:57] Won't let me SSH out etc [19:18:29] do we get a second post-mortem today? :P [19:18:30] did my scap run/finish?
[19:18:41] i don't think so [19:18:55] we don't know what happened [19:19:04] heh [19:20:34] Sigh [19:20:53] So where do things lay? [19:22:11] I've still got to drive to Lier [19:22:37] everything is okay now [19:22:55] we reverted some config stuff, changed some cirrus pool config [19:23:09] and disabled the multimedia /beta extensions everywhere and the issues are gone [19:23:37] no hurry, but still need to determine exactly what the problem is and scap/re-deploy stuff [19:24:04] okay for now [19:24:05] Currently it's still the deployment window [19:24:10] yeah [19:24:20] greg-g: ^ [19:24:39] I can't VPN out, I can't SSH out, I can't Remote Desktop out [19:24:43] i think we can wait until you have better access [19:24:57] good idea, imho :) [19:25:31] i don't think anyone will do more deploys until then [19:27:56] reedy|france: yeah, get to solid net and ping us [19:30:38] Don't suppose someone wants to try opening 443 as an SSH port to bastion? ;) [19:30:54] heh [19:32:06] I guess I should eat quickly and get over to my hotel where I know the wifi will let me online [19:32:30] Orrr... Does someone have a SSL VPN service I could borrow? [19:33:27] reedy|france: go to your hotel [19:33:28] :) [19:33:34] eat! [19:33:40] I went to a restaurant that had free wifi [19:33:53] I've ordered some food, can't really leave till I've eaten [19:33:59] heh, fine [19:34:07] I might be able to steal the lightning deploy window later [19:35:50] reedy|france: not really, I mean, we should get mw updated before then, people are already wanting to use the LD [19:39:26] the boat I was supposed to be on would've had wifi [19:44:23] Anyone want to review a small change to the GettingStarted extension: [19:44:25] https://gerrit.wikimedia.org/r/#/c/91034/ [19:44:54] I'm glad to explain the context, since I'm trying to get more people capable of reviewing this change (partly since I'm the only real engineer for this project). [19:47:00] I meant get more people capable of reviewing the project in general. [19:50:21] superm401: there, done! but that doesn't mean i will review other stuff on it :P [19:50:40] MatmaRex, thanks. :) [19:50:40] the submit action is a very stupid thing in general [19:50:58] MatmaRex, it's a bit weird how it works for edit when it's a GET [19:51:01] it basically only serves to trip up people who want to detect the editing view [19:51:06] And of course, it sucks that you lose your preview when you log in. [19:51:13] i've had to handle this in way too many gadgets [19:51:17] But that requires some clever cookie or localStorage based fix. [19:51:32] i wonder why we have two actions for editing, edit and submit [19:53:03] MatmaRex, yeah, it would make sense from one perspective if there were three, 'edit', 'preview', and 'showchanges'. [19:53:04] Edit submission handler [19:53:07] This is the same as EditAction; except that it sets the session cookie. [19:53:11] class SubmitAction extends EditAction { [19:53:24] superm401: it would make much more sense if there was just 'action' to me [19:53:34] // Send a cookie so anons get talk message notifications [19:53:36] what. [19:53:45] why is this in there [19:53:51] * MatmaRex git blames [19:54:00] What do you mean just 'action'? [19:54:15] ugh, just 'edit' [19:55:08] heh, i'm blaming [19:55:17] the comment style on "// Send a cookie so anons get talk message notifications" was changed two times [19:55:34] haha, that dates to at least 2006. [19:55:37] http://mediawiki.org/wiki/Special:Code/MediaWiki/12570
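(For reference, the class being quoted is tiny. Condensed roughly from that era's core code, as a sketch rather than verbatim: action=submit is action=edit plus session setup, so anonymous editors get a cookie and can later receive talk-page message notifications.)
    <?php
    // Condensed sketch of the class quoted above, not the verbatim core code.
    class SubmitAction extends EditAction {
        public function show() {
            // Send a cookie so anons get talk message notifications
            if ( session_id() == '' ) {
                wfSetupSession();
            }
            parent::show();
        }
    }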
[19:56:02] Right, I'm gonna get sorted and get to my hotel [19:58:08] ha [19:58:11] http://mediawiki.org/wiki/Special:Code/MediaWiki/7157 [19:58:13] Only start new session for anon users on submit, not edit [19:58:21] who is tom gilder? [19:58:40] errrr, 7157 is a small number [19:58:57] 18 January 2005. [19:59:22] i just *love* these very short SVN commit comments. i really do. [20:00:15] MatmaRex: he's not in https://svn.wikimedia.org/viewvc/mediawiki/USERINFO/ [20:00:34] bugzilla patch? [20:01:04] aude: then it would have had a different "author" [20:01:19] svn doesn't have a way to forge identity. afaik [20:01:47] no response yet on pediapress thread??? [20:01:53] anyway. does anybody know why we even have the two actions? [20:03:07] MatmaRex: I'm still looking for a tool that goes back further than the most recent edit to a line [20:05:28] valhallasw: gitk [20:05:48] valhallasw: it has a handy "Find origin of this line" tool when you right-click on a diff [20:06:08] valhallasw: i had to go through 7 or 8 steps before i hit the real origin in this case [20:06:22] the file containing this was once accidentally deleted and then restored a few commits later. :P [20:10:33] MatmaRex: tig is good i think [20:10:37] never tried gitk [20:13:18] jeremyb: never tried tig, gitk came with my git install [20:13:41] oh, tig is a console tool. gitk is gui [20:36:33] Reedy: how goes? :) [20:39:00] MatmaRex: tried it now? :) [20:40:36] MatmaRex: ah, cool. I'll keep that in mind. [20:41:31] greg-g: he is probably still fighting for wifi access and went along on his way to NL [20:42:02] greg-g: I texted Sam to let him know everything got fixed [20:44:08] hashar: thanks :) [20:44:17] still haven't deployed wmf2 today, but, you know ;) [20:45:16] greg-g: we should get more of us involved in wmf branch deployments [20:46:03] yeah, like me, but I'm not stepping in today :) [20:46:31] hashar: hey, so, to enable a new extension on betalabs, do you just do: https://gerrit.wikimedia.org/r/#/c/92922/ [20:46:35] hashar: or is more needed? [20:46:43] I don't think doing the grunt work is your role though [20:46:43] (if it looks good, please merge) [20:46:47] hmm [20:47:05] maybe extensions list? [20:47:10] hashar: it isn't my role, but I should be able to play backup sometimes, given our org's leanness :) [20:47:18] oh, submodules [20:47:33] do those extensions get registered as submodules somewhere? [20:48:11] https://git.wikimedia.org/tree/mediawiki%2Fextensions [20:48:24] greg-g: mail incoming [20:48:26] wee [20:48:30] which they are [20:48:58] * marktraceur fails to see what greg-g found amusing -.- [20:49:12] oh, not there [20:49:16] aude: we maintain the submodules in mediawiki/Extensions manually [20:49:24] hashar: ok [20:49:25] that was "oh, I shouldn't be laughing because mark is waiting on me" [20:49:31] aude: once registered, Gerrit takes care of updating them [20:49:37] Snrk :P [20:49:50] greg-g: We aren't paying you to laugh! [20:49:53] aude: there is a sync-with-gerrit.py script at the root to detect new extensions and register them in the super project. [20:49:56] right [20:50:06] hashar: lunch looks yummy [20:50:06] like we will do with all the wikibase stuff :) [20:50:09] magical [20:50:53] aude: so, just that merge I linked and it should be good on betalabs? [20:51:25] greg-g: of course it is! I am french \O/
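(Two stock git equivalents of gitk's "Find origin of this line", for doing the same archaeology from a shell; the search string is the comment under discussion, the path and line range are placeholders, and git log -L needs a reasonably recent git:)
    # Every commit that added or removed the comment string ("pickaxe" search):
    git log -S 'Send a cookie so anons' --oneline
    # Full history of a specific line range in a file, following rewrites
    # (placeholder path and range):
    git log -L 120,130:includes/EditPage.php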
[20:51:37] hah [20:52:08] aude: feel free to reuse the python script :D [20:52:52] those extensions are there already [20:52:58] so i think ok [20:53:01] hashar: sure :) [20:53:40] jeremyb: uh, no. i like standard git and guis for it enough [20:53:49] MatmaRex: huh [20:53:50] jeremyb: also, it seems linux-only [20:53:56] i don't know about that [20:54:02] but why aren't you running linux? [20:54:03] and i happen to be on windows [20:54:07] why? [20:54:09] inertia, i guess [20:54:12] eww [20:54:16] :) [20:54:17] not enough motivation to switch [20:54:27] i've got everything set up here and it works for me [20:54:34] marktraceur: greg-g : so we land the 3 extensions right now ? [20:54:44] jeremyb: Give MatmaRex a break, if he cared about free software we'd have one fewer Windows/Opera tester [20:54:48] i.e. we'd have zero [20:54:49] i actually installed all of the unix-like commands worth running [20:54:56] hashar: On beta yes. [20:54:56] hashar: on betalabs, please [20:54:57] rotfl [20:54:59] On prod soon. [20:55:13] marktraceur: heh [20:55:21] +2ed [20:55:35] MatmaRex: really there's no excuse for not at least having a linux VM [20:55:40] Does {{int:lang}}/MediaWiki:lang only exist on Commons, or is there an extension that does it too (CLDR?) [20:55:47] MatmaRex: and you could run tig on any linux box (even labs) [20:57:01] I assume I'm not the only one getting an error page on mw.org [20:57:10] superm401: it's an evil hack [20:57:19] marktraceur: greg-g : 3 more ext on beta, hopefully they are registered in mediawiki/extensions or we now have blank pages on beta cluster [20:57:20] Nemo_bis, I know, just trying to figure out how evil. [20:57:27] I knew LangSwitch was purely on-wiki. [20:57:38] superm401: it won't be accepted as an extension (let alone core) because it fragments cache, so it's done on wiki for even worse results :D [20:57:40] hashar: One of them needs update.php, how do? [20:57:42] But I thought {{int:lang}} was software-provided, doesn't look that way, though. [20:57:49] superm401: it's adopted by Meta, Wikidata and so on [20:58:16] marktraceur: update.php is run every hour. look at https://integration.wikimedia.org/ci/view/Beta/ [20:58:36] Commons interprets untranslatable content as damage and routes around it. :) [20:58:37] marktraceur: then run the beta-update-databases job. It does a foreachwiki update.php [20:58:38] superm401: translatewiki.net has it as a local cache [20:58:39] Nemo_bis, thanks, noted. [20:58:46] *hack [20:58:52] now I need a cognac. [20:58:56] hashar: So if it's "deployed" every three minutes and updated every 60, there's potentially 57 minutes of database error. [20:59:01] Oh, that'll work [20:59:33] i don't think it's every 3 minutes [20:59:49] i think update and deploy happen together [20:59:52] Oh, it's gonna run in two minutes [20:59:53] * aude not authority on this though [21:00:40] marktraceur: correct. [21:00:59] if logged on jenkins with a wmf account, you can just "Build now" [21:01:00] I did it [21:01:10] superm401: the idea why {{int:Lang}} aka {{UILANGCODE}} is not in MediaWiki itself is that just placing it on a page will mean caching one version of it for every language (and the user is served a random version) [21:01:15] jeremyb: i have a ubuntu VM for when i need it [21:01:24] MatmaRex: so install tig [21:01:27] :) [21:01:36] so many sirens... what's going on
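(The cache-fragmentation rationale, restated as a PHP sketch; the names are made up and this is not the actual ParserCache code:)
    <?php
    // Made-up names, not the actual ParserCache code. If rendered HTML depends
    // on the viewer's interface language (as with {{int:lang}}), the cache key
    // must include that language, so one cached copy per page becomes one
    // cached copy per page per UI language, most of which are rarely reused.
    function parserCacheKey( $pageId, $touched, $usesIntLang, $userLang ) {
        $key = "pcache:$pageId:$touched";
        if ( $usesIntLang ) {
            $key .= ":uilang=$userLang";
        }
        return $key;
    }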
[21:01:57] jeremyb: must be tig opposers coming to fetch you [21:02:09] jeremyb: i don't have enough RAM to run both the VM and browser without heavy swapping [21:02:18] jeremyb: running non-trivial stuff via SSH is not an option with the latencies i get sometimes [21:02:29] Nemo_bis, wouldn't necessarily have to be random if the browser language is used. [21:02:30] MatmaRex: tried mosh? [21:02:36] But still pretty random, I guess. [21:02:49] superm401: I'm not saying I agree with that rationale (I can't judge), just relaying it :) [21:02:56] MatmaRex: how much RAM are we talking? [21:03:01] Nemo_bis: "Mosh is free software, available for GNU/Linux, FreeBSD, Solaris, Mac OS X, and Android." [21:03:03] no. :P [21:03:04] Yeah, I shouldn't comment too much, since I haven't looked at the whole picture. [21:03:07] jeremyb: 2 GB total [21:03:15] Just reviewing https://gerrit.wikimedia.org/r/#/c/89499/ [21:03:25] MatmaRex: well depends on your browsing i guess. you should be able to do some... [21:03:25] MatmaRex: sigh, I always forget that horrible defect of yours [21:03:44] MatmaRex: i didn't run a VM here (at least not much) but i used a netbook with 1GB total for a while [21:03:54] jeremyb: i have 30 tabs open like all the time. using 1.5 GB for the browser is not unusual [21:04:32] and i think the old Opera 12 i'm using is pretty lightweight in memory usage terms [21:05:15] MatmaRex: dunno Opera, but Firefox is not really consuming 75 % of the memory it uses [21:05:21] right now on my machine I mean [21:07:12] Nemo_bis: i don't understand? [21:07:37] Nemo_bis: opera also includes a mail client which i heavily use, that probably costs something too [21:08:47] thunderbird is not exactly the cheapest mail client ever but it's using less than 200 MB here [21:09:10] and my mail DB was considered rather extreme by Thunderbird devs last time I reported a bug [21:09:43] Nemo_bis: how large is your mail db? [21:10:02] Nemo_bis: the mail client includes a feed reader :P [21:10:11] thunderbird too ;) [21:12:39] (mine on Opera is almost 3 GB) [21:20:19] Anyone ever seen: [21:20:21] Warning: DOMDocument::load(): I/O warning : failed to load external entity "/vagrant/mediawiki/languages/data/plurals-mediawiki.xml" in /vagrant/mediawiki/includes/cache/LocalisationCache.php on line 588 [21:20:37] ? [21:20:56] That happened when I tried to import all the language names from Commons. [21:21:12] MatmaRex: what format does opera use? [21:21:44] Nemo_bis: why would i know? probably some proprietary one [21:22:02] all i can tell you is that it has thousands of small files and a few big ones :P [21:22:28] lol [21:22:39] howl can you trust such stuff with your mail [21:23:01] Nemo_bis: or browsing even? [21:23:56] heh [21:24:09] opera are pretty cool guys [21:24:15] jeremyb: but browsing is not something you want to store :) [21:24:36] (I hope at least bookmarks and address book are exportable in some sensible format) [21:24:49] Nemo_bis: the fact that i'm using gmail is an infinitely larger security hole ;) [21:25:05] Nemo_bis: yes, of course [21:25:08] I'm not speaking of security, just protability :) [21:25:13] MatmaRex: they might do cool things but there's a limit when you're closed source [21:25:21] omg my typing is even worse than usual this evening [21:25:40] Google recommends Thunderbird to bring your Gmail mails around, IMAP + mbox does wonders indeed
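(DOMDocument::load() only emits a warning and returns false when it can't read the file, which is why this surfaces as log noise rather than a clean failure. A defensive version of the failing spot might look like the sketch below; the path is the one from the warning, with $IP standing for the install directory (/vagrant/mediawiki there), and the guard itself is a suggestion:)
    <?php
    // Sketch of a defensive version of the failing call. The path is from the
    // warning above; the is_readable() guard and exception are a suggestion.
    $file = "$IP/languages/data/plurals-mediawiki.xml";
    $doc = new DOMDocument();
    if ( !is_readable( $file ) || !$doc->load( $file ) ) {
        throw new MWException( "Failed to load plural rules from $file" );
    }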
[21:25:50] Nemo_bis: use msoh! [21:26:05] Nemo_bis: http://pravin.paratey.com/posts/exporting-opera-email-to-mbox-format [21:26:13] Master of Science in Occupational Health : Industrial Hygiene [21:26:14] apparently opera uses something that's (almost) standard [21:26:21] so that's nice i guess [21:26:25] Methodist Stone Oak Hospital - Methodist Healthcare System | San ... [21:26:35] it seems google fears for my health [21:27:39] Mars Society of Houston [21:30:02] MatmaRex: http://fileformats.archiveteam.org/wiki/Mbs [21:30:47] and even http://fileformats.archiveteam.org/wiki/Category:File_formats_with_extension_.mbs [21:32:08] PHP fatal error in /usr/local/apache/common-local/wmf-config/CommonSettings.php line 1862: [21:32:11] require_once() [function.require]: Failed opening required '/usr/local/apache/common-local/php-1.23wmf1/extensions/BetaFeatures/BetaFeatures.php' (include_path='/usr/local/apache/common-local/php-1.23wmf1/extensions/TimedMediaHandler/handlers/OggHandler/PEAR/File_Ogg:/usr/local/apache/common-local/php-1.23wmf1:/usr/local/lib/php:/usr/share/php') [21:32:19] on rollback [21:32:44] actually, on view even https://www.mediawiki.org/wiki/Article_feedback/Version_5 [21:32:45] Nemo_bis: see #-operations [21:33:16] Nemo_bis: all of the small files seem to contain the messages in plaintext. not a weird format, is it. [21:33:17] oki [21:33:30] also, i discovered that i have an e-mail marked as having arrived in 2018. [21:33:52] Date: Sun, 09 Sep 2018 02:00:00 +0200 [21:34:35] yep, it happens [21:34:48] MatmaRex: did it contain lottery picks? [21:35:12] ori-l: sadly, no :( [21:35:54] dire warnings of any kind? [21:37:01] it's a notification from the provider of my old crappy free email that some other message on that email is inaccessible, downloaded via POP3 by gmail and then by IMAP to my computer. [21:37:11] the message in question is an automated ad mailing. [21:37:45] too bad the TZ ruined the creepiness of midnight messages [21:46:14] Nemo_bis: hey, did anyone do anything about that twn bug from earlier today? [21:46:17] https://bugzilla.wikimedia.org/show_bug.cgi?id=56409 [21:47:26] nope, need Nikerabbit to get some sleep first; tomorrow