[00:04:09] spagewmf: great stuff [00:04:39] Is anyone else experiencing problems with a number of WMF sites ? [00:05:21] hi AlexandrDmitri, can you describe the problems? [00:05:38] Yes, they are connecting but timing out. [00:05:45] en.wp, en.wikinews [00:05:57] AlexandrDmitri: Hmm. When did this problem start? [00:06:01] And where are you? [00:06:10] About fifteen minutes ago, and Morocco. [00:07:47] Both http and https. Meta not responding either. [00:08:22] I am in the US and I can connect. Hmm. [00:08:30] works for me too. [00:08:42] AlexandrDmitri: No one else has reported this; can you reach other websites okay? [00:08:50] (obviously you can IRC so your net works) :-) [00:08:54] Yes, quite happily. [00:09:04] saper, it still feels there's a missing "How to work day-to-day": *never* change local master, always work on local branches that track gerrit/master, when you're ready create a new branch and git merge --squash your commits to that, then git review. The details are hazy, but I wish I'd been told that from the get-go. [00:09:11] I'm trying on my phone as I speak, to see if it is an ISP issue. [00:09:17] Thanks for reporting this, AlexandrDmitri [00:09:23] I always use http://downforeveryoneorjustme.com/ in such cases :) [00:09:32] (or ping, of course) [00:09:49] It's just you. http://en.wikipedia.org is up. [00:10:07] I can connect via my phone. Could be a Maroc Télécom issue. [00:10:33] AlexandrDmitri: Hmm. Yeah, I would suggest asking them :/ sorry I can't be of more help [00:11:35] Don't worry. I'm sure it will sort itself out. [00:11:44] Thanks anyway AlexandrDmitri [00:12:01] Bye. [00:16:47] whoops, it looks like my change broke stuff. sorry guys. in my defence, i asked multiple people multiple times to review it, and no one did until it was merged. hooray for gerrit's review process. [ https://gerrit.wikimedia.org/r/#/c/30361/ ] [00:18:46] what do you mean, "no one did until it was merged"? 
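The day-to-day workflow described at 00:09:04 can be sketched end to end. The scratch "gerrit" remote below is a local stand-in so the commands actually run; the branch names are hypothetical, and the final `git review` (which needs the git-review tool and a real Gerrit) is left commented out:

```shell
# Scratch stand-in for a Gerrit remote, so the workflow below is runnable.
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare gerrit.git
git clone -q gerrit.git work 2>/dev/null; cd work
git config user.email you@example.org; git config user.name "You"
git remote rename origin gerrit
echo base >file; git add file; git commit -qm base
git push -q gerrit HEAD:master

# -- day-to-day: never touch local master, work on a tracking topic branch --
git fetch -q gerrit
git checkout -qb my-feature gerrit/master
echo one >>file; git commit -qam "wip 1"
echo two >>file; git commit -qam "wip 2"

# -- when ready: squash everything onto a fresh branch and submit --
git fetch -q gerrit
git checkout -qb for-review gerrit/master
git merge --squash -q my-feature
git commit -qm "One clean commit for review"
# git review      # would push the single commit to Gerrit's refs/for/master
```

The point of the fresh `for-review` branch is that your messy `my-feature` history stays local; Gerrit only ever sees one reviewable commit.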
[00:19:44] clearly someone did or it wouldn't be merged... that's the point of merging [00:20:13] Nikerabbit +1, Siebrand merged [00:20:15] sumanah: Siebrand just came one day and merged it, after it was rotting for a month with no reviews at all [00:20:36] (well, except for the one by Niklas) [00:21:02] i think i added even more people as reviewers, but apparently they removed themselves [00:21:03] I see some reviews by Siebrand, Alex [00:21:44] sumanah: PS2 is an entirely new change, comments before it were for something different [00:22:03] since then till Niklas, it's only me and jenkins [00:22:12] My 'review' was basically just asking whether it should be part of another change [00:22:23] i had to rebase the change multiple times to match the changes in master [00:22:39] and probably everyone on the review list got mailed every time [00:22:42] and no one cared [00:23:09] MatmaRex: that's interesting, because the issues that it caused were visible in beta labs for the last three days, but none of us realized they were significant, we thought they were caused by a misconfigured memcache https://bugzilla.wikimedia.org/show_bug.cgi?id=42452 [00:23:12] seems like the lesson for the future is, as usual: we need to train more people to review code, get more people to +2 in core and in important extensions, and get better unit testing [00:23:34] sumanah: and test in beta labs! [00:23:37] also, when MatmaRex whines to get faster merges, ignore him [00:23:37] this wouldn't be caught by unit testing, i'm afraid [00:23:52] as the code *is* correct [00:23:58] but didn't take the WMF config into account [00:24:17] i wasn't expecting styles and HTML to be cached in such different ways [00:24:38] i'm not whining for faster merges, i'm whining for any reviews at all [00:24:42] MatmaRex: exactly. 
not even an automated browser test would have failed, it takes human eyes to see that sort of issue [00:25:14] MatmaRex: I have seen you many times say "will someone please merge my code" although yes in this case you were mostly banging the drum every day to get your code reviewed [00:25:34] one day AI will be good enough...a bunch of second class citizen robots that do testing all day... [00:25:38] I'm trying to get *everyone's* code reviewed, not just the people who yell the loudest [00:26:57] and sometimes that means you'd wait somewhat longer, because code reviewers' time is finite, although as you know I am trying to plant more trees (as the metaphor goes) [00:27:48] Is anyone currently deploying anything? If not, I am going to syncdir CentralNotice [00:29:39] mwalker: now seems as good a time as any [00:30:57] awight: ready to test? [00:32:52] MatmaRex, chrismcmahon, andre__, Krinkle, I'm looking at wikitech.wikimedia.org/view/Incident_response and wondering whether it's worth putting together a quick postmortem on this incident [00:33:05] ? [00:33:28] I mean on today's deployment fun [00:33:33] pgehres: yep [00:33:40] Right [00:33:52] awight: live on testwiki [00:34:14] i see! great [00:34:40] sumanah: i'm off to sleep now. thanks for fixing this, everyone [00:36:39] James_F|Away: are you now less away? [00:38:28] pgehres: I'm happy with that. [00:38:32] kk [01:01:49] can anyone load this page without it choking their browser? http://en.wikipedia.org/wiki/Portal:Featured_sounds [01:04:07] I guess it's an issue with loading a huge ton of instances of the new media player. [01:04:09] or something. [01:04:34] Chromium finally chewed through them. [01:04:40] ragesoss: it's taking a full core and nothing rendered yet (chrome, os x) [01:06:02] ragesoss: still hasn't rendered.... [01:06:05] LeslieCarr: once it processes every last one of those media player instances, it'll let go of the cpu and behave fine. [01:06:11] might take a few minutes, though. 
[01:06:20] haha [01:06:51] I got like 4 "page is unresponsive" notifications in the meantime, though. [03:59:14] 28 05:15:20 < jeremyb> grrrrrrr, what is this "Namespaces" thing? is new? [04:00:10] * jeremyb noticed it ~22.75 hrs ago... should have raised more alarms maybe? [04:00:21] that was prod but I don't remember exactly where [04:10:51] Next time, try all caps. [06:03:34] * Brooke bothers apergos. [06:11:42] apergos: "ori-l put en.wiki's articles wikitext dump into Google's BigQuery and was able to run regex on it in about 10 seconds." [06:11:48] apergos: I thought you might find that interesting. [06:11:52] I find it magical. [07:02:41] what is google's bigquery? [07:02:49] Brooke: [07:03:26] and also I wonder if they put in the full history or the current revs only [07:19:04] as a residential windows pc/laptop repair technician, 15 or more per week, what would be the easiest way to avoid the nearly 200 total updates from microsoft after a fresh install of Winxp sp3, vista sp2 & Windows 7 sp1 [07:21:37] in vista sp2 (clean install), first update was 123, then 28, then 1 .NET Framework 4, then additional .NET Framework 4 updates.... lol [07:23:23] just a warning that this is a question very far off topic for this channel; this is for discussion of technical issues around the wikimedia projects [07:23:48] (so odds of getting an answer here are likely slim) [07:25:16] o [07:26:29] do you have to be a liberal to discuss things here? [07:29:07] another happy customer [07:30:37] I actually had an answer for that question [07:31:07] shoulda spoken up sooner :-P [07:31:28] i was doing something else ;) [07:31:37] heh [08:13:08] is the stuff in 'docroot' versioned? [12:46:41] how can i add an image instead of text via addPortletLink()? [13:24:59] anyone here? [18:27:22] Can someone take a look at https://it.wikisource.org/wiki/Pagina_principale?action=edit , please? [18:28:26] Tpt: what should I see? [18:29:10] valhallasw: A server error: [18:29:11] A database error has occurred. Did you forget to run maintenance/update.php after upgrading? 
See: https://www.mediawiki.org/wiki/Manual:Upgrading#Run_the_update_script [18:29:14] Query: UPDATE `user` SET user_touched = '20121129182314' WHERE user_id = '5104' AND (user_touched < '20121129182314') [18:29:16] Function: User::invalidateCache [18:29:18] Error: 1205 Lock wait timeout exceeded; try restarting transaction (10.0.6.44) [18:29:41] Tpt: Ah. I guess I got another server - no such error here. [18:30:15] and looking at the error, it's probably just a fluke - it's a timeout [18:30:56] oh, I do get a timeout when I just visit the normal page - but my Italian sucks [18:31:12] I'm not the only user to have this issue, there is a report on the oldwikisource scriptorium: https://wikisource.org/wiki/Wikisource:Scriptorium#it.wikisource_is_down [18:31:13] In questo momento i server sono sovraccarichi. [18:31:13] Troppi utenti stanno tentando di visualizzare questa pagina. Attendere qualche minuto prima di riprovare a caricare la pagina. [18:31:16] Timeout durante l'attesa dello sblocco [18:32:09] I've merged your change on LST. [18:32:35] Yep, I saw it. Let's see how quickly it gets deployed... [18:33:10] LST? [18:33:28] Extension:LabeledSectionTransclusion [18:33:44] is this also happening when logged out? [18:34:28] jeremyb: it seems so, yes [18:35:06] oh, it's poolcounter maybe [18:35:23] > Sorry, the servers are overloaded at the moment. Too many users are trying to view this page. Please wait a while before you try to access this page again. Timeout waiting for the lock [18:36:56] not this shit again [18:37:02] hah [18:42:10] This patch fixes a major bug on the Wikisources, can someone make it live, please? https://gerrit.wikimedia.org/r/#/c/35879/ [18:47:44] meh not too many db errors [18:48:18] ^demon: could you backport and push Tpt's change? [18:49:05] only needs backporting to wmf5 [18:50:25] robla: trying to verify that the new oai stuff is live, but i don't have the credentials >_< just poked ^demon about it, but he seems to be busy. 
[18:50:34] <^demon> I don't have that. [18:50:37] <^demon> robla: On it. [18:51:03] ^demon: ok, thanks... [18:51:15] hm, someone recently offered the credentials to me... who was that?!... [18:51:19] damn :) [18:51:42] robla: any idea who i should ask about that? [18:54:01] DanielK_WMDE_: that's an ops thing. notpeter may be able to help [18:54:22] robla: ha! never mind, found it in my key chain ;) [18:54:35] * jeremyb chuckles [18:55:51] Tpt: is that gerrit change related to the itwikisource breakage? [18:56:02] * jeremyb guessed not [18:56:34] but not really sure [18:57:09] jeremyb: Not at all. [18:58:27] robla: ^ [18:59:06] wait, huh? [18:59:30] 29 18:35:06 < jeremyb> oh, it's poolcounter maybe [18:59:33] 29 18:35:23 < jeremyb> > Sorry, the servers are overloaded at the moment. Too many users are trying to view this page. Please wait a while before you try to access this page again. Timeout waiting for the lock [19:00:22] robla: that's the front page of itwikisource and it sounds like it's been hours since it was working [19:00:50] of course it may just be the symptom not a root cause [19:01:17] back in a bit [19:01:55] 29 18:31:12 < Tpt> I'm not the only user to have this issue, there is a report on the oldwikisource scriptorium: https://wikisource.org/wiki/Wikisource:Scriptorium#it.wikisource_is_down [19:02:40] woosters: going to run a bit late as we figure out this issue [19:02:47] <^demon> Argh. [19:02:51] ok [19:02:59] ^demon: what's up? [19:03:04] <^demon> php-1.21wmf5 has diverged from what's in gerrit. [19:03:19] whose deployment window is it now? [19:03:23] * robla looks [19:03:34] ^demon: I can get you a patch for the LST version that is currently deployed [19:03:56] <^demon> I'm not worried about that. [19:04:02] <^demon> I was just going to update to master. [19:04:06] ah, OK. [19:04:56] tewwy: is kaldari planning on using the deployment window? 
I don't see him on IRC, so I'm assuming not [19:05:16] at any rate, we need to ask y'all to hold off [19:05:23] I heard not. I'll tell him [19:06:37] robla: He was hoping to use the window, but he can wait. :-) [19:06:54] robla: Tell him if you guys can free it up. :-) [19:07:45] kaldari: we're diagnosing a site outage with itwikisource now, which is why we're asking you to hold off [19:07:55] OK, thanks for the info [19:08:03] I'll hold off [19:08:41] I didn't even know we had an it.wikisource :) [19:08:59] https://it.wikisource.org/wiki/Wikisource:Bar#Problema [19:09:11] Discussion regarding the main page issue. [19:09:24] oh interesting, only the main page? [19:12:22] No, it appears to have affected other pages, and for some days now. [19:12:32] (translating Italian is hard) [19:15:04] Some users reporting the problem with NS0, others with NS:User [19:16:11] The work-around has been to ask for dev attention and otherwise just work on pages which do load. [19:17:22] Actually, they reported it here: http://wikisource.org/wiki/Wikisource:Scriptorium#it.wikisource_is_down [19:20:09] hello guys [19:20:20] I was wondering if there's any project doing machine learning or AI on wikimedia dumps [19:21:12] Might ask that in #MediaWiki? [19:23:08] machine learning and AI is good! [19:23:16] you can do fancy stuff like… have machines learn [19:23:17] domas: it sure is [19:23:18] and take over the world [19:23:57] skynet.wikimedia.org [19:24:40] I recently got some machine learning 101 [19:24:48] I remotely have some understanding what it is [19:25:02] as far as I understand, you need lots of resources for that! [19:25:06] domas: do you work on some projects that involve wikimedia data and machine learning or AI ? [19:25:23] domas and brion are plotting! 
[19:25:25] domas: yes you do, but I can get some Amazon EC2 [19:25:53] average_drifter: I work on some projects that involve wikimedia data and some projects that involve machine learning and AI [19:26:09] domas: ok so they're separate [19:26:30] why would you want to do any learning out of wikimedia data? [19:26:33] there's nothing useful there [19:26:53] ^demon: AaronSchulz: do we need to enlist ops help on itwikisource? [19:27:02] <^demon> I sync'd it already [19:27:11] http://it.wikisource.org/wiki/Pagina_principale [19:27:33] average_drifter you should ask the research list and maybe also the xmldatadumps-l list [19:27:33] I wanted to do some k-means calculation for pageview data at some point in time [19:27:48] and some kind of clustering based on that [19:27:51] i guess convert_wikidata_dumps_to_infobot_brain_format.sh would be poor-man's-pseudo-AI [19:27:54] the graph weighting based on views [19:27:58] but it is pita with all the year articles [19:28:02] <^demon> robla: I haven't done anything with pool counter. All I did was sync out the LST fix. [19:28:03] and template induced links [19:28:09] <^demon> (I thought that was itwikisource's problem) [19:28:33] is it just a matter of undoing a template edit maybe? [19:30:05] well either way it needs (long term) a better error msg [19:41:43] I say we ask brion to fix it.WS. [19:42:15] * brion hides [19:42:44] Amgine: do you ever do open mic nights? [19:43:01] No, no I don't. [19:43:16] <^demon> +1 to brion getting his hands dirty ;-) [19:44:18] hah [19:48:49] I'm not seeing any obvious problems. AaronSchulz, ^demon, maybe we can try rolling just itwikisource back to 1.21wmf4 and see what happens [19:49:05] <^demon> I can do that. 
(rather, I'm not seeing any obvious template changes that could explain this) [19:50:12] and that fixed it [19:50:31] alright, now we need to understand why that fixed it [19:50:34] yep [19:51:09] seems likely they're big LST users, and that's the biggest parser related change I'm aware of [19:51:55] kaldari: go ahead and deploy what you were going to. sorry to keep you waiting [19:52:13] robla: thanks! [19:54:36] Tpt: valhallasw: want to relay the news? [20:06:22] ^demon, robla, jeremyb: Thanks a lot! I'll relay the news. [20:07:01] Valerie Juarez? https://bugzilla.wikimedia.org/show_bug.cgi?id=40497 [20:07:38] <^demon> Tpt: Well, we had to roll it back to 1.21wmf4. Even with the fix 1.21wmf5 was still broken. [20:11:12] where is mediawiki-config/wmf-config/CommonSettings.php history prior to 2012-02? [20:11:12] ^demon: Did you roll back just it.wikisource or all the Wikisources? en.wikisource is still on wmf5 and the LabeledSectionTransclusion bug looks fixed. [20:11:50] I'll take a look at the itwikisource problem... [20:12:01] <^demon> Tpt: Just it.wikisource [20:12:23] <^demon> spagewmf: There is none. We didn't copy the history from SVN. [20:12:30] ^demon: Ok. Thanks. [20:12:39] Tpt: enwikisource is still running the version with the bug [20:13:01] <^demon> spagewmf: It was in a private svn repo that lived on fenari, and the history was littered with private stuff. [20:13:14] <^demon> In short: we were lazy and just copy+pasted back in Feb. [20:15:54] ^demon, thanks. It would be great if your explanation was on https://noc.wikimedia.org/conf/ , but it isn't a wiki page. [20:17:08] <^demon> No, but it's all handled via puppet. Anyone can tweak it. [20:18:50] valhallasw: Strange. The bug looks fixed. The pages that were reported to have issues work fine now. 
[20:19:48] Tpt: http://en.wikisource.org/wiki/Wikisource:Sandbox [20:20:06] it's fixed after purging indeed [20:20:37] Another Wikisource-related bug: the djvu text extraction is broken since wmf4: https://bugzilla.wikimedia.org/show_bug.cgi?id=42466 [20:21:13] After some investigation, the issue is in the extraction of the text layer by MW core. [21:20:19] ^demon, robla: I've tried to find what happened with itwikisource, but I cannot find a clear reason. It also doesn't help that I don't know the PoolCounter mechanics at all... [21:21:04] valhallasw: I doubt that poolcounter is the problem, so don't worry about that [21:21:15] poolcounter gets involved on really expensive page parses [21:23:35] robla: well, the error was related to the poolcounter lock - is that a symptom of a parse that is just too heavy? [21:23:57] valhallasw: yeah, exactly [21:24:19] basically, poolcounter limits the number of simultaneous parses of a given page [21:24:38] however, if the page never successfully parses, then it'll fail [21:24:49] (or rather, if it hits a timeout) [21:25:18] valhallasw: here's what I would suggest: [21:25:40] this page also failed: http://it.wikisource.org/wiki/Pagina_principale/Sezioni [21:25:58] (and was transcluded in Pagina principale) [21:26:23] I'd suggest exporting that page and all of its templates, and see if you can repro the problem locally [21:27:39] ok. One of my worries with the new parsing code was that it might be too resource-heavy, as it does two parses for the page that is transcluded [21:28:30] and it probably does that a dozen times for Pagina principale, because Pagina Principale/Sezioni is transcluded a dozen times... [21:36:20] I'm trying to limit the number of parallel processes in bash. I've got http://etherpad.wikimedia.org/ZVFiVE3E09, but that tells me "line 6: let: [[: syntax error: operand expected (error token is "[[")". [21:36:46] Anyone comfortable enough with bash to help out? 
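robla's PoolCounter description above — parses of the same page queue on a shared lock, and a waiter that hits the timeout fails with exactly the message itwikisource users saw — can be mimicked locally. This is only an illustrative analogy using flock(1) with a pool size of one; PoolCounter itself is a network service that allows N concurrent workers:

```shell
# Toy model of the failure mode: workers queue on a lock and give up
# after a timeout, like a parse that never finishes within the limit.
lock=$(mktemp)

slow_parse() {
    flock -w 2 9 || { echo "Timeout waiting for the lock"; return 1; }
    sleep 4    # a "parse" that outlives everyone else's patience
} 9>"$lock"

slow_parse &                        # first worker takes the lock
sleep 0.2
result=$(slow_parse 2>&1 || true)   # second worker gives up after 2s
wait
echo "$result"
```

The second caller prints the timeout message after two seconds, which is the shape of the error itwikisource served: nothing wrong with the lock machinery, just a first worker that never finishes.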
[21:39:15] siebrand: i think you are just missing a ";" [21:39:24] mutante: Checking [21:39:57] mutante: yay, that's it :) [21:40:49] mutante: Now let's see if I can work that into my real script :) [21:41:34] robla: my laptop already chokes rendering the more mundane pages that are included... [21:42:01] even the pages that do nothing with LST, so that makes debugging somewhat hard [21:43:49] gotta run out for a little bit. maybe you can try to repro on test2.wikipedia.org? that's running 1.21wmf5 [21:45:01] I'll give it a shot [22:10:05] robla: er, does test2 have LST activated? [22:11:15] robla: ah, it's not. In any case, see http://test2.wikipedia.org/wiki/Pagina_principale [22:11:18] I'm gone now! [22:47:35] hey, ori-l and I are doing E3 deployment and there are lots of changes on fenari 1.21wmf4, including includes/Linker.php change. Meanwhile git wmf/1.21wmf4 has changes to CentralAuth, SwiftCloudFiles, WikimediaMaintenance, Message.php. [22:47:52] I'm going to do a hard reset [23:04:15] AaronSchulz, even after git reset in fenari 1.21wmf4, these extensions have different commit ids than 1.21wmf4 in git. Any idea what's up? [23:05:28] that makes sense if there was no submodule update...but why does it matter for those extensions? [23:05:38] are you updating the extensions? [23:05:44] ^ CentralAuth, SwiftCloudFiles, WikimediaMaintenance extension versions different (newer?) [23:06:27] maybe someone didn't do a submodule update in 1.21wmf4. I only want to update E3Experiments extension, but puzzled about them. [23:06:52] you can ignore them can't you? [23:07:16] when people update extensions they just update that one extension [23:07:37] e.g. "git submodule update extensions/E3Stuffs" [23:07:42] Reedy, got a sec for https://gerrit.wikimedia.org/r/36105? [23:07:52] AaronSchulz right, maybe I'm over-checking. 
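The etherpad with siebrand's script is not preserved in the log, but a common shape for capping parallel jobs in plain bash looks like the following; the `sleep`-based workload and the job count are stand-ins:

```shell
# Run a queue of jobs, at most MAX at a time.
MAX=4
log=$(mktemp)                    # collect results from the background jobs

for i in $(seq 1 20); do
    while [ "$(jobs -rp | wc -l)" -ge "$MAX" ]; do
        sleep 0.1                # all slots busy; poll until one frees up
    done
    ( sleep 0.2; echo "job $i done" >>"$log" ) &   # stand-in workload
done
wait                             # let the last few stragglers finish
cat "$log"
```

From bash 4.3 on, `wait -n` (block until any one job exits) can replace the polling loop.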
[23:07:55] after pull (or even fetch if you are just doing an extension) [23:08:15] it's live hacks in core that are more annoying (though fetch still gets around it for exts) [23:17:13] [[Tech]]; Steven (WMF); /* New namespace here on Meta */ new section; https://meta.wikimedia.org/w/index.php?diff=4701892&oldid=4603906&rcid=3734895 [23:25:11] about to run scap [23:29:01] hmm crossposting [23:34:38] sulwatcher bot is pinging [23:34:44] ack
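AaronSchulz's per-extension flow above (pull, then `git submodule update` for just the one extension) can be sketched end to end. Everything here is a scratch stand-in — the repo layout, the E3Experiments path, and the `protocol.file.allow` overrides (needed only because this demo uses file:// submodules) are not the real deployment setup:

```shell
# Scratch setup: a fake extension repo and a fake deploy branch using it.
tmp=$(mktemp -d); cd "$tmp"
git init -q ext && ( cd ext && git config user.email d@d && git config user.name d &&
    echo v1 >E3.php && git add E3.php && git commit -qm v1 )
git init -q central && ( cd central && git config user.email d@d && git config user.name d &&
    git -c protocol.file.allow=always submodule add -q "$tmp/ext" extensions/E3Experiments &&
    git commit -qm "deploy branch" )
git clone -q central deploy
( cd deploy && git -c protocol.file.allow=always submodule update -q --init extensions/E3Experiments )

# Someone lands a new extension commit and bumps the pointer upstream:
( cd ext && echo v2 >>E3.php && git commit -qam v2 )
( cd central/extensions/E3Experiments && git pull -q )
( cd central && git commit -qam "bump E3Experiments" )

# -- the flow from the chat: update only the extension you are deploying --
cd deploy
git pull -q                      # brings in the new submodule pointer
git -c protocol.file.allow=always submodule update extensions/E3Experiments
git -C extensions/E3Experiments log -1 --format=%s
```

Other submodules (and any local state in them) are left untouched, which is why spagewmf's "different commit ids" for CentralAuth and friends were expected noise: no one had run `git submodule update` for those paths.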