[11:26:54] For folks with shell: Could someone look up the stacktrace of the MWException at https://th.wikipedia.org/w/index.php?title=%E0%B8%9C%E0%B8%B9%E0%B9%89%E0%B9%83%E0%B8%8A%E0%B9%89:Octahedron80/legacyThaiPUA.json&action=edit and paste it into https://phabricator.wikimedia.org/T108663 please?
[11:26:57] Thanks in advance!
[13:23:54] tto, hey
[13:24:07] hey Krenair
[13:24:26] can't stay for long, only 5 mins or so
[13:24:31] so I think this might be something we want to test thoroughly on beta before using it in production
[13:24:36] naturally!
[13:24:53] I know this is still blocked on dev, so nothing to worry about just yet
[13:25:01] One issue with beta is that there aren't many wikis there
[13:25:34] I think there's enough to test it with?
[13:25:51] Hopefully there should be.
[13:26:12] maybe with the config though - move wgImportSources to wmgProdImportSources and only set it to the real variable if wmfRealm === 'production', and check the same for 'labs' before using the new hook?
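
(Editor's note: a minimal sketch of the staging split Krenair suggests above, assuming the usual operations/mediawiki-config layout. wgImportSources, wmgProdImportSources and wmfRealm come from the chat itself; the file placement, example values and callback name are illustrative only.)

    // InitialiseSettings.php (illustrative placement): stage the list under
    // the wmg name instead of populating $wgImportSources directly.
    'wmgProdImportSources' => array(
        'default' => array( 'meta', 'commons' ), // example values only
    ),

    // CommonSettings.php (illustrative placement): expose the staged value
    // as the live setting only in production...
    if ( $wmfRealm === 'production' ) {
        $wgImportSources = $wmgProdImportSources;
    } elseif ( $wmfRealm === 'labs' ) {
        // ...and on the beta cluster, exercise the new hook from the patch
        // under discussion instead ('efGetImportSources' is a made-up name).
        $wgHooks['ImportSources'][] = 'efGetImportSources';
    }
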
[13:26:34] oh yes, I suppose that makes sense for a testing phase.
[13:26:53] Do you think the approach is workable/sensible?
[13:26:57] (in general)
[13:27:18] I haven't reviewed it in detail yet
[13:28:44] that's ok. Anyway, thanks for giving it a poke! You never know what wonders await when you poke a long-untouched bug
[13:29:02] haha, yeah
[13:29:09] just been going through wikimedia-site-requests
[13:29:25] found some random language changes that I never even knew were features of MediaWiki
[13:29:42] (having been a MediaWiki developer for 3 years now)
[13:29:57] oh, are you going through site-requests?
[13:29:59] I want to hug you
[13:30:17] Seriously... that is long, long overdue
[13:36:01] tto, I processed a load of tasks during Wikimania
[13:36:03] and a bit before that
[13:36:15] and I've been trying to organise the backlog
[13:38:40] Krenair: Did you see my suggestion (forget where it was, some Phabricator task) to split up site-requests into separate projects?
[13:38:54] maybe
[13:38:57] I've been wanting to do something similar
[13:39:01] what was your suggestion exactly?
[13:39:17] Last time I checked there was lots of stuff in there that wasn't site requests (like ops stuff, and there was even some weird bug that only showed up on old Japanese phone handsets)
[13:39:38] https://phabricator.wikimedia.org/T90468#1062029
[13:41:29] ops stuff I am moving out when I find it
[13:41:54] there are now "only" 100 tasks in the Backlog column
[13:43:01] do you think it makes sense to have a separate project for cluster-wide "requests" and tasks?
[13:43:15] I think site-requests was originally conceived for individual wikis to post their local requests for their site
[13:43:44] T19316 is a good example, it's not a site request in any sense of the term
[13:43:56] right, and those are the tasks I've been focusing on
[13:44:30] are you moving them out to some other project? e.g. general/unknown?
[13:44:36] Isn't that about w/404.php in mediawiki-config.git?
[13:44:50] depends on what they are
[13:44:55] ops stuff -> #operations
[13:45:41] if there are good proposals for splitting up "site-requests", let's discuss them in that task. However, it's not only about the folks working on them, it's also about good instructions for requesters to put them into the right basket (Phab project)
[13:45:54] That's right
[13:46:07] plus if stuff is in the wrong basket, edit the task and update the associated projects, e.g. replace site-requests with operations
[13:50:53] tto, I'm looking through your config and trying to make some sense of this
[13:51:05] first of all, you're assigning 'chapter' to all affiliate sites. please don't do that
[13:53:30] then we're adding, at the top of each project's list, the 'en' wiki of that project and the wiki in the language matching the local one
[13:54:51] ( $project !== 'wikipedia' || isset( $wikipedias[$dbname] ) ) && // Remove non-Wikipedias
[13:54:54] tto, what is that?
[13:55:11] why are you removing non-Wikipedias?
[13:55:30] why are you checking that project is not 'wikipedia' or dbname is in the list of Wikipedias? that makes no sense
[14:54:35] For folks with shell: Could someone look up the stacktrace of the MWException at https://th.wikipedia.org/w/index.php?title=%E0%B8%9C%E0%B8%B9%E0%B9%89%E0%B9%83%E0%B8%8A%E0%B9%89:Octahedron80/legacyThaiPUA.json&action=edit and paste it into https://phabricator.wikimedia.org/T108663 please?
[14:55:04] ok
[14:56:27] thanks
[16:24:30] Hi
[16:25:15] Could someone help me install Airtime, using Commons ogg files to make a playlist for an online Wikipedia radio station
[16:25:18] ?
[18:55:12] MatmaRex, Krinkle_: now that Trevor is big cheese, who is the maintainer for oojs/core and oojs/ui?
[18:59:07] spagewmf: I'm the release manager, for now.
[19:01:01] James_F: all roads lead to you :-) I want to improve the OOjs events doc, and I'm not sure whom to e-mail/irc/Hangout
[19:03:03] spagewmf: Krinkle_ and Roan are the OOjs events gurus.
[19:04:19] James_F: thx. Is it OK if I update https://www.mediawiki.org/wiki/Developers/Maintainers ?
[19:05:05] spagewmf: With what? "The event system inside OOjs" is a bit too specific to be listed.
[19:07:03] I need to know the account creation date for each user who edited a page. How would I write that query?
[19:08:53] select user_registration from user, revision where user_id = rev_user and rev_page = ?;
[19:10:29] https://en.wikipedia.org/w/api.php?action=query&list=allusers&auprop=registration&aulimit=max
[19:10:55] Does he mean a specific page? Or any page?
[19:11:04] Reedy: a specific page
[19:11:36] Reedy: give me a sample with a specific page
[19:11:52] Krenair gave you the SQL query to do it
[19:13:33] The_Photographer: if you are unable to run the query, you can use popups to easily see each user's registration date
[19:13:56] assuming you don't want to go back very far in the revision history
[19:14:11] I need XML/JSON from this
[19:14:26] I guess I could run the query. What page are we talking about?
[19:15:15] MusikAnimal: this https://es.wikipedia.org/wiki/Wikipedia:Cumplea%C3%B1os and https://es.wikipedia.org/wiki/Wikipedia:Wikicumplea%C3%B1os
[19:16:14] MusikAnimal: the problem is that I need the XML/JSON in real time
[19:16:21] MusikAnimal: non-static
[19:17:19] Ah, I see. I'm not aware of a service that suits those needs
[19:17:42] you could do it with the API through a series of loops
[19:17:47] MusikAnimal: that's why maybe an API query is better
[19:18:10] via the API, first get the revisions, then loop through each and query for the user's registration date
[19:24:19] MusikAnimal: do you have some guide for doing that?
[19:25:05] the relevant APIs are at https://www.mediawiki.org/wiki/API:Revisions and https://www.mediawiki.org/wiki/API:Users
[19:25:10] Reedy: Why can't I use "title" in your query?
[19:25:20] Because it's querying users
[19:26:44] I don't think you can do "joins" through the API, but you could conceivably do this in as few as 2 requests
[19:27:06] MusikAnimal: well, an SQL query works fine
[19:27:41] MusikAnimal: we could make a tool to do that
[19:27:56] MusikAnimal: it's better practice than doing some kind of parser
[19:28:05] generators
[19:29:03] it's for a Wikimanians' birthday celebration
[19:31:39] well, that sounds worthwhile to implement
[19:35:06] yes, it's very useful
[19:35:51] I have a tool I'm about to deploy that runs similar queries, I could add an endpoint for this
[19:36:09] looking good
[19:36:11] but we'll have to limit it to something, maybe the last 500 revisions
[19:36:50] MusikAnimal: if you could use a join, it could work without a limit
[19:37:10] yes it will work, but if the page has a crap ton of revisions the query could take forever to run
[19:40:05] This page will obviously have many revisions, one for each user who adds themselves, so placing a limit would make it non-functional. If it runs slowly, it is because you kept the idea of a loop instead of a join.
[19:40:38] joins will still be slow on pages with huge numbers of revisions
[19:41:43] let's see how the proper query runs
[19:42:12] 1) Get the list of users who edited a page, without repeats 2) get the account creation date of each of those users. That's all
[19:42:38] if you do it right, you can do that in one SQL query
[19:43:37] that first page you mentioned had the ID 259833, I ran `select user_registration from eswiki_p.user, eswiki_p.revision where user_id = rev_user and rev_page = 259833;` and got 538 rows
[19:43:41] ran pretty quickly
[19:45:07] MusikAnimal: DISTINCT :)
[19:45:16] Though
[19:45:25] 315
[19:45:27] 315 rows
[19:45:30] yep haha
[19:45:47] 49 have no registration date either
[19:46:29] nice!!!
[19:46:39] execution time?
[19:46:48] < 1 s
[19:46:50] .015 secs for me
[19:47:10] it's fast
[19:47:12] but it's doubtful it's that fast for every page
[19:47:30] Hmm. I wonder if Quarry would be a good place for this?
[19:48:30] hmm, I ran it on a page with > 5000 revisions and got 1000 rows returned
[19:48:30] Excellent work, guys
[19:49:20] ah
[19:49:35] I did it with and without DISTINCT and still got 1000 rows
[19:49:38] so it's getting cut off
[19:49:55] it's possible on labs it is
[19:50:04] yeah, this is the replica database
[19:53:03] Reedy: what is Quarry?
[19:55:15] http://quarry.wmflabs.org/
[19:56:00] neat-O!
[19:57:38] QOQ
[20:00:23] Reedy: Can't connect to MySQL server on 'enwiki.labsdb' ([Errno -2] Name or service not known)
[20:00:33] lol.
[20:00:50] -_-
[20:01:10] what happened here?
[20:01:30] Try "enwiki_p"
[20:01:59] I got zero results
[20:02:18] or rather, it says "query status: complete" but doesn't show anything
[20:02:54] no resultset
[20:05:15] ready
[20:05:16] http://quarry.wmflabs.org/query/4716
[20:05:40] awesome!
[20:05:40] MusikAnimal: for both IDs?
[20:06:13] use rev_page IN (1234, 5678)
[20:07:37] however, it's taking the current revision
[20:07:42] ?
[20:07:46] the revision is not static
[20:08:22] eh
[20:08:32] No it's not
[20:08:44] it's querying all revisions with that rev_page
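
(Editor's note: pulling the scattered pieces of this thread into one query - the implicit join rewritten in JOIN syntax, DISTINCT added per 19:45, and the rev_page IN (...) form from 20:06 so both pages can be covered at once. 259833 is the page ID quoted above; the second page's ID was never given in the chat. The pure-API alternative is the two modules linked at 19:25: prop=revisions with rvprop=user for the editor list, then list=users with usprop=registration.)

    -- Distinct registration dates of everyone who edited the page, run
    -- against the eswiki_p replica (e.g. via Quarry, which appeared to
    -- cap output at 1000 rows in the session above).
    SELECT DISTINCT user_name, user_registration
    FROM eswiki_p.revision
    JOIN eswiki_p.user ON user_id = rev_user
    -- add the second page's ID to the list to cover both pages at once
    WHERE rev_page IN (259833);
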
[20:31:31] MusikAnimal: need some query help?
[20:31:51] nah, this was for The_Photographer, but I think we got it figured out
[20:32:01] thank you though!
[20:32:24] the first sysop on the Spanish wiki
[20:32:25] SELECT user_name,user_editcount FROM eswiki_p.user LEFT OUTER JOIN eswiki_p.user_groups ON eswiki_p.user_groups.ug_user = eswiki_p.user.user_id WHERE ug_group == "sysop"
[20:32:40] MusikAnimal: yes, thanks
[20:33:08] MusikAnimal: any time, I've been playing with the DB since 2006, so just ping me if you need something
[20:33:45] The_Photographer: that is not 100% correct
[20:33:50] Can't connect to MySQL server on 'enwiki.labsdb' ([Errno -2] Name or service not known)
[20:34:19] The_Photographer: I exclude the database name in the query
[20:34:28] and I use:
[20:34:34] sql eswiki_p
[20:34:40] to connect
[20:34:52] Betacommand: ok
[20:39:19] your query would be: select user_name, user_editcount from user left join user_groups on user_id = ug_user where ug_group = 'sysop';
[20:40:01] Betacommand: Can't connect to MySQL server on 'enwiki.labsdb' ([Errno -2] Name or service not known)
[20:40:28] The_Photographer: how are you running that command?
[20:40:48] use ; select ... ;
[20:41:03] no need
[20:41:08] exit mysql
[20:41:23] Betacommand: I am using http://quarry.wmflabs.org
[20:41:48] The_Photographer: what wiki do you want the report for?
[20:42:17] es
[20:42:21] eswiki
[20:42:33] eswiki_p
[20:42:51] Betacommand: your query gives the sysop with the most time, not the first one
[20:44:04] The_Photographer: it's the same query you posted, just adjusted
[20:45:30] Betacommand: the problem is the page, not your query
[20:45:34] Betacommand: thanks
[20:46:30] The_Photographer: http://tools.wmflabs.org/betacommand-dev/reports/es_sysops.txt
[20:47:16] Betacommand: thanks
[20:47:41] Betacommand: date of the flag assignment?
[20:48:01] The_Photographer: that's in the logging table
[20:48:08] Well, I have to go
[20:48:15] See you tomorrow
[20:48:15] and a pain in the ass to extract
[20:48:18] Good night
[20:48:46] Betacommand: I was waiting for you today, however, I had to go
[20:48:48] Betacommand: A hig
[20:48:51] *hug
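
(Editor's note: a rough sketch of the logging-table lookup Betacommand calls a pain - rights changes are logged under log_type 'rights', but the before/after group lists are packed into log_params in a serialised format that has changed over the years, so isolating the actual sysop grant still needs post-processing. Schema details here are from memory of the MediaWiki logging table, not from the chat.)

    -- Approximate: all rights-change log entries, oldest first. log_title
    -- is the target user name (underscores for spaces); log_params holds
    -- the old/new group lists and must be parsed to find the sysop grant.
    SELECT log_title, log_timestamp, log_params
    FROM eswiki_p.logging
    WHERE log_type = 'rights'
    ORDER BY log_timestamp ASC;
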
[21:22:48] spagewmf: Hmm. https://www.mediawiki.org/w/index.php?title=Developers/Maintainers&diff=0&oldid=1778720 – CSSJanus, OOjs, OOjs UI, UnicodeJS and a few others weren't "split out" of MediaWiki, they were born free.
[21:23:05] Also, we already have a list of these libraries somewhere.
[21:29:39] https://www.mediawiki.org/wiki/Upstream_projects
[21:35:58] legoktm: Bingo, thanks.
[21:36:16] Maybe instead of content, we should have a "see also" link?