[00:00:31] I'm about to deploy parsoid
[00:00:46] since that's all unrelated to PHP you don't need to worry
[00:09:35] Newyorkadam: scan started
[00:10:14] Newyorkadam: however I'm still using the 20140102 dump
[00:11:11] Newyorkadam: http://tools.wmflabs.org/betacommand-dev/reports/db_scanner_Clement_Barbot.log
[00:11:41] and he's already left
[00:16:17] http://es.wikipedia.org/wiki/Consorcio_de_Transporte_Metropolitano_del_Campo_de_Gibraltar gives a 503 error
[00:21:15] only if you hit the "wrong" cache
[00:43:53] hi
[00:44:26] Betacommand: thanks it's working :)
[00:47:18] Hi everyone
[00:47:51] hi jem-
[00:48:01] hi jem-
[00:48:10] Is there a known reason that could cause the es.wikipedia API to not show any results since 21:00 UTC with Snoopy access?
[00:48:23] It's a really weird problem
[00:48:54] Also, another pywikipedia-based bot is showing "Result: 503 Service Unavailable" since that hour
[00:49:27] But no problems with other projects or with direct access, e.g. with lynx
[00:50:09] I'm about to smash the wall with my head
[00:51:22] Anyone?
[00:52:48] BonifaceFR ? Newyorkadam ?
[00:53:19] jem-: what?
[00:53:29] The problem I just wrote about
[00:53:33] hmm
[00:53:46] What Snoopy access?
[00:53:58] I mean, using the Snoopy library
[00:54:15] For the API submits and logins
[09:07:03] jeremyb and others, other commands suggested to test *bandwidth* from HK to ulsfo/eqiad? https://bugzilla.wikimedia.org/show_bug.cgi?id=60283
[09:21:15] Request: GET http://en.wikipedia.org/w/index.php?search=Natural+gas&title=Special%3ASearch, from 91.198.174.60 via amssq57 amssq57 ([91.198.174.67]:3128), Varnish XID 924136987 Forwarded for: 95.15.192.55, 91.198.174.60 Error: 503, Service Unavailable at Fri, 14 Feb 2014 09:20:31 GMT
[09:21:31] hi
[09:41:44] Wikimedia Foundation
[09:41:44] Error
[09:41:44] Our servers are currently experiencing a technical problem. This is probably temporary and should be fixed soon. Please try again in a few minutes.
[09:46:43] Wait till San Francisco wakes up? :-)
[10:18:38] strange
[10:19:08] http://en.wikipedia.org/w/index.php?search=Natural+gas&title=Special%3ASearch 503
[10:19:18] but adding &cache=foo works: http://en.wikipedia.org/w/index.php?search=Natural+gas&title=Special%3ASearch&cache=foo
[10:19:19] :D
[10:23:06] looks like LVS in esams lost a machine right about when the 5xx started, http://ganglia.wikimedia.org/latest/?c=LVS%20loadbalancers%20esams&m=cpu_report&r=day&s=by%20name&hc=4&mc=2 + https://graphite.wikimedia.org/render/?title=HTTP%205xx%20Responses%20-1day&from=-1%20day&width=1024&height=500&until=now&areaMode=none&hideLegend=false&lineWidth=2&lineMode=staircase&target=color(cactiStyle(alias(reqstats.5xx,%225xx%20resp/min%22)),%22blue%22)&ta
[10:26:44] wow, 300k/s is the several-months peak
[10:29:44] weird, SAL has nothing about amslvs4.esams even though it's mentioned in -operations
[10:29:58] yesterday CET: 21.30 < apergos> 33% packet loss pinging from palladium to amslvs4
[10:30:01] 22.09 < apergos> palladium to amslvs4 looks better
[10:30:42] afaict, and I am on the host now, it's not the one getting traffic; that's amslvs2, for upload
[10:31:03] and it is up and fine too, so I don't know what ganglia doesn't like; I even tried restarting gmond on it just in case
[10:32:03] that explains why it's not in SAL :)
[10:32:14] ping to it from palladium is fine today, I tried that too earlier just ... out of paranoia right
[10:32:28] probably the packet loss made it lose connection to ganglia, which gave up, or something?
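About the &cache=foo trick at 10:19:18: appending an unused query parameter changes the cache key, so the request misses the broken cached object (and, in a tier that shards by URL hash, may even land on a different cache node) and gets fetched fresh. A minimal sketch of that diagnostic in Python; the parameter name is arbitrary and the requests library is an assumed dependency:

```python
import random
import requests

URL = "http://en.wikipedia.org/w/index.php"
PARAMS = {"search": "Natural gas", "title": "Special:Search"}

# Plain request: may keep hitting the broken cached object / cache node.
plain = requests.get(URL, params=PARAMS, timeout=10)
print("plain:", plain.status_code)

# Same request plus a throwaway parameter: a different cache key,
# so the poisoned entry is bypassed.
busted = dict(PARAMS, cache="foo%d" % random.randrange(10**6))
print("cache-busted:", requests.get(URL, params=busted, timeout=10).status_code)
```

If the plain request keeps returning 503 while the cache-busted one returns 200, the fault is in the caching layer rather than the application servers, which matches the "only if you hit the 'wrong' cache" reply at 00:21:15.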
[10:33:19] Hi everyone, I'm getting 503 responses from the es.wikipedia API since 21:00 UTC yesterday
[10:33:35] Relocate to the United States
[10:33:39] Problem solved
[10:34:07] It's very strange, it does work with direct queries or on other wikis
[10:34:40] But not with Snoopy access
[10:35:12] twkozlowski: You mean me?
[10:37:22] Yeah
[10:37:39] So it's a known problem then?
[10:37:48] Yes :-(
[10:37:54] Ugh
[10:38:11] And it started happening roughly when you said it did
[10:38:13] 21:00 UTC
[10:38:26] So it's not me going crazy
[10:38:27] since 21:00 UTC yesterday, hmm
[10:38:39] ok, then at this point I think it's "help, mark!"
[10:38:54] and wait a sec, what is Snoopy access?
[10:39:06] The Snoopy library
[10:39:11] Used with PHP
[10:39:28] can you retrieve uh
[10:39:38] http://en.wikipedia.org/wiki/Selberg_sieve this one via Snoopy?
[10:39:49] If I use cURL it seems to work, or just lynx, from the same node
[10:40:04] I'll try
[10:40:08] I didn't think to try wget from here, I was just using Firefox
[10:40:11] let me do that too
[10:40:26] But I have no problem outside es.wikipedia
[10:40:37] nope, 503 for me with wget too
[10:40:40] That's why I thought I was going crazy
[10:41:00] is it due to varying on Accept:?
[10:41:47] can I try purging it, or are you still experimenting? because https://en.wikipedia.org/wiki/Selberg_sieve?sdgsar works - wonder if it's caching
[10:42:30] apergos: No problem with that fetch
[10:42:41] and where are you located?
[10:42:54] Valladolid, Spain
[10:42:59] ok, beats me
[10:43:13] you wanna add that to the bug report though, because it's new info
[10:43:17] sigh
[10:43:41] apergos: do some debugging with varnishlog on why varnish is sending 503s
[10:43:49] jeremyb: jeremyb-phone is you too?
[10:44:01] The problem appears in Toolserver also, but not in Labs
[10:44:17] So the "relocate to US" solution seems to be right
[10:44:30] :-P
[10:44:34] :)
[10:45:31] Anyway, if it isn't solved soon, I'll have to migrate to Labs right away
[10:45:35] I've been getting a 503 error for the past 5 minutes or so when trying to submit a small edit on en.wikipedia, is there a known problem?
[10:46:07] topic
[10:46:58] is there a way to bypass the Amsterdam servers?
[10:47:12] could you please paste the error message here?
[10:47:57] "There is a bug."
[10:48:23] https://bugzilla.wikimedia.org/show_bug.cgi?id=61364 mark
[10:48:24] I see "no backend connection" for one of these typical 503s
[10:48:52] well yes
[10:48:58] I was also getting the same error as in 61364 for a diff page on Commons earlier
[10:49:17] from the uni network i also get issues; i didn't save the error message number, the message was from the DB server, 'too many connections'
[10:49:18] it's specific to amssq57
[10:49:28] it probably was a 503, i dunno, it's ok from home
[10:50:06] !log restarted varnish backend on amssq57
[10:50:14] Logged the message, Master
[10:51:04] are people still seeing 503s?
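Before the problem was pinned on amssq57, a quick way to test the "wrong cache" theory would have been to repeat the failing request and tally which cache node answers each time. A hedged sketch: the Via and X-Cache header names are taken from the error page quoted at 09:21:15 and from common Varnish conventions, not confirmed for this exact deployment:

```python
import requests

URL = "http://es.wikipedia.org/w/api.php"
PARAMS = {"action": "query", "meta": "siteinfo", "format": "json"}

tally = {}
for _ in range(20):
    r = requests.get(URL, params=PARAMS, timeout=10)
    node = r.headers.get("X-Cache") or r.headers.get("Via") or "unknown"
    key = (r.status_code, node)
    tally[key] = tally.get(key, 0) + 1

for (status, node), n in sorted(tally.items()):
    print(status, node, n)
# If every 503 names the same node (say, amssq57) while the 200s name
# others, the fault is that one cache, not the application servers.
```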
[10:51:54] amssq57 looks happy now
[10:52:13] the backend was running over there, I looked at that
[10:52:20] it was running
[10:52:32] * twkozlowski hugs mark
[10:52:32] but that's about all that could be said about its health
[10:52:40] Yes, mark
[10:52:45] Solved here
[10:52:49] cool
[10:53:13] Toolserver also
[10:53:35] toolserver just uses esams and probably ended up on amssq57 for api or something
[10:55:09] Well, the strange thing that was driving me mad was that the problem was only for es.wikipedia
[10:55:17] Anyway, thanks :)
[11:03:20] you're welcome :)
[11:03:23] thanks for reporting
[14:01:03] StevenW: http://lists.wikimedia.org/pipermail/mobile-l/2014-February/006511.html
[14:01:23] o_0; Munaf had nothing to do with this
[14:01:33] this was 100% volunteer-managed
[14:04:06] twkozlowski: i think it was a participant in the Google project
[14:04:39] because i remember how all those favicons got changed and improved
[14:04:57] mutante: a few of them, actually
[14:05:03] it was a thing under Quim's ....
[14:05:04] including one of the students who won :-)
[14:05:04] yea that
[14:05:08] ack
[14:05:33] mutante: yeah, so I have no idea why Steven says a WMF designer did that
[14:05:40] * twkozlowski puzzled
[14:06:10] dunno either, i just added that link to MS/IE about why they say one should have 32x32 and 64x64, blah blah
[14:06:50] wikibugs> (mod) Searching for "black liquor" breaks the wiki search -
[14:06:52] wth :)
[14:19:40] many icons were turned into SVG by m4tx (a GCI student), see deps in https://bugzilla.wikimedia.org/show_bug.cgi?id=32101
[14:20:05] that's the winner :)
[14:20:10] he also did a few favicons
[14:20:21] yeah
[14:20:35] oh, you meant icons = favicons
[14:20:37] twkozlowski: just reply on list?
[14:20:59] Nemo_bis: whatever, he'll see that when he gets on IRC
[14:21:15] favicons, our favorite icons
[14:36:12] twkozlowski: meh, did it myself
[15:08:45] Reedy: here's the short-term fix for that EducationProgram error: https://gerrit.wikimedia.org/r/#/c/113301/
[16:54:01] okay, who broke
[16:54:10] https://meta.wikimedia.org/wiki/Tech/News/2014/01
[16:54:23] not I
[16:55:03] https://gerrit.wikimedia.org/r/#/c/111426/ yay for breaking things
[16:55:50] guillom: ^^ :-(
[16:59:07] twkozlowski: you mean SUL? or where?
[17:00:04] (it is for SUL, otherwise clarify :) )
[17:00:06] SUL?
[17:02:53] twkozlowski: https://meta.wikimedia.org/wiki/SUL
[17:03:30] No jeremyb, I mean the links are now broken
[17:03:48] On December 23, Wikimedia Labs was broken for 4 hours due to an [[m:w:Network File System|NFS]] problem.
[17:04:03] because of https://gerrit.wikimedia.org/r/#/c/111426/
[17:04:12] 14 10:43:49 < twkozlowski> jeremyb: jeremyb-phone is you too?
[17:04:38] jeremyb: oh. You made an edit somewhere, I think on Commons
[17:04:52] twkozlowski: i saw you fixed my editprotected :)
[17:05:00] dat
[17:05:28] I'm seeing a few errors like this on English Wikipedia:
[17:05:29] GET https://bits.wikimedia.org/static-1.23wmf13/extensions/Math/modules/MathJax/fonts/HTML-CSS/TeX/woff/MathJax_Greek-BoldItalic.woff 404 (Not Found)
[17:05:50] jeremyb: I'd +autopatrolled that account if it's you :-)
[17:05:50] (at https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(technical))
[17:06:19] twkozlowski: i can't do that myself?
[17:07:09] are you a sysop on Commons?
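The remark at 10:53:35, that Toolserver "probably ended up on amssq57 for api or something", is about how a cache tier shards requests across backend nodes: the URL determines the node, so one dead node breaks a fixed slice of URLs while everything else looks healthy. A deliberately simplified toy model (this is not Varnish's actual director logic, and the backend names are just examples):

```python
import hashlib

# Toy model: hash the URL, pick a backend. Every request for the same
# URL lands on the same node, so a single broken node produces 503s
# for one consistent subset of pages/APIs and nothing else.
BACKENDS = ["amssq51", "amssq52", "amssq57", "amssq62"]  # example names

def pick_backend(url: str) -> str:
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

for url in (
    "http://es.wikipedia.org/w/api.php?action=query&meta=siteinfo",
    "http://en.wikipedia.org/wiki/Selberg_sieve",
):
    print(pick_backend(url), "<-", url)
```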
[17:07:29] i think not
[17:07:35] helderwiki: WFM
[17:07:47] otherwise i would have just waited until i was at a computer and done the edit myself
[17:07:51] :)
[17:08:20] I'll take it as a 'yes, that's me, editing through a phone, dunno why not from my main account'
[17:08:45] I think mobile supports editing now :-))
[17:09:06] ewww, i use desktop on mobile. i did try "mobile" edits once or twice
[17:09:12] it doesn't allow giving an edit summary
[17:09:48] no, it's more about not having the non-phone creds on my phone. (and I have neither of them memorized)
[17:10:01] kk
[17:11:12] twkozlowski: https://en.wikipedia.org/w/index.php?title=Special%3ALog&type=&user=&page=user%3AJeremyb-phone&year=&month=-1&tagfilter=&hide_patrol_log=1&hide_review_log=1&hide_thanks_log=1
[17:12:04] twkozlowski: the problem only happens if I enable MathJax
[17:12:19] better link: https://en.wikipedia.org/wiki/Special:Log?page=user:jeremyb-phone
[17:12:36] twkozlowski: thanks for the promotion :)
[17:12:59] oh, I was just about to link to that log
[17:13:08] sure thing
[17:15:18] Our servers are currently experiencing a technical problem. This is probably temporary and should be fixed soon. Please try again in a few minutes.
[17:15:22] :(
[17:15:55] greg-g: I know it might not be your area of expertise
[17:16:06] greg-g: but if you go through the Tech News archives this year...
[17:16:16] ... you'll see we have at least one outage every single week
[17:16:22] that's Bad News.
[17:16:27] :)
[17:16:34] so, it is kind of my area :)
[17:16:42] Great
[17:16:44] I smile only in that: yes, it's bad
[17:16:51] Can we please not have any more outages then? :)
[17:16:56] twkozlowski: do you see any patterns?
[17:17:11] in fact, that's a good data point, I'll bring it up in my meeting with eng managers in an hour
[17:17:21] ori: yes, they all take about half an hour at most
[17:17:28] that's "good"
[17:20:14] I see outages on Jan 2, 6, 9, 13, 21, Feb 3, 6
[17:21:36] twkozlowski: alright, I'm going to make a table of these now, thanks for the prompting
[17:21:44] twkozlowski: what, the m: prefix is broken on Meta?
[17:22:13] Nemo_bis: you can't link with m:
[17:22:19] for others following along (I know twkozlowski knows about this page), there's also https://wikitech.wikimedia.org/wiki/Incident_documentation, but not everything "graduates" to that level of outage :)
[17:22:34] m:en:Wikipedia produces [[m:en:Wikipedia]] now, Nemo_bis *
[17:22:46] dunno if that's 'brokenness'
[17:32:26] twkozlowski: so, one thing I want to be careful of is jumping to the conclusion that we're worse now than before. I think there *might* be a reporting bias here (i.e. we think we're doing worse now because we're getting better at reporting outages)
[17:33:13] greg-g: No, I don't want to say we're worse or better than before
[17:33:16] * greg-g nods
[17:33:24] My only point is that we have outages; short ones, but still
[17:33:34] Our end users don't really care if we're improving or not :-)
[17:33:35] yeah, I was just creating that table and thinking "huh, some of these are fairly tiny"
[17:33:39] * greg-g nods
[17:38:07] twkozlowski: I did a first dump here: https://wikitech.wikimedia.org/wiki/Incident_documentation/Table_of_outages ; stupid dumb wikitable right now, format suggestions appreciated
[17:43:00] greg-g: it would be nice to know if we're getting better or not, though :)
[17:43:13] we definitely don't know
[17:43:17] greg-g: on a related note, what was the Feb 11 outage about and how long did it take?
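The "at least one outage every single week" claim is easy to check against the dates listed at 17:20:14. A small sketch; the year 2014 is assumed from context:

```python
from datetime import date

# Outage dates as listed at 17:20:14 (year assumed to be 2014).
outages = [date(2014, 1, 2), date(2014, 1, 6), date(2014, 1, 9),
           date(2014, 1, 13), date(2014, 1, 21),
           date(2014, 2, 3), date(2014, 2, 6)]

gaps = [(b - a).days for a, b in zip(outages, outages[1:])]
print("gaps in days:", gaps)                 # [4, 3, 4, 8, 13, 3]

weeks = sorted({d.isocalendar()[1] for d in outages})
print("ISO weeks with an outage:", weeks)    # [1, 2, 3, 4, 6]
```

So on this particular list the pattern is close to weekly but not literal: the Jan 21 to Feb 3 stretch spans 13 days and skips ISO week 5, which fits greg-g's caution that the record reflects reporting habits as much as reliability.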
[17:43:47] Nemo_bis: sure, so let's try to get some kind of data to see what sense we can make (if any)
[17:43:57] put it in Wikidata? :P
[17:44:09] :P
[17:44:10] instance of: outage
[17:44:19] twkozlowski: I..... don't remember...
[17:44:26] it's been a long week :)
[17:44:30] <^d> Let's make a new special page.
[17:44:30] guillom says Parsoid
[17:44:33] <^d> Special:Outages
[17:45:00] ^d: which auto-fills by polling Special:BlankPage?
[17:45:03] oh right
[17:45:30] twkozlowski: unsure of the root cause, there's a thread of ideas on email, I'll ask for a summary
[17:46:17] ^d: kind of like SAL
[17:46:29] <^d> hehe
[17:46:33] I see ^d did https://gerrit.wikimedia.org/r/#/c/112755/
[17:46:39] but the log doesn't say if that caused any problems
[17:46:46] <^d> No, that didn't.
[17:47:07] <^d> I'm surprised switching so many wikis from 12 -> 11 -> 12 and then 12 -> 13 broke nothing.
[17:47:21] greg-g: you mean the opsen list? because I searched Wikitech-l earlier today :)
[17:47:23] <^d> Basically all 800-something wikis switched versions in about 15 minutes.
[17:47:37] Quick ballet
[17:47:44] twkozlowski: yep, sorry, internal
[17:47:47] I just pinged on it
[17:47:54] rapid deployment, talk about agile
[17:48:12] is there any reason those things are private?
[17:48:25] because if I have to ping people for info each time, that's not very scalable
[17:48:46] it's almost like buying domains
[17:49:04] :-D
[17:49:12] lol
[17:49:46] mutante is very witty lately :D
[17:50:29] * mutante hides :)
[17:50:37] twkozlowski: yeah, sometimes initial discussions about an outage (certain types) touch on things we don't necessarily want publicly logged, e.g. how much bandwidth we have or a known obvious DDOS attack vector
[17:51:18] s/known obvious/known to us and a few and easy to attack/
[17:51:45] mutante: I think that's part of it
[17:51:56] our velocity has increased on the dev side of things
[17:52:11] Nemo_bis: twkozlowski: but there is also a serious part to it, in that both things have in common that there might be some reasons why it's not 100% public, but rather semi-public, like volunteers with an NDA can have it, etc., you know
[17:52:34] greg-g: yes
[17:53:14] and for domains it's because you don't want to make it too easy for the grabbers
[17:53:26] to see what we are willing to pay for and what not
[17:53:29] mutante: that's why such stuff has always been stored on the internal wiki, where the trusted volunteers most often working on it could also see/work on it
[17:53:48] this changed after ~08, Nemo_bis
[17:53:50] blame Obama
[17:53:56] Nemo_bis: fair, yea
[17:54:08] the ops list also works that way, there are some volunteers active in security/sysadmin matters
[17:54:14] nod
[17:54:38] twkozlowski doesn't qualify I'm afraid :P he'll have to content himself with SAL and stuff
[17:55:10] please don't ask me about list policy, i don't know
[17:55:19] i just do tickets(tm) :)
[17:55:24] * twkozlowski RTFLs as much as he can
[17:55:34] L = logs
[17:56:48] same for seeing core-ops RT
[17:57:40] the only policy I know about the ops@ list is that anyone with deploy privs had better be on it and reading
[17:57:57] greg-g: so any more details about the Parsoid thing?
[17:58:10] what parsoid thing?
[17:58:14] I'd like to finish https://meta.wikimedia.org/wiki/Tech/News/2014/08 for today
[17:58:20] and let the translators do their job
[17:58:45] ori: the Parsoid outage on Tuesday evening UTC / morning PST
[17:59:40] ori: the one from Feb 11th as well
[17:59:56] Feb 11 = Tuesday, greg-g :)
[17:59:57] there's a thread on ops@ as well :), I just pinged Roan/Gabriel to make it public
[18:00:05] twkozlowski: sorry, was answering before reading your answer :P
[18:00:34] then just cross-post to wikitech
[18:00:45] in those cases it was just on ops but should be public, right
[18:00:57] mutante: yeah
[18:01:24] just that cross-posting confuses people and forks threads :p
[18:02:08] in practice parsoid was mostly up
[18:02:21] there were some errors, as some requests were sent to broken boxes
[18:03:18] gwicke: (duplicating my email in IRC) please post the outage report to incident reports :)
[18:03:24] greg-g: somehow the engineering list is in between there, hrmm. but i would personally just think of wikitech
[18:04:49] greg-g, ok
[18:04:58] thankya :)
[18:07:54] not for today, but ..eh..some day.. maybe.. too many? .. not sure.. ops/engineering/wikitech-l/wikitech-announce/mediawiki-core/mediawiki-l
[18:08:11] wikitech-announce? i didn't even know
[18:08:33] well, ok: "Announcements of activities for developers and other technical contributors."
[18:08:37] wikitech-ambassadors
[18:08:39] :-)
[18:09:50] so which list corresponds most to #wikimedia-dev
[18:10:04] if they'd match the channels?
[18:10:20] there are also so many of them, it's hard to keep up
[18:10:23] what's the diff between -tech and -dev?
[18:10:27] shrug
[18:10:35] it has a little bit of #mediawiki in it
[18:11:11] and i go there when i want to see the bot saying i merged stuff in projects that are not ops/puppet
[18:11:51] twkozlowski: well, ok, to be fair, dev is development of the software and tech is more infra, of course
[18:12:08] here is where people come to get info when the site is down
[18:12:08] isn't that what -operations is for? infra?
[18:12:25] well no, that's for the ops team to be able to work
[18:12:31] while the general public joins -tech
[18:12:40] and asks a bunch of questions
[18:12:53] afaik
[18:13:04] * twkozlowski shrugs; has seen people fix stuff in -staff
[18:13:20] :p i suppose so, yea
[18:13:56] to me it's like the channel of wikitech-l, so to say
[18:51:52] [[Tech]]; MF-Warburg; TNT is broken, for whatever reason; https://meta.wikimedia.org/w/index.php?diff=7494461&oldid=7365737&rcid=4989218
[18:52:33] [[Tech]]; MF-Warburg; /* VE on esWiki */; https://meta.wikimedia.org/w/index.php?diff=7494466&oldid=7494461&rcid=4989221
[19:23:03] now https://bugzilla.mozilla.org/show_bug.cgi?id=758857 that's an interesting bug
[19:24:30] "To determine the number of requests that would be switched to HTTPS, can't Wikipedia just search their server logs for page requests that include Firefox's "Mozilla-search" URL parameter?"
[19:25:10] Can we just turn it on? We've been using HTTPS for a while
[19:36:48] twkozlowski: Might be worth getting an OK from Ops first... or at least notifying on, like, wikitech-l
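One rough way to answer the question quoted at 19:24:30 is to count sampled request-log lines that carry Firefox's search parameter. This is only a sketch: the filename, the 1:1000 sampling ratio, and the one-request-per-line format are assumptions, not details from this discussion:

```python
import gzip

LOG = "sampled-1000.log.gz"   # hypothetical sampled access log
MARKER = "Mozilla-search"     # the Firefox search URL parameter from the bug

total = hits = 0
with gzip.open(LOG, "rt", errors="replace") as fh:
    for line in fh:
        total += 1
        if MARKER in line:
            hits += 1

print(f"{hits}/{total} sampled requests mention {MARKER}")
# Under 1:1000 sampling, multiply hits by 1000 for a rough absolute count.
```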
[19:36:49] https://ganglia.wikimedia.org/latest/?r=hour&cs=&ce=&s=by+name&c=SSL%2520cluster%2520eqiad&tab=m&vn=&hide-hf=false
[19:36:56] The eqiad SSL cluster is hardly busy
[19:37:21] twkozlowski, Reedy: I brought this up on the ops@ mailing list, for those who can access it
[19:37:33] aha :)
[19:38:12] "Firefox search and HTTPS access to Wikipedia search" - on Jan 13
[19:39:32] ^d: https://bugzilla.wikimedia.org/show_bug.cgi?id=56619 can be closed, I think?
[19:40:29] <^d> twkozlowski: Yeah, closed.
[19:40:32] <^d> Thanks for the reminder
[19:42:47] there's also a similar 'update to version' bug for Solr: https://bugzilla.wikimedia.org/show_bug.cgi?id=49245
[19:42:52] not sure which version we're using now
[19:43:37] twkozlowski: i wonder how 44893 sat for so long :(
[19:44:41] <^d> twkozlowski: No clue.
[19:44:49] <^d> We'd actually like to move that to Elastic and off Solr :)
[19:45:48] jeremyb: Dunno; I only stumbled upon it because it got 'updated' during the Bugzilla update
[19:46:13] oh, you mean wikibugs-l?
[19:46:16] CC removal
[19:46:31] also, i don't have time for this now, but someone should go through both DNS and redirects (apache-config) and make sure nothing else is broken the same way
[19:47:54] mutante is assigned to that bug :-)
[19:48:06] hah
[19:48:53] jeremyb: probably my fault that 44893 didn't receive attention for the last six weeks. Meh :-/
[19:49:14] twkozlowski: he's not assigned any more...
[19:50:01] andre__: not just yours... we need some better alerts / classification of bugs
[19:50:30] a way to flag stuff for retriaging on reopen, and alerts when something is open for x period without a triage
[19:51:52] jeremyb, feel free to file me an enhancement ticket so I might come up with a query
[19:52:00] * andre__ busy with other stuff right now :-/
[19:52:23] andre__: not just a query. a workflow
[19:52:37] you can't query without criteria
[19:52:44] oh well
[19:53:48] jeremyb, "In ASSIGNED state && assignee=real person && no updates for six months" or "In PATCH_TO_REVIEW status && all patches merged && no updates for two months" are also good candidates for "something should happen here, right?"
[19:53:57] first step for me is trying to come up with a query to find them
[19:54:22] afterwards you can try to discuss a workflow, if common sense doesn't cover it enough yet :)
[19:54:53] twkozlowski: 44893? did somebody sell tartupeedia.ee .. no idea. when i made that redirect it worked
[19:55:16] mutante: did you see my comment?
[19:55:27] how is it related to the LVS change?
[19:55:30] mutante: https://bugzilla.wikimedia.org/44893#c16
[19:56:06] the domain resolves to the same place it did way back. it's just that that place is no longer listening on HTTP/HTTPS
[19:56:42] so the link to https://gerrit.wikimedia.org/r/#/c/101880/
[19:56:44] mutante: because we switched from unified to star?
[19:56:46] about LVS monitoring is intended
[19:56:48] ?
[19:56:51] err, other way around
[19:57:35] well it's not monitoring that broke it but it's related. i did spend 20 more secs looking at other stuff on the same gerrit topic and didn't see anything more relevant
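andre__'s criteria at 19:53:48 map naturally onto an automated query. Here is a sketch of the first one ("ASSIGNED, real assignee, no updates for six months") against the Bugzilla REST search API; note that the REST endpoint ships with Bugzilla 5.0+, so whether this instance exposed it at the time is an assumption:

```python
import requests

params = {
    "status": "ASSIGNED",
    # advanced-search triplet: "days since the bug changed" > 182
    "f1": "days_elapsed", "o1": "greaterthan", "v1": "182",
    "include_fields": "id,summary,assigned_to,last_change_time",
    "limit": "100",
}
resp = requests.get("https://bugzilla.wikimedia.org/rest/bug",
                    params=params, timeout=30)
for bug in resp.json().get("bugs", []):
    # "assignee = real person" needs an instance-specific filter,
    # e.g. skipping shared default-assignee addresses.
    print(bug["id"], bug["assigned_to"], bug["summary"])
```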
[19:57:45] err, other way around <--- star to unified
[19:58:55] i'm amazed by my own comment from 2013-02 :p
[19:59:03] would not have remembered
[20:02:09] https://en.wikipedia.org/w/index.php?title=Wikipedia:WikiProject_Fungi/fungus_articles_by_size&oldid=186704908 nice
[20:02:32] Internal error - {{SITENAME}} :-)
[20:03:07] http://en.wikipedia.org/w/api.php?action=query&prop=revisions&revids=186704908&rvprop=content|ids too
[20:10:08] curious. twkozlowski - you've been in -staff? :)
[20:10:23] No.
[20:11:43] twkozlowski: heh, too bad. -staff (and related) might be a better place if that weren't true
[20:31:25] would this be a good place to ask an API-related question?
[20:36:44] ah, nm, i found my answer =)
[20:50:48] andre__: I'm thinking of upstreaming https://bugzilla.wikimedia.org/show_bug.cgi?id=56372
[20:51:43] twkozlowski, yeah, looks like that makes sense
[20:51:56] twkozlowski, if you do, feel free to add a9016009@ to the CC in upstream
[20:53:01] ok, going through the documentation now
[20:54:00] andre__: https://wiki.mozilla.org/Bugzilla:Committing_Patches isn't really newbie-friendly
[21:05:15] ^d: internal server error opening a patch that is C+2 V+2 but still "open", what to do? https://gerrit.wikimedia.org/r/66223
[21:05:34] <^d> Cry in a corner :(
[22:08:54] not sure where best to ask this: how can I get the canonical URL for an article? for example, Lights_out_management redirects to an article called "Out-of-band management", but the URL isn't rewritten. aside from cutting and pasting, how can i get the canonical URL?
[22:10:24] oh, i can copy it from the Read tab next to Edit, nm
[22:15:29] Who cares about spammy sitenotices which can't be hidden? https://gerrit.wikimedia.org/r/112951
[22:15:46] jgage: rel=canonical?
[22:17:56] nemo_bis: i'm afraid i need a bit more context. appending ?rel=canonical to the noncanonical URL doesn't cause it to be rewritten. however, finding the Read link solved my problem.
[22:19:03] I mean in the HTML <link rel="canonical"> element
[22:19:31] But yes, for normal navigation the read/page tab is the place :)
[22:21:01] can someone take a quick look at: https://commons.wikimedia.org/wiki/File:Lophophorus_impejanus_%28Himalayan_Monal%29_1.JPG
[22:21:12] and explain why it's not showing up in the page history?
[22:29:38] Withoutaname: there is a bug for descriptionless files, if that's what happened
[22:30:50] uploaded the file without writing up a description page, yes
[22:31:04] but there's no past revision entry in ?action=history either
[22:31:25] though if you give me a link to the ticket, I'd appreciate it
[22:32:57] it's called "descriptionless files", search
[22:33:33] k
[23:30:42] i'm having a heck of a time trying to get an API query with prop=pageimages to return more than 10 results. i've specified pilimit=50, as per https://en.wikipedia.org/w/api.php but no joy =/ i can pass in gaplimit=50 (since i'm using the allpages generator), but that produces results where almost none have page images =/
[23:34:06] >_< i need both pilimit and gaplimit
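On the pilimit/gaplimit puzzle at 23:30:42: when a generator feeds a prop module, each side enforces its own limit. gaplimit caps how many pages generator=allpages yields per request, and pilimit caps how many of those prop=pageimages will annotate, so both must be raised together, exactly as concluded at 23:34:06. A sketch of the combined query; the requests library and the placeholder User-Agent are assumptions:

```python
import requests

params = {
    "action": "query",
    "format": "json",
    "generator": "allpages",
    "gapnamespace": "0",
    "gaplimit": "50",     # pages produced by the generator
    "prop": "pageimages",
    "piprop": "name",
    "pilimit": "50",      # pages annotated with an image per request
}
resp = requests.get("https://en.wikipedia.org/w/api.php", params=params,
                    headers={"User-Agent": "pageimages-sketch/0.1 (example)"},
                    timeout=30)
pages = resp.json()["query"]["pages"]
with_image = [p["title"] for p in pages.values() if "pageimage" in p]
print(len(pages), "pages returned,", len(with_image), "with a page image")
```

Many allpages titles have no page image at all, which also explains the "almost none have page images" observation: the generator walks titles alphabetically, it does not select for pages that happen to have images.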