[00:12:16] Brooke: i think maybe that was a reference to https://en.wikipedia.org/wiki/User:Jasper_Deng/IPv6
[00:12:39] I don't think that was quoted in her post.
[00:12:45] But that's the one I was thinking of.
[00:12:50] I thought it was on Meta-Wiki.
[00:12:56] But at Meta-Wiki he properly used the main namespace.
[00:12:58] So..
[00:13:06] I guess a wikitech-l feed in here might be nice.
[00:13:09] Mayne.
[00:13:11] b
[00:15:25] Maine?
[00:16:06] and a wikitech-ambassadors and a /me tries to think of something else
[01:35:31] User talk page modification: error "Gateway Time-out" occurred while contacting the API.
[01:43:25] Evening guys, are you aware of any issues with Wikimedia's API? People are reporting errors in tools which use it
[01:43:42] Gateway time out, warning messages from the API and so on
[01:44:49] here too
[01:45:31] I would check for probs myself, but I'm using a text-only browser, and the status page and nagios don't show up properly in it.
[01:45:48] I'm minus a GUI at the moment, which slows shit down dramatically for me :(
[01:46:11] https://en.wikipedia.org/w/api.php?action=query&list=recentchanges&rctoken=patrol&rclimit=1&format=jsonfm
[01:46:22] Says I don't have the 'patrol' right, and I'm logged in
[01:46:48] BarkingFish: done :)
[01:47:48] MJ94: it's fixed?
[01:48:03] Nope, I just meant done, I already alerted them
[01:48:06] :)
[01:48:29] Ah, ok then - sorry for bugging you dudes, I'll leave you in peace. I'll keep watching for any notes though :)
[01:53:13] BarkingFish: Haha, no one was listening to me, so you helped.
[01:53:56] MJ94: they listen, believe me. The reason nobody responds is that most likely, they're up to their necks in trying to work out what went feet up :)
[01:54:05] I've learned that from a long time reporting stuff in here :)
[01:54:12] hahaha yes
[01:54:43] BarkingFish: I consider myself pretty good with computers, but I'm no problem solver, so hi5 to the techies.
[02:07:13] Is anyone else seeing 504 Gateway Time-out from api.php? Request: GET http://en.wikipedia.org/w/api.php, from 10.64.0.131 via cp1005.eqiad.wmnet (squid/2.7.STABLE9) to ()
[02:07:13] Error: ERR_CANNOT_FORWARD, errno (11) Resource temporarily unavailable at Fri, 08 Jun 2012 02:06:08 GMT
[02:07:52] anomie: Yes
[02:07:55] anomie: yep, errors on the API have been reported here :)
[02:08:00] and elsewhere too :P
[02:08:06] VPT, for instance
[02:08:18] And everyone who had the patience to hear me complain
[02:14:55] !log LocalisationUpdate completed (1.20wmf4) at Fri Jun 8 02:14:55 UTC 2012
[02:15:02] Logged the message, Master
[02:15:10] !log Hello, world!
[02:15:16] Logged the message, Master
[02:15:20] ._.
[02:15:23] o.O
[02:15:26] Don't do that, SigmaWP.
[02:15:27] Er
[02:15:28] O.O
[02:15:30] Sorry!
[02:15:32] :|
[02:15:37] Everyone type /clear please
[02:15:55] A wild Bsadowski1 appears!
[02:23:09] !log LocalisationUpdate completed (1.20wmf3) at Fri Jun 8 02:23:09 UTC 2012
[02:23:13] Logged the message, Master
[02:43:18] tstarling cleared profiling data
[02:50:46] is the only problem that the API is down?
[02:51:06] seemed more widespread from nagios
[02:51:21] i think all the complaints have been about the API
[02:51:37] oh charming
[02:51:41] http://en.wikipedia.org/w/api.php
[02:51:46] gmond is using 100% CPU and 23GB RSS
[02:51:52] Our servers are currently experiencing a technical problem.
[02:52:56] SigmaWP: fyi, that dents and tweets and maybe does other stuff too. i reverted the one place i could for you. http://wikitech.wikimedia.org/index.php?title=Server_admin_log&diff=47798&oldid=47797
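A quick way to spot a runaway daemon like the gmond just mentioned; a minimal sketch of commands an operator might run, not the ones actually used in this incident:

    # list the top memory consumers (RSS in KB); a 23GB gmond would lead this
    ps -eo pid,rss,pcpu,comm --sort=-rss | head
    # pause (not kill) a suspect process for later debugging, as Tim does below
    kill -STOP <pid>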
[02:53:37] !log on cp1002: killed gmond, which was using 100% CPU and 23GB RSS. Restarting squid which had died
[02:53:42] Logged the message, Master
[02:56:21] !log cp1001: same as on cp1002, restarted gmond
[02:56:25] Logged the message, Master
[02:56:39] but they've been doing that for a good 4 days at least. why now? was it just building the whole time?
[02:56:56] (could be 10 days even for all i know)
[02:57:03] jeremyb: That dents tweets? :O
[02:57:11] SigmaWP: dents and tweets
[02:57:20] Oh, ok
[02:57:24] ganglia does not show any useful data for that cluster so it's hard to tell
[02:57:26] * SigmaWP WHAT HAVE I DONE?
[02:57:31] jeremyb: Thanks for the revert
[02:57:49] SigmaWP: no big deal. just thought you should know it's not just a wiki. there's real people following the streams
[02:58:38] yeah, it's got a small blip of data
[02:58:45] OK
[02:59:15] * jeremyb waves Ryan_Lane
[02:59:22] api issues
[02:59:35] tim booted gmond on both cp100[12]
[02:59:41] oh, i guess that's not really a Ryan_Lane
[02:59:50] he's automated
[03:00:33] (i was wondering, it's kinda early there! if he's still there)
[03:01:23] !log on fenari: copied *.text and *.upload from /home/wikipedia/conf/squid/generated/clusters to /etc/dsh/group
[03:01:28] Logged the message, Master
[03:02:54] !log on cp1002: killed gmond again, it was leaking memory again, already up to 27GB in the few minutes since I restarted it
[03:02:58] Logged the message, Master
[03:08:19] TimStarling: so what's happening?
[03:08:27] !log stopped gmond on cp1001 with kill -STOP for memory leak debugging
[03:08:32] Logged the message, Master
[03:08:34] heh
[03:08:35] is the API working again now?
[03:08:51] Let's try it.
[03:08:57] String[]: Betacommand Bsadowski1 anomie ping
[03:08:59] No, still down.
[03:09:05] jeremyb: pong
[03:09:09] ^^^
[03:09:12] try the API
[03:09:35] TimStarling: No
[03:09:47] works about 50% of the time
[03:10:25] Interestingly enough, if I hack /etc/hosts to locally point en.wikipedia.org to 208.80.152.201 (wikipedia-lb.pmtpa.wikimedia.org) instead of 208.80.154.225 (wikipedia-lb.eqiad.wikimedia.org), it seems to work ok.
[03:10:25] TimStarling: take a look at cp1005 also
[03:10:59] anomie: what is it otherwise?
[03:11:21] jeremyb- I get eqiad here, 208.80.154.225
[03:11:22] Error 504: Gateway Time-out, if anyone cares
[03:11:27] Betacommand: you're getting an error there? same one anomie reported before
[03:11:33] String[]: right
[03:12:12] jeremyb: Im getting the wiki has a problem page and Error: ERR_CANNOT_FORWARD, errno (11) Resource temporarily unavailable at Fri, 08 Jun 2012 03:10:10 GMT
[03:12:51] Betacommand: yeah, but why did you say cp1005? is that what the error said?
[03:12:58] via cp1005.eqiad.wmnet
[03:13:15] The error always mentions cp1005.eqiad.wmnet, as far as I've seen.
[03:13:25] Hrm...
[03:13:36] I don't know enough about things to know if that's unusual
[03:13:40] it's amazing how unuseful http://ganglia.wikimedia.org/latest/?c=Text%20squids%20eqiad&h=cp1005.eqiad.wmnet&m=load_one&r=hour&s=by%20name&hc=4&mc=2 is ;-P
[03:13:44] I just changed the &format to jsonfm, and it worked
[03:13:55] Nevermind, just a 1-time thing
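anomie's /etc/hosts hack above amounts to a temporary client-side override, roughly like this (the pmtpa address is the one quoted in the log; remove the line again once the incident is over):

    # point en.wikipedia.org at pmtpa (208.80.152.201) instead of eqiad
    echo '208.80.152.201  en.wikipedia.org' | sudo tee -a /etc/hosts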
[03:14:03] String[]: example URL?
[03:14:10] https://en.wikipedia.org/w/api.php?action=query&prop=templates&tllimit=500&tlnamespace=10&titles=Wikipedia:Sandbox&format=jsonfm
[03:14:18] danke
[03:14:43] jeremyb: http://en.wikipedia.org/w/api.php fails for me
[03:14:51] i'm gettign a lot of cp1004 here
[03:14:54] getting*
[03:15:20] "Request: GET http://en.wikipedia.org/w/api.php, from 10.64.0.127 via cp1005.eqiad.wmnet (squid/2.7.STABLE9) to ()
[03:15:20] Error: ERR_CANNOT_FORWARD, errno [No Error] at Fri, 08 Jun 2012 03:15:02 GMT "
[03:15:23] :O
[03:15:31] yup
[03:16:05] maybe being out of memory broke it
[03:16:22] but these boxes aren't running gmond
[03:16:30] (cp100[45])
[03:17:57] Im seeing errors from 004 about half of the refreshes
[03:18:05] !log restarting squid on cp1005, maybe out of FDs or something, cachemgr shows exactly 1000 open connections to 10.2.1.1
[03:18:09] Logged the message, Master
[03:18:41] what's that # on cp1004?
[03:18:52] hello everyone
[03:19:10] kgb[]: what's your error today? ;)
[03:20:50] I would like to create a gallery site based on *all* wikimedia images. What is better: to crawl entire wikicommons with bot and download them OR just hardlink them from my site to wikicommons servers? What is better from the perspective of the wikicommons server [im aware that i might be causing big load or "stealing bandwidth"]. thats why im asking before trying. ps im aware of the size of wikicommons
[03:21:21] well you definitely shouldn't just crawl
[03:21:28] if you want them all you can rsync them
[03:21:57] also make it wikimedia commons or just "commons" but not wikicommons
[03:22:05] Starting to see errors mentioning cp1003.eqiad.wmnet now.
[03:22:49] ok. didint knwo that u got rsync :)
[03:22:55] sorry for typos
[03:23:09] so running rsync is ok yeah? and i will not risk ban?
[03:23:15] kgb[]: you should join xmldatadumps-l
[03:23:25] no, rsync is fine
[03:23:35] @search lists
[03:23:35] Results (Found 7): 1.19, announce, list, lists, repeat, smw, support,
[03:23:40] !list
[03:23:40] mediawiki-l and wikitech-l are the primary mailing lists for MediaWiki-related issues. See http://lists.wikimedia.org/ for details.
[03:23:40] meta.wikimedia; page; Tech
[03:23:43] gah
[03:23:47] !lists
[03:23:47] mediawiki-l and wikitech-l are the primary mailing lists for MediaWiki-related issues. See https://www.mediawiki.org/wiki/Mailing_lists for details.
[03:24:13] !mailarchive is http://lists.wikimedia.org/pipermail/$1
[03:24:13] Key was added
[03:24:56] !listinfo is https://lists.wikimedia.org/mailman/listinfo/$1
[03:24:57] Key was added
[03:25:04] !pipermail alias mailarchive
[03:25:04] Created new alias for this key
[03:25:19] !listinfo xmldatadumps-l | kgb[]
[03:25:19] kgb[]: https://lists.wikimedia.org/mailman/listinfo/xmldatadumps-l
[03:25:41] thx buddy
[03:25:44] let me browse it
[03:27:23] kgb[]: http://bots.wmflabs.org/~petrb/logs/%23wikimedia-tech/20120606.txt starting at 22:01:07
[03:29:10] nice :)
[03:30:34] Assuming the X-Cache and X-Cache-Lookup also correspond to whichever cp100x.eqiad.wmnet host is involved, cp1005 is still giving me failures (and seems to be hit most often). Infrequently cp1005 gives me a successful response. cp1003 works most of the time, but sometimes not. Also have seen cp1004, cp1002, and cp1001, no errors from them so far for me.
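anomie's per-host tally above can be reproduced with a loop along these lines; a sketch rather than a command from the log, repeating the request and printing the status line plus the X-Cache headers that name the answering cp100x host:

    for i in 1 2 3 4 5; do
      curl -s -o /dev/null -D - \
        'https://en.wikipedia.org/w/api.php?action=query&prop=templates&tllimit=500&tlnamespace=10&titles=Wikipedia:Sandbox&format=jsonfm' \
        | grep -iE '^(HTTP/|X-Cache)'
    done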
[03:31:02] cp1004 is ging me plenty of errors
[03:31:17] if you don't want to use cp1005, then don't post to http://en.wikipedia.org/w/api.php, use some other URL
[03:31:31] giving*
[03:33:18] jeremyb: http://dumps.wikimedia.org/commonswiki/20120601/ Which one contains the images? From what i see those are only dumps with metadata?
[03:33:47] What else is giving problems besides the API?
[03:33:47] did you read the log?
[03:33:59] more or less
[03:34:00] http://ftpmirror.your.org/pub/wikimedia/images/wikipedia/commons/
[03:34:02] gah
[03:34:03] kgb[]: there isn't an image dump
[03:34:24] ok then
[03:35:15] "As of November 2011 the image and other media files take up about 17T, most of it already compressed media." :)
[03:35:25] hah
[03:36:27] is there any ls -lR file available from ftp mirror?
[03:37:58] * jeremyb is digging a little...
[03:38:57] it's clicktracking
[03:39:09] heh
[03:39:17] a new deploy?
[03:39:27] meta.wikimedia.org seems painfully slow to me.
[03:39:31] makes sense that it would be API
[03:40:19] kgb[]: so, that http link above corresponds to rsync://ftpmirror.your.org/wikimedia-images/wikipedia/commons/ ; you can browse http://ftpmirror.your.org/pub/wikimedia/images/ to see which other parts of the corresponding rsync tree you want to fetch
[03:42:09] kgb[]: what site is this for?
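Pulling from the mirror jeremyb points at would look roughly like this; a sketch, with the module path taken from the log and the flags and destination as assumptions (the full tree is ~17T, so fetch one shard at a time):

    # see what the rsync module exposes before downloading anything
    rsync rsync://ftpmirror.your.org/wikimedia-images/wikipedia/commons/
    # then pull a single hashed shard
    mkdir -p commons
    rsync -av rsync://ftpmirror.your.org/wikimedia-images/wikipedia/commons/0/ commons/0/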
[03:46:48] ah, the e3 thing was a recent deploy and that uses clicktracking
[03:47:50] what e3 thing?
[03:48:25] LastModified/E3Experiments/js/ext.E3Experiments.Timestamp.js ?
[03:48:29] yes
[03:48:35] it was on for only testwiki from may 31 and then kaldari turned it on today for enwiki
[03:49:04] well, it's going away now
[03:49:21] i think you can just turn it off in initialisesettings
[03:49:27] and that should do it
[03:49:47] !g I292efae75418
[03:49:47] https://gerrit.wikimedia.org/r/#q,I292efae75418,n,z
[03:50:07] just back that out
[03:51:03] !log tstarling synchronized wmf-config/InitialiseSettings.php
[03:51:07] Logged the message, Master
[03:53:49] jeremyb: i wanna create some "gallery" website and i want to steal wikicommons content lol
[03:54:16] kgb[]: also make it wikimedia commons or just "commons" but not wikicommons
[03:54:56] kgb[]: do you understand all of the legal ramifications? i.e. how to properly credit image authors on your site?
[03:55:07] !log disabled LastModified extension due to overload on cp1005
[03:55:12] Logged the message, Master
[03:55:47] did the e3 team take the site down already?
[03:55:48] TimStarling: did you push to gerrit yet?
[03:55:48] jeremyb: well im not an expert but yeah i understand it more or less.
[03:55:53] Eloquence: yup ;)
[03:55:56] I'm working on it
[03:56:25] Eloquence: mostly API. but at least 5 people noticed it and complained here. surely they weren't the only ones to notice
[03:56:27] the change is in the local repo on fenari, I'm working on pushing it to gerrit
[03:56:48] Eloquence: every page view was leading to a POST to the API
[03:57:13] gah
[03:57:26] TimStarling: only one per view?
[03:57:27] squid broke first, but if I fixed squid, apache probably would have broken instead
[03:58:59] it's hard to imagine the apache cluster being able to support such a high request rate
[03:59:49] i wonder how it lasted so long? it was deployed ~8 hrs ago
[03:59:50] Any idea if it was the JS or the PHP hitting the API?
[04:00:21] err, 7.5 hrs rather
[04:00:28] Brooke: JS?
[04:00:33] why would php use the api?
[04:00:54] You can do FauxRequests, I think.
[04:00:57] i know there's those faux requests. but those don't leave the box even, right?
[04:01:10] that's why they're called faux
[04:01:10] Not sure why the JS would be posting all the time.
[04:01:23] They still hit the API, I think.
[04:01:34] for clicktracking. to report datapoints to be recorded
[04:01:42] what does e3 mean anyway?
[04:01:52] It seems like it d be better to disable the ClickTracking extension.
[04:01:58] Editor engagement experiments
[04:02:03] it'd
[04:02:41] ahh
[04:03:09] Eloquence: I'll post an incident report to a mailing list
[04:03:24] maybe engineering
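Judging by the post data StevenW quotes later in the log ([05:51:28]), each enwiki page view was effectively issuing something like this; a reconstruction, with the token elided exactly as it is in the log:

    # one of these per page view, which is what buried the text squids
    curl -s 'http://en.wikipedia.org/w/api.php' \
      --data 'action=clicktracking&format=json&eventid=ext.lastModified%401-ctrl1-impression&token=...&namespacenumber=0'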
[04:04:00] Understand that you may be busy ATM, I'm wondering how I request a server-side upload of some large vids?
[04:04:11] TimStarling, thanks.
[04:04:15] Bidgee|Away: file a bug
[04:04:37] https://bugzilla.wikimedia.org
[04:04:56] Bidgee|Away: provide an http(s) URL where the files can be downloaded from. provide a .txt file per file with the description page contents
[04:05:13] Bidgee|Away: file 1 single bug for the batch of files once that's all ready
[04:05:21] https://meta.wikimedia.org/wiki/Tech
[04:05:27] Two threads at the bottom of the page if anyone wants them.
[04:06:08] Cheers, I've uploaded them onto Archive.org
[04:06:18] great
[04:06:23] now you have to make description pages
[04:07:07] [[Tech]]; MZMcBride; /* Help regarding an analysis tool! */ +reply; https://meta.wikimedia.org/w/index.php?diff=3817730&oldid=3814452&rcid=3337341
[04:07:23] > Few Indic language wikipedians have asked a feature (for the respective language wiki) in Special:Upload page to make sure that the users are selecting a license and type a description before uploading images. Do we have a feature in mediawiki to support this? I have seen similar feature in Commons where some of the actions are mandatory. Is this is a Gadget/extension/something else?--Shiju Alex (WMF) (talk) 11:27, 5 June 2012 (UTC)
[04:07:29] Anyone know that one?
[04:07:43] en.wiki made its own file upload wizard. Commons has UploadWizard.
[04:08:25] Brooke, en.wiki's wizard is sauron while commons' is gandalf.
[04:08:44] Heh, I'm not even nerd enough to understand that.
[04:08:48] Cheers again. Thank you for your help! :)
[04:09:47] Sauron --> bad; Gandalf --> good.
[04:12:58] Bidgee|Away: https://bugzilla.wikimedia.org/enter_bug.cgi?bug_severity=normal&bug_status=NEW&component=Site%20configuration&keywords=shell&product=Wikimedia&short_desc=Batch%20upload%20large%20files%3A%20XYZ%20lorem%20ipsum
[04:13:25] Bidgee|Away: how big are they btw?
[04:14:16] Just in case confirmation is needed, API seems to be working now here. Goodnight!
[04:15:08] http://archive.org/details/OpalsPressConferenceAtAisWithLaurenJacksonCarrieGrafAndJennaOhea Part 4 is the only one that is below the 100mb limit
[04:15:28] Bidgee|Away: k. file away
[04:15:35] no
[04:15:45] Bidgee|Away: (let me know when you're done)
[04:15:47] k: ?
[04:16:07] ?!
[04:16:08] Bidgee|Away: err, wait
[04:16:55] Bidgee|Away: you could upload 2 yourself. the limit's now 500 if you turn on the experiment
[04:17:04] Bidgee|Away: but 2 you'd still need to file for
[04:17:06] for now
[04:17:26] Done that but still said 100mb :(
[04:17:47] where?
[04:17:51] what did you do?
[04:18:49] https://commons.wikimedia.org/wiki/Commons:Chunked_uploads
[04:19:58] Turned on the experiment in the UploadWizard Preferences
[04:21:32] hrm
[04:22:24] It worked this time. No idea why it didn't work before
[04:23:07] cool
[04:23:19] anyway, pls point me to the bug for the 2 big once
[04:23:21] ones*
[04:23:32] http://archive.org/download/OpalsPressConferenceAtAisWithLaurenJacksonCarrieGrafAndJennaOhea/MVI_5423.ogv needs to be uploaded over http://commons.wikimedia.org/wiki/File:Opals_press_conference_at_AIS_with_Lauren_Jackson,_Carrie_Graf_and_Jenna_O%27Hea_%28part_1%29.ogv
[04:24:15] Bidgee|Away: what is it, lower res?
[04:24:50] yer, so it could fit under the 100mb limit
[04:25:11] also, why not make it all a single video?
[04:25:15] not 4 parts?
[04:25:35] http://archive.org/download/OpalsPressConferenceAtAisWithLaurenJacksonCarrieGrafAndJennaOhea/MVI_5424.ogv http://commons.wikimedia.org/wiki/File:Opals_press_conference_at_AIS_with_Lauren_Jackson,_Carrie_Graf_and_Jenna_O%27Hea_%28part_2%29.ogv
[04:26:47] My computer will not allow me to do it as it is four different videos (DSLR only allows ~10min recording time)
[04:27:20] oh, ewww
[04:27:42] I've got the software to do it but not the hardware (being HD it really eats the memory and HDD space up)
[04:27:58] yeah
[04:29:49] Hoping that there is a hack that will override the limit in the future as it seems Canon isn't going to remove it.
[04:30:29] what do you mean, it's not just you ran out of disk?
[04:30:31] what camera?
[04:32:30] Bidgee|Away: http://publiclaboratory.org/wiki/camera-trigger
[04:34:00] Canon EOS 60D
[04:34:36] It has a limitation in the camera's firmware.
[04:35:48] It has its own "Record" button
[04:37:50] yeah, just you made me think of it
[04:38:55] Looking at using http://magiclantern.wikia.com/
[04:39:41] how about t3i? ;)
[04:39:46] does it have that problem?
[04:40:40] Yep
[04:42:34] Even the Nikon has the same limit
[04:49:53] crazy
[04:50:07] maybe ok for shooting a movie. but not for live events!
[05:31:17] TimStarling: do we have any idea how LastModified killed the API? (As in any particular component.)
[05:34:33] StevenW: I'm told it was related to ClickTracking.
[05:35:06] Sounds likely.
[05:36:06] https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/extensions/LastModified.git;a=blob;f=E3Experiments/js/ext.E3Experiments.Timestamp.js;h=78f0731f4a3c15131163ce78b352c525313c1a23;hb=1519f4e7e1b3cde1cd3045deae6e50bf8c73ccb9
[05:36:15] Maybe that?
[05:37:07] Yes, because IIRC it was not sampling impressions but logging them all.
[05:37:30] In the past, all of that has been sampled at a low rate.
[05:37:42] Who reviewed the code?
[05:38:25] Roan reviewed the bucketing and clicktracking part, Kaldari reviewed LastModified proper.
[05:38:45] Roan didn't notice the API hit? Hm.
[05:51:28] StevenW: the post data was action=clicktracking&format=json&eventid=ext.lastModified%401-ctrl1-impression&token=...&namespacenumber=0
[06:50:19] !log on cp1001: disabled HTCP plugin in gmond for testing, seems to work so I will disable it properly
[06:50:24] Logged the message, Master
[11:29:16] mark: around?
[11:29:36] (afraid not as you were talking about weekend yesterday)
[14:01:40] Any op around?
[14:01:47] We have Russian Planet broken
[14:18:58] https://bugzilla.wikimedia.org/show_bug.cgi?id=37408 -- a DjVu thumbnail isn't being created at upload.wikimedia.org, EOF error
[14:19:03] here i am
[14:19:53] * sumanah waits for operations folks to take a look
[14:29:00] notpeter: ping
[14:29:18] (hope you are enjoying your travel!)
[14:33:43] DjVu?
[14:35:14] yeah
[14:35:40] Yes. Moreover, in the bug report I didn't write that, if you lower 5100px to e.g. 3000px, the image appears
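One way to chase a scaler bug like this is to ask the API for thumbnail URLs at the two widths sdv mentions and then fetch them; a sketch, with the commons endpoint and File:Example.djvu standing in as assumptions for the actual wiki and file from the bug:

    # request scaled-thumb URLs at the working and failing widths
    for w in 3000 5100; do
      curl -s "https://commons.wikimedia.org/w/api.php?action=query&prop=imageinfo&iiprop=url&iiurlwidth=$w&titles=File:Example.djvu&format=json"
    done
    # fetching each returned thumburl with curl -I should show which width errors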
[14:36:32] sdv_: add it?
[14:36:43] sdv_: you can just enter that in the "comment" area
[14:36:49] I don't know how to edit the message
[14:37:02] ah ok i'll add a new comment
[14:37:19] yeah
[14:37:52] if you look at other bug reports (example: https://bugzilla.wikimedia.org/show_bug.cgi?id=35046 ) you see that people keep adding information with new comments, and follow up with questions.
[14:48:33] apergos: https://bugzilla.wikimedia.org/show_bug.cgi?id=37408 -- a DjVu thumbnail isn't being created at upload.wikimedia.org, EOF error . Should I wait till Aaron Schulz comes online?
[14:58:38] yes and I don't know how much he knows about djvu format thumbs either but I surely know even less :-D
[14:59:01] damn.
[14:59:11] I was wondering whether this was related to the Swift changeover
[15:33:55] sdv: so, I need to go
[15:34:20] sdv: this might be related to how ProofreadPage (the extension) processes or displays the thumbnails? but I don't know
[15:34:36] other people here and in #mediawiki and on Bugzilla will know more.
[15:50:54] [[Tech]]; MarcoAurelio; /* Checkboxes at Special:UserRights — Order changed for no reason? */ new section; https://meta.wikimedia.org/w/index.php?diff=3818754&oldid=3817730&rcid=3337930
[17:34:27] domas: around?
[17:35:19] maybe
[17:35:36] not for too long
[17:37:44] domas: would it be difficult for you to make available the Page view info of a single project?
[17:40:13] yes
[17:40:15] maybe
[17:41:50] it would be quite useful, i need to track the development of a small project, with approx 700 pages, so it would be a nightmare to download the big datafiles, just to get a few data points
[18:14:05] [[Tech]]; MZMcBride; /* Checkboxes at Special:UserRights — Order changed for no reason? */ +reply; https://meta.wikimedia.org/w/index.php?diff=3818980&oldid=3818754&rcid=3338025
[18:47:25] !log reedy synchronized php-1.20wmf4/languages/messages/ 'Pushing out updated files upon siebrands request'
[18:47:29] Logged the message, Master
[18:47:36] There's a scap needed for that
[18:47:40] will do it in a little bit...
[19:39:30] !log reedy Started syncing Wikimedia installation... : Rebuilding localisation cache for message updates
[19:39:34] Logged the message, Master
[19:40:22] Updating ExtensionMessages-1.20wmf3.php...
[19:40:22] Database name bnwikimedia is not listed in pmtpa.dblist
[19:40:22] Updating LocalisationCache for 1.20wmf3... Database name bnwikimedia is not listed in pmtpa.dblist
[19:40:22] done
[19:40:38] O_O
[19:57:28] !log reedy Finished syncing Wikimedia installation... : Rebuilding localisation cache for message updates
[19:57:33] Logged the message, Master
[19:58:00] 20 minutes?!
[20:00:44] Yeah, the concurrency was massively reduced
[20:01:54] still...
[20:02:01] does the nightly one take so long?
[20:02:16] bbl
[20:04:55] !log reedy rebuilt wikiversions.cdb and synchronized wikiversions files: 2 wikimedia wikis to 1.20wmf4
[20:05:00] Logged the message, Master
[20:07:55] Bah, sync-dblist hasn't been merged yet
[20:08:25] !log reedy synchronized all.dblist
[20:08:29] Logged the message, Master
[20:08:51] Reedy: btw, idk who it was but i see you syncing now... last night dblists were out of sync with gerrit
[20:09:06] What do you mean?
[20:09:13] !log reedy synchronized wikimedia.dblist
[20:09:17] Logged the message, Master
[20:09:17] I just noticed chwikimedia and bnwikimedia weren't in all or wikimedia
[20:09:20] someone changed wikiversions without committing it
[20:10:15] I haven't touched it before today since... monday?
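The drift Reedy and jeremyb are untangling here, deployed files differing from what is in gerrit, is the kind of thing a quick look at the deployment checkout would reveal; a sketch, with the fenari path and the comparison branch as assumptions:

    # in the configuration working copy (path assumed), before syncing anything
    cd /home/wikipedia/common
    git status --short            # uncommitted local edits, e.g. to *.dblist
    git diff origin/master -- '*.dblist'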
[20:11:02] prolly aaron on the 6th at 16 UTC http://wikitech.wikimedia.org/index.php?title=Server_admin_log&diff=47781&oldid=47780
[20:11:24] heh
[20:11:47] git-deploy will fix that i hope
[20:13:56] bleh
[20:18:23] deb rebuilds, pffttt
[20:32:35] !log Updated php to point to php-1.20wmf4 rather than php-1.20wmf3
[20:32:39] Logged the message, Master
[21:29:40] https access to Wikimedia wikis seems slow and high-latency in the past few days.
[21:30:03] seems ok in europeland
[21:30:05] Shit just seems to be taking longer than it used to.
[21:30:12] When I click a talk page tab or just navigate around.
[21:30:14] It'll just sit there.
[21:30:21] Might be my connection.
[21:33:46] Reedy: do you have a second to run a database query for me, I want to see if the toolserver is having data corruption or if it's a mediawiki issue
[21:34:05] Possibly
[21:34:07] select count(*) from templatelinks where tl_namespace = 10 and tl_title = "Don't_know";
[21:34:25] enwiki?
[21:34:28] yeah
[21:34:40] 255
[21:34:56] ok, then we have a mediawiki issue
[21:35:10] * Betacommand goes to file a bug report
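To finish the comparison Betacommand describes, the same count would be run against the toolserver's replica of enwiki; a sketch, with the host alias as an assumption (toolserver replica databases carried a _p suffix):

    # production (above) returned 255; compare the toolserver copy
    mysql -h sql-s1-rr enwiki_p -e \
      "select count(*) from templatelinks where tl_namespace = 10 and tl_title = 'Don''t_know';"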