[02:26:45] !log LocalisationUpdate completed (1.20wmf6) at Thu Jul 5 02:26:45 UTC 2012 [02:27:03] Logged the message, Master [07:33:41] good morning [10:22:51] Hi again. Russian version of Planet Wikimedia not updated since November 2011. Who can fix it? [10:42:25] putnik: maybe we can :-) [10:42:32] I am not familiar with planet though [10:43:56] putnik: looks like I found the file :-] [10:44:16] !gitweb [10:44:16] https://gerrit.wikimedia.org/r/gitweb?p=$1.git [10:44:22] !gitweb operations/puppet [10:44:22] https://gerrit.wikimedia.org/r/gitweb?p=operations/puppet.git [10:44:34] hashar, it's cool! =) [10:45:06] putnik: the conf files are in https://gerrit.wikimedia.org/r/gitweb?p=operations/puppet.git;a=tree;f=files/planet [10:45:13] look at ru_config.ini [10:45:43] has not been updated for sometime now https://gerrit.wikimedia.org/r/gitweb?p=operations/puppet.git;a=history;f=files/planet/ru_config.ini [10:46:27] ok that is just the config file. So maybe that planet is just broken :( [10:47:37] planet is still actually done from svn I believe [10:47:46] and its more likely a planet cache issue [10:47:52] apparently :-( [10:48:09] or a faulty feed address in the config but the former is more likely [10:48:30] putnik: Can you please submit a ticket into bugzilla about it please [10:49:00] I think problem with http://skybon0000.livejournal.com/data/rss?tag=RuWiki [10:49:12] May you delete it? [10:49:23] might be [10:49:30] hard to know though and I don't have access to the planet server [10:49:41] so definitely open a ticket mention ru planet is not being updated [10:49:49] and that skybon0000 is no more available [10:50:04] (which might be need to be a different bug) [10:50:16] once opened, I will cc one person I think might help [10:52:38] brion knows a lot about planet and how to unbroke it [10:52:52] <^demon> I remember when I broke it with malformed utf-8 [10:53:19] planet breaks by doing anything ;) [10:55:37] putnik: are you able to open a bug in https://bugzilla.wikimedia.org/ ? [11:00:42] hashar, done [11:02:01] putnik: thanks. I am doing some additional paperwork [11:03:57] putnik: I opened a ticket for the WMF operations team :-D [11:04:19] mutante: got you a ticket for ru planet not updating. See bug 38198 / RT 3227 [11:04:40] putnik: so now we have to wait :-] [11:08:26] hashar, I think it's not a big problem, we wait since November =) [11:11:18] putnik: http://ru.planet.wikimedia.org/ [11:12:14] putnik: is this July in Russian? Июль [11:12:56] Yes, it's July. [11:13:04] !log add missing Russian locales on singer, run localegen, run ru.planet update [11:13:15] Logged the message, Master [11:13:54] the problem was missing locale, so the planet update never ran, saying unsupported locale in the config [11:14:11] fortunately we had this before with other locales and it's in puppet [11:15:22] It's sad. But now update will run automatically? [11:18:41] hashar: fyi, planet.pp uses generic::locales::international , (in generic-definitions.pp), which uses puppet:///files/locales/local_int , so the following gerrit commit is where i add it: [11:19:08] !change 14293 | hashar [11:19:09] hashar: https://gerrit.wikimedia.org/r/#q,14293,n,z [11:19:14] putnik: checking cronjob [11:19:18] sorry [11:19:19] back [11:19:41] this may be helpful if you use Gerrit... 
we wanted to see who does the most reviews here at Wikidata, so I came up with this script: https://github.com/johl/gerrit-review-leaderboard [11:20:16] ruby *shudder* [11:20:43] <^demon|away> Jens_WMDE: In gerrit 2.5, they've added a new plugin/extension interface. Would be awesome do this sort of thing there :) [11:20:51] <^demon|away> Granted, we're on 2.3 so there's no rush. [11:20:58] mutante: you are the pro I guess :-° [11:21:00] ;) [11:21:16] ^demon|away: aren't you upgrading to 2.5 today? [11:21:20] err 2.4 [11:21:28] <^demon|away> Yup, 2.4. [11:21:32] <^demon|away> 2.5 isn't released yet :) [11:21:53] <^demon|away> Speaking of, I should check to see what schema updates there are in 2.4, if any. [11:23:42] putnik: if there is a problem with one or more feeds in a planet, that should usually not stop the whole planet from updating, it would just skip feeds, if you want to add or remove feeds please see this http://meta.wikimedia.org/wiki/Planet_Wikimedia#Requests_for_Update_or_Removal [11:24:07] <^demon|away> mutante: So, is gerrit totally in git now (config too?) [11:24:38] off for lunch [11:24:44] (well just grabbing a snack actually) [11:25:32] putnik: i can only guess it but i think this is for Russian Planet http://ru.wikipedia.org/wiki/%D0%92%D0%B8%D0%BA%D0%B8%D0%BF%D0%B5%D0%B4%D0%B8%D1%8F:%D0%9F%D0%BB%D0%B0%D0%BD%D0%B5%D1%82%D0%B0_%D0%92%D0%B8%D0%BA%D0%B8%D0%BC%D0%B5%D0%B4%D0%B8%D0%B0 [11:26:36] ^demon|away: apache config. no, well it is in both svn and git and there is still just the old sync script to push to cluster [11:27:36] <^demon|away> yuck. [11:27:45] <^demon|away> didn't we just complain about half-migrations earlier this week? ;-) [11:28:30] yes, but the complaints have been heard, i would expect changes soonish [11:29:42] just used the old way, but also merged in git, because wikidata.org had the wikimania deadline [11:31:05] <^demon|away> deadlines, psh :p [11:31:21] they printed flyers with the URL :p [11:32:01] <^demon|away> Print...? People still do that? [11:32:07] <^demon|away> They should've just made a PDF and put it in the cloud. [11:32:14] <^demon|away> And given people a QrCode to link to it [11:32:15] yes, never saw a Wikidata and Render flyer? [11:32:32] oh yeah, that was at Berlin hackathon [11:33:12] http://commons.wikimedia.org/wiki/File:Wikidata-RENDER_summit_009_-_Berlin_2012.jpg [11:33:55] <^demon|away> I wasn't cool enough to go to Berlin this year :) [11:34:18] <^demon|away> Instead, I was being cool and staying in school. [11:37:37] putnik: yes, it should update automatically from now on, it was in a cronjob all the time, and "ru" was in the languages [11:38:26] putnik: might make other changes to planet and/or install planet-venus instead soon, but unrelated to this issue the ru planet had [12:05:53] back [13:11:51] seeing an error at enWS ... [13:11:52] A database query syntax error has occurred. This may indicate a bug in the software. The last attempted database query was: [13:11:52] (SQL query hidden) [13:11:52] from within function "SqlBagOStuff::set". Database returned error "1114: The table 'pc060' is full (10.0.6.50)". [13:11:58] yes, me too... [13:12:01] on my own talk page on commons [13:12:29] your a jinx Trijnstel ;-) [13:12:33] <^demon> Table pc060? [13:12:55] I have pc097 [13:12:58] recovered at at enWS, after 4th attempt [13:13:33] it's still the same... 
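
The planet fix above is worth spelling out: the ru planet was not broken by a bad feed but by a missing locale on the host, so the update cron job bailed out with "unsupported locale" before fetching anything. A minimal pre-flight sketch in Python; ru_RU.UTF-8 as the required locale is an assumption, and `locale -a` is the standard way to list generated locales on a Debian-style host:

    import subprocess

    def locale_is_generated(name: str) -> bool:
        # `locale -a` lists only locales actually generated on the host;
        # a locale named in a planet config but absent here reproduces
        # the "unsupported locale" failure described above.
        out = subprocess.run(["locale", "-a"], capture_output=True,
                             text=True, check=True).stdout
        norm = lambda s: s.lower().replace("utf-8", "utf8")
        return norm(name) in {norm(line) for line in out.splitlines()}

    if __name__ == "__main__":
        # assumption: the locale the ru planet config needs
        if not locale_is_generated("ru_RU.UTF-8"):
            raise SystemExit("locale missing: run locale-gen (or let "
                             "puppet's generic::locales::international "
                             "handle it) before the planet update")
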
[13:13:35] https://commons.wikimedia.org/wiki/User_talk:Trijnstel [13:13:37] can't read it [13:14:23] another one, of ez [13:14:25] at Trijnstel's I get [13:14:26] A database query syntax error has occurred. This may indicate a bug in the software. The last attempted database query was: [13:14:26] on meta [13:14:26] (SQL query hidden) [13:14:27] from within function "SqlBagOStuff::set". Database returned error "1114: The table 'pc077' is full (10.0.6.50)". [13:14:28] "SqlBagOStuff::set". Il database ha restituito il seguente errore "1114: The table 'pc073' is full (10.0.6.50)". [13:14:28] Database error A database query syntax error has occurred. This may indicate a bug in the software. The last attempted database query was: (SQL query hidden) from within function "SqlBagOStuff::set". Database returned error "1114: The table 'pc024' is full (10.0.6.50)". [13:14:31] again [13:14:41] what's happening? [13:14:42] nice Vito, in Italian [13:15:12] looks like some of the backend db are misbehaving [13:15:19] and Mexicano too on nlwiki [13:15:21] "SqlBagOStuff::set". De database gaf de volgende foutmelding "1114: The table 'pc156' is full (10.0.6.50)". [13:16:17] https://commons.wikimedia.org/wiki/Commons:Village_pump [13:16:22] from within function "SqlBagOStuff::set". Database returned error "1114: The table 'pc200' is full (10.0.6.50)". [13:16:59] http://en.wikipedia.org/wiki/User_talk:217.111.141.100 if you guys haven't seen this yet. [13:17:11] (database error) [13:17:33] it's solved for the two pages on commons [13:17:45] already reported wctaiwan, but thank you the same [13:17:47] seems fixed here too. [13:17:50] OK, good good. [13:17:54] wctaiwan: it displays fine for me [13:17:55] Hello, I've got some login problems.. got directed here. anyone here able to help me out? [13:18:12] sDrewth: yeah just now it loaded fine. [13:18:34] Database error A database query syntax error has occurred. This may indicate a bug in the software. The last attempted database query was: (SQL query hidden) from within function "SqlBagOStuff::set". Database returned error "1114: The table 'pc024' is full (10.0.6.50)". [13:19:05] Same error [13:19:10] מתוך הפונקציה "SqlBagOStuff::set". בסיס הנתונים החזיר את השגיאה "1114: The table 'pc193' is full (10.0.6.50)". [13:19:11] for me as well [13:19:13] me too [13:19:52] reported as https://bugzilla.wikimedia.org/38202 [13:19:57] ez: you should be able to login, we are just seeing some database server issues, like you pointed to [13:20:16] thanks! [13:20:20] ill try again [13:20:44] baj [13:20:59] <^demon> It should be in the Status: part, and needs to remain short. [13:21:16] awww shoot, it wont let me. I tried to many times already. [13:23:09] *crys* upload failed from within function "SqlBagOStuff::set". Database returned error "1114: The table 'pc006' is full (10.0.6.50)". *crys some more* [13:23:31] https://bugzilla.wikimedia.org/show_bug.cgi?id=38202 [13:23:32] someone accidentally stepped on the read-only switch :p [13:24:24] wctaiwan: someone banging away on the keyboard [13:25:51] @info 10.0.6.50 [13:25:51] Krinkle: [10.0.6.50: ] db40 [13:26:46] 'pc191' turn [13:27:22] No day without some server failure (leaving aside the permanent errors..). Argh. [13:27:32] LOL [13:27:44] just got the same issue on zh [13:28:12] DB server is eatting too much data it seems ;) [13:28:28] <^demon> Maybe if we got rid of all the old revisions, we'd have more space ;-) [13:28:37] If your table is full, move some food to another room... 
[13:28:49] its take out :P [13:29:05] Awesome [13:29:27] If your table is full, then it means you're not sharing. ;) [13:29:37] <^demon> Krinkle: We ordered takeout? Why didn't anyone ask me what I wanted to get? :( [13:29:53] do you want bits with that? [13:30:04] A database query syntax error has occurred. This may indicate a bug in the software. The last attempted database query was: [13:30:06] (SQL query hidden) [13:30:09] from within function "SqlBagOStuff::set". Database returned error "1114: The table 'pc005' is full (10.0.6.50)". [13:30:14] <^demon> wilfredor: Already reported, thanks. [13:30:19] subject^ [13:30:21] wilfredor: see topic :) [13:30:33] https://bugzilla.wikimedia.org/38202 [13:30:44] guys, who is looking into it from ops? [13:30:48] the strange thing is: it seems edits don't get trough - but in fact my edits went trough [13:30:56] apergos: I think [13:31:03] thanks [13:31:05] no, matanya [13:31:13] what? [13:31:14] nope, apergos [13:31:17] what? [13:31:25] matanya is not ops? [13:31:28] I know my upload did, twice! http://commons.wikimedia.org/wiki/File:Qantas_%28VH-TJK%29_Boeing_737-476_landing_at_Canberra_Airport_%281%29.jpg :| [13:31:31] srry [13:31:33] or have I been away that long!? :P [13:31:40] matanya, ;) [13:31:44] looking at it but it's gong to be slow, my db fu is pretty limited [13:31:54] what? http://www.guy-sports.com/fun_pictures/bunny_butt_hurts.jpg [13:31:57] k apergos [13:32:30] <^demon> Saibo: Edits will go through because it's the parser cache DB that's messing up. Only the parser cache should be affected--edits should complete fine. [13:32:57] yeah, I just edited a page and turned it into a db error [13:32:58] * wctaiwan sighs [13:33:01] hey Theo10011 [13:33:06] hiya [13:33:08] yeah, except that you don't see that it worked until you go to history [13:33:09] k, demon. The error should tell that it is no complete failure. [13:33:24] Saibo: it's a database error. The server doesn't know anything. [13:33:50] stupid hamsters! but don't get them more food - the table is alredy full! [13:33:53] (as in, they didn't expect this, so it's hard to expect that they'd have an informative error message ready) [13:34:13] who is on this? [13:34:16] On meta: from within function "SqlBagOStuff::set". Database returned error "1114: The table 'pc156' is full (10.0.6.50)". [13:34:26] shizhao: see topic [13:35:04] matanya: thx [13:35:09] np [13:36:10] Now its 'pc096' turn! :| [13:36:51] pc234 for me. [13:36:59] does every edit make a page unavailable? :P [13:37:02] pc* [13:37:15] they're all from the same server (db40 in cluster 7) [13:37:18] also on en.wp [13:37:19] https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(technical)#I_can.27t_go_to_the_user_namespace [13:37:26] @info db40 [13:37:27] Krinkle: [db40: s7] 10.0.6.50 [13:40:30] Some tool like hugle for linux for wikipedia and commons? [13:41:05] I think there's a beta of a new version of Huggle that's designed to be cross-platform [13:41:09] never tried it myself, though. [13:41:50] mmm [13:41:52] oh, not released yet... [13:42:02] ok :( [13:42:23] look into http://en.wikipedia.org/wiki/Wikipedia:Huggle/Wine, maybe? 
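
Why every report names a different table (pc006, pc024, pc156, pc200, ...) on the same server: SqlBagOStuff shards the parser cache across numbered tables, so keys land on different pcNNN shards, but all shards grow and fill together. A rough Python sketch of that mapping; the shard count and the md5 hash are assumptions rather than MediaWiki's exact implementation, and the key shape is illustrative:

    import hashlib

    SHARDS = 256  # assumption; observed table names run from pc005 to pc234

    def parser_cache_table(key: str) -> str:
        # Hash the cache key and take it modulo the shard count, in the
        # spirit of SqlBagOStuff's table sharding (exact hash may differ).
        n = int(hashlib.md5(key.encode("utf-8")).hexdigest()[:8], 16)
        return "pc%03d" % (n % SHARDS)

    # Illustrative key shape, not the exact MediaWiki parser cache format:
    print(parser_cache_table("commonswiki:pcache:idhash:12345-0!canonical"))

Even spreading is the point of sharding, but it also means that when the dataset as a whole hits a size limit, every shard reports full at roughly the same time.
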
[13:44:11] hmm, I just got a (Cannot contact the database server: Unknown error (10.0.6.50)) upon trying to reach https://en.wikipedia.org/wiki/K2000 [13:44:23] DarkoNeko: see topic [13:44:24] the soruce code doesn't seems to contain any more precision [13:44:28] hmmm [13:44:45] I'm now having (Kan geen verbinding maken met de databaseserver: Unknown error (10.0.6.50)) [13:44:50] just got worse [13:44:55] on en "(Cannot contact the database server: Unknown error (10.0.6.50))" [13:44:57] yeah, that [13:44:58] I mean, nl [13:45:02] in English [13:45:05] yep, me too [13:45:24] http://lh.rs/OeUR8ITEickc if anyone wants a screenshot [13:45:35] we know what the db error looks like :) [13:45:59] is it specific to enwiki ? [13:46:03] no [13:46:05] * DarkoNeko clicks [13:46:05] <^demon> No, it's affecting all wikis. [13:46:08] ouch [13:46:12] can confirm at cawiki [13:46:26] talks definitely not working [13:46:33] sometimes others fail [13:46:42] now everything fails [13:46:44] <^demon> Yes, we know. Being investigated. Please hang tight. [13:46:52] ok! [13:46:54] The last thing I saw was: Funktion „SqlBagOStuff::set“. Die Datenbank meldete den Fehler „1114: The table 'pc030' is full (10.0.6.50)“. [13:46:55] thanks a lot [13:47:30] So Wikipedia is all done. It's full ;-) [13:47:35] XD [13:47:37] :D [13:47:49] nice jokes today. I liked Saibo's take too. [13:47:51] down down the river it goes [13:47:52] A work-in-progress: Finished. [13:47:54] hashar: you should remove the first part [13:48:03] <^demon> I still say maybe we should stop saving old revisions ;-) [13:48:06] <^demon> I mean, who needs those. [13:48:18] or implementing them as diffs? [13:48:19] hashar: you chopped the topic, it already said down, now the url is half [13:48:23] soo much space wasted [13:48:26] oh my [13:48:29] my client is bad [13:48:33] Just saving space [13:48:41] it's not your client [13:48:43] seems like pages that were showing db errors is now showing the plain error page while others work fine. [13:48:44] but..you [13:48:45] :p [13:48:47] <^demon> Let's see how many different ways we can change the topic to say the same thing. [13:48:47] * Vito hides [13:48:50] jesus guys, way to mutilate a topic [13:49:05] ^demon: people tend to see the last part of the topic [13:49:11] more mutilation >_> [13:49:29] ., [13:49:32] not just editing, restored tail, removed dupe down mentiones [13:49:32] poor topic :-\ [13:49:43] /. [13:49:44] for the love of satan, we know! :) [13:50:28] to read article, logging out may help, as you'll hit html cache then [13:50:28] sorry for the downtime, we are looking at how to extend the cumulative table size of the parchercache db [13:50:56] I imagine that the engineers have taken the database offline and are now talking threateningly to the box while poking it hard with a screwdriver [13:51:04] Krinkle: also, reading oldids seems to work [13:51:09] Can't you just delete all the discussions on Young Earth Creationism? That'll free vast quantities of space, and the reduction in hot air will help with climate change [13:51:11] instead of &action=render [13:51:22] joancreus_: not all http://meta.wikimedia.org/w/index.php?title=Talk%3AWikidata&diff=3879256&oldid=3879182 [13:51:23] not for me, at least a few minutes ago... [13:51:31] hmm did the trick for me [13:51:32] brianmc, ... [13:51:40] joancreus_: if the db is not found mediawiki doesn't care what you request, it will refuse the request [13:51:42] joancreus_: Are you Catalan? [13:51:44] xDD [13:51:49] Tokvo: yes!! 
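
Two remarks above explain why reading mostly kept working: logged-out traffic is served from the HTML cache in front of MediaWiki, and, as Saibo notes, a parser-cache write failure need not be fatal, because the cache is an optimization over re-rendering. A generic read-through sketch of that degrade-instead-of-fail behaviour (a pattern sketch, not how MediaWiki's BagOStuff is actually wired):

    class BestEffortCache:
        """Treat a broken cache backend as a miss, not a fatal error."""

        def __init__(self, store, render):
            self.store = store    # anything with .get(key) / .set(key, val)
            self.render = render  # expensive fallback, e.g. the parser

        def fetch(self, key):
            try:
                hit = self.store.get(key)
                if hit is not None:
                    return hit
            except Exception:
                pass  # unreachable cache server: fall through, re-render
            value = self.render(key)
            try:
                self.store.set(key, value)
            except Exception:
                pass  # "table pcNNN is full" lands here, not on the reader
            return value
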
[13:51:54] no, we haven't taken the db offline [13:51:55] xDD [13:52:04] but the process is no longer responsive [13:52:05] (independentist, fyi :D) [13:52:08] we can edit the topic instead of wiki [13:52:13] hahahaha [13:52:17] it's going to stay that way til we can fix the size issue, sorry for the delay [13:52:20] joancreus_: Hahaha [13:52:28] * wctaiwan hands apergos a glass of lemonade [13:52:32] are you too Tokvo ? [13:52:36] I just want you to know, we're all counting on you. :) [13:52:41] Yeh. [13:53:07] btw, Krinkle , wouldn't implementing revisions as diffs be more efficient? many times subtle changes such as adding a comma add 100k or more! [13:53:27] Tokvo: do you know #wikipedia-ca ? [13:53:34] Si [13:53:52] joancreus_: storage is not an issue, and that would not be more efficient afaik because then you'd have to fetch a lot of old version to get one page. [13:54:07] people usually get the latest [13:54:09] that one cached [13:54:25] <^demon> Space efficient perhaps. Not time efficient if you want to see $random3YearOldRevision and you have to calculate it based on diffs. [13:54:33] it is "full" but the drive isn't full, this is a bug :) [13:54:35] and the change of corrupting data is way higher... [13:54:47] yeah, loose one old rev and bye page [13:54:54] implementing revs as diff would make single revision delete a huge pain, methink [13:55:05] hmm yes [13:55:07] (and vice versa) [13:56:47] the tables are full due to a configuration setting in mysql; it's being looked at [13:57:13] Wikipedia becomes read-only, geek productivity globally goes up 5% [13:57:19] apergos: Why not truncate them in such an emergency? [13:57:30] brianmc: there's an xkcd for that... decrease in IQ [13:57:33] let me search it [13:57:40] it could be in the topic of the channel [13:57:43] not sure what impact truncating parser cache tables would have on the dbs/servers [13:57:44] Yes, but that's assuming no read access [13:57:49] msot of XKCD could be in topics [13:57:51] http://xkcd.com/903/ [13:57:52] 10.0.6.50 goes on breaking my patience [13:58:05] <^demon> apergos: Theoretically it should be harmless, but let's not test that theory right this minute :) [13:58:07] Vito, then slowly steps away from the F5 key and go take a coffee [13:58:12] let's not [13:58:17] brianmc: nope, any increase in productivity is cancelled out by high schools being unable to hand in their reports... [13:58:17] apergos: well, it performs a manual drop, depending on the file system that can take between ms and ages... :7 [13:58:22] hi [13:58:35] DarkoNeko: I should finish some stuffs and then go [13:58:39] I'm more thining of the impact of having nothing in the parser cache for a pile of articles [13:58:39] <^demon> guillom: Hi guillom [13:59:00] <^demon> apergos: Perhaps we could DELETE old entries, but again, I'd rather not test the harmlessness during downtime. [13:59:03] apergos: Some time ago we had fully empty squids and the cluster recovered [13:59:10] except for sysadmins, we could try to leave the basement, maybe? there *might* be something outside [13:59:21] Krinkle: why aren't you a developer? :) [13:59:24] joancreus_: They got another channel [13:59:25] you clearly know a lot [13:59:31] I've never heard of anyone coming back from The Outside to report on it... [13:59:35] joancreus_: I looked once and promptly hopped off the stool I was standing on, screaming... [13:59:37] Trijnstel: he is a developer [13:59:44] afaik [13:59:52] oh, I thought he wasn't... 
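
The storage trade-off argued just above is easy to make concrete. Snapshots give O(1) access to any revision at the cost of space; delta chains save space but make reading revision n cost n delta applications, let one corrupt delta break every later revision, and turn deleting a single mid-chain revision into a chain rewrite. A toy sketch (nothing here reflects MediaWiki's actual revision storage):

    def apply_delta(text, delta):
        # toy delta format: (offset, chars_to_delete, replacement)
        off, dels, ins = delta
        return text[:off] + ins + text[off + dels:]

    def read_snapshot(snapshots, n):
        return snapshots[n]              # any revision is one lookup

    def read_delta_chain(base, deltas, n):
        text = base                      # revision n costs n applications;
        for d in deltas[:n]:             # corrupting one delta loses every
            text = apply_delta(text, d)  # revision after it
        return text

    base = "The table is full."
    deltas = [(4, 5, "cache"), (0, 3, "A")]
    assert read_delta_chain(base, deltas, 2) == "A cache is full."
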
[13:59:55] <^demon> joancreus_: I hear there's this big bright thing in the sky that helps your body metabolize Vitamin D. Perhaps you could investigate that and report back? [14:00:13] ok! [14:00:22] oh god god god god it's soo bright! [14:00:26] back in everybody! [14:00:36] Trijnstel: I am [14:00:51] ah :) [14:00:53] * Krinkle isn't reading the text of the flow [14:00:58] did I miss something? [14:00:58] sorry then [14:01:03] Krinkle: a bad joke of mine [14:01:47] The outside is where beer and pizza come from, so at least it has some redeeming features. [14:02:32] Trijnstel: joancreus_ I'd like to my user page, but its down right now :P [14:02:40] <^demon> brianmc: Well, unless you know how to bake your own pizzas and brew your own beer ;-) [14:02:41] http://www.mediawiki.org/wiki/User:Krinkle\ [14:02:58] Yes to both. [14:03:29] ^demon, you'd still have to get the raw ingredients to make those :) [14:03:46] <^demon> That's what delivery services are for :D [14:03:48] <^demon> Or Amazon [14:04:11] ..arguably, that's no different than premade pizza and beers :D [14:04:53] did all these 1114 codes happen today? [14:05:12] When will the problem be solved? [14:05:15] Apparently the database servers don't want me to contribute to Wikipedia? [14:05:22] How long is a piece of string? [14:05:30] Hildanknight: see topic [14:05:45] Okay. [14:05:49] it's as long as a piece of string >.> [14:06:05] a G one ? [14:06:12] Emperyan: as soon as it is (sorry but "soon" is the best I can give you) [14:06:38] oh dear [14:06:40] Cannot contact the database server: Unknown error (10.0.6.50)) [14:06:48] :| [14:07:09] yes, there are problems [14:07:10] wctaiwan, twice as long as half its length. [14:07:17] Bidgee: see topic [14:07:43] apergos: Thank you! I hope it will be solved in a few min [14:07:44] :) [14:07:57] !log midom synchronized wmf-config/InitialiseSettings.php 'disabling parser cache for now' [14:08:05] Logged the message, Master [14:08:15] oO [14:08:23] hmm, so, it'll be slower but it'll work? [14:08:51] ok that's a solution as well [14:09:23] good that the higgs event was yesterday (maybe that page parses fast enough though) [14:09:25] Bidgee: see topic <-- I know but this time the db isn't full [14:09:54] is the "parser cache" thing basically the cache? [14:09:55] Bidgee: it seems it's a db config issue? [14:10:05] IANAS (i am not a sysadmin) [14:10:07] <^demon> wctaiwan: It's one of many caches we have. [14:10:11] Seems to be working at the moment. [14:10:13] (as in, the stuff we see if we don't log in) [14:10:13] should be up btw [14:10:16] or it killed itself [14:10:19] The parser cache was full of eels [14:10:22] yeah, seems to be working now [14:10:29] yes! [14:10:29] someone decided to put an arbitrary limit on parser cache [14:10:32] and not clean it up [14:10:33] works at cawiki [14:10:39] <^demon> brianmc: s/eels/myspace bands/ :) [14:10:44] now I wonder if I will be able to recover it [14:10:49] it is hitting some fun problems [14:11:10] ^demon: what does the "parser" of the cache part mean? [14:11:20] Solved! [14:11:34] <^demon> wctaiwan: It caches output from the parser, specifically, ParserOutput objects. [14:11:40] Emperyan: Parser has been disabled temporarily. So reading is available again [14:11:43] <^demon> That way we don't have to re-render pages if they haven't changed. [14:11:53] oh parsing too [14:11:56] uh oh ddos is coming [14:11:57] ah, right. 
just not cached [14:12:00] get ready [14:12:07] well, just for logged-in users [14:12:13] ^demon: so basically it's what the "cached revisions" that IPs see reside on? [14:12:23] <^demon> No, that's the squid caches. [14:12:30] ohhkay. [14:12:34] I'll read up at some point :p [14:12:41] <^demon> Even logged in users hit the parser cache. No point in re-parsing a page if it hasn't changed :) [14:13:14] so how bad is it? Every template is being transcluded each time a page is loaded? [14:13:28] LOL... [14:14:10] Well done for the moment! Happy housekeeping! [14:14:35] I really fail to see the point of status.wikimedia.org [14:14:57] you *could* just see it as one of the many metrics [14:15:16] it is always 100% uptime with no problems [14:15:18] if it's down on status., it means that the server the page uses can't reach whichever server... [14:15:20] ever [14:15:40] nope, right now it's showing service disruption for multiple entries. [14:15:53] not to me. [14:16:01] odd. [14:16:04] <^demon> Is for me too. [14:16:12] fr, etc. are down on status, for me. [14:16:38] PANIC!! [14:17:01] [18:12:31] sooo... if I remeber it right, the fix to parser cache growing too huge was to truncate a couple of tables out of 256, right? [14:18:04] talking about truncating.. the banks truncate any parts of a cent and direct the truncated cent hundereds onto a truncation acount.. [14:18:31] Office space movie comes to mind [14:19:21] juxonrails: bank just create billions in a click, they don't need to do something like in the Superman movie :-] [14:20:28] When they're making money by borrowing at < 5% and investing with 10% return days of free money are evidently over [14:20:56] whats up [14:21:11] wiki's up [14:21:15] :D [14:21:30] What's happening? :S [14:21:37] see topic [14:21:55] what topic [14:22:03] a server was causing issues, temoparily taken out for now. services should be up for everyone. [14:22:33] hashar: I cannot belive these eurozone politicians.. 100B€ for banks.. If I were in charge the Spanish banks wouldn't see a cent before they make directed stock emission to the coffers of the taxpayers putting the money up thereby diluting the existing shares who haven't been supervising the banks businesses enough [14:22:50] I mean in 20 or 30 yrs time those shares might be worth something ( bankrupt and thus 0€ price ) [14:23:54] juxonrails: I'm Spanish and I can tell you something: This bailout is a disgrace, Spanish government nationalizes Bankia and they said that it wouldn't be with public money, and then this. [14:24:07] yay, political talks in the tech channel [14:24:26] <^demon> Ok, let's take those to a /msg or #austeritysucks or something. [14:25:28] Tokvo: There was this one radial guy in ##economics that suggested that all spanish euroform debt be cut by 50% with a statute or law, that would healthify the system [14:25:28] [14:25:48] or to #wikipedia-en :D [14:26:01] Solved??? [14:26:22] Emperyan: Apparently it is. [14:26:38] :D [14:26:42] Good... [14:26:44] :) [14:26:45] :D no:2 [14:27:24] Tokvo: if you want to talk about this subject I suggest we take it to ##economics, more then ontopic there [14:27:36] <^demon> hashar: Did you +o me when you opped yourself? Just noticed :p [14:27:40] Okey, let's go. [14:28:16] Tokvo: I'm there [14:29:25] ^demon: / ^demon was promoted to operator by ChanServ. 
30 minutes ago [14:29:26] ^demon: I did it indeed [14:29:45] topic is free here [14:29:47] a wizard did it [14:30:02] <^demon> Yeah I totally missed that between the austerity talk, the dozens of "zomg mywiki too!" and everything else :) [14:30:11] heheh [14:30:23] I would love to have the tech people to be automatically +o [14:30:29] good by [14:30:33] would let everyone know whom to ask :-) [14:30:45] lol [14:30:54] <^demon> I don't like being asked things :) [14:31:02] <^demon> I prefer to hide behind a veil of ignorance. [14:31:03] * mutante sets mode +omg hashar [14:31:22] I don't think I can edit any channel configuration :/ [14:32:38] is the site down? [14:33:49] <^demon> Up for me... [14:34:04] ok [14:36:27] ..em rof pu [14:36:38] Well, always more highbrow entertainment in here when there's an outage. #wikipedia fills up with highly contagious Chicken Little sufferers [14:37:05] <^demon> There's a reason I stopped going to non-tech channels ;-) [14:37:43] heh, yes "mutante: So, they could lock the web 'til they fix it" [14:37:52] should go in and do /nick ColSandrs [14:38:26] :D [14:38:36] thx for the quick fix anyway. C-ya! [17:09:53] I have a q regarding the checkuser extension [17:10:13] is there a crosswiki check tool? [17:12:42] matanya, afaik no [17:12:56] sterwards need to perform each check on each wiki [17:13:12] well, can such a check be done strait in the DB? [17:13:28] no [17:13:47] enwiki is in its own servers, for instance [17:14:10] well, I have an ip that I know have created at least one spam account on every wiki [17:14:34] how can I found out what are those usernames with the need to check each wiki [17:15:55] trick someone to do the work for you? [17:16:01] :) [17:16:10] it could be done by a bot [17:16:29] do we have a CU bot? [17:16:44] this might be a good Idea for our group. [17:16:45] but in any case, the task is the same: go to one wiki, check, repeat... [17:16:56] how am I supposed to know? [17:17:08] it could certainly be done [17:17:11] just wondering [17:17:37] well, if you some how might write such a bot, please let me know [17:18:14] using the api, I suppose [17:32:50] !log midom synchronized wmf-config/InitialiseSettings.php 'reenabling db40' [17:32:58] Logged the message, Master [17:37:03] I have a stupid question! [17:37:14] If parser cache is not on db40 now, where is it? [17:37:16] Memcache? [17:38:59] it is on db40 [17:39:06] and yes, it was backed by mysql only for a while [17:39:09] ergh, memcached only [17:40:14] ?help [17:40:18] !help [17:40:19] http://www.mediawiki.org/wiki/Help:$1 [17:40:19] !(stalk|ignore|unstalk|unignore|list|join|part|quit) [17:40:31] !quit [17:40:48] Where is the help for wm-bot? [17:42:18] Nathan2055: https://labsconsole.wikimedia.org/wiki/Nova_Resource:Bots/Documentation#wm-bot [17:43:15] thx [18:14:50] is anyone familiar with pagetriage on enwiki? [18:15:11] I am. [18:15:58] binasher: What do you need help with? [18:16:00] REPLACE /* ArticleCompileProcessor::save */ INTO `pagetriage_page_tags` [18:16:14] has the use of pagetriage_page_tags recently changed? 
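
Some context for the query quoted above: MySQL's REPLACE is defined as a delete of any conflicting row followed by a fresh insert, so re-saving unchanged tag rows on every edit still produces row and index churn that each replication slave must replay. A hedged sketch of the two upsert shapes (column names follow the extension's schema as best recalled; treat them as illustrative):

    # REPLACE: delete-then-insert, full churn even when nothing changed.
    REPLACE_STYLE = (
        "REPLACE INTO pagetriage_page_tags"
        " (ptrpt_page_id, ptrpt_tag_id, ptrpt_value)"
        " VALUES (%s, %s, %s)"
    )

    # INSERT ... ON DUPLICATE KEY UPDATE edits the row in place on a key
    # conflict, which is much gentler on the binlog and lagging slaves.
    UPSERT_STYLE = (
        "INSERT INTO pagetriage_page_tags"
        " (ptrpt_page_id, ptrpt_tag_id, ptrpt_value)"
        " VALUES (%s, %s, %s)"
        " ON DUPLICATE KEY UPDATE ptrpt_value = VALUES(ptrpt_value)"
    )
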
[18:18:47] 15% of all enwiki db write queries are replace queries on pagetriage_page_tags, seemingly many of them for every edit [18:19:17] <^demon> Ouch :( [18:20:22] (i'm looking into why the enwiki slaves are generally lagged now) [18:25:45] bsitu: PageTriage discussion ---^^ [18:58:09] !log preilly synchronized php-1.20wmf6/extensions/MobileFrontend 'fix cache issue' [18:58:17] Logged the message, Master [19:15:37] Warning: mysql_query() expects parameter 2 to be resource, boolean given in /home/wikipedia/common/php-1.20wmf6/includes/db/DatabaseMysql.php on line 46 [19:15:38] * AaronSchulz sighs [19:20:48] lots? [20:03:46] !log preilly synchronized php-1.20wmf6/extensions/MobileFrontend 'fix cache issue with token' [20:03:54] Logged the message, Master [20:11:21] no idea if you already knew this, but another error: [20:11:23] A database query syntax error has occurred. This may indicate a bug in the software. The last attempted database query was: [20:11:25] (SQL query hidden) [20:11:26] from within function "SqlBagOStuff::set". Database returned error "1637: Too many active concurrent transactions (10.0.6.50)". [20:13:23] same here [20:15:59] <^demon|away> Thanks for reporting folks, it's being looked at. [20:25:58] binasher: I like how are db cache manages to increase downtime ;) [20:26:04] *our [21:16:42] deploying config change to make AFTv5 go to 20%... [21:18:44] !log kaldari synchronized wmf-config/InitialiseSettings.php 'Bumping AFTv5 lottery percentage to 20% of en.wiki' [21:18:51] Logged the message, Master [21:23:02] eh, I meant 2% :) [21:27:31] about to run scap for ClickTracking update [21:29:00] scap started [21:31:56] kaldari: enlabs uses aft right? [21:32:15] seems to be missing a schema change...not sure if anyone cares though [21:32:29] not that I know of, but maybe someone set it up [21:32:52] Unknown column 'af_is_featured' in 'field list' (10.0.6.21) [21:33:22] I don't have access to that machine [21:33:40] (as far as I know) [21:35:10] !log kaldari Started syncing Wikimedia installation... : [21:35:10] Logged the message, Master [21:47:04] !log kaldari Finished syncing Wikimedia installation... : [21:47:12] Logged the message, Master [21:48:41] !log reedy synchronized wmf-config/InitialiseSettings.php 'Random root page enabled everywhere but wikipedias' [21:48:49] Logged the message, Master [21:48:50] AaronSchulz: we've been testing AFT on prototype. I'd like to get it to beta labs, along with E2 [21:50:00] !log reedy synchronized wmf-config/InitialiseSettings.php 'revert' [21:50:12] Logged the message, Master [21:55:57] !log reedy synchronized wmf-config/ 'Strike 2' [21:56:06] Logged the message, Master [22:16:31] hi [22:16:50] I received a request to delete this page with > 5000 revisions [22:16:52] https://en.wikipedia.org/wiki/User:28bot/edit-tests-found/2012-June/old [22:16:58] is it safe to delete? [22:17:19] (vvv adviced us to contact you guys before deleting these stuff) [22:17:46] !log reedy synchronized wmf-config/ 'Tidying config for randomrootpage' [22:17:56] Logged the message, Master [22:18:17] Weird, toolserver can't find it [22:18:26] I know... [22:18:45] is it because it's in someone's userspace? 
[22:18:56] Not really [22:19:01] It's just replication lag [22:19:09] Trijnstel: http://toolserver.org/~vvv/revcounter.php?wiki=enwiki_p&title=User%3A28bot%2Fedit-tests-found%2F2012-June [22:20:21] Apparently it has less than 5000 revisions [22:20:49] well, the deletion page says: [22:20:52] This page has a history with approximately 5,928 revisions: Page history [22:20:53] This page has a large edit history, over 5,000 revisions. Deleting it may disrupt database operations of Wikipedia; proceed with caution. [22:21:01] small difference ;) [22:21:01] That's possible [22:21:23] You see, this thing, to the best of my knowledge, does not really count the revisions [22:21:29] It estimates their count [22:21:40] btw, that's not the correct page [22:21:45] you missed "-old" [22:21:59] "/old" [22:22:01] sorry [22:22:17] Yes [22:22:34] It was moved from that title a few hours ago [22:22:37] oh [22:22:41] And Toolserver database is not updated yet [22:22:47] anyway, it's safe to delete? :) [22:23:19] heh [22:23:29] I *assume* so, since it has <5000 revisions in reality. I'd rather wait for an op though, just in case it is not [22:24:11] okay, well, if it takes too long, I'll go to bed [22:24:36] Does a delete on a page with lots of edits hit the db straight away? I though it went to some queue and got executed slow time after the latest was removed for ux purposes. [22:25:01] Damianz: at least when it was introduced, it did it in a single INSERT SELECT query [22:25:27] There's staff around ;) [22:25:52] hmm, I go know if you don't mind [22:26:16] will do it tomorrow if someone else didn't do it already ;) [22:26:20] Trijnstel: well [22:26:22] nah, go for it :p [22:26:25] You can do it now [22:26:27] oh [22:26:32] quick then [22:26:34] I mean, there are people to save the site [22:28:39] ok, done [22:28:41] I think [22:28:43] :) [22:28:51] https://en.wikipedia.org/wiki/User:28bot/edit-tests-found/2012-June/old [22:29:21] View or restore 4,567 deleted edits? <- you were correct, vvv :) [22:29:31] Hmm [22:29:57] I wonder why does it use exact algo on deleted edits count, but uses the crappy one on checking real facts [22:30:10] hmm, don't now either [22:30:13] *know [22:30:16] but ehm, nn ;) [22:34:33] !log reedy synchronized wmf-config/ [22:34:41] Logged the message, Master [22:47:32] !log preilly synchronized wmf-config/mobile.php 'add subdomain check' [22:47:40] Logged the message, Master
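
A footnote on the two revision counts (an estimated 5,928 before deletion versus exactly 4,567 deleted edits after): as vvv says, the big-deletion warning only estimates. The estimate comes from the query optimizer's statistics rather than from counting rows, so drift of this size is normal. A sketch assuming a Python DB-API cursor on a MediaWiki database; `revision`/`rev_page` are core schema, and reading column 8 of EXPLAIN assumes the classic MySQL 5.x plan layout:

    def estimated_revisions(cur, page_id):
        # Optimizer guess from index statistics: fast but approximate,
        # which is all an "approximately N revisions" warning needs.
        cur.execute("EXPLAIN SELECT 1 FROM revision WHERE rev_page = %s",
                    (page_id,))
        return int(cur.fetchone()[8])  # the 'rows' column of the plan

    def exact_revisions(cur, page_id):
        # Walks the index for real: exact, but O(number of revisions).
        cur.execute("SELECT COUNT(*) FROM revision WHERE rev_page = %s",
                    (page_id,))
        return int(cur.fetchone()[0])
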