[02:27:02] !log LocalisationUpdate completed (1.20wmf6) at Tue Jul 3 02:27:02 UTC 2012
[02:27:20] Logged the message, Master
[02:54:06] Any ETA on when/if the purge module will be re-enabled on Commons?
[03:24:39] Avic: is it possible for Dispenser's toolserver access to be revoked?
[03:25:10] TimStarling, I think you'll find about 30 in protest of that
[03:26:03] I'm sure I'll cope
[03:26:42] never mind the projects that depend on functions I provide, some of which are used by the WMF
[03:27:32] maybe we can find someone to provide that function who is willing to listen to sysadmin advice and stop running bots when asked
[03:28:19] Only one bot currently running, it's checking and archiving external links
[03:28:20] WP:PERF doesn't quite extend to ignoring direct requests from sysadmins looking at ganglia graphs
[03:30:03] how about I post to toolserver-l and see what the response is?
[03:31:32] TimStarling, what do you want anyway?
[03:32:43] I want you to assure me that you're not going to run your purge script again
[03:32:59] Err, it shouldn't be possible for non-privileged users to disrupt the site.
[03:33:11] I said so here: https://commons.wikimedia.org/w/index.php?title=Commons:Bots/Work_requests&diff=prev&oldid=72966882
[03:33:12] If it is, that's the sysadmins' responsibility to fix, surely.
[03:33:28] well, a non-privileged user did disrupt the site, and I fixed it
[03:33:42] Avic is not satisfied with my fix
[03:33:47] Fixed it by disabling the API's purge module?
[03:33:49] yes
[03:34:11] look, I have a lot of things to do
[03:34:12] TimStarling, about that (not sure if this is related)
[03:34:21] TimStarling, will you take requests to rerun the Relinks script?
[03:34:30] it's much easier for me to get Dispenser's access revoked than it is to implement comprehensive resource limiting for API clients
[03:34:33] I was wondering what impact editing a template with 10 million transclusions would have on the servers
[03:35:13] Dispenser: you mean the server-side script I ran to fix the links?
[03:35:14] Tim, I could run it on my home computer or any number of other places
[03:35:16] Yes, because making requests to the API can only be done from the Toolserver...
[03:35:34] Dispenser: You're being a bit unreasonable here.
[03:35:54] Tim is right about time/resource constraints. And he wrote a script to fix the underlying table issues.
[03:36:01] I can deal with abuse of the API from home IP addresses by blocking them
[03:36:13] it's less practical to block the toolserver
[03:36:56] I can document the procedure to fix links so that it can be executed by anyone with shell access
[03:37:06] and then you can file subsequent requests via bugzilla
[03:37:08] You could also block the user-agent...
[03:37:10] Disabling the entire purge module seems excessive when it's only purges of pages in a particular namespace that cause problems. If it's that trivial to overload the site, the purge-all-thumbnails feature really ought to be disabled.
[03:38:04] Thumbnail generation has always been an easy attack vector. I'm surprised it's taken this long to become a problem.
[03:38:05] it wasn't that easy
[03:38:13] And you took a few days to notice as well. I was running it on the "slow" setting.
[03:38:19] he was using a concurrency of about 10 IIRC
[03:38:37] I don't think launching screen sessions is something most vandals would have difficulty doing.
[03:38:59] * ToAruShiroiNeko eyes Brooke
[03:39:05] Harder.
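The script at issue above was hitting MediaWiki's action=purge API for file pages, which forces thumbnails to be re-rendered. As a rough illustration only — not Dispenser's actual code; the endpoint is real but everything else here is hypothetical, and it assumes the Python requests library — a deliberately throttled purge client might look like this:

    #!/usr/bin/env python
    # Sketch of a throttled MediaWiki action=purge client. Hypothetical
    # illustration of the kind of script discussed above, not the real one.
    import time
    import requests

    API = "https://commons.wikimedia.org/w/api.php"
    HEADERS = {"User-Agent": "purge-sketch/0.1 (contact: example@example.org)"}

    def purge(titles, delay=2.0):
        """Purge pages in small batches, sleeping between requests."""
        for i in range(0, len(titles), 10):
            batch = titles[i:i + 10]
            r = requests.post(API, headers=HEADERS, data={
                "action": "purge",
                "titles": "|".join(batch),
                "format": "json",
            })
            r.raise_for_status()
            time.sleep(delay)  # throttle: stay at a few pages per second

    if __name__ == "__main__":
        purge(["File:Example.jpg"])

The difference between this and the incident is concurrency: a single process sleeping between batches, versus roughly ten workers purging in parallel.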
[03:39:08] we deal with attackers by blocking them
[03:39:22] http://www.onepiecepics.com/wp-content/uploads/2009/03/brook.jpg
[03:39:36] ToAruShiroiNeko: Editing a template with 10 million transclusions is fine.
[03:39:48] Though you should try to minimize the number of edits, obviously.
[03:39:59] And you may hit timeouts on page save due to an open bug.
[03:40:07] Brooke ah ok
[03:40:11] I don't want to break anything
[03:40:16] the edit would be thoroughly sandboxed
[03:40:24] my worry is editing the information template on commons
[03:40:29] which I feel needs to be expanded
[03:40:43] ToAruShiroiNeko: Day-to-day editing is never a problem. If it becomes one, a sysadmin will intervene and block the actions. :-)
[03:40:44] So again, will you run the Relinking script when somebody informs you, TimStarling, that the database is inconsistent?
[03:40:48] but there's no consensus for that yet, but at least I do not need to worry too much about technical issues
[03:41:05] I will document the procedure for running it so that you can request it on bugzilla
[03:41:18] and I will run it myself if nobody else handles it within a couple of months
[03:41:18] Brooke perhaps you could note this somewhere?
[03:41:32] ToAruShiroiNeko: Note that it's okay to edit pages? I think it's implied.
[03:41:32] so that I can quote you :)
[03:41:49] TimStarling: Is the underlying bug about the links tables the same one that I filed about the read timeout?
[03:41:51] Brooke this is true but some people are concerned about lag and breakage
[03:41:56] Just wanna make sure it doesn't get lost.
[03:42:22] ToAruShiroiNeko: Some people are concerned with lots of things. I'm not sure why it matters.
[03:46:00] https://commons.wikimedia.org/wiki/File:Mediawiki_logo_sunflower.svg # New vectorization of the MediaWiki logo, BTW.
[03:46:09] Doesn't seem to scale down well, though. Center turns green or something.
[03:46:28] I think the center just needs to be simplified.
[03:46:36] Petals look nice, though.
[03:46:46] Related: https://commons.wikimedia.org/wiki/File:Wikitech_logo.svg
[03:48:16] Ok TimStarling, I won't run the purge script again
[03:48:51] https://commons.wikimedia.org/wiki/User:Tim_Starling/fixing_link_tables
[03:49:12] Brooke: yes, I referenced it in that page I just wrote
[03:49:17] Dispenser: thanks
[03:50:02] Brooke oh? my request is finally fulfilled
[03:50:04] <3
[03:51:06] Group hug.
[03:52:40] * Dispenser hugs Brooke, TimStarling, ToAruShiroiNeko, Avic
[03:53:39] !log tstarling synchronized wmf-config/CommonSettings.php 're-enable API action=purge on commonswiki'
[03:53:48] Logged the message, Master
[03:54:21] \o/
[04:17:01] TimStarling, this doesn't bar me from developing a Commons JavaScript Attack widget for Anonymous, does it?
[04:17:56] no it doesn't
[04:18:13] we'll probably have to move to a system of API keys eventually
[04:18:16] like most APIs on the web
[04:18:35] with query limits per key and some sort of key request system where you can put throttling
[04:18:46] but it'll be a b/c breaking change
[04:23:24] Ok, good to know
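A sketch of the "query limits per key" idea TimStarling describes a few lines up — a token bucket checked per API key. This illustrates the concept only; it is not anything MediaWiki ships, and every name below is hypothetical:

    import time

    class TokenBucket:
        """Per-API-key throttle: each key gets `rate` requests/second,
        with bursts up to `burst`. Sketch of per-key query limits."""
        def __init__(self, rate=5.0, burst=20):
            self.rate, self.burst = rate, burst
            self.tokens, self.last = float(burst), time.time()

        def allow(self):
            now = time.time()
            # Refill tokens for the time elapsed, capped at the burst size.
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    buckets = {}  # api_key -> TokenBucket

    def check_request(api_key):
        # Unknown keys could fall back to a stricter shared per-IP bucket.
        bucket = buckets.setdefault(api_key, TokenBucket())
        if not bucket.allow():
            raise RuntimeError("rate limit exceeded for key %s" % api_key)

In a real deployment the buckets would live in shared storage such as memcached rather than process memory, and a rejected request would get an HTTP 429 instead of an exception.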
[04:31:19] TimStarling: wouldn't it be much simpler to just switch the API to logged-in only and use the user login as a key?
[04:33:00] vvv: purge is already rate limited for non-logged-in users
[04:33:23] Well, then we'll probably rate-limit it for logged-in users as well
[04:35:01] I was only averaging purging 2-3 pages per second
[04:35:57] we use the API for reading, not just for editing
[04:36:32] things like search autocomplete
[04:37:03] you couldn't require people to log in for that
[04:37:15] also mobile apps, who would be the user?
[04:37:31] an API key is kind of equivalent to a user agent, not a user
[04:39:47] TimStarling: well, then we can probably rate limit them by UA+IP?
[04:40:32] apps that run in the browser have no control over UA
[07:19:42] I am so going to vote "avoid" on "Samsung" in Consumium .. most of their wares are low-quality, badly designed products with misleading advertising
[07:22:04] like this netbook from Samsung.. it has a 2GB memory ceiling, and one day after a reboot the system is swapping out, and they advertise up to 14.5 hrs of battery when with Windows 7 I've seen 6.5 hrs and with Debian GNU/Linux it's like 2-3 hrs, tops ... so where is that 14.5 hr battery..?
[07:22:29] * jubo2 prlly needs a custom kernel
[09:24:21] I need to know how http://status.wikimedia.org works, how it checks the health of the various services etc. etc., for use in http://status.consumium.org ( no automation yet )
[09:25:12] i.e. I want to copy whatever wikimedia uses
[09:26:02] 2nd day of having a 2nd server .. http://develop.consumerium.org/wiki/ is so low volume we hadn't needed another server
[09:27:24] until I decided that I want an automated system where backups are taken many times a day and automatically transported to the backup server, which verifies that the backups _are fully working_ and MD5s the files
[09:28:03] I do not yet know how to make this happen
[09:29:08] so I thought I'd pry into the wisdom you've accumulated in running the Wikipediae free of advertising ( ! )
[09:31:45] status.wikimedia.org is via Watchmouse iirc
[09:32:13] check the bottom right links on that page
[09:33:21] we don't do backups many times a day; we do have db snapshots and we also have slaves that can be promoted to master if the master dies for some reason
[09:35:25] http://wikitech.wikimedia.org/view/Database_snapshots — this is somewhat old but mostly accurate (for your purposes anyway) info on the snapshots
[10:33:23] <- this nub is having trouble adding one measly entry into interwiki
[10:34:00] what kind of trouble
[10:35:09] insert into interwiki values iw_prefix="lsb" iw_url="http://let.sysops.be/wiki/$1", iw_local=0, iw_trans=0;
[10:35:21] gives a syntax error
[10:36:12] this is ridiculous. I got a good degree in relational algebra and query optimization, done by the wetware of yours truly
[10:36:23] So now .. I can't type SQL ..
[10:37:17] how was it.. ...
[10:37:32] "push selects as far up the structure as possible"
[10:38:26] doing selects before the other stuff gives you better optimization.. like intersections or products or what have you..
[10:38:58] shouldn't you be asking in #mediawiki ?
[10:39:52] SPARQL > SQL
[10:40:44] sounds more like a mediawiki problem than a wikimedia one
[11:01:59] jubo2: insert into interwiki (iw_prefix, iw_url, iw_local, iw_trans) values ('lsb', 'http://let.sysops.be/wiki/$1', 0, 0) might work
[11:04:01] jubo2: "So you got your Ph.D.! Just don't touch anything..." :-)
[11:05:09] Hi guys. The Russian version of Planet Wikimedia hasn't been updated since November 2011. Who can fix it?
[11:38:30] saper: tnx, got it already .. looked at /maintenance/interwiki.sql and constructed it from that example
[11:38:31] Consumium gets 4,000 skeletons of company articles from the DBpedia.org datasets and ontology, and much much more .. I may start practicing SPARQL queries at the SPARQL endpoint that DBpedia.org kindly provides
[13:40:54] oizor piippölszorz .. I need to set up a status.consumium.org ... where can I read about status.wikimedia.org and see some code or something..
[13:44:28] jubo2: it's a 3rd party service
[13:45:15] insert non-free in there too
[13:45:39] http://www.nimsoft.com/content/Nimsoft/en/index/solutions/nimsoft-cloud-user-experience.html
[13:59:07] 'k tnx for the info Reedy
[13:59:54] Reedy: what about dumps.wikimedia.org and the XML and SQL dumping programs?
[14:00:05] That's all available
[14:00:14] sql dump will literally just be mysqldump with parameters
[14:00:23] xml stuff is mostly in mediawiki itself
[14:00:39] I am looking to automate a multiple-times-a-day backup to a safe server without needing user attention
[14:01:07] how does one go about verifying that the backups work ..
[14:01:16] import it back in, same as any backup
[14:01:19] if it loads into MySQL without an error then it works?
[14:01:21] Depending on the amount of data, I'm not sure xml will be quick enough for a daily backup
[14:01:39] jubo2: there is also some Nagios and/or Icinga stuff for monitoring (like nagios.wikimedia.org) that is puppetized; you can find it in the operations/puppet git repo in nagios.pp
[14:02:26] jubo2: to verify backups work, humans get email with a success or failure
[14:02:43] from amanda
[14:02:45] f.e.
[14:03:14] Nagios is FOSS?
[14:03:21] yes
[14:03:28] Nagios core is, and Icinga is
[14:04:01] http://en.wikipedia.org/wiki/Nagios
[14:22:55] mutante: big thanks.. will look into Nagios and Icinga
[14:31:33] jubo2: yw. https://labsconsole.wikimedia.org/wiki/Git#Checking_out_the_repositories
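Pulling the advice above together, a rough sketch of the dump-and-verify cycle jubo2 is after: mysqldump, record an MD5 for the transfer, then prove the dump actually loads by importing it into a scratch database, exiting with Nagios-style status codes so Icinga/Nagios can alert a human. Database names and paths are hypothetical, and authentication options are omitted:

    #!/usr/bin/env python
    # Sketch of an automated backup-verification check. Hypothetical
    # names throughout; credentials/host flags for the mysql tools omitted.
    import hashlib, subprocess, sys

    DB, SCRATCH_DB = "consumiumwiki", "backup_verify"
    DUMP = "/backups/consumiumwiki.sql"

    def main():
        # 1. Dump ("mysqldump with parameters", as Reedy says above).
        with open(DUMP, "wb") as out:
            subprocess.check_call(
                ["mysqldump", "--single-transaction", DB], stdout=out)

        # 2. Record an MD5 so the copy on the backup server can be compared.
        print("md5:", hashlib.md5(open(DUMP, "rb").read()).hexdigest())

        # 3. The real test: "import it back in, same as any backup",
        #    into a throwaway scratch database.
        subprocess.check_call(["mysqladmin", "create", SCRATCH_DB])
        with open(DUMP, "rb") as f:
            subprocess.check_call(["mysql", SCRATCH_DB], stdin=f)
        subprocess.check_call(["mysqladmin", "-f", "drop", SCRATCH_DB])

    if __name__ == "__main__":
        try:
            main()
            sys.exit(0)   # Nagios: OK
        except Exception as e:
            print("backup verification failed:", e)
            sys.exit(2)   # Nagios: CRITICAL

Note that "loads without error" is a necessary but not sufficient check; comparing a few row counts against the live database is a cheap extra sanity test.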
[14:37:54] > Since a few days I can not reach Wikipedia from my home. Just suddenly stopped working. My IP is ${IP}. I've got no virus warnings from my Norton system or anything else like that. Have never posted anything, so no spamming. Why?
[14:38:18] i checked and the IP has no contribs ever on any of our wikis
[14:38:23] (per toolserver)
[14:38:30] any other ideas?
[14:38:50] <^demon> position of the moon?
[14:38:55] (I can give someone the IP if they want to look further)
[14:39:06] jeremyb: tracert / traceroute maybe
[14:39:13] nslookup -type a results?
[14:39:25] hoo: not the kind of thing I like to do by email ;-(
[14:39:32] but i guess so
[14:39:38] (this is OTRS)
[14:41:55] hrmmmm, maybe they block ICMP there? ;( i can't ping there from either my linode or labs
[14:42:27] mtr dies in the right ISP's router for labs, but dies in cogent for linode
[14:43:02] i guess for now i'll just add a note to the ticket. hopefully someone else will pick it up
[14:43:45] (2012070310006042 if you want it!)
[15:01:54] jeremyb: ipv6 brokenness?
[15:02:38] djhartman: maybe?
[15:02:48] jeremyb: and there have been the east coast storms, which caused some mess in routing over the weekend.
[15:02:57] djhartman: it's a v4 address that he gave
[15:03:26] jeremyb: that doesn't say a thing. the DNS lookup of the OS determines which way you go.
[15:03:30] it's a Swedish IP+user fwiw
[15:03:59] djhartman: yeah, sure. i was just saying all i had to test with was v4
[15:04:01] many people have ipv6 addresses now, even if they don't know it, or if they end at their NAT box.
[15:04:35] that's the problem with these bugs, it takes ages to find the real cause, and often it's an ISP problem.
[15:04:53] but good luck with the ISP helpdesk blame game :D
[15:06:37] if you have an IP, i'm sure mark or leslie or someone can check if the routing is problematic on our side, but usually it's the other side of the route.
[15:07:28] well mark and leslie both have access to OTRS and the ticket # is above. if someone wants to ask them, be my guest. i can't do much more on this today ;(
[15:27:03] Reedy: I investigated a bit more about what we spoke about yesterday — no cache for logged-in users if there is user-language-dependent content on a page. Seems there is a cache for that after all. Simply put a timestamp function on the page: it will only update when purging the page. But there is a cache for each "uselang" version.
[15:35:38] anyone understand how extract2.php works? i need help
[15:36:18] in particular why is this hardcoded?
[15:36:19] $useportal = $wgRequest->getText( 'title', 'Www.wikipedia.org_portal' );
[15:36:19] $usetemplate = $wgRequest->getText( 'template', 'Www.wikipedia.org_template' );
[15:36:35] looks like everything's symlinked there, not just wikipedia.org
[15:38:08] the last time anyone touched it in git is the git imports ;(
[15:39:02] or is gerrit:operations/mediawiki-config.git/extract2.php even in sync with prod at all?
[15:40:28] Yes
[15:40:35] What's in git is what's on fenari
[15:43:10] Reedy: ;(
[15:46:19] i think i figured it out now
[15:46:46] the problem was getText did something totally different than what i expected
[16:12:00] Reedy: what's info.txt? some (or all) of them say st. petersburg florida
[16:15:02] !log reedy synchronized wmf-config/InitialiseSettings.php 'wgCheckSerialized is deaded'
[16:15:19] Logged the message, Master
[16:19:51] jeremyb: I know how it works very well. What do you want to know?
[16:19:54] jeremyb: say wut?
[16:20:21] from the apache config files it is passed parameters
[16:20:23] Krinkle: i think i figured it out
[16:20:24] jeremyb: ^
[16:20:26] yes i got that
[16:20:31] 03 15:46:17 < jeremyb> i think i figured it out now
[16:20:31] 03 15:46:46 < jeremyb> the problem was getText did something totally different than what i expected
[16:20:33] and "_portal" is no longer used
[16:20:36] thanks though
[16:20:45] right, i made one that just says empty, see other file
[16:20:47] jeremyb: what're you up to though? Or just curious
[16:21:03] Krinkle: adding a new domain
[16:21:08] oh?
[16:21:15] Krinkle: i just don't understand what these info.txt things are
[16:21:19] Krinkle: wikidata.org
[16:21:24] jeremyb: where is info.txt?
[16:21:51] Krinkle: many docroots
[16:22:12] $ locate info.txt | wc -l
[16:22:12] 6
[16:22:27] > some (or all) of them say st. petersburg florida
[16:22:48] the non-www docroots don't have it
[16:23:01] ok, still a mystery! ;)
[16:23:57] jeremyb: but yes, that's where the office used to be
[16:24:07] sure
[16:24:07] those files were likely added on request from someone somewhere
[16:24:17] not important for anything on our end
[16:24:22] how odd. does something use them?
[16:25:14] Reedy: Is the old repo still on fenari? Perhaps you can svn-blame one of those info.txt files and see how/when it was added
[16:25:17] e.g. https://gerrit.wikimedia.org/r/gitweb?p=operations/mediawiki-config.git;a=blob;f=docroot/www.wiktionary.org/info.txt;hb=HEAD
[16:25:18] https://gerrit.wikimedia.org/r/gitweb?p=operations/mediawiki-config.git;a=blob;f=docroot/www.wikipedia.org/info.txt;hb=HEAD
[16:25:19] etc.
[16:25:24] Most things weren't in svn
[16:25:27] only wmf-config
[16:25:55] right
[16:26:17] well, the wiktionary file was clearly just copied from wikipedia
[16:26:19] Someone who's been around a while might know, i.e. brion
[16:26:21] it even contains the wikipedia slogan
[16:26:22] not important. just mysterious
[16:26:40] and url
[16:26:51] ah Reedy (unrelated), guess who restarted the en wiki dumps this morning — and no extra lag on db12 at all... so it was the sha1 stuff in the end
[16:27:03] www.wiktionary.org / info.txt; 1;2 # Contact info submission;3; 4 url: wikipedia.org/
[16:27:52] At least we know..
[16:28:11] according to dbbot-wm, db12 replag has been increasing by about a second every 10 minutes over the last hour
[16:28:12] @replag
[16:28:14] Krinkle: [s1] db12: 4s
[16:28:16] @replag
[16:28:18] Krinkle: [s1] db59: 1s, db60: 1s, db12: 3s; [s2] db53: 1s, db57: 1s
[16:28:23] hm..
[16:28:37] last hour I dunno
[16:28:39] seems to be decreasing again :)
[16:28:55] but I restarted my job about 10 hours ago
[16:29:04] and it's been humming right along
[16:29:32] it might be nice to do watchlist stuff on more than one host (why couldn't we do that?)
[16:30:15] with the watchlist stuff it bounces around between 2 and 7 or so secs
[16:30:19] Look: [12:30] <+EarwigBot> Nathan2055: Replag on enwiki_p is 22998 seconds.
[16:30:32] Almost 7 HOURS of lag.
[16:30:50] that is on the toolserver, isn't it?
[16:30:58] !log reedy synchronized docroot/mediawiki/xml/export-0.7.xsd
[16:31:08] Logged the message, Master
[16:31:08] it's not on our boxes :-P
[16:31:24] http://noc.wikimedia.org/dbtree/
[16:41:13] Reedy, you're a bugzilla admin, aren't you?
[16:41:27] among many other things
[16:41:51] Reedy, I need a new component created, can I ask you?
[16:43:43] Probably ;)
[16:43:59] Reedy, so just for https://www.mediawiki.org/wiki/Extension:CleanChanges with Niklas as default
[16:45:36] !log reedy synchronized wmf-config/ 'Various config changes'
[16:45:46] Logged the message, Master
[16:47:24] Does Niklas want to be assignee?
[16:47:43] Reedy, yes
[16:48:18] done
[16:48:49] thanks
[17:07:41] Reedy / Krinkle: want to sanity check for me?
[17:07:46] !g I0f244484a5a63879
[17:07:46] https://gerrit.wikimedia.org/r/#q,I0f244484a5a63879,n,z
[17:08:04] !g I4407d0685fb597a0
[17:07:47] sure
[17:08:04] https://gerrit.wikimedia.org/r/#q,I4407d0685fb597a0,n,z
[17:08:21] oh, whoops, found an error already
[17:08:28] have to remove it from redirects.conf ;)
[17:09:41] ok, repushed
[17:19:00] Krinkle: you have a reply
[17:22:12] Krinkle: also should the author really be WMF? not WMDE? (in )
[17:22:48] jeremyb: I think so, yes. I don't know though. Are the servers going to be run by WMDE? Or the development of the project?
[17:23:05] Krinkle: the text on that page was written by WMDE
[17:23:50] This is the general meta author for the domain (at least that's what search engines will use it for - aside from the fact that nobody uses <meta name="author"> anymore... )
[17:24:29] jeremyb: If I read extract2.php correctly, it looks like those portal pages are being fetched and rendered on every request, and then the variable is unused.
[17:24:56] it's used for str_replace, i thought
[17:34:25] Krinkle: thanks for the review
[19:31:46] !log asher synchronized wmf-config/db.php 're-add db36, db32 (low weight), es3 (innodb)'
[19:31:55] Logged the message, Master
[19:34:38] !log asher synchronized wmf-config/db.php 'lowering db36 weight'
[19:34:46] Logged the message, Master
[19:56:18] @replag
[19:56:20] Nathan2055: [s1] db32: 30s, db59: 2s; [s7] db56: 4s
[20:09:41] have a nice 4th of July. I am off for tonight, will be back tomorrow.
[20:18:13] !log asher synchronized wmf-config/db.php 'lowering db32 weight'
[20:18:22] Logged the message, Master
[20:37:46] !log mlitn Started syncing Wikimedia installation... :
[20:37:55] Logged the message, Master
[21:01:26] !log mlitn Finished syncing Wikimedia installation... :
[21:01:35] Logged the message, Master
[21:07:56] hashar: are you ready for fireworks? coming to NY/DC?
[21:08:07] jeremyb: are you? I am not, sorry :/
[21:08:19] * jeremyb is approximately all of the above
[21:08:35] :)
[21:09:33] jeremyb: wikimania is a great place to meet a ton of various people
[21:09:44] definitely try to meet some of the tech people from the wmf
[21:10:03] hashar: i noticed in haifa ;)
[21:10:30] wikimania looks fun, can't quite afford the trip right now though, with little things like moving house.
[21:11:10] jeremyb: have you met Amir Aharoni from the i18n team? He is a language geek :)
[21:11:21] hashar: oui
[21:12:24] Damianz: the trip is pretty cheap i think. ~450 including housing and transport and some of the food i think. (some you'll end up getting on your own surely)
[21:12:37] (also including registration)
[21:12:56] that's for 7 nights
[21:14:44] Hmm, see, that's reasonable; another 4/500 for flights isn't though, just for the week, and I'm too busy at work to take 2/3 weeks out for a trip.
[21:24:50] oh, i get a database error
[21:26:44] I got one too
[21:27:34] Paste them?
[21:27:41] Information helps
[21:28:01] Reedy: You don't want to play the guess-the-wiki, database cluster, endpoints their hitting game? :D
[21:28:08] they're*
[21:28:14] I can play the Damianz hitting game...
[21:28:39] Please, I'd like a change from watching openvz rape itself for 35 hours :(
[21:31:01] Just to make a mention of it - I got an error while saving a page:
[21:31:05] Technical details about this error:
[21:31:07] Last attempted database query: (SQL query hidden)
[21:31:08] Function: SqlBagOStuff::set
[21:31:10] MySQL error: 1637: Too many active concurrent transactions (10.0.6.50)
[21:31:11] 00:23 .: Saibo: gnah, gnah: Error: from the function "SqlBagOStuff::set". The database reported the error "1637: Too many active concurrent transactions (10.0.6.50)".
[21:31:25] from the function "SqlBagOStuff::set". The database returned the error "1637: Too many active concurrent transactions (10.0.6.50)".
[21:31:29] Works fine now after pressing save again, but well, maybe something is in need of some love n care
[21:31:42] yay, parser cache
[21:51:46] * StevenW is getting the "1637: Too many active concurrent transactions (10.0.6.50)" error too
[21:51:48] getting that again
[21:52:24] Evening guys. Something is wrong in wikiworld :P
[21:52:28] me as well
[21:53:30] Hidden query via Function SqlBagOStuff::set - something about too many concurrent transactions
[22:00:39] Well, there are some reports about the above on the enwiki village pump, and in several other irc channels. :)
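For readers wondering what SqlBagOStuff is: it is MediaWiki's database-backed key-value cache, which in this incident was holding parser cache entries. A loose Python sketch of the idea follows — the table and column names echo MediaWiki's objectcache schema, but this is an illustration, not the actual PHP implementation:

    import sqlite3, time, pickle

    class SqlBagOStuff:
        """Sketch of a DB-backed key-value cache in the spirit of
        MediaWiki's SqlBagOStuff: an objectcache-style table with
        keyname/value/exptime columns. Illustration only."""
        def __init__(self, path=":memory:"):
            self.db = sqlite3.connect(path)
            self.db.execute(
                "CREATE TABLE IF NOT EXISTS objectcache ("
                "keyname TEXT PRIMARY KEY, value BLOB, exptime REAL)")

        def set(self, key, value, ttl=86400):
            # Each set() is a small write transaction; under heavy save
            # traffic many of these pile up, which is roughly the "too
            # many active concurrent transactions" failure mode above.
            with self.db:
                self.db.execute(
                    "REPLACE INTO objectcache VALUES (?, ?, ?)",
                    (key, pickle.dumps(value), time.time() + ttl))

        def get(self, key):
            row = self.db.execute(
                "SELECT value, exptime FROM objectcache WHERE keyname = ?",
                (key,)).fetchone()
            if row and row[1] > time.time():
                return pickle.loads(row[0])
            return None

    cache = SqlBagOStuff()
    cache.set("enwiki:pcache:idhash:12345", {"html": "..."})
    assert cache.get("enwiki:pcache:idhash:12345") is not None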
[22:04:28] again with 10.0.6.50 ?
[22:04:54] yep
[22:15:15] * MartijnH notes that possible SqlBagOStuff isn't the most elegant of names
[22:15:33] possibly even
[22:16:06] yeah, not really :)
[22:16:12] * brion takes the blame for that one
[22:16:28] i actually threw BagOStuff together for other projects after using memcached on mediawiki
[22:16:41] and decided to throw it in for non-memcache caching
[22:17:51] oh, I figured that was some PHP construct
[22:18:16] :) nope, not this time
[22:36:05] !log preilly synchronized php-1.20wmf6/extensions/MobileFrontend 'weekly update'
[22:36:15] Logged the message, Master
[23:03:02] !log mlitn synchronized php-1.20wmf6/extensions/ArticleFeedbackv5/
[23:03:11] Logged the message, Master
[23:14:18] "Seriously, what the flying F-sharp is up with the site? It should not be this b0rky. Fix your damn servers or whatever's making this site so buggy of late."
[23:14:22] sigh :/
[23:19:12] thedj, where did you see that?
[23:19:20] Someone clearly isn't happy... :P
[23:21:01] . o O (MySQL error: 1637: Too many active concurrent transactions (10.0.6.50) — SqlBagOStuff problem today?)
[23:22:36] saper: yes.. the last one was about an hour ago though
[23:22:49] Stop users editing
[23:23:10] don't know how old this comment is, though
[23:24:44] load is back down again
[23:27:36] BarkingFish: VP/T
[23:29:01] just after the last interruption. I hate it when people are such a-holes
[23:30:12] yeah, it's not nice. I know some people drop in here and piss off the server guys, and that's never good - that's why I always tell them if they want me to stfu and get lost, I will happily do so :)
[23:33:23] blegh. told him off. probably gonna get me into shit again, but i'm done letting people get away with that incivility.