[00:15:27] no hashar... [00:16:00] AaronSchulz: ping? [00:16:37] !c 7792 [00:16:42] !change 7792 [00:16:49] !b [00:16:49] https://bugzilla.wikimedia.org/show_bug.cgi?id=$1 [00:17:05] !change is https://gerrit.wikimedia.org/r/$1 [00:17:06] Key was added [00:17:09] !change 7792 [00:17:09] https://gerrit.wikimedia.org/r/7792 [00:18:04] !gerrit [00:20:08] petan|wk: We need to alias the brain to this channel as well (once channel-aliasing is implemented) [00:20:36] !gerritsearch is https://gerrit.wikimedia.org/r/#q,$1,n,z [00:20:36] Key was added [00:20:41] !gerrit alias gerritsearch [00:20:42] Created new alias for this key [00:20:46] !g alias gerritsearch [00:20:46] Created new alias for this key [00:20:52] !change alias gerritsearch [00:20:52] Created new alias for this key [00:21:03] !git alias gerritsearch [00:21:03] Created new alias for this key [00:21:15] !change 7792 [00:21:15] https://gerrit.wikimedia.org/r/7792 [00:21:18] heh ;) [00:21:38] !change del [00:21:39] Successfully removed change [00:21:40] Krinkle: !git should maybe be for the whole repo? not a specific change? [00:21:41] !change 123 [00:21:41] https://gerrit.wikimedia.org/r/#q,123,n,z [00:21:49] !g 5451033fd3be633b46398aa037251dc964258df1 [00:21:49] https://gerrit.wikimedia.org/r/#q,5451033fd3be633b46398aa037251dc964258df1,n,z [00:22:08] anyway, how does testwiki work? [00:22:20] it's an NFS mount so not hetdeploy? [00:22:20] !git e9322da3c208d51613226b4259c71304b9795f22 [00:22:20] https://gerrit.wikimedia.org/r/#q,e9322da3c208d51613226b4259c71304b9795f22,n,z [00:22:47] what determines what version it has? where are changes to that version recorded? [00:23:03] jeremyb: I'm not entirely sure, but yes, it is using /home/wikipedia instead of /apache/ stuff [00:23:11] but the scripts should be the same [00:23:17] e.g. MWVersion etc. [00:23:42] so het-deploy counts for testwiki as well afiak [00:24:07] oh, i see the problem [00:24:13] except that it runs on code that doesn't have to be scap'ed. [00:24:15] someone never commited from fenari? [00:24:22] or rather, it is the origin of the scap. [00:24:32] wikiversions.dat in git is not up to date. at least not vs. noc.wm.o [00:25:04] one can commit either from fenari or from your own computer and then pull down on fenari. [00:25:20] But if you do it from fenari and you risk getting a conflict in theory [00:25:28] not sure what the policy is on that [00:25:44] jeremyb: Can you check git status? [00:25:57] how do you mean? [00:26:20] on fenari, check if everything is committed [00:26:35] if someone made changes, then scap'ed it but not committed, then it wouldn't be on noc.wikimedia.org I think [00:26:50] not sure.. depends on how noc.wm.o [00:26:53] not sure.. depends on how noc.wm.o works [00:27:00] post-scap I think.. [00:27:01] i don't have fenari access [00:27:13] Oh.. then how can you tell wikiversion.dat doesn't match? [00:27:16] or i would check ;) [00:27:22] what doesn't match what? [00:27:35] one sec [00:31:02] Krinkle: http://dpaste.com/hold/752889/ [00:31:38] Yeah [00:31:53] In that case my suspicion is right [00:31:57] noc.wm.o is updated post-scap [00:32:12] Reedy did the deployment. 
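The dpaste above apparently compares the wikiversions.dat that noc.wikimedia.org serves against the copy in git. A minimal sketch of that kind of check — diff over two process substitutions, the "double curl with diff" that gets commented on just below. The URLs here are placeholders, not the locations actually used in the paste:

```bash
# Hypothetical URLs for the two copies of wikiversions.dat being compared.
NOC_URL="https://noc.wikimedia.org/conf/wikiversions.dat"
GIT_URL="https://example.org/mediawiki-config/raw/wikiversions.dat"

# -u up front gives unified diff output; each <(...) is a process substitution,
# so diff reads the two curl outputs as if they were files. --label (given twice,
# once per input) replaces the unreadable /dev/fd/NN names in the hunk headers.
diff -u --label noc --label git <(curl -s "$NOC_URL") <(curl -s "$GIT_URL")
```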
I guess he forgot to commit on fenari [00:32:49] jeremyb: nice diff command you got there [00:32:58] double curl in label with diff [00:32:58] not quite perfect [00:32:59] nice [00:38:29] @replag [00:38:31] Krinkle: [s1] db36: 2s, db32: 2s, db59: 2s, db60: 2s, db12: 2s; [s5] db44: 1s [00:43:11] Krinkle: better: http://dpaste.com/hold/752890/ but i still need to get the -u at the right place (now it moved to the end) [00:44:12] Krinkle: anyway, see THO|Cloud's stack? that's 7792 i guess [00:44:33] * Krinkle learns about "shift;" in bash and ${@}. Presumably shift is like array_shift in PHP and ${@} means all remaining parameters ? [00:44:36] i pinged AaronSchulz because he reviewed and hashar's not here [00:44:57] i guess someone should mail [00:45:03] jeremyb: I have no clue what you're talking about? Who's what stack ? [00:45:14] 28 23:37:27 < THO|Cloud> Reedy: wmf4 is causing central notice issues on test.wiki [00:45:17] 28 23:37:28 < THO|Cloud> https://test.wikipedia.org/w/index.php?title=Special:CentralNotice&method=listNoticeDetail&notice=POTY+Test+Campaign+01 [00:45:40] that bug was supposed to be fixed [00:45:40] (in here, UTC) [00:45:48] Ah, this is another occurrence of the bug [00:46:04] Aaron fixed it last week in ResourceLoader, apparently it found its way in CentralNotice as well [00:46:07] so it's known but not all callers were fixed? [00:46:14] Language::factory called with null [00:46:16] that used to default to 'en' [00:46:27] heh [00:46:30] but since that's wrong, it should be provided in all cases. factory is not the place to do fallback. [00:46:44] this way we can find them [00:47:01] or we do static analysis? [00:47:29] In RL for example there was a call to some method passing $this->language directory instead of $this->getLanguage() (which is a lazy-init thing) [00:47:37] directly* [00:48:06] This may be the same kind of thing, where it should self-initialize instead of passing directly (which fails it is the first reference) [00:48:21] the self-initializer probably has a better source for the language (e.g. user, or context or whatever) [00:48:47] yep [00:48:47] $wgLang = Language::factory( $this->language ); // hack for {{int:...}} [00:48:50] found it [00:48:57] fixing.. [00:52:55] !b 37170 [00:52:55] https://bugzilla.wikimedia.org/show_bug.cgi?id=37170 [00:54:43] !g 681970be [00:54:43] https://gerrit.wikimedia.org/r/#q,681970be,n,z [00:54:43] damnit Krinkle i keep midairing you [00:54:49] oh? [00:54:58] yes. twice in a minute [00:55:04] amazing. sorry ;-) [00:55:13] heh ;) [00:55:45] and that gerrit link comes up empty [01:22:09] i see now it's 9210 but the link above is still broke [02:24:21] !log LocalisationUpdate completed (1.20wmf3) at Tue May 29 02:24:21 UTC 2012 [02:24:25] Logged the message, Master [02:37:45] !log LocalisationUpdate completed (1.20wmf4) at Tue May 29 02:37:44 UTC 2012 [02:37:48] Logged the message, Master [08:20:06] Error while using book tool: http://pastebin.com/qe3X5x6n [08:20:07] Just reporting [08:20:13] Repeated [08:28:11] is there anyone around who can explain me what's wrong with this page ? http://fr.wikipedia.org/wiki/Discussion_utilisateur:74.59.74.182 [08:29:16] Toto_Azero: seems to have no text except boilerplate?
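On the shift / ${@} question at 00:44 above: a small illustrative script (not from the conversation) confirming the guess — shift discards $1 and renumbers the rest, much like array_shift() in PHP, and "$@" then expands to all remaining parameters:

```bash
#!/bin/bash
# Hypothetical usage: ./demo.sh log hello world
subcommand=$1
shift                        # drop $1; $2 becomes $1, $3 becomes $2, ...
echo "subcommand: $subcommand"
for arg in "$@"; do          # "$@" = every remaining parameter, one word each
    echo "remaining: $arg"
done
```

The quoted form "$@" is the one that preserves arguments containing spaces; an unquoted ${@} re-splits them.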
[08:30:26] I think FR is broke [08:30:40] Toto_Azero: i guess it should change after the user page that belongs to it has been created [08:30:57] actually, looks like the page exist (have a look at http://fr.wikipedia.org/wiki/Spécial:Contributions/74.59.74.182, the "discuter" link is blue), but it doesn't seem so… [08:32:11] the history is quite strange too :p http://fr.wikipedia.org/w/index.php?title=Discussion_utilisateur:74.59.74.182&action=history [08:32:25] mutante: so should I create the user page ? [08:33:23] or as i'm a sysop, maybe i could delete it ? [08:33:38] Toto_Azero: i don't know, but trying to see if that changes things sholdnt hrt [08:33:41] hurt [08:35:06] i have a WMF error creating this page [08:36:36] Which is? [08:36:41] (the error) [08:36:50] 403 Please donate [08:37:07] fundraiser starting early? [08:37:25] "Our servers are currently experiencing a technical problem. This is probably temporary and should be fixed soon. Please try again in a few minutes." [08:37:27] er, should be 401 [08:38:05] saper: actually there is also "If you would like to help, please donate" xD [08:38:13] Hi :) [08:38:17] lots and lots of OOMs [08:38:30] Warning: call_user_func_array() expects parameter 1 to be a valid callback, class 'languages' does not have a method 'getMessage' in /usr/local/apache/common-local/php-1.20wmf3/includes/StubObject.p [08:38:31] hp on line 79 [08:38:59] i also have this in the error : [08:38:59] PHP fatal error in /usr/local/apache/common-local/php-1.20wmf3/extensions/AbuseFilter/AbuseFilter.hooks.php line 33: [08:38:59] Call to a member function getRawText() on a non-object [08:39:24] looks like it's linked to abusefilter ?! [08:39:28] indeed [08:39:31] I've a fix in for that, but no one seems overly sure if it'll cause problems [08:39:45] Meh, I'll merge and push it, pending a better fix [08:39:49] better than giving a fatal [08:41:47] Reedy Toto_Azero : http://fr.wikipedia.org/w/index.php?title=Discussion_utilisateur:74.59.74.182&action=history [08:42:33] yeah, i had already linked to the history above ;) [08:42:53] Reedy: so what must i do ? [08:43:01] let me fix the abuse filter error first [08:43:37] ok [08:43:57] let me know when you have finished [08:48:30] Servers look like they're still asleep [08:48:52] !log reedy synchronized php-1.20wmf3/extensions/AbuseFilter/ 'AbuseFilter to trunk' [08:50:07] !log reedy synchronized php-1.20wmf4/extensions/AbuseFilter/ 'AbuseFilter to trunk' [08:50:23] "The database did not find the text of a page that it should have found, named ‘Discussion utilisateur:74.59.74.182’ (revision#: 0)." [08:50:24] Yay.... [08:50:47] err? [08:50:47] Ouch [08:51:10] :D [08:56:23] now I have an edit conflict trying to edit the page http://fr.wikipedia.org/wiki/Discussion_utilisateur:74.59.74.182 :p [09:00:16] sweet [09:14:09] what's up ? [09:15:35] I have 3 answers to that.. [09:15:37] !log reedy synchronized wmf-config/CommonSettings.php 'Make sure up to date commonsettings is on teh cluster' [09:34:02] !log nikerabbit synchronized php-1.20wmf3/extensions/WebFonts/ 'WebFonts JS comma fix' [09:34:08] Reedy: must I try to edit the page again ? [09:35:14] !log nikerabbit synchronized php-1.20wmf4/extensions/TranslationNotifications/ 'TranslationNotifications to master' [09:35:17] You can try [09:35:25] Not sure if it'll go through [09:35:35] edit conflict again :p [09:35:41] strange… [09:36:13] Reedy: maybe i can try to delete it ? [09:36:45] oh you did it :D [09:36:46] Hmm. It just let me delete it.. 
[09:40:06] Reedy: and what about restoring the page ? [09:40:38] There isn't actually anything to restore [09:41:05] doh that's right, no history… [09:42:21] ouch translationnotification is live now [09:42:27] !log nikerabbit synchronized wmf-config/PrivateSettings.php 'TranslationNotification' [09:42:46] is cross-wiki talkpage writing enabled now? [09:42:46] Reedy: well, thanks :) [09:43:20] saper: supposed to be [09:43:39] ouch ouch [09:43:46] and what is that master common user name? [09:44:38] saper: Translation Notification Bot [09:44:52] (we didn't like it but there were no better ideas) [09:45:08] ok I will try to implement a proxy identity, now learning CentralAuth [09:45:29] Nikerabbit: are some notifications out somewhere? [09:45:29] !log nikerabbit synchronized wmf-config/InitialiseSettings.php 'Enable Special:Interwiki - bug 22043' [09:45:58] Reedy: I think I am done [09:47:08] cool [09:48:15] Reedy: nothing special in fatalmonitor, I think we are good [09:48:40] Yup [09:49:15] If you have any issues later, you'd be best asking hashar [09:49:55] If you need me, I'll have my phone with me [09:50:27] oki [09:51:36] sure [09:51:48] note i will be out for lunch [09:51:52] but will be glad to help [09:54:58] heh, check this out: Mediawiki version "0.0.7" ?;) https://wiki.c3le.de/api.php?action=query&meta=siteinfo&maxlag=5 (generator= ...) [09:55:28] $wgVersion is a lie!Q [09:56:25] heh, yea, well, it makes them look like the winner in "who runs oldest mediawiki" competition:) [10:23:20] mutante: we never had 0.x versions ;-D [10:23:27] did phase1, phase2, phase3 -> 1.0 [10:23:32] or something similar [10:26:24] ack, the phases , i remember [10:28:18] usemod wiki ftw [11:00:11] where is morebots again??????? [11:06:46] died about 4 hours ago [11:08:04] closedmouth: and are you actually experiencing a near death experience? [11:08:27] i was very near to morebots, yes [12:18:20] !log killing / restarting morebots [12:18:25] Logged the message, Master [12:18:51] Raymond_: ^^ [12:18:56] mutante: thanks :) [12:19:26] yw. i like the docs "It has an entry in rc.local.... er no it doesn't. There is an init script /etc/init.d/morebots~, but DON'T USE it. " :) [12:19:28] hmm, make that the same for labs :) [12:19:37] ok [12:39:38] I bet that entry in the docs is me [12:40:10] yup, one of those is :-D [12:40:13] 2 random projects using Mediawiki: a) the FBI (Bureaupedia) b) lolcatbible.com [12:40:53] why does one of those make me happier than the other? [12:40:57] apergos: ah, does start_morebots.sh not start it in background? running in screen? [12:41:17] i get the "Started morebots" but never return to shell [12:41:34] you will hate me but as often as I have restarted it, I don't remember [12:41:38] you can bg it prolly [12:41:53] I vaguely think that's what I do eveery time [12:42:25] yep, ok [12:46:24] apergos: There are some static HTML dumps that are corrupted, just sayin' [12:46:35] static html? [12:46:38] I don't do those [12:46:53] never have. at this point those are without a home and have been for years [12:47:42] it would be great to find someone to play with the code some, make sure it's working, see what could be done reasonably to make it run on en wp without killing our servers or taking a year [12:48:33] yeah, the older ones, so you are not doing them :P [12:48:41] but they are horribly corrupted [12:48:56] well they are just static html [12:49:04] it's hard to imagine how html pages would be corrupted [12:50:20] oh btw, do you know how to do multipart uploading in S3? 
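For the multipart question just asked — the messages that follow work out the flow (start an upload, send the pieces, then name them all) — here is a rough, untested sketch of that flow with curl against an S3-style endpoint. The host, item and file names, the `authorization: LOW access:secret` header, and the exact query parameters are assumptions modelled on the generic S3 multipart API, not commands confirmed anywhere in this log:

```bash
# Hypothetical sketch only; verify every detail against the S3 / archive.org docs.
AUTH="authorization: LOW ACCESSKEY:SECRETKEY"                  # assumed auth header
BASE="http://s3.us.archive.org/some-item/media-tarball.tar"    # assumed item/file

split -b 1G media-tarball.tar part-                            # cut the big file into pieces

# 1. Initiate: the XML response contains an UploadId to reuse below.
curl -s -X POST -H "$AUTH" "$BASE?uploads"

# 2. Upload each piece under a part number, tagged with that UploadId.
curl -s -X PUT -H "$AUTH" --upload-file part-aa "$BASE?partNumber=1&uploadId=UPLOADID"
curl -s -X PUT -H "$AUTH" --upload-file part-ab "$BASE?partNumber=2&uploadId=UPLOADID"

# 3. Complete: POST an XML body listing every part number with the ETag each PUT
#    returned — the "tell it you're done and name all the pieces" step.
curl -s -X POST -H "$AUTH" --data-binary @complete.xml "$BASE?uploadId=UPLOADID"
```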
[12:50:44] I am not sure what the curl command to give [12:51:07] not specifically, I have partil code lying round for archive.org but no time to work on it till I get a couple other things straightened out first [12:51:11] having to do with the media dumps [12:51:36] there's a start upload [12:51:41] then you do a bunch of pieces [12:51:52] then you tell it you're done and name all the pieves I think [12:51:54] *pieces [12:52:57] hmm, we have to first split the file up first [12:53:05] yes [12:53:06] then give some command, which I don't know of [12:53:16] to initiate the uploading [12:53:19] or something like that [12:54:58] Hydriz: this looks like it might help http://aws.amazon.com/code/128 [12:55:05] "This Perl script calculates the proper signature, then calls Curl with the appropriate arguments." [12:55:11] I looked at that [12:55:21] oh,ok [12:55:42] but nothing helpful inside though [12:55:56] I just need the curl command [12:56:18] curl --location --upload-file something like that [12:58:39] multipart uploading is part of todo in archiveuploader.py zzz [13:00:11] oh god the media tarballs is a headache [13:00:42] 100GB, thats 10 times the "allowed" mark of archive S3 [13:03:43] hmm, but if that script calls curl, cant just copy the arguments from there even without actually using the script? [13:05:33] lol I can't read perl :( [13:06:34] hmm, seems like it has something related to content type? [13:10:29] hmm... [13:16:30] @infobot-link #mediawiki [13:16:30] petan|wk: Unknown identifier (#mediawiki [13:16:31] These channels now share the same infobot db [13:16:41] !b [13:16:41] https://bugzilla.wikimedia.org/show_bug.cgi?id=$1 [13:55:06] !bug 1911 [13:55:07] https://bugzilla.wikimedia.org/show_bug.cgi?id=1911 [13:55:10] oh no [13:55:24] petan|wk: change that ugly link to https://bugzilla.wikimedia.org/$1 ;-) [13:55:32] !del bug [13:55:58] !bug del [13:55:58] Successfully removed bug [13:56:08] !bug is https://bugzilla.wikimedia.org/$1 [13:56:08] Key was added [13:56:15] thanks [13:56:23] np [13:56:31] but isn't the longer url better? [13:56:49] skips the redirect hassle [13:57:08] hassle ? [13:57:30] it is like instant [13:57:31] anywa [13:57:32] y [13:57:35] goes directly to the page itself, isn't it going to be faster? [13:57:44] * Hydriz is on a slow connection lol [13:57:54] hehe [13:58:07] it is nicer to the eye here anyway [13:58:36] lol [13:58:38] eh, if some people are on a super slow connection, probably better to be ugly but better on the connections ? [15:09:35] hashar: that link was ugly because that's how it was in previous db, imported by krinkle [15:09:44] I am innocent [15:09:46] XD [15:10:00] btw I liked ugly link [15:10:07] because it's faster [15:13:53] petan|wk: I am not accusing anyone :-] feel free to bring back the show_bug.cgi? [15:13:59] it is indeed faster [15:14:05] trying [15:14:07] !bug del [15:14:07] Successfully removed bug [15:14:23] !bug is https://bugzilla.wikimedia.org/show_bug.cgi?id=$1 [15:14:23] Key was added [15:14:26] !bug 42 [15:14:27] https://bugzilla.wikimedia.org/show_bug.cgi?id=42 [15:14:42] \O/ What Hydriz said :) [15:15:13] lol [15:18:32] Why isn't this page redirecting correctly? http://en.wikipedia.org/wiki/Talk:Washington_Dulles [15:20:57] A null edit made it worse [15:21:33] [[Wikipedia:Resolving_placename_conflicts]] is another one [15:30:01] Dispenser: "A null edit made it worse" how? 
[15:30:08] Just tested and worked as expected [15:30:18] https://en.wikipedia.org/w/index.php?title=Talk:Washington_Dulles&action=history [15:30:50] Stupid Squid cache [15:31:21] yeah :P [15:31:54] The non-logged in squid version is "1. REDIRECT Talk:Washington Dulles International Airport" [15:35:46] BTW, There's lots of redirects with missing source pages :-( [15:35:47] SELECT * FROM redirect LEFT JOIN page ON page_id=rd_from WHERE page_id IS NULL AND rd_fragment IS NULL; [15:36:25] Dispenser: Delete by bot? [15:36:44] Of course only if only one revision [15:37:51] idk, but the page_ids are in similar ranges. Maybe cruft from failed deployment? [16:10:28] !log hashar synchronized search-redirect.php 'https://gerrit.wikimedia.org/r/9206 - cleanup search-redirect.php' [16:10:28] Wrong channel! [16:10:32] Logged the message, Master [16:21:54] !log hashar synchronized wmf-config/CommonSettings.php 'cleanup wgNoticeBanner_Harvard2011 https://gerrit.wikimedia.org/r/#/c/9205/' [16:21:54] Wrong channel! [16:21:58] Logged the message, Master [16:23:26] Krinkle: petan|wk: what's this about a brain? (!log) [16:27:21] !log hashar synchronized wmf-config/CommonSettings.php 'https://gerrit.wikimedia.org/r/#/c/9204/ - use protocol-relative url for nostalgiawiki wgSiteNotice' [16:27:22] Wrong channel! [16:27:25] Logged the message, Master [16:28:10] bots sending mixed msgs! [16:29:35] yeah [16:29:42] we should kick wm-bot ;-D [16:30:01] !bug 333 [16:30:02] https://bugzilla.wikimedia.org/show_bug.cgi?id=333 [16:34:07] jeremyb: wm-bot has a "brain" which contains all the !<> definitions and aliases [16:34:25] jeremyb: Back when we had MWBot, it had 1 database for all channels it was in [16:34:40] wm-bot is a much more feature-rich bot and is used in way more channels [16:34:48] as such it has a per-channel brain [16:35:11] we need to make it so that we can share the same brain between #mediawiki, #wikimedia-dev, #wikimedia-dev and #wikimedia-operations (maybe more) [16:35:23] !log del [16:35:23] Successfully removed log [16:35:26] Logged the message, Master [16:35:31] what? [16:35:38] I didn't know I had that right [16:35:42] (for morebots) [16:37:59] Krinkle: there's no authentication... [16:38:06] I figured. [16:38:57] Doesn't make much sense to me. wikitech is pretty much read-only on the outside, and in here we can just goggle freely. I expected it would have at least a nickname filter or better yet, a cloak filter (requiring identification to services) [16:39:08] oh well, AGF, WCAB :) [16:39:19] Assume Good Faith, We Can Always Block (Later) [16:39:24] :D [16:40:02] i think it's nothing to do with AGF. just KISS/WCAB [17:51:35] I think there was a 0.9, but don't remember. [18:03:54] Amgine: huh? [18:32:34] nm Jeremyb. ww. [18:32:46] i figured ;) [18:32:56] but there was no explicit ww [18:33:08] [18:33:18] toooo many networks, too many channels. [18:42:02] ww? [19:03:15] DB error on en.quote [19:03:18] from within function "SqlBagOStuff::set". Database returned error "1637: Too many active concurrent transactions (10.0.6.50)". [19:04:06] True [19:04:44] @info 10.0.6.50 [19:04:45] jeremyb: [10.0.6.50: ] db40 [19:04:52] @info db40 [19:04:52] jeremyb: [db40: s7] 10.0.6.50 [19:05:03] @replag [19:05:15] ... [19:05:19] jeremyb: [s5] db44: 1s [19:05:35] Nemo_bis: you've tried again? [19:05:52] yes, only temporary [19:05:52] That's parser cache [19:05:58] it was a diff [19:06:59] why does dbtree know nothing about db40? 
[19:08:17] It's not replicated [19:08:21] It's a parser cache [19:08:26] huh [19:08:32] so, s7 is a lie. again [19:09:29] there's only 1 parser cache for everyone? [19:10:16] <^demon|busy> One parser cache to rule them all. [19:10:16] Only? [19:10:26] It's a 60 GB RAM box [19:10:40] that never, ever breaks? [19:10:57] like /ms\d/ [19:10:58] what happens when we throw it into the crack of mount doom though? [19:11:32] maybe we could list all the spofs someplace on a wikitech page [19:11:42] <^demon|busy> apergos: [[Barack Obama]] ends up taking 5-6 minutes to parse :) [19:11:45] and what would be affected if they died [19:11:55] apergos: do we have many? [19:12:03] I know just two of them [19:12:05] ^demon|busy: and [[michael jackson]]? [19:12:05] we have more than one [19:12:55] we have mysql masters. but we have hot backups and you don't have to be on site to fail those over [19:13:19] We have load balancers [19:13:30] And well-backed-up media storage [19:13:41] do we have dual handoff from uplinks? [19:15:36] Also, if you want to do some trolling instead of just bringing site down, job runner can be considered spof as well [19:16:31] how so? there's more than one? [19:16:40] and you can live without it too [19:17:06] more than many other pieces. it it's got no persistent storage. so you can always just bring up a new one from scratch [19:17:11] Yes, but it will take people time to realise [19:17:28] or you get better monitoring... [19:17:33] it's not a spof. at all [19:33:39] „SqlBagOStuff::set”. Baza danych zgłosiła błąd „1637: Too many active concurrent transactions (10.0.6.50) [19:33:43] plwiki reports [19:34:56] apergos: ww==wrong window [19:36:45] ah [19:37:23] <^demon|busy> dkt. lsned [19:37:33] <^demon|busy> (didn't know that. learn something new every day) [19:37:38] haha [20:07:42] @infobot-link blah [20:07:42] Krinkle: Unknown identifier (blah [20:07:42] Permission denied [20:11:37] @trustadd .*@wikimedia/Krinkle admin [20:11:37] Successfuly added .*@wikimedia/Krinkle [20:11:43] hehe [20:11:57] let me fix others [20:12:00] @channellist [20:12:01] I am now in following channels: #huggle, #wikimedia-dev, #wikimedia-tech, #wm-bot, #wikimedia-labs, #wikimedia-operations, ##matthewrbowker, ##matthewrbot, #wikipedia-zh-help, #wikimedia-toolserver, ##Alpha_Quadrant, #wikimedia-mobile, #mediawiki, #wikipedia-cs, #wikipedia-cs-rc, #wiki-hurricanes-zh, #wikinews-zh, #wikipedia-zh-helpers, #wikipedia-en-afc, ##thesecretlair, ##addshore, #wikimedia-wikidata, ##iworld, #wikimedia-lgbt, #wikimedia-SPCom, #he.wikipedia, #cvn-hewikis, #wikimedia, #WikiQueer, #mwbot, #wikipedia-hsb, #wikipedia-zh, #wikipedia-zh-temp, [20:12:06] it's gonna take a while XD [20:14:16] petan|wk: i can do the sudo route now if you like. (the global) [20:16:08] eh? [20:16:22] you mean inserting krinkle as global admin? [20:16:41] you can, but it require bot to be restarted and that suck [20:16:55] or maybe not [20:16:57] dunno [20:17:22] petan|wk: isn't it just run restart.sh? [20:17:28] no :D [20:17:33] ... [20:17:35] ;-( [20:17:35] that's actually script to start the bot [20:17:40] right, i know [20:17:41] it restart bot in loop on crash [20:17:42] ##thesecretlair ? [20:17:43] i read the script [20:17:43] LOLWUT [20:17:53] :P [20:18:11] it's not secret anymore heh [20:19:19] anyway, is there anything needed besides running that script? [20:19:24] no [20:19:34] just kill script, bot and then run nohup restart.sh [20:19:36] to start it [20:20:07] is any of it in version control? 
[20:20:14] @help [20:20:14] Type @commands for list of commands. This bot is running http://meta.wikimedia.org/wiki/WM-Bot version wikimedia bot v. 1.3.6 source code licensed under GPL and located at https://github.com/benapetr/wikimedia-bot [20:20:17] on github [20:20:24] svn broke [20:20:25] the config too? [20:20:29] and I needed to commit quick [20:20:39] config is on labs [20:21:05] so, not versioned [20:21:10] config is not [20:23:52] pls use log [20:23:53] command [20:23:54] when u do stuff on labs [20:23:54] i just killed mono, and then killed sleep. didn't kill restart [20:23:55] i know [20:23:56] ah, right [20:23:57] that bot is in almost 50 channels [20:24:00] if you just want to play with it, maybe you should start another instance :D [20:24:09] i wasn't playing [20:24:13] ok [20:24:18] i did it pretty fast, short downtime [20:24:33] right [20:24:48] you just didn't log what you did hehe [20:25:21] btw global admins are in config/admins [20:25:22] i'm writing the log msg! [20:25:25] ok [20:25:32] use shell command log [20:25:44] it's fast and easy [20:25:45] :D [20:26:34] petan|wk: i've only just used it for the second time [20:27:04] don't forget to run it as wmib [20:27:07] that's all :D [20:27:29] petan|wk: what? `log`? [20:27:38] no [20:27:40] bot [20:27:41] good [20:27:41] :P [20:27:45] I didn't check now [21:09:44] !log aaron synchronized wmf-config/swift.php 'Enabled new thumb purge hook on remaining wikis' [21:09:47] Logged the message, Master [22:25:29] hmpf http://code.google.com/p/chromium/issues/detail?id=109555
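A recap, as a sketch, of the wm-bot restart procedure pieced together around 20:17–20:27 above: kill the restart loop and the bot, start restart.sh again detached, run it as the wmib user, and log what you did. The script name, the mono and sleep processes, the wmib user and the labs `log` habit all come from the conversation; the pkill calls, the sudo form and the `log` syntax are my own guesses:

```bash
# Guesswork sketch, not a documented procedure; adjust before running anything.
pkill -u wmib -f restart.sh          # stop the watchdog loop first so it cannot respawn the bot
pkill -u wmib mono                   # then stop the bot process itself (wm-bot runs under mono)
sudo -u wmib nohup ./restart.sh &    # start the loop again as the wmib user, detached from the shell
log "restarted wm-bot"               # the labs `log` habit mentioned above; exact syntax assumed
```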