[00:25:53] (created) [UTRS-68] Customizable Home Page; UTRS: Main Interface; Improvement <https://jira.toolserver.org/browse/UTRS-68> (Andrew Pearson)
[00:46:28] [[Special:Log/newusers]] create * Elemaki * (New user account)
[01:30:44] good night, ts
[12:03:31] [[Special:Log/newusers]] create * Tomtomn00 * (New user account)
[14:02:18] in /var/log/http/access.* on wolfsbane, why do all requests come from 91.198.174.204?
[14:20:29] liangent: There's some forwarding, I think?
[15:05:30] is it possible to mass-rename files?
[15:05:39] I want to use the same rationale to move multiple files
[15:06:42] ToAruShiroiNeko: rename(1)
[15:07:10] Heh.
[15:07:28] liangent: I think he wants to move wiki files.
[15:07:37] Though on Unix, I'd suggest mv.
[15:09:46] I want to use the same rationale to move lots of files
[15:09:58] and generate a command list for CommonsDelinker
[15:10:08] any suggestions for this?
[15:12:43] ToAruShiroiNeko: What do you want to do exactly? Please be a bit more specific.
[15:12:52] ok
[15:13:02] I want to move rank insignia images to a standard naming scheme
[15:13:09] http://en.wikipedia.org/wiki/Template:Ranks_and_Insignia_of_NATO_Armies/OF/France
[15:13:11] ToAruShiroiNeko: then moveBatch.php?
[15:13:42] File:Maréchal.svg -> File:Army-FRA-OF-10.svg
[15:14:07] liangent: I lack a Toolserver account
[15:14:23] I want to apply the same rationale to all 11 file renames
[15:14:25] Is there on-wiki consensus for the moves?
[15:14:33] 11 files?
[15:14:43] Joan: sure, it is per rename criterion #6 for template use
[15:14:51] yes, 11 rank insignia
[15:14:53] well, more than 11
[15:15:03] You want to automate something for 11 files?
[15:15:06] Try browser tabs?
[15:15:11] it's per country
[15:15:24] So 11 * 180?
[15:15:25] so it's 28 x 11 x 3
[15:15:41] 28x11x3 = 28
[15:15:47] 28*11*3 = 924
[15:15:57] Yes.
[15:16:06] minus Iceland, etc.
[15:16:12] but that's roughly the number
[15:17:10] so I want to rename these using the same rationale
[15:17:36] and at the end give CommonsDelinker a command list
[15:18:49] multichill: is it more clear?
[15:19:06] Yeah, that's not allowed by Commons policy ;-)
[15:19:24] it is
[15:19:50] http://commons.wikimedia.org/wiki/Commons:File_renaming#What_files_should_be_renamed.3F
[15:19:51] #6
[15:21:02] What is the problem, multichill?
[15:21:09] Waste of time and effort
[15:21:25] I am spending a great deal of effort on templates
[15:21:35] I do not want to spend the same amount on images
[15:21:42] if you do not want to help me, just say so
[15:22:43] Did you consider the fact that the images are named correctly in other languages, and that with this change you break that?
[15:23:19] I am not translating them
[15:25:16] File:Maréchal.svg <- doesn't tell me which branch, country, or STANAG equivalence
[15:25:42] File:Army-FRA-OF-10.svg <- does tell me which branch, country, and STANAG equivalence
[15:27:27] Maréchal is clear to me (as a FR-1 speaker); Army-FRA-OF-10 is just English gibberish.
[15:27:40] no, it isn't English
[15:28:09] you assume there is only one country with a Maréchal rank.
[15:28:39] Even worse. Anyway, for mass actions like this you'll need consensus. Ze French will hunt you down if you try to do this without it ;-)
[15:28:41] there is the Portuguese Marechal
[15:29:04] I just need to know whether a tool for this exists or not
[15:29:18] a simple yes/no answer
[15:29:25] No!
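[Editor's note] The workflow discussed above — apply one rationale to a batch of File: moves and emit a command list for CommonsDelinker — could be sketched as below. The file names beyond the one pair quoted in the log, the rationale wording, and the {{universal replace}} command syntax are illustrative assumptions; the moves themselves would be done with a bot framework (or moveBatch.php server-side), and the current CommonsDelinker command format should be verified on-wiki before use.

```python
# Hypothetical mapping from current names to the STANAG-style scheme.
# Only the first pair appears in the log; the rest would be filled in
# per country/branch (the estimate above was ~28 x 11 x 3 = 924 files).
RENAMES = {
    "Maréchal.svg": "Army-FRA-OF-10.svg",
}

# One shared rationale for every move, per file-renaming criterion #6.
RATIONALE = ("Harmonise names of rank insignia used in "
             "Template:Ranks and Insignia of NATO Armies (criterion #6)")


def delinker_commands(renames, reason):
    """Build the command list to paste on the CommonsDelinker commands page.

    Assumes the {{universal replace|old|new|reason=...}} syntax; check the
    on-wiki documentation for the current format.
    """
    return ["{{universal replace|%s|%s|reason=%s}}" % (old, new, reason)
            for old, new in sorted(renames.items())]


for line in delinker_commands(RENAMES, RATIONALE):
    print(line)
```

The actual renames would then be a loop over the same mapping calling the bot framework's page-move routine with `RATIONALE` as the edit summary, so the rationale and the delinker list are guaranteed to stay in sync.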
[15:29:34] I will move them manually then
[15:29:35] thanks
[18:33:38] s4 replag on rosemary is OK: QUERY OK: SELECT ts_rc_age() returned 4.000000
[18:33:48] /aux0 on hemlock is WARNING: DISK WARNING - free space: /aux0 468651 MB (8% inode=44%):
[18:44:38] Sun Grid Engine execd on ortelius is WARNING: short-sol@ortelius exceedes load threshold: alarm hl:np_load_short=1.255859/1.10, alarm hl:np_load_long=0.902344/1.55, alarm hl:mem_free=21568.000000M/300M, alarm hl:available=1/0: medium-sol@ortelius exceedes load threshold: alarm hl:np_load_short=1.255859/1.00, alarm hl:np_load_long=0.902344/1.50, alarm hl:mem_free=21568.000000M/300M, alarm hl:available=1/0
[18:45:38] Sun Grid Engine execd on ortelius is OK: short-sol@ortelius OK: medium-sol@ortelius OK
[18:57:57] has there been any update or change related to Python? some of my scripts are now having UTF encoding problems
[19:00:28] Sun Grid Engine execd on willow is WARNING: medium-sol@willow exceedes load threshold: alarm hl:np_load_short=1.763184/1.8, alarm hl:np_load_avg=0.896973/2.3, alarm hl:mem_free=128.000000M/300M, alarm hl:available=1/0: longrun-sol@willow exceedes load threshold: alarm hl:np_load_short=1.763184/1.9, alarm hl:np_load_long=0.737305/2.25, alarm hl:mem_free=128.000000M/200M, alarm hl:available=1/0
[19:06:05] it might help to be somewhat more specific
[19:06:43] unless you're doing Unicode normalization, or unless you switched from 2.x to 3.x, there are no major changes
[19:07:28] when using print, after day 12 some scripts stopped working; I had to change to wikipedia.output()
[19:07:43] but before that they were working fine, some of them for several months
[19:08:29] basically, it's giving the error on special chars
[19:08:42] right
[19:08:55] then either you haven't worked on non-ASCII stuff before
[19:08:58] or your locale has changed
[19:09:10] what does running locale return?
[19:09:36] (also: on which server is this?)
[19:09:41] UTF-8 in all, except LC_ALL=, which has nothing
[19:10:08] if you start python and run the command print u"ä"?
[19:10:49] and, last but not least... what is the exact error you got? UnicodeEncodeError / cannot encode character using codec 'ASCII'?
[19:11:09] it prints the correct ä
[19:12:38] the error is like: UnicodeEncodeError: 'ascii' codec can't encode character u'\xe1' in position 144: ordinal not in range(128)
[19:13:25] how are you running the script?
[19:13:28] Sun Grid Engine execd on willow is OK: medium-sol@willow OK: longrun-sol@willow OK
[19:13:50] from qcronsub
[19:13:51] I'm not sure what the locale will be using cronsub
[19:14:49] qcronsub seems to have another bug: when given parameters like username="Alch Bot", the param passed to the script is just Alch
[19:15:12] it must have the "_" explicit
[19:15:17] Alchimista: and you're calling cronsub from cron, I'm guessing?
[19:15:23] yep
[19:16:01] have you changed that recently?
[19:16:26] in any case - if you're running from cron, the locale is C
[19:16:30] which explains your problems
[19:17:11] if you run the script using LC_ALL="en_US.UTF-8" python script.py it should work (but you'll have to use a wrapper shell script for that)
[19:17:13] well, I haven't changed it :S
[19:19:14] you were relying on undocumented behaviour
[19:19:34] essentially - you were lucky it magically worked
[19:20:25] strange, all of them had been working quite fine, until last week
[19:21:11] that's what typically happens with undocumented behaviour
[19:24:17] RAID on adenia is CRITICAL: CHECK_NRPE: Socket timeout after 30 seconds.
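[Editor's note] The diagnosis above is that cron runs jobs with the C locale, so Python's stdout codec falls back to ASCII and print raises UnicodeEncodeError on non-ASCII text. Besides the LC_ALL wrapper-script fix suggested in the log, the script can encode explicitly so its output no longer depends on the locale at all. A minimal sketch in Python 3 terms (the scripts in the log were Python 2, where the same idea is spelled sys.stdout.write(text.encode('utf-8'))):

```python
import sys


def uprint(text):
    # Encode explicitly to UTF-8 bytes and write to the underlying byte
    # stream, bypassing the locale-derived text codec (under cron, LC_ALL
    # is typically "C", which implies an ASCII stdout encoding).
    sys.stdout.buffer.write(text.encode("utf-8") + b"\n")


# u'\xe1' ("á") is the character from the traceback quoted in the log;
# this works even when the locale codec would reject it.
uprint(u"Mar\xe9chal \xe1")
```

Pywikibot's wikipedia.output(), which the user switched to, solves the same problem by handling the encoding itself rather than trusting the stream's default codec.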
[19:32:59] RAID on daphne is CRITICAL: ERROR - TOTAL: 2: FAILED: 0: DEGRADED: 1
[19:32:59] SMF on willow is CRITICAL: ERROR - maintenance: svc:/network/puppetmasterd:default
[19:33:28] FMA on yarrow is CRITICAL: ERROR - unexpected output from snmpwalk
[19:33:28] SMF on turnera is CRITICAL: ERROR - offline: svc:/system/cluster/scsymon-srv:default
[19:33:28] SMF on damiana is CRITICAL: ERROR - maintenance: svc:/network/ldap/client:default
[19:33:58] /aux0 on hemlock is WARNING: DISK WARNING - free space: /aux0 467855 MB (8% inode=44%):
[19:36:30] Sun Grid Engine execd on willow is WARNING: medium-sol@willow exceedes load threshold: alarm hl:np_load_short=0.553223/1.8, alarm hl:np_load_avg=0.642090/2.3, alarm hl:mem_free=191.000000M/300M, alarm hl:available=1/0: longrun-sol@willow exceedes load threshold: alarm hl:np_load_short=0.553223/1.9, alarm hl:np_load_long=0.671387/2.25, alarm hl:mem_free=191.000000M/200M, alarm hl:available=1/0
[19:41:29] Sun Grid Engine execd on willow is OK: medium-sol@willow OK: longrun-sol@willow OK
[19:44:48] Sun Grid Engine execd on ortelius is WARNING: short-sol@ortelius exceedes load threshold: alarm hl:np_load_short=1.451172/1.10, alarm hl:np_load_long=0.839844/1.55, alarm hl:mem_free=22006.000000M/300M, alarm hl:available=1/0: medium-sol@ortelius exceedes load threshold: alarm hl:np_load_short=1.451172/1.00, alarm hl:np_load_long=0.839844/1.50, alarm hl:mem_free=22006.000000M/300M, alarm hl:available=1/0
[19:51:47] Sun Grid Engine execd on ortelius is OK: short-sol@ortelius OK: medium-sol@ortelius OK
[19:54:48] Sun Grid Engine execd on ortelius is WARNING: short-sol@ortelius exceedes load threshold: alarm hl:np_load_short=1.285156/1.10, alarm hl:np_load_long=1.008789/1.55, alarm hl:mem_free=22200.000000M/300M, alarm hl:available=1/0: medium-sol@ortelius exceedes load threshold: alarm hl:np_load_short=1.285156/1.00, alarm hl:np_load_long=1.008789/1.50, alarm hl:mem_free=22200.000000M/300M, alarm hl:available=1/0
[20:23:57] RAID on adenia is OK: OK - TOTAL: 2: FAILED: 0: DEGRADED: 0
[20:33:17] RAID on daphne is CRITICAL: ERROR - TOTAL: 2: FAILED: 0: DEGRADED: 1
[20:33:17] SMF on willow is CRITICAL: ERROR - maintenance: svc:/network/puppetmasterd:default
[20:33:28] SMF on turnera is CRITICAL: ERROR - offline: svc:/system/cluster/scsymon-srv:default
[20:33:28] SMF on damiana is CRITICAL: ERROR - maintenance: svc:/network/ldap/client:default
[20:33:37] FMA on yarrow is CRITICAL: ERROR - unexpected output from snmpwalk
[20:34:57] /aux0 on hemlock is WARNING: DISK WARNING - free space: /aux0 467047 MB (8% inode=44%):
[20:38:48] Sun Grid Engine execd on ortelius is WARNING: short-sol@ortelius exceedes load threshold: alarm hl:np_load_short=0.867188/1.10, alarm hl:np_load_long=1.611328/1.55, alarm hl:mem_free=22440.000000M/300M, alarm hl:available=1/0: medium-sol@ortelius exceedes load threshold: alarm hl:np_load_short=0.867188/1.00, alarm hl:np_load_long=1.611328/1.50, alarm hl:mem_free=22440.000000M/300M, alarm hl:available=1/0
[20:40:48] Sun Grid Engine execd on ortelius is OK: short-sol@ortelius OK: medium-sol@ortelius OK
[20:47:29] Sun Grid Engine execd on willow is WARNING: medium-sol@willow exceedes load threshold: alarm hl:np_load_short=0.656250/1.8, alarm hl:np_load_avg=0.678711/2.3, alarm hl:mem_free=203.000000M/300M, alarm hl:available=1/0
[20:49:27] Sun Grid Engine execd on willow is OK: medium-sol@willow OK: longrun-sol@willow OK
[20:49:47] Sun Grid Engine execd on ortelius is WARNING: short-sol@ortelius exceedes load threshold: alarm hl:np_load_short=1.335938/1.10, alarm hl:np_load_long=1.446289/1.55, alarm hl:mem_free=22094.000000M/300M, alarm hl:available=1/0: medium-sol@ortelius exceedes load threshold: alarm hl:np_load_short=1.335938/1.00, alarm hl:np_load_long=1.446289/1.50, alarm hl:mem_free=22094.000000M/300M, alarm hl:available=1/0
[20:50:48] Sun Grid Engine execd on ortelius is OK: short-sol@ortelius OK: medium-sol@ortelius OK
[21:02:28] Sun Grid Engine execd on willow is WARNING: medium-sol@willow exceedes load threshold: alarm hl:np_load_short=0.908691/1.8, alarm hl:np_load_avg=0.828125/2.3, alarm hl:mem_free=222.000000M/300M, alarm hl:available=1/0
[21:33:18] RAID on daphne is CRITICAL: ERROR - TOTAL: 2: FAILED: 0: DEGRADED: 1
[21:33:28] SMF on willow is CRITICAL: ERROR - maintenance: svc:/network/puppetmasterd:default
[21:33:37] FMA on yarrow is CRITICAL: ERROR - unexpected output from snmpwalk
[21:33:38] SMF on damiana is CRITICAL: ERROR - maintenance: svc:/network/ldap/client:default
[21:33:48] SMF on turnera is CRITICAL: ERROR - offline: svc:/system/cluster/scsymon-srv:default
[21:34:58] /aux0 on hemlock is WARNING: DISK WARNING - free space: /aux0 466260 MB (8% inode=44%):
[21:44:48] Sun Grid Engine execd on ortelius is WARNING: medium-sol@ortelius exceedes load threshold: alarm hl:np_load_short=1.079101/1.00, alarm hl:np_load_long=0.751953/1.50, alarm hl:mem_free=22219.000000M/300M, alarm hl:available=1/0
[21:45:49] Sun Grid Engine execd on ortelius is OK: short-sol@ortelius OK: medium-sol@ortelius OK
[21:47:38] Sun Grid Engine execd on willow is WARNING: medium-sol@willow exceedes load threshold: alarm hl:np_load_short=0.696289/1.8, alarm hl:np_load_avg=0.720703/2.3, alarm hl:mem_free=216.000000M/300M, alarm hl:available=1/0
[21:48:37] Sun Grid Engine execd on willow is OK: medium-sol@willow OK: longrun-sol@willow OK
[21:48:46] Dispenser?
[21:49:05] Akoopal: yes?
[21:49:16] region in GeoHack
[21:49:23] what is that exactly used for?
[21:50:48] It's an obsolete hack for selecting a sub-template with more relevant mapping services
[21:51:29] ahh, so you can provide UK-specific mapping services
[21:51:36] for example
[21:52:38] Sun Grid Engine execd on willow is WARNING: medium-sol@willow exceedes load threshold: alarm hl:np_load_short=0.595215/1.8, alarm hl:np_load_avg=0.689941/2.3, alarm hl:mem_free=195.000000M/300M, alarm hl:available=1/0: longrun-sol@willow exceedes load threshold: alarm hl:np_load_short=0.595215/1.9, alarm hl:np_load_long=0.697266/2.25, alarm hl:mem_free=195.000000M/200M, alarm hl:available=1/0
[21:53:29] that still sounds possibly useful
[21:54:38] at nl we do not have such a thing
[21:56:56] hmm, in the en template I see a whole bunch listed on the main template, it seems?
[21:59:19] It's not as useful as it appears. The coverage of a mapping service isn't well defined by country boundaries.
[22:01:03] In any case, GeoHack usually auto-detects the country based on location and inserts/removes the appropriate sections
[22:04:43] ok
[22:11:40] valhallasw: I've made a bash script, and with LC_ALL="en_US.UTF-8" it now works fine
[22:11:56] at least one of the tools
[22:24:17] RAID on adenia is CRITICAL: CHECK_NRPE: Socket timeout after 30 seconds.
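[Editor's note] For context on the exchange above: the region field is one of the underscore-separated fields in the coordinate parameter string that GeoHack parses, carrying an ISO 3166 country code (e.g. region:GB). A small sketch of building such a link; the Toolforge base URL is the service's current home, not its Toolserver-era address, and both it and the exact parameter layout are assumptions here rather than facts from the log:

```python
def geohack_url(lat, lon, region=None):
    """Build a GeoHack link from preformatted coordinate strings
    (e.g. "51.5_N", "0.12_W").

    The optional region: field carries an ISO 3166 country code, which
    historically selected a country-specific block of mapping services;
    per the log, GeoHack now mostly infers the country from the
    coordinates themselves.
    """
    params = "%s_%s" % (lat, lon)
    if region:
        params += "_region:%s" % region
    # Base URL is an assumption (current Toolforge location of GeoHack).
    return "https://geohack.toolforge.org/geohack.php?params=" + params


print(geohack_url("51.5_N", "0.12_W", region="GB"))
```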
[22:33:28] SMF on willow is CRITICAL: ERROR - maintenance: svc:/network/puppetmasterd:default
[22:33:37] FMA on yarrow is CRITICAL: ERROR - unexpected output from snmpwalk
[22:33:37] SMF on damiana is CRITICAL: ERROR - maintenance: svc:/network/ldap/client:default
[22:33:47] RAID on adenia is OK: OK - TOTAL: 2: FAILED: 0: DEGRADED: 0
[22:34:17] RAID on daphne is CRITICAL: ERROR - TOTAL: 2: FAILED: 0: DEGRADED: 1
[22:34:47] SMF on turnera is CRITICAL: ERROR - offline: svc:/system/cluster/scsymon-srv:default
[22:35:07] /aux0 on hemlock is WARNING: DISK WARNING - free space: /aux0 465505 MB (8% inode=44%):
[22:43:57] Sun Grid Engine execd on ortelius is WARNING: short-sol@ortelius exceedes load threshold: alarm hl:np_load_short=1.146484/1.10, alarm hl:np_load_long=0.753906/1.55, alarm hl:mem_free=22368.000000M/300M, alarm hl:available=1/0: medium-sol@ortelius exceedes load threshold: alarm hl:np_load_short=1.146484/1.00, alarm hl:np_load_long=0.753906/1.50, alarm hl:mem_free=22368.000000M/300M, alarm hl:available=1/0
[22:44:58] Sun Grid Engine execd on ortelius is OK: short-sol@ortelius OK: medium-sol@ortelius OK
[23:09:03] Sun Grid Engine execd on ortelius is WARNING: short-sol@ortelius exceedes load threshold: alarm hl:np_load_short=1.173828/1.10, alarm hl:np_load_long=0.869140/1.55, alarm hl:mem_free=22368.000000M/300M, alarm hl:available=1/0: medium-sol@ortelius exceedes load threshold: alarm hl:np_load_short=1.173828/1.00, alarm hl:np_load_long=0.869140/1.50, alarm hl:mem_free=22368.000000M/300M, alarm hl:available=1/0
[23:13:51] Sun Grid Engine execd on willow is WARNING: medium-sol@willow exceedes load threshold: alarm hl:np_load_short=0.804199/1.8, alarm hl:np_load_avg=0.899902/2.3, alarm hl:mem_free=265.000000M/300M, alarm hl:available=1/0
[23:14:52] Sun Grid Engine execd on willow is OK: medium-sol@willow OK: longrun-sol@willow OK
[23:17:02] Sun Grid Engine execd on ortelius is OK: short-sol@ortelius OK: medium-sol@ortelius OK
[23:33:34] SMF on willow is CRITICAL: ERROR - maintenance: svc:/network/puppetmasterd:default
[23:33:42] FMA on yarrow is CRITICAL: ERROR - unexpected output from snmpwalk
[23:33:42] SMF on damiana is CRITICAL: ERROR - maintenance: svc:/network/ldap/client:default
[23:34:35] RAID on daphne is CRITICAL: ERROR - TOTAL: 2: FAILED: 0: DEGRADED: 1
[23:34:53] SMF on turnera is CRITICAL: ERROR - offline: svc:/system/cluster/scsymon-srv:default
[23:35:35] /aux0 on hemlock is WARNING: DISK WARNING - free space: /aux0 464802 MB (8% inode=44%):
[23:48:52] Sun Grid Engine execd on willow is WARNING: medium-sol@willow exceedes load threshold: alarm hl:np_load_short=0.714356/1.8, alarm hl:np_load_avg=0.785644/2.3, alarm hl:mem_free=254.000000M/300M, alarm hl:available=1/0
[23:49:52] Sun Grid Engine execd on willow is OK: medium-sol@willow OK: longrun-sol@willow OK