[06:23:46] why does the wiki try to log you in if u read a page via google cache or archiveweb?
[08:19:19] andre__: something weird here, LuisVilla is being added and removed from this task by obscure forces https://phabricator.wikimedia.org/T76158#1042997
[08:20:08] looks like you edited the task description three times. and once removed Luis.
[08:20:09] I've tried removing the "Luis Villa (Luis _personal_ - no legal bugs please)" account, but failed
[08:20:21] That's what it says, but it's all lies :)
[08:20:55] I touched CC only once, to remove that account. The rest was done by phabricator
[08:39:21] andre__: Ah, you managed to remove it, nice.
[08:39:54] yeah, somehow worked without problems here :-/
[15:43:39] [[Tech]]; Danny lost; /* bibleversefinder on labs */; https://meta.wikimedia.org/w/index.php?diff=11298692&oldid=11290849&rcid=5952951
[15:49:25] [[Tech]]; Technical 13; /* bibleversefinder on labs */ re; https://meta.wikimedia.org/w/index.php?diff=11298738&oldid=11298692&rcid=5952959
[17:27:27] does anyone know of a JavaScript module that I can import for IE compatibility stuff? e.g. Array.indexOf, Object.keys
[17:27:36] or do I need to write those fallbacks myself
[17:28:31] MusikAnimal: what version of IE? JS is disabled in IE6 and IE7
[17:28:39] IE8
[17:28:51] Perhaps we should just disable that as well?
[17:28:57] which amazingly doesn't have Array.indexOf or Object.keys
[17:29:00] I'm okay with that!
[17:29:02] It's not really supported in MediaWiki extensions anyway
[17:29:25] MusikAnimal: yes
[17:29:36] So should I not bother? this is for a script I'm working on. IE9+ work fine
[17:29:36] MusikAnimal: es5-shim
[17:31:15] MatmaRex: so can I import that like mw.loader.load("es5-shim") or does it not work like that
[17:31:54] MusikAnimal: yes. better to do mw.loader.using("es5-shim").done(function(){ /* your code here */ }), your version will not wait for it to load before running the rest of the code
[17:34:01] yep, that's how i'm doing it for jStorage, but the syntax is mw.loader.using( 'jStorage', function() { /* code */ })
[17:34:09] instead of using .done
[17:35:26] and does .using load the module too? so I don't have to do mw.loader.load as well as mw.loader.using?
[17:39:01] ^ MatmaRex
[17:39:25] MusikAnimal: yes
[17:39:49] MusikAnimal: both syntaxes wirk, with callback and with .done(), the second one is newer (added ~a year ago)
[17:39:52] work*
[17:46:07] Ok going to give this a try. Thank you for the help! :)
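For readers following along, here is a minimal sketch of the two loading styles compared above. It assumes the code runs on a MediaWiki page where the `mw` object is available; the bodies of the callbacks are placeholders, not MusikAnimal's actual script.

```javascript
// Promise-style form (newer): mw.loader.using() returns a promise that
// resolves once the es5-shim module (Array.prototype.indexOf, Object.keys,
// etc. for IE8) has been loaded and executed.
mw.loader.using( 'es5-shim' ).done( function () {
	// Placeholder for code that relies on ES5 methods.
	console.log( Object.keys( { a: 1, b: 2 } ) );
} );

// Callback form (older), equivalent in effect:
mw.loader.using( 'es5-shim', function () {
	console.log( [ 'x', 'y' ].indexOf( 'y' ) );
} );

// In both cases mw.loader.using() loads the module itself, so a separate
// mw.loader.load( 'es5-shim' ) call is not needed.
```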
[18:50:46] Is wmflabs.org down?
[18:54:57] Excirial: which part of it?
[18:55:16] Pretty much everything i can think of to test.
[18:55:27] https://tools.wmflabs.org/ loads
[18:55:42] http://huggle.wmflabs.org/
[18:55:45] For example
[18:55:59] http://zh.wikipedia.beta.wmflabs.org/wiki/special:version (Cannot access the database: Can't connect to MySQL server on '10.68.16.193' (4) (10.68.16.193))
[18:56:10] intense.wmflabs.org/ doesn't load
[18:56:21] Probably related to "!log cold-migrating all instances from virt1005 to virt1012"
[18:56:22] And i believe the Wikimedia-operations chat log also refuses
[18:56:58] ?
[18:56:58] Oh yes, here we are: http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-operations/?C=M;O=D
[18:57:19] I was hoping to go through that log to see if anyone already mentioned something but well... ^^
[18:57:22] https://lists.wikimedia.org/pipermail/labs-l/2015-February/003384.html
[18:57:38] Why try to parse IRC logs when there is a helpful announcement :)
[18:58:43] Well, the #wikimedia-operations topic pointed me to those lovely logs. But an announcement is indeed easier to parse. ;)
[18:59:38] Suppose that for now i am going to attempt Huggling without a whitelist (Good lord, how many edits are there?)
[19:00:40] Oh and thanks Nemo - at least i won't have to cast suspicious glances at my firewall update now
[20:00:17] T13|mobile: you wrap it in something with the class 'nopopups'
[20:02:32] Sweet.
[20:02:52] Took me a minute to figure out what you were replying to.
[20:03:46] Does it pick the thumb for the page based on wikitext or html?
[20:04:59] If the latter, I'm wondering if I can manipulate and 'set' my image per page with a template.
[21:23:38] I'm debating on applying for a WMF job listed on the 'work with us' page, and I'm wondering if WMF ever runs background checks on staff or identified users. I've nothing to hide, just curious if it is done.
[21:24:41] sikrit
[21:25:59] MaxSem: for me? Huh?
[21:26:18] T13|mobile, out of curiosity, which position?
[21:27:38] Not sure yet.
[21:28:30] I haven't looked recently to see what's currently listed. Just curious in general.
[21:30:24] I've had it suggested to me a few times I'd be good for this position or that.
[21:30:28] WMF is just a branch of NSA, so they don't need any additional check on you
[21:31:08] Nemo_bis: :D I love your humor. Everyone knows there is No Such Agency.
[21:33:03] as I recall, something about US law requires criminal background checks for staff of non-profit organizations, but I have zero details
[21:34:25] T13|mobile, in general: we might
[21:35:04] Okay. ;)
[21:36:02] T13|mobile: very true! In fact, a careful digital restoration of https://commons.wikimedia.org/wiki/File:Wikimedia-servers-Sept04.jpg , after years of work and avant-garde research on prospective errors correction, revealed a sticker "There is No Such Agency" applied on the second Wikipedia server. Sadly, an overzealous oversighter hid the reupload.
[21:37:02] They passed one such law recently in Italy.
[21:37:29] As usual, the parliament wrote down something impossibly broad, so a later ministry decree provided an "interpretation" that basically nullifies the law.
[21:37:59] So in the end no criminal checks are done by non-profits? But nobody quite knows.
[21:40:21] See Nemo_bis, I 'can' have a sense of humor. :p
[21:44:03] Oh, MaxSem got a minute to talk about typoscan?
[21:44:16] oh
[21:44:20] right:P
[21:44:21] It doesn't seem to be working properly.
[21:44:23] wassup?
[21:44:38] Toollabs:awb/typoscan
[21:44:59] "working properly"
[21:45:10] There's been no list to work from for year
[21:45:10] Shows over 100% complete and some odd negative numbers
[21:45:14] *years
[21:45:53] What do I need to do to revive the project? I've had a few users ask me if i could
[21:46:14] And i said i would need to look into it.
[21:46:26] Someone needs to build a list of articles to scan
[21:46:38] Do I need to talk to mags directly? Where's the source?
[21:46:44] get a Windoze machine with a badass CPU, downoal a dump, wait a few days for a scan to complete
[21:47:01] People say they are clicking on it but it never completes
[21:47:07] Are they being impatient?
[21:47:11] MaxSem: Even with a hex core and 24GB ram I found it wasn't working quick enough to be worthwhile to leave the machine on for it
[21:47:17] Reeeedy
[21:47:21] Play some PC games
[21:47:26] wtf, I consistently make at least one typo per sentence today
[21:47:35] :D
[21:47:45] You should get a game on Gog
[21:48:06] IIRC it wasn't even using as much CPU capacity as it should
[21:48:09] Reedy, run typo profiler and throw slow rules te fuck away?
[21:48:12] What if I set it all up on my university's server (with permission of course) to run at off-peak times?
[21:48:17] lol.
[21:48:25] T13|mobile: How would that help?
[21:48:36] Of course AWB is optimised for cluster operations
[21:48:48] It was design goal #0
[21:48:58] I'm guessing the biggest issue is the end users being impatient
[21:49:05] the scan supports multiple cores, actually
[21:49:10] So... remove the human factor.
[21:49:22] KILL ALL HUMANS
[21:49:28] +3
[21:49:34] Right, but it didn't seem to be utilising enough CPU time imho
[21:49:39] Heyyy.. wait a minute..
[21:49:43] um?
[21:49:55] Most of the total cpu was idle on my desktop
[21:49:59] Reedy, last time I touched it it ate CPU quite well:P
[21:50:05] you broke it!
[21:50:07] It's an XML file that's fucking 10s of GB large
[21:50:12] Of course it's gonna be slow
[21:50:26] I wonder how big an enwiki dump comes to nowadays
[21:51:03] well, maybe the main scan thread isn't parsing the XML fast enough with all these SSDs
[21:51:09] Page histories were like 10.8GB last I saw
[21:51:10] lol
[21:51:12] probably
[21:51:15] T13|mobile: Compressed?
[21:51:22] Think so
[21:51:24] MaxSem: Yeah, I think I tried on an SSD
[21:51:29] Compressed isn't a useful metric
[21:51:49] It's only like 1% compression though
[21:51:58] there was a buffer size, try bumping it to 10k or something:P
[21:52:12] Or maybe i read it wronh
[21:52:15] Possible
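The scan MaxSem describes boils down to streaming the decompressed dump and testing each line of page text against a list of typo rules. The sketch below is not AWB's actual TypoScan code: it is a minimal, single-threaded Node.js illustration, and the two rules plus the assumption that each `<title>` element sits on its own line are made up for the example.

```javascript
// Minimal sketch, not AWB's TypoScan: stream a decompressed pages-articles
// dump and report regex typo-rule hits per page title. Run it as, e.g.:
//   bunzip2 -c enwiki-20141208-pages-articles.xml.bz2 | node typoscan-sketch.js
'use strict';
const readline = require( 'readline' );

// Illustrative rules only; the real tool ships its own rule list.
const rules = [
	{ name: 'teh -> the', re: /\bteh\b/i },
	{ name: 'recieve -> receive', re: /\brecieve\b/i }
];

let currentTitle = null;
let hits = 0;

const rl = readline.createInterface( { input: process.stdin, crlfDelay: Infinity } );

rl.on( 'line', function ( line ) {
	// Track which page the following text lines belong to.
	const titleMatch = line.match( /<title>(.*?)<\/title>/ );
	if ( titleMatch ) {
		currentTitle = titleMatch[ 1 ];
		return;
	}
	for ( const rule of rules ) {
		if ( rule.re.test( line ) ) {
			hits++;
			console.log( currentTitle + '\t' + rule.name );
		}
	}
} );

rl.on( 'close', function () {
	console.error( 'Total rule hits: ' + hits );
} );
```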
[21:52:53] The current dump extracted to many times the size
[21:55:07] Less than 10 times IIRC
[21:56:07] Can someone make a Special:Dump page that makes it easier to get dump files? Yes, I know... Submit a pull request... *sigh*
[21:56:29] Currently, such special page would need to embed a time machine
[21:57:02] Sorry Nemo_bis. That's still in beta.
[21:57:10] Can I get the dump files from 2020 if it does?
[21:57:58] Depends on how much power is available in Texas
[21:58:27] None, they shut down for 2" of snow...
[21:58:41] Thanks Obama
[21:58:50] That's what happens when you stop oil pipelines
[22:05:09] Danny_B: I think you know, how big is the XML inside enwiki-*-pages-articles.xml.bz2 nowadays?
[22:05:38] I remember you uncompressed such archives on Toolserver. :)
[22:11:53] When it was 9.1GB it extracted to 42GB
[22:11:56] apparently
[22:12:46] it's 10.8GB now
[22:13:03] ~49.8GB?
[22:20:37] Nemo_bis: yeah, i remember about 42g
[22:44:45] time bunzip2 -c /public/dumps/public/enwiki/20141208/enwiki-20141208-pages-articles.xml.bz2 | wc -c → 50893281967 (real 37m13.474s)
[22:45:06] About 47 GiB
[22:45:21] Very close, reedy :)
[22:48:45] 109 minutes...
[22:49:47] to download?
[22:50:43] Yep
[22:51:28] 15Mbs at home... 20MBs at school... maybe I should DL at school next time...
[22:51:30] :p
[22:51:50] Doesn't matter really, dumps.wikimedia.org download speeds are like bingo in this period
[22:52:30] Can happen to be 50 KiB/s or 50 MB/s from same place
[22:54:02] Need to reDL AWB too it seems.
[22:54:18] Don't have it on my desktop, just laptop
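A quick back-of-the-envelope check on the sizes and timings quoted above; treating "15Mbs" as 15 Mbit/s is an assumption, not something stated in the log.

```javascript
// Sanity checks on the figures quoted in the conversation above.
const uncompressedBytes = 50893281967;       // from `bunzip2 -c ... | wc -c`
console.log( uncompressedBytes / 2 ** 30 );  // ~47.4 GiB, matching "About 47 GiB"
console.log( 10.8 * ( 42 / 9.1 ) );          // ~49.8 GB, the extrapolation from the 9.1 GB -> 42 GB ratio
console.log( ( 10.8e9 * 8 ) / 15e6 / 60 );   // ~96 min to fetch a 10.8 GB dump at an assumed 15 Mbit/s,
                                             // roughly in line with the "109 minutes" download
```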