[00:47:20] RECOVERY - Puppet freshness on db1009 is OK: puppet ran at Sun Jun 15 00:47:12 UTC 2014
[01:03:40] PROBLEM - Puppet freshness on db1006 is CRITICAL: Last successful Puppet run was Sat 14 Jun 2014 22:02:46 UTC
[01:03:40] RECOVERY - Puppet freshness on db1006 is OK: puppet ran at Sun Jun 15 01:03:37 UTC 2014
[01:08:40] PROBLEM - Puppet freshness on ms-be1001 is CRITICAL: Last successful Puppet run was Sat 14 Jun 2014 19:07:17 UTC
[01:19:30] PROBLEM - Disk space on palladium is CRITICAL: DISK CRITICAL - free space: / 1415 MB (3% inode=50%):
[02:12:40] PROBLEM - Puppet freshness on stat1003 is CRITICAL: Last successful Puppet run was Fri 13 Jun 2014 20:03:25 UTC
[02:15:49] !log LocalisationUpdate completed (1.24wmf8) at 2014-06-15 02:14:46+00:00
[02:15:58] Logged the message, Master
[02:27:06] !log LocalisationUpdate completed (1.24wmf9) at 2014-06-15 02:26:03+00:00
[02:27:11] Logged the message, Master
[02:59:28] !log LocalisationUpdate ResourceLoader cache refresh completed at Sun Jun 15 02:58:21 UTC 2014 (duration 58m 20s)
[02:59:32] Logged the message, Master
[03:16:19] (PS1) Gerrit Patch Uploader: [do not merge] Grant 'centralauth-rename' right to stewards [operations/mediawiki-config] - https://gerrit.wikimedia.org/r/139655
[03:16:21] (CR) Gerrit Patch Uploader: "This commit was uploaded using the Gerrit Patch Uploader [1]." [operations/mediawiki-config] - https://gerrit.wikimedia.org/r/139655 (owner: Gerrit Patch Uploader)
[04:04:30] PROBLEM - puppetmaster backend https on palladium is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 8141: HTTP/1.1 500 Internal Server Error
[04:09:40] PROBLEM - Puppet freshness on ms-be1001 is CRITICAL: Last successful Puppet run was Sat 14 Jun 2014 19:07:17 UTC
[04:27:30] RECOVERY - Disk space on palladium is OK: DISK OK
[04:27:30] RECOVERY - puppetmaster backend https on palladium is OK: HTTP OK: Status line output matched 400 - 336 bytes in 0.046 second response time
[05:13:40] PROBLEM - Puppet freshness on stat1003 is CRITICAL: Last successful Puppet run was Fri 13 Jun 2014 20:03:25 UTC
[07:10:40] PROBLEM - Puppet freshness on ms-be1001 is CRITICAL: Last successful Puppet run was Sat 14 Jun 2014 19:07:17 UTC
[08:14:40] PROBLEM - Puppet freshness on stat1003 is CRITICAL: Last successful Puppet run was Fri 13 Jun 2014 20:03:25 UTC
[08:17:40] PROBLEM - Puppet freshness on db1009 is CRITICAL: Last successful Puppet run was Sun 15 Jun 2014 05:17:14 UTC
[08:33:40] PROBLEM - Puppet freshness on db1006 is CRITICAL: Last successful Puppet run was Sun 15 Jun 2014 05:33:14 UTC
[08:33:40] RECOVERY - Puppet freshness on db1006 is OK: puppet ran at Sun Jun 15 08:33:37 UTC 2014
[08:47:00] RECOVERY - Puppet freshness on db1009 is OK: puppet ran at Sun Jun 15 08:46:57 UTC 2014
[10:02:19] !log nuked ms-be1001 sdj with zeros, reformatting and placing into production again
[10:02:23] Logged the message, Master
[10:02:30] RECOVERY - Puppet freshness on ms-be1001 is OK: puppet ran at Sun Jun 15 10:02:24 UTC 2014
[10:45:46] paravoid: are you aware of what seems like image scaling outage?
[10:55:30] matanya: what's the number of the RT ticket about it, mentioned above ? 00.25 < bawolff> I filed an RT ticket
[10:55:52] 7693 Nemo_bis
[10:57:22] oh, now i see the bz ticket and the IRC scrollback, and also SAL
[11:15:40] PROBLEM - Puppet freshness on stat1003 is CRITICAL: Last successful Puppet run was Fri 13 Jun 2014 20:03:25 UTC
[12:12:42] (CR) Nemo bis: "No, summarize is probably the best solution. For instance this graph gives error with logbase, but not with summarize." [operations/puppet] - https://gerrit.wikimedia.org/r/117021 (https://bugzilla.wikimedia.org/41754) (owner: Nemo bis)
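[Editor's sketch, for context on the summarize-vs-logbase remark above: a minimal Graphite render request that buckets a series with summarize() instead of plotting it on a logarithmic axis. The metric name and time range are placeholders, not taken from this log.]

    # Placeholder metric; summarize() averages the series into 1-hour buckets,
    # the approach Nemo bis says works where the logBase render parameter errors out.
    curl -s 'https://graphite.wikimedia.org/render?target=summarize(some.metric,"1hour","avg")&from=-1day&format=png' -o graph.png
    # The alternative being discussed would keep the raw series and add &logBase=10.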
[13:02:56] hi
[13:03:51] can i start an upload with gwtoolset now ? about 40 files of 60 MB
[13:05:51] tounoki: you're not likely to get a reply quick during weekend :(
[13:07:44] (PS1) Matanya: mirror: lint [operations/puppet] - https://gerrit.wikimedia.org/r/139681
[13:08:04] tounoki: that wouldn't be a good idea
[13:08:17] it seems there is an issue with image scaling
[13:09:03] moreover, it is better to do so when ops are around (i.e. not Sunday)
[13:11:56] actually it's better, he can't overload the imagescalers because they're not doing anything :P
[13:12:07] go go go go
[13:13:35] so i do... 5 min less to stop me !
[13:14:33] Nemo_bis: Well he can add to the backlog :p
[13:14:47] JohnLewis: there isn't a job queue or something
[13:14:50] <_joe|away> Nemo_bis: sorry, I was under the impression that was resolved, reading the backlog
[13:14:59] Nemo_bis: meh k
[13:15:18] <_joe|away> do you have any idea why no job is going on there?
[13:15:28] _joe|away: no idea, I didn't see the bug nor I have access to rt
[13:15:33] I was just trusting matanya
[13:16:40] _joe|away: RT #7693
[13:16:42] <_joe|away> so, first of all it's just the video scalers
[13:16:53] <_joe|away> matanya: yeah looking at it
[13:19:30] <_joe|away> job loops running and no apparent error, investigating further
[13:21:58] so ? finally ?
[13:24:48] <_joe|away> tounoki: it's not related to imagescalers ATM, but it's sunday and yes, it's better if you wait until tomorrow
[13:25:20] <_joe|away> I'm here looking into another outage, I don't have time to follow that as well
[13:26:01] ok thanks
[13:28:28] <_joe|away> Nemo_bis: do you have any idea where the logs from videoscalers would be? fluorine?
[13:29:19] _joe|away: fluorine is my best guess.
[13:29:46] It is the log server afaik
[13:30:32] <_joe|away> Ok, I'm 99% sure it's related to one of the scap runs that happened between 23:00 UTC on the 12th and a couple of hours later
[13:30:40] <_joe|away> I just want to find the smoking gun
[13:33:51] ori ran scap an hour after that time IIRC. Only scap there was according to SAL
[13:34:07] <_joe|away> yes I was looking at that as a candidate
[13:36:29] _joe|away: any fatals ?
[13:38:26] <_joe|away> matanya: I cannot find the log for the jobs on the videoscalers, sorry
[13:38:30] <_joe|away> hold on though
[13:38:36] runJobs.log
[13:38:42] reedy@fluorine:/a/mw-log/archive$ zgrep -c tmh runJobs.log-2014061*
[13:38:42] runJobs.log-20140610.gz:347
[13:38:42] runJobs.log-20140611.gz:1526
[13:38:42] runJobs.log-20140612.gz:472
[13:38:43] runJobs.log-20140613.gz:751
[13:38:46] runJobs.log-20140614.gz:19
[13:39:09] There's nothing in the currently active runJobs.log
[13:39:21] I guess the 0615 rotated will be empty
[13:39:26] runJobs.log-20140615.gz:50
[13:39:28] Or not
[13:39:49] heh, false positives
[13:40:05] Reedy: more info? :)
[13:40:12] 2014-06-14 16:22:13 mw1008 enwiktionary: cirrusSearchLinksUpdate moirtmhéadracht addedLinks=array(0) removedLinks=array(0) t=260 good
[13:40:16] <_joe|away> Reedy: it should be there?
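[Editor's sketch of why the zgrep counts above were misleading: "tmh" also matches inside ordinary page titles such as "moirtmhéadracht", so anchoring the pattern to the host field of a runJobs log line is safer. The log path is the one shown above; the exact pattern is an assumption.]

    # Count only lines whose host field starts with "tmh1" (the videoscalers),
    # rather than any line containing the substring "tmh".
    zgrep -c ' tmh1' /a/mw-log/archive/runJobs.log-2014061*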
[13:40:18] I should've grepped for tmh1
[13:40:26] _joe|away: Only if they're trying to actually do anything
[13:41:06] <_joe|away> Reedy: it seems to read info from a source from time to time, but let me dive into the logs
[13:41:30] zgrep tmh1 runJobs.log-20140615.gz is empty
[13:41:38] <_joe|away> yes
[13:43:02] <_joe|away> Reedy: I confirm that logs from the videoscalers disappeared
[13:43:06] <_joe|away> let me check the timing
[13:43:17] Has anyone tried restarting the mw-job-runner daemon on tmh100* ?
[13:43:35] <_joe|away> Reedy: I did not, it *is* working apparently
[13:43:47] <_joe|away> so, first I'd like to understand as much as possible
[13:44:03] <_joe|away> Reedy: I straced it and I've seen no foul play
[13:44:10] tmh seem to be idle from ganglia
[13:44:32] load around 6% is uncommons for them
[13:44:41] *uncommon
[13:44:45] <_joe|away> matanya: they're not processing anything
[13:44:58] <_joe|away> the jobs are clearly either not getting queued or fail consistently
[13:45:06] <_joe|away> right now, I'd say the former
[13:45:17] so the service is up, but nothing happening, interesting
[13:45:30] <_joe|away> Reedy: last job running at 2014-06-13 00:31:15
[13:45:32] hmm
[13:45:38] I wonder if there's anything in the actual job queue
[13:45:46] As if there isn't, they won't do anything
[13:45:51] <_joe|away> Reedy: we may have that on graphite
[13:45:52] And that's not their fault
[13:45:58] anything in the jobqueue
[13:46:00] <_joe|away> Reedy: that's exactly my hypothesis
[13:46:16] commons
[13:46:17] webVideoTranscode: 713 queued; 106 claimed (0 active, 106 abandoned); 0 delayed
[13:46:37] <_joe|away> ...
[13:46:52] So there is work for them
[13:46:57] so there are jobs waiting, but no one seems to process them
[13:47:01] <_joe|away> so maybe they're queued somewhere the videoscalers do not find them?
[13:47:13] <_joe|away> Reedy: let me try to restart the script on one server
[13:47:16] <_joe|away> just in case
[13:47:19] Unless they're looking at the wrong redis server or something
[13:47:28] But then the whole job queue would be broken
[13:47:33] <_joe|away> I doubt it will change anything
[13:48:09] Watching the output of the jobs loop will tell what it thinks they are doing
[13:49:11] <_joe|away> so, on fluorine?
[13:49:19] nope, locally on said machine
[13:49:31] stop the daemon, run the process manually for a while
[13:50:02] tmh1001
[13:50:38] <_joe|away> !log restarted mw-job-runner on tmh1001
[13:50:43] Logged the message, Master
[13:50:43] <_joe|away> matanya: what?
[13:50:44] I don't think we log to file
[13:50:50] <_joe|away> Reedy: we don't
[13:50:52] <_joe|away> sigh
[13:50:58] <_joe|away> SNAFU
[13:51:19] can you restart a service on tmh1001 and see what happens ?
[13:51:49] He just did
[13:52:08] oh
[13:52:35] <_joe|away> and guess what?
[13:52:38] <_joe|away> nothing happened
[13:52:58] <_joe|away> so, next step is running manually from the command line, but give me 5 mins, sorry
[13:53:25] <_joe|away> Reedy: can you check what has been released around the time processing stopped?
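[Editor's sketch: the "webVideoTranscode: 713 queued ..." line above is the per-type summary that MediaWiki's showJobs.php maintenance script prints when run with --group; the mwscript wrapper and the commonswiki database name are assumptions about the local setup, not taken from this log.]

    # Hypothetical invocation on a maintenance host:
    mwscript showJobs.php --wiki=commonswiki --group
    # Output lines look like:
    #   webVideoTranscode: 713 queued; 106 claimed (0 active, 106 abandoned); 0 delayed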
[13:54:20] _joe|away: the only thing i saw related to images is : https://gerrit.wikimedia.org/r/#/c/139276/1/ApiQueryPageImages.php
[13:54:45] but that is related to API, so doesn't seem relevant
[13:55:23] Found it
[13:55:36] <_joe|away> Reedy: :)
[13:55:42] 2014-06-15 13:55:18 webVideoTranscode File:Japanese_classic_rifle.ogv transcodeMode=derivative transcodeKey=360p.ogv STARTING
[13:55:42] File:Japanese classic rifle.ogv: Source not found
[13:55:43] o/
[13:56:14] sudo -u apache /usr/local/bin/jobs-loop.sh -t 14400 webVideoTranscode
[13:56:34] Pipeline full (5 immediate sub-processes)...
[13:56:50] It's actually converting now though
[13:56:57] 22846 apache 39 19 839m 63m 4584 R 183 0.4 1:00.65 avconv
[13:56:57] 22874 apache 39 19 807m 39m 4056 R 144 0.2 0:47.62 avconv
[13:56:57] 22851 apache 39 19 817m 45m 4056 R 124 0.3 0:42.47 avconv
[13:56:57] 22891 apache 39 19 130m 22m 4416 R 100 0.1 0:31.48 ffmpeg2theora
[13:56:57] 22888 apache 39 19 124m 21m 4436 R 98 0.1 0:31.57 ffmpeg2theora
[13:57:09] <_joe|away> Reedy: where is that?
[13:57:14] <_joe|away> tmh1001 or 02
[13:57:20] tmh1001
[13:57:29] <_joe|away> ok so what I do not understand is
[13:57:47] <_joe|away> why my restart did not work?
[13:58:11] <_joe|away> Reedy: let me check how is that process running
[13:58:47] <_joe|away> Reedy: you did not launch it with -v0
[13:59:03] <_joe|away> -v 0 sorry
[13:59:06] What's -v 0?
[13:59:11] <_joe|away> no idea
[13:59:17] heh
[13:59:25] <_joe|away> but that's how it gets launched by the init script
[13:59:26] <_joe|away> :)
[13:59:41] <_joe|away> Reedy: literally the first time I log onto those servers, sorry
[14:00:40] <_joe|away> Ok I'm going to look at the sources
[14:02:30] PROBLEM - Disk space on palladium is CRITICAL: DISK CRITICAL - free space: / 1420 MB (3% inode=50%):
[14:02:35] 2014-06-15 14:02:20 webVideoTranscode File:U-Bahn_Wien._De_Friedensbrücke_a_Stadtpark_(U4)-q5nMuXp73Tk.webm transcodeMode=derivative transcodeKey=480p.webm t=367771 good
[14:04:21] that's an oversight on my part while deploying aaron's changes to job-loop, videoscaler puppet manifest uses -v 0 in extra_args which was removed
[14:04:43] so it's a noop?
[14:04:52] <_joe|away> godog: oh ok, so we just need to change that?
[14:05:44] yes it is now, it should be enough to empty extra_args from puppet yes
[14:06:15] <_joe|away> godog: ok, on it
[14:06:27] <_joe|away> in the meantime, palladium has a full root...
[14:07:01] thanks, happy to do it as well, I should have trusted my spider senses
[14:07:59] <_joe|away> godog: almost did that
[14:08:17] <_joe|away> and thanks for confirming that
[14:09:35] no problem, happy to +2 and apologies :(
[14:13:05] looking at palladium, I'm going to add 20G for good measure
[14:13:27] (PS1) Giuseppe Lavagetto: videoscalers: remove extra args [operations/puppet] - https://gerrit.wikimedia.org/r/139682
[14:13:51] <_joe|away> oh so you will online-resize the root partition?
[14:14:08] yes
[14:14:15] (CR) Reedy: [C: 1] videoscalers: remove extra args [operations/puppet] - https://gerrit.wikimedia.org/r/139682 (owner: Giuseppe Lavagetto)
[14:14:19] <_joe|away> it should be completely ok, yes
[14:14:30] RECOVERY - Disk space on palladium is OK: DISK OK
[14:14:37] success!
[14:14:45] (CR) Giuseppe Lavagetto: [C: 2] videoscalers: remove extra args [operations/puppet] - https://gerrit.wikimedia.org/r/139682 (owner: Giuseppe Lavagetto)
[14:14:54] The filesystem on /dev/mapper/palladium--vg-root is now 15007744 blocks long.
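[Editor's sketch of the palladium root-filesystem grow referenced above. The volume group and LV names are inferred from the /dev/mapper/palladium--vg-root path in the resize2fs output; the exact commands run are not recorded in this log.]

    # Assuming the LV behind / has free extents available in its volume group:
    lvextend -L +20G /dev/palladium-vg/root     # grow the logical volume by 20 GB
    resize2fs /dev/mapper/palladium--vg-root    # grow the mounted ext filesystem online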
[14:14:57] happy days
[14:15:16] !log extended palladium root partition by +20G
[14:15:21] Logged the message, Master
[14:15:40] _joe|away: will you write the outage report ?
[14:16:40] PROBLEM - Puppet freshness on stat1003 is CRITICAL: Last successful Puppet run was Fri 13 Jun 2014 20:03:25 UTC
[14:17:12] <_joe|away> Reedy: running puppet, then restarting the job runner on both videoscalers
[14:17:19] sweet :)
[14:17:22] <_joe|away> please turn off your instance on tmh1001
[14:17:27] I already have
[14:19:50] <_joe|away> quite interestingly, a puppet run changed nothing
[14:19:59] <_joe|away> how's that even possible?
[14:20:22] was manually fixed in the init script perhaps?
[14:21:09] <_joe|away> d'oh
[14:21:20] <_joe|away> no godog I am tired
[14:21:45] <_joe|away> and just forgot to puppet-merge
[14:22:33] <_joe|away> matanya: I will, tomorrow
[14:22:41] <_joe|away> I'll just close the ticket for now
[14:22:46] thank you :)
[14:23:37] <_joe|away> matanya: ikea furniture waits for me!
[14:23:59] go ahead, build them! :D
[14:25:15] <_joe|away> laters!
[14:26:17] !log Job runners were restarted on tmh100[12] and are now processing jobs
[14:26:22] Logged the message, Master
[14:29:09] (PS1) Matanya: rt: retab [operations/puppet] - https://gerrit.wikimedia.org/r/139683
[14:40:37] (PS1) Matanya: salt: lint [operations/puppet] - https://gerrit.wikimedia.org/r/139684
[15:52:42] (PS1) Yuvipanda: [WIP] toollabs: Create mongo accounts for all tool users [operations/puppet] - https://gerrit.wikimedia.org/r/139685
[15:52:54] scfc_de: ^ WIP patch
[16:16:43] YuviPanda: Coren uses rather consistently /etc/wmflabs-project to determine the prefix. This has the added bonus that the scripts work on Toolsbeta as well :-). You don't chown the file yet, do you? (I'll post this on the change as well.)
[16:17:00] scfc_de: no, but I've hit a brick wall
[16:17:07] scfc_de: mongodb pre-allocates about 200M per database
[16:17:20] so autocreating a db per user meant I ran out of space before all users were done :|
[16:17:53] :-)
[16:18:13] Do you need to precreate the databases or can you just create the users with the rights to create their own?
[16:18:28] scfc_de: I'm investigating that, but my reading of the docs so far is 'no'
[16:20:39] And something self-service? A daemon on tools-mongo or (properly reviewed) sudo rule that allows individual tools to create databases?
[16:20:51] (PS2) Yuvipanda: [WIP] toollabs: Create mongo accounts for all tool users [operations/puppet] - https://gerrit.wikimedia.org/r/139685
[16:21:04] (CR) Tim Landscheidt: [WIP] toollabs: Create mongo accounts for all tool users (2 comments) [operations/puppet] - https://gerrit.wikimedia.org/r/139685 (owner: Yuvipanda)
[16:22:14] scfc_de: yeah, let me add the chown too
[16:23:42] (PS3) Yuvipanda: [WIP] toollabs: Create mongo accounts for all tool users [operations/puppet] - https://gerrit.wikimedia.org/r/139685
[16:23:48] scfc_de: either way, this one can at least be used for the postgres user creation bits
[16:23:55] scfc_de: and possibly port the mysql one to this as well
[16:24:09] scfc_de: have you seen *that* script? it's the kind of perl I don't like, very bashy
[16:25:55] YuviPanda: Hmmm. Don't remember. operations/software?
[16:26:06] scfc_de: that or somewhere in openstack?
[16:26:07] let me check
[16:26:47] scfc_de: hahah https://gerrit.wikimedia.org/r/#/c/135445/
[16:26:49] unmerged
[16:27:35] scfc_de: it's that kinda perl script I don't fully like
[16:27:39] no docs either
[16:28:22] just 'treat everything as a string and munge it around till it works'
[16:28:34] (PS3) Yuvipanda: Labs: puppetize replica-addusers [operations/puppet] - https://gerrit.wikimedia.org/r/135445 (owner: coren)
[16:28:46] (CR) Yuvipanda: "Did you mean to +2 this, Coren? :)" [operations/puppet] - https://gerrit.wikimedia.org/r/135445 (owner: coren)
[16:30:26] scfc_de: mongodb might have to not be around for a while, it looks like. You can turn off pre-allocation, but it is 'not fit for production' if it is turned off
[16:30:52] Well, I cede to you that Python would make it look more structured :-).
[16:31:10] scfc_de: :) I bet you can make it look more structured in perl too if you want.
[16:31:17] but I've no idea what 'idiomatic perl' looks like
[16:31:29] scfc_de: the python script I just put up does pretty much the same thing, with mongo instead of mysql
[16:34:11] That depends. Previously, I'd rather used "s/\n//" than "chomp" to save cycles and space; nowadays I've grown fond of: "If there's a module to do something, use it even if you don't use 90 % of all methods." And that makes scripts already look much "cleaner".
[16:34:59] scfc_de: oh yeah, CPAN
[16:35:16] scfc_de: but of course, CPAN is harder to access when you're writing puppet
[16:35:18] unless it's packaged
[16:35:20] which I guess most are
[16:37:26] YuviPanda: That's usually the threshold for me: If it's available in Fedora/Ubuntu/whatever, there's no point in not using it. I tend to avoid manual installs for anything "important".
[16:39:00] scfc_de: makes sense. OTOH, the python script didn't need modules for anything other than db connectivity
[17:08:44] akosiaris: any idea at what point we can start handing out postgres user accounts to everyone? I think I've a script almost ready
[17:08:47] re: toollabs
[17:14:42] (CR) Yuvipanda: [C: -1] "Not going to work on Mongo anymore, hit a few showstopper bugs/'missing-features'. Code will still be useful for postgres, though." [operations/puppet] - https://gerrit.wikimedia.org/r/139685 (owner: Yuvipanda)
[17:17:40] PROBLEM - Puppet freshness on stat1003 is CRITICAL: Last successful Puppet run was Fri 13 Jun 2014 20:03:25 UTC
[17:25:35] (PS4) Yuvipanda: [WIP] toollabs: Create mongo accounts for all tool users [operations/puppet] - https://gerrit.wikimedia.org/r/139685
[17:44:11] !log hoo Synchronized php-1.24wmf8/extensions/Wikidata/: Touched various JavaScripts (duration: 00m 09s)
[17:44:15] Logged the message, Master
[18:56:10] PROBLEM - graphite.wikimedia.org on tungsten is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:00:40] PROBLEM - Puppet freshness on db1007 is CRITICAL: Last successful Puppet run was Sun 15 Jun 2014 16:00:19 UTC
[19:05:00] RECOVERY - graphite.wikimedia.org on tungsten is OK: HTTP OK: HTTP/1.1 200 OK - 1607 bytes in 0.004 second response time
[20:00:30] RECOVERY - Puppet freshness on db1007 is OK: puppet ran at Sun Jun 15 20:00:23 UTC 2014
[20:18:40] PROBLEM - Puppet freshness on stat1003 is CRITICAL: Last successful Puppet run was Fri 13 Jun 2014 20:03:25 UTC
[20:19:48] (PS3) Ori.livneh: apache: include mod_{filter,access_compat,version} by default [operations/puppet] - https://gerrit.wikimedia.org/r/138846
[20:20:08] (CR) Ori.livneh: [C: 2] apache: include mod_{filter,access_compat,version} by default [operations/puppet] - https://gerrit.wikimedia.org/r/138846 (owner: Ori.livneh)
[20:24:36] (PS1) Ori.livneh: add apache::mod::filter, omitted from I5abf01810 [operations/puppet] - https://gerrit.wikimedia.org/r/139757
[20:24:56] (PS2) Ori.livneh: add apache::mod::filter, omitted from I5abf01810 [operations/puppet] - https://gerrit.wikimedia.org/r/139757
[20:25:01] (CR) Ori.livneh: [C: 2 V: 2] add apache::mod::filter, omitted from I5abf01810 [operations/puppet] - https://gerrit.wikimedia.org/r/139757 (owner: Ori.livneh)
[20:30:06] Deskana|Away: ping
[21:22:38] does operations take care of everything in operations/ like operations/mediawiki-config
[21:23:08] yeah
[21:26:42] Withoutaname: Yes
[21:36:20] PROBLEM - SSH on lvs1002 is CRITICAL: Server answer:
[21:37:20] RECOVERY - SSH on lvs1002 is OK: SSH OK - OpenSSH_5.9p1 Debian-5ubuntu1.4 (protocol 2.0)
[23:19:40] PROBLEM - Puppet freshness on stat1003 is CRITICAL: Last successful Puppet run was Fri 13 Jun 2014 20:03:25 UTC
[23:23:08] Hi. Can anyone here do password resets for users who cannot access email?
[23:24:25] hello
[23:24:42] [01:09] hello [01:10] I need some help [01:10] i have an account [01:10] http://hu.wiktionary.org/wiki/Szerkeszt%C5%91:Dubaduba [01:10] but i forgot my password [01:10] for it [01:11] i didnt use that account for a couple of years now [01:11] you don't have SUL... [01:11] so i tried reset password [01:11] with the foll
[23:25:06] so i tried reset password [01:11] with the following email [01:11] fc2user@yahoo.com [01:12] which i have [01:12] written on page [01:12] http://hu.wiktionary.org/wiki/Szerkeszt%C5%91:Dubaduba [01:12] but got no email
[23:25:28] Dubaduba can confirm that (s)he owns that account by sending an email from the address listed on the user page https://hu.wiktionary.org/w/index.php?title=Szerkesztő:Dubaduba&oldid=160440
[23:26:36] ..is this the right channel?
[23:26:46] as i remember my password was the same as my username on wiki
[23:26:46] probably
[23:26:50] but then
[23:27:03] they changed the software
[23:27:03] and probably someone can, but they're not very likely to be around during weekends
[23:27:10] and couldn't log in afterwards
[23:27:38] so if you can check my password
[23:27:45] it should be dubaduba
[23:27:51] but should be changed
[23:30:29] MatmaRex: what course of action would you recommend? Email office?
[23:41:34] (CR) Withoutaname: [C: 1] Remove wiktionary.wikipedia.org from rewrites as it is not in DNS. [operations/apache-config] - https://gerrit.wikimedia.org/r/92799 (owner: Reedy)