[00:02:12] PROBLEM - Puppet freshness on amssq33 is CRITICAL: Puppet has not run in the last 10 hours
[00:02:12] PROBLEM - Puppet freshness on amslvs4 is CRITICAL: Puppet has not run in the last 10 hours
[00:02:12] PROBLEM - Puppet freshness on amssq36 is CRITICAL: Puppet has not run in the last 10 hours
[00:02:12] PROBLEM - Puppet freshness on amssq44 is CRITICAL: Puppet has not run in the last 10 hours
[00:02:12] PROBLEM - Puppet freshness on amssq39 is CRITICAL: Puppet has not run in the last 10 hours
[00:02:12] PROBLEM - Puppet freshness on amssq51 is CRITICAL: Puppet has not run in the last 10 hours
[00:02:12] PROBLEM - Puppet freshness on amssq55 is CRITICAL: Puppet has not run in the last 10 hours
[00:02:13] PROBLEM - Puppet freshness on amssq54 is CRITICAL: Puppet has not run in the last 10 hours
[00:02:13] PROBLEM - Puppet freshness on amssq60 is CRITICAL: Puppet has not run in the last 10 hours
[00:02:14] PROBLEM - Puppet freshness on amssq59 is CRITICAL: Puppet has not run in the last 10 hours
[00:02:14] PROBLEM - Puppet freshness on amssq53 is CRITICAL: Puppet has not run in the last 10 hours
[00:02:15] PROBLEM - Puppet freshness on maerlant is CRITICAL: Puppet has not run in the last 10 hours
[00:02:15] PROBLEM - Puppet freshness on knsq26 is CRITICAL: Puppet has not run in the last 10 hours
[00:02:16] PROBLEM - Puppet freshness on ssl3004 is CRITICAL: Puppet has not run in the last 10 hours
[00:04:09] PROBLEM - Puppet freshness on amssq32 is CRITICAL: Puppet has not run in the last 10 hours
[00:04:09] PROBLEM - Puppet freshness on amssq45 is CRITICAL: Puppet has not run in the last 10 hours
[00:04:09] PROBLEM - Puppet freshness on amssq47 is CRITICAL: Puppet has not run in the last 10 hours
[00:04:09] PROBLEM - Puppet freshness on amssq48 is CRITICAL: Puppet has not run in the last 10 hours
[00:04:09] PROBLEM - Puppet freshness on knsq27 is CRITICAL: Puppet has not run in the last 10 hours
[00:04:09] PROBLEM - Puppet freshness on knsq28 is CRITICAL: Puppet has not run in the last 10 hours
[00:05:12] PROBLEM - Puppet freshness on amssq42 is CRITICAL: Puppet has not run in the last 10 hours
[00:05:12] PROBLEM - Puppet freshness on amssq34 is CRITICAL: Puppet has not run in the last 10 hours
[00:05:12] PROBLEM - Puppet freshness on cp3002 is CRITICAL: Puppet has not run in the last 10 hours
[00:06:06] PROBLEM - Puppet freshness on amssq37 is CRITICAL: Puppet has not run in the last 10 hours
[00:06:06] PROBLEM - Puppet freshness on amssq57 is CRITICAL: Puppet has not run in the last 10 hours
[00:06:06] PROBLEM - Puppet freshness on knsq18 is CRITICAL: Puppet has not run in the last 10 hours
[00:06:06] PROBLEM - Puppet freshness on knsq19 is CRITICAL: Puppet has not run in the last 10 hours
[00:06:06] PROBLEM - Puppet freshness on knsq16 is CRITICAL: Puppet has not run in the last 10 hours
[00:06:06] PROBLEM - Puppet freshness on ssl3002 is CRITICAL: Puppet has not run in the last 10 hours
[00:06:06] PROBLEM - Puppet freshness on ssl3001 is CRITICAL: Puppet has not run in the last 10 hours
[00:07:09] PROBLEM - Puppet freshness on amssq46 is CRITICAL: Puppet has not run in the last 10 hours
[00:08:12] PROBLEM - Puppet freshness on knsq22 is CRITICAL: Puppet has not run in the last 10 hours
[00:09:06] PROBLEM - Puppet freshness on hooft is CRITICAL: Puppet has not run in the last 10 hours
[00:10:09] PROBLEM - Puppet freshness on amssq43 is CRITICAL: Puppet has not run in the last 10 hours
[00:10:09] PROBLEM - Puppet freshness on amssq61 is CRITICAL: Puppet has not run in the last 10 hours
[00:10:09] PROBLEM - Puppet freshness on nescio is CRITICAL: Puppet has not run in the last 10 hours
[00:11:12] PROBLEM - Puppet freshness on knsq20 is CRITICAL: Puppet has not run in the last 10 hours
[00:26:21] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[00:32:30] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 335 bytes in 9.190 seconds
[01:08:29] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:12:23] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 335 bytes in 4.396 seconds
[01:48:41] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[01:52:35] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 335 bytes in 2.429 seconds
[02:17:36] !log LocalisationUpdate completed (1.19) at Sun Mar 11 02:17:35 UTC 2012
[02:17:41] Logged the message, Master
[02:27:05] PROBLEM - Puppetmaster HTTPS on stafford is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:33:05] RECOVERY - Puppetmaster HTTPS on stafford is OK: HTTP OK HTTP/1.1 400 Bad Request - 335 bytes in 5.326 seconds
[02:54:23] RECOVERY - Puppet freshness on db1022 is OK: puppet ran at Sun Mar 11 02:53:56 UTC 2012
[03:05:56] RECOVERY - Puppet freshness on fenari is OK: puppet ran at Sun Mar 11 03:05:42 UTC 2012
[04:26:25] PROBLEM - Puppet freshness on db1033 is CRITICAL: Puppet has not run in the last 10 hours
[04:30:28] PROBLEM - Puppet freshness on virt4 is CRITICAL: Puppet has not run in the last 10 hours
[04:34:22] PROBLEM - Puppet freshness on virt3 is CRITICAL: Puppet has not run in the last 10 hours
[04:39:19] PROBLEM - Puppet freshness on virt1 is CRITICAL: Puppet has not run in the last 10 hours
[04:49:01] PROBLEM - Puppet freshness on virt2 is CRITICAL: Puppet has not run in the last 10 hours
[06:18:22] What's the ssh username of Wikimedia SVN?
[06:19:43] IWorld: your own username?
[06:19:53] Ah.
[06:20:33] IWorld: who art thou?
[06:20:43] I'm using Putty and TortoiseSvn and the system writes an error "(server sent public key)"
[06:20:56] what makes you think that's an error?
[06:21:22] I can see an error window.
[06:22:00] IWorld: Do you have commit access or do you want anon access?
[06:22:29] I have an Labs LDAP account abd I'm in the users list.
[06:22:37] and
[06:23:04] labs doesn't imply svn
[06:23:19] IWorld: what does this say? plink -ssh -l username svn.wikimedia.org
[06:23:43] err
[06:23:47] on Linux?
[06:23:55] IWorld: no, on windows...
[06:24:06] IWorld: i'm happy to work with linux though ;)
[06:24:14] ah
[06:24:28] * IWorld starts his linux pc
[06:24:52] should be faster to just try that on windows...
[06:25:02] if it's already up
[06:25:05] IWorld: are you on the http://svn.wikimedia.org/users.php list?
[06:25:11] yes
[06:25:26] yes
[06:25:37] but i am too so that's not a good test ;)
[06:27:03] baastion1.pmtpa.wmflabs is so sloooow
[06:27:19] low load avg...
[06:27:27] IWorld: Did you email requesting svn commit access or git?
[06:27:55] no
[06:28:53] who gave you labs access theb? and what was the discussion surrounding that?
[06:29:12] *then
[06:30:36] I had get the labs account for the huggle wa, but I will get the "Ready for git?" status because the huggle svn will be moved to git,
[06:30:42] -"," +"."
[06:31:24] You most likely only have access to labs then for the huggle instance, not our svn
[06:32:02] but I will edit the mediawiki/USERINFO/ ;)
[06:32:17] huh?
[06:32:56] *labs and git
[06:34:01] But the huggle wa svn will be moved to git (Gcode SVN -> Wikimedia Git)
[06:34:23] IWorld: You don't have svn access I believe so you won't be able to commit to our svn's userinfo list
[06:34:31] please expand wa... (everytime you write it)
[06:34:39] jeremyb: web app
[06:34:47] its the new huggle thingy
[06:34:47] p858snake|l: see parenthetical ;)
[06:34:53] Compiling on a Labs instance is a bad idea isnt it... these things are slow tonight
[06:35:18] JRWR: everything's relative...
[06:35:37] IWorld: If it's hosted on GCode SVN atm, you are going to need to bug someone over on the gcode project for huggle for access to that
[06:35:50] JRWR: in particular it depends what you're compiling and how resource hoggy the process is
[06:35:56] php5.4.0
[06:36:03] LDAP account is not SVN access?
[06:36:15] JRWR: and it's a one time thing? and nice'd?
[06:36:33] JRWR: i.e. you can make a .deb and then you don't need to compile again
[06:36:33] well since its a separate instance, it doesnt matter
[06:36:55] anyway, sleep
[06:37:04] Im just waiting for the warnings to start cropping up from nginx-ffuqua-doom1-3
[06:43:38] IWorld: Its the start of the useraccount, it's used for many things, SVN, labs, gerrit/git
[06:47:25] PROBLEM - Squid on brewster is CRITICAL: Connection refused
[06:49:02] p858snake|l: so I can access with the labs account to wikimedia svn?
[06:49:23] IWorld2: If it's configured to allow you, yes
[06:49:35] but out of the box, no
[06:50:15] And how can I get the access? With a mail to WMF?
[06:51:53] IWorld2: read https://www.mediawiki.org/wiki/Requesting_commit_access#Requesting_commit_access
[06:52:15] ok
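
An aside on the plink test suggested above: it simply asks the SVN server to authenticate you, so you can see whether your key is accepted before PuTTY/TortoiseSVN get involved. A minimal sketch of the Linux equivalent ("username" is a placeholder for your shell account name; svn.wikimedia.org is the host from the conversation):

    # Try key authentication verbosely; -v prints the key exchange, which is
    # where PuTTY's "(server sent public key)" style messages come from.
    ssh -v username@svn.wikimedia.org

    # An account with no registered key (or no access at all) typically ends
    # with "Permission denied (publickey)" rather than a protocol error.
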
[06:52:45] looks like the NFS server for labs is craping it self, its dead slow
[06:52:53] crapping*
[06:55:22] RECOVERY - Squid on brewster is OK: TCP OK - 0.003 second response time on port 8080
[06:58:47] JRWR: Ryan_Lane might like to know about that ;)
[06:59:08] * JRWR pokes Ryan_Lane with a very large stick
[07:12:07] which server specifically (got a hostname)?
[07:12:48] ya, i-00000196.pmtpa.wmflabs
[07:13:18] from the random high loads that are being reported from other instances, something is going down
[07:13:19] and hard
[07:14:31] wth... its working fine now.. I think I know what it was
[07:14:35] midnight backups
[07:14:49] bogged down the entire node/nodes
[07:24:55] PROBLEM - Disk space on ms1004 is CRITICAL: DISK CRITICAL - free space: / 0 MB (0% inode=95%): /var/lib/ureadahead/debugfs 0 MB (0% inode=95%):
[07:31:04] PROBLEM - Puppet freshness on mw1110 is CRITICAL: Puppet has not run in the last 10 hours
[07:31:04] PROBLEM - Puppet freshness on mw1020 is CRITICAL: Puppet has not run in the last 10 hours
[07:32:52] RECOVERY - Disk space on ms1004 is OK: DISK OK
[07:33:45] !log on ms1004 the HTCPpurger.log file after rotation was 17 gb, filling the disk. Removed it.
[07:33:49] Logged the message, Master
[07:34:20] well thats not a good thing to have :)
[07:35:55] !log current ls shows 17416851456 2012-03-11 07:34 HTCPpurger.log while current du -sh shows 175M for /var/log. Sparse file that gets rotated badly? lots of leading nulls (many gb worth), why?
[07:35:59] Logged the message, Master
[07:49:09] !log removed current htcp log file, restarted purger, it seems to be logging normally now
[07:49:12] Logged the message, Master
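
The ls-versus-du mismatch logged above is the classic signature of a sparse file: ls reports the apparent size, du reports the blocks actually allocated, and the difference is a "hole" that reads back as NUL bytes. A minimal sketch that reproduces the effect with standard coreutils in any scratch directory:

    # Create a 17 GB apparent-size file with no data blocks behind it.
    truncate -s 17G sparse.log

    # Apparent size: ~17 GB.
    ls -l sparse.log

    # Allocated size: effectively zero.
    du -sh sparse.log

    # A rotation or copy tool that is not sparse-aware reads the hole as
    # literal NULs and writes all 17 GB back out, which is how rotating
    # such a log can fill a disk.

That is consistent with the "lots of leading nulls" observation: a writer that seeks far into the file (for instance after a botched rotation) leaves a hole at the front.
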
[08:02:48] PROBLEM - Apache HTTP on srv278 is CRITICAL: Connection refused
[08:08:03] PROBLEM - Lighttpd HTTP on dataset2 is CRITICAL: Connection refused
[08:12:48] apergos: any eta when dump servers will be accessible again?
[08:12:58] huh?
[08:13:22] Connecting to dumps.wikimedia.org|208.80.152.185|:80... failed: Connection refused.
[08:13:37] ah
[08:13:41] neither wget nor browser
[08:13:53] in 2 minutes
[08:14:45] !log restarted lighttp on dataset2
[08:14:49] Logged the message, Master
[08:16:00] RECOVERY - Lighttpd HTTP on dataset2 is OK: HTTP OK HTTP/1.0 200 OK - 4903 bytes in 0.061 seconds
[08:19:09] RECOVERY - Apache HTTP on srv278 is OK: HTTP OK - HTTP/1.1 301 Moved Permanently - 1.992 second response time
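
For a "Connection refused" like the dataset2 one above, the triage is the same everywhere: confirm nothing is listening, restart the daemon, confirm it answers again. A rough sketch only; the SysV init script path is an assumption of the era, not taken from the log:

    # Connection refused means the host is reachable but nothing listens there.
    nc -vz dumps.wikimedia.org 80

    # On the affected server, restart the web server...
    sudo /etc/init.d/lighttpd restart

    # ...then verify with a real request from outside.
    curl -sI http://dumps.wikimedia.org/ | head -n 1
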
[09:01:21] PROBLEM - Puppet freshness on db1004 is CRITICAL: Puppet has not run in the last 10 hours
[09:11:15] PROBLEM - Puppet freshness on owa3 is CRITICAL: Puppet has not run in the last 10 hours
[09:12:18] PROBLEM - Puppet freshness on knsq23 is CRITICAL: Puppet has not run in the last 10 hours
[09:12:18] PROBLEM - Puppet freshness on amssq40 is CRITICAL: Puppet has not run in the last 10 hours
[09:13:21] PROBLEM - Puppet freshness on amslvs2 is CRITICAL: Puppet has not run in the last 10 hours
[09:13:21] PROBLEM - Puppet freshness on amssq56 is CRITICAL: Puppet has not run in the last 10 hours
[09:13:21] PROBLEM - Puppet freshness on amssq49 is CRITICAL: Puppet has not run in the last 10 hours
[09:16:21] PROBLEM - Puppet freshness on knsq21 is CRITICAL: Puppet has not run in the last 10 hours
[09:16:21] PROBLEM - Puppet freshness on ms6 is CRITICAL: Puppet has not run in the last 10 hours
[09:16:21] PROBLEM - Puppet freshness on knsq24 is CRITICAL: Puppet has not run in the last 10 hours
[09:16:21] PROBLEM - Puppet freshness on ssl3003 is CRITICAL: Puppet has not run in the last 10 hours
[09:17:15] PROBLEM - Puppet freshness on amssq62 is CRITICAL: Puppet has not run in the last 10 hours
[09:19:21] PROBLEM - Puppet freshness on owa2 is CRITICAL: Puppet has not run in the last 10 hours
[09:19:21] PROBLEM - Puppet freshness on owa1 is CRITICAL: Puppet has not run in the last 10 hours
[09:29:15] PROBLEM - Puppet freshness on ms-be5 is CRITICAL: Puppet has not run in the last 10 hours
[09:49:14] PROBLEM - Puppet freshness on amssq31 is CRITICAL: Puppet has not run in the last 10 hours
[09:49:14] PROBLEM - Puppet freshness on amslvs1 is CRITICAL: Puppet has not run in the last 10 hours
[09:55:14] PROBLEM - Puppet freshness on amslvs3 is CRITICAL: Puppet has not run in the last 10 hours
[09:55:14] PROBLEM - Puppet freshness on amssq50 is CRITICAL: Puppet has not run in the last 10 hours
[09:55:14] PROBLEM - Puppet freshness on amssq35 is CRITICAL: Puppet has not run in the last 10 hours
[09:55:14] PROBLEM - Puppet freshness on amssq38 is CRITICAL: Puppet has not run in the last 10 hours
[09:55:14] PROBLEM - Puppet freshness on amssq41 is CRITICAL: Puppet has not run in the last 10 hours
[09:55:14] PROBLEM - Puppet freshness on cp3001 is CRITICAL: Puppet has not run in the last 10 hours
[09:55:14] PROBLEM - Puppet freshness on knsq17 is CRITICAL: Puppet has not run in the last 10 hours
[09:55:15] PROBLEM - Puppet freshness on amssq52 is CRITICAL: Puppet has not run in the last 10 hours
[09:55:15] PROBLEM - Puppet freshness on amssq58 is CRITICAL: Puppet has not run in the last 10 hours
[09:55:16] PROBLEM - Puppet freshness on knsq29 is CRITICAL: Puppet has not run in the last 10 hours
[09:55:16] PROBLEM - Puppet freshness on knsq25 is CRITICAL: Puppet has not run in the last 10 hours
[10:04:14] PROBLEM - Puppet freshness on amslvs4 is CRITICAL: Puppet has not run in the last 10 hours
[10:04:14] PROBLEM - Puppet freshness on amssq33 is CRITICAL: Puppet has not run in the last 10 hours
[10:04:14] PROBLEM - Puppet freshness on amssq36 is CRITICAL: Puppet has not run in the last 10 hours
[10:04:14] PROBLEM - Puppet freshness on amssq39 is CRITICAL: Puppet has not run in the last 10 hours
[10:04:14] PROBLEM - Puppet freshness on amssq44 is CRITICAL: Puppet has not run in the last 10 hours
[10:04:14] PROBLEM - Puppet freshness on amssq53 is CRITICAL: Puppet has not run in the last 10 hours
[10:04:14] PROBLEM - Puppet freshness on amssq54 is CRITICAL: Puppet has not run in the last 10 hours
[10:04:15] PROBLEM - Puppet freshness on amssq59 is CRITICAL: Puppet has not run in the last 10 hours
[10:04:15] PROBLEM - Puppet freshness on amssq55 is CRITICAL: Puppet has not run in the last 10 hours
[10:04:16] PROBLEM - Puppet freshness on amssq60 is CRITICAL: Puppet has not run in the last 10 hours
[10:04:16] PROBLEM - Puppet freshness on amssq51 is CRITICAL: Puppet has not run in the last 10 hours
[10:04:17] PROBLEM - Puppet freshness on maerlant is CRITICAL: Puppet has not run in the last 10 hours
[10:04:17] PROBLEM - Puppet freshness on knsq26 is CRITICAL: Puppet has not run in the last 10 hours
[10:04:18] PROBLEM - Puppet freshness on ssl3004 is CRITICAL: Puppet has not run in the last 10 hours
[10:05:08] PROBLEM - Puppet freshness on amssq32 is CRITICAL: Puppet has not run in the last 10 hours
[10:05:08] PROBLEM - Puppet freshness on amssq48 is CRITICAL: Puppet has not run in the last 10 hours
[10:05:08] PROBLEM - Puppet freshness on amssq47 is CRITICAL: Puppet has not run in the last 10 hours
[10:05:08] PROBLEM - Puppet freshness on amssq45 is CRITICAL: Puppet has not run in the last 10 hours
[10:05:08] PROBLEM - Puppet freshness on knsq27 is CRITICAL: Puppet has not run in the last 10 hours
[10:05:08] PROBLEM - Puppet freshness on knsq28 is CRITICAL: Puppet has not run in the last 10 hours
[10:07:14] PROBLEM - Puppet freshness on amssq34 is CRITICAL: Puppet has not run in the last 10 hours
[10:07:14] PROBLEM - Puppet freshness on amssq42 is CRITICAL: Puppet has not run in the last 10 hours
[10:07:14] PROBLEM - Puppet freshness on amssq37 is CRITICAL: Puppet has not run in the last 10 hours
[10:07:14] PROBLEM - Puppet freshness on cp3002 is CRITICAL: Puppet has not run in the last 10 hours
[10:07:14] PROBLEM - Puppet freshness on knsq16 is CRITICAL: Puppet has not run in the last 10 hours
[10:07:14] PROBLEM - Puppet freshness on knsq18 is CRITICAL: Puppet has not run in the last 10 hours
[10:07:14] PROBLEM - Puppet freshness on knsq19 is CRITICAL: Puppet has not run in the last 10 hours
[10:07:15] PROBLEM - Puppet freshness on ssl3002 is CRITICAL: Puppet has not run in the last 10 hours
[10:07:15] PROBLEM - Puppet freshness on amssq57 is CRITICAL: Puppet has not run in the last 10 hours
[10:07:16] PROBLEM - Puppet freshness on ssl3001 is CRITICAL: Puppet has not run in the last 10 hours
[10:09:11] PROBLEM - Puppet freshness on amssq46 is CRITICAL: Puppet has not run in the last 10 hours
[10:10:14] PROBLEM - Puppet freshness on knsq22 is CRITICAL: Puppet has not run in the last 10 hours
[10:10:14] PROBLEM - Puppet freshness on hooft is CRITICAL: Puppet has not run in the last 10 hours
[10:11:08] PROBLEM - Puppet freshness on amssq43 is CRITICAL: Puppet has not run in the last 10 hours
[10:11:08] PROBLEM - Puppet freshness on amssq61 is CRITICAL: Puppet has not run in the last 10 hours
[10:11:08] PROBLEM - Puppet freshness on nescio is CRITICAL: Puppet has not run in the last 10 hours
[10:13:14] PROBLEM - Puppet freshness on knsq20 is CRITICAL: Puppet has not run in the last 10 hours
[10:36:59] with the current influx of bots creating accounts every day, what scope is there to updating the account creation process?
[10:51:59] sDrewth, confuse bots with more colours?
[10:53:28] I need a sev urgently
[10:53:31] *d
[10:53:37] apergos: ?
[10:54:31] a sysadmin, actually
[10:55:49] matanya: Don't ask if you can ask. Just ask your question and wait for someone who might be able to help you.
[10:56:10] mass spamming from wikipedia.org
[10:56:14] and wikimedia.org
[10:56:41] did someone hack some of our mail servers?
[10:56:56] got some mail headers?
[10:56:59] Can you pastebin the full headers? Probably fake
[10:57:05] I guess so
[10:57:10] just a sec
[10:59:33] ok. spoke to ISP. he will send them to me later on
[10:59:38] thanks guys
[11:00:25] ISP? You're getting the spam right?
[11:00:30] no
[11:00:37] he contacted me
[11:00:59] What isp and why did he contact you?
[11:00:59] matanya: feel welcome to hassle me, I used to be a postmaster in an earlier experience and I lived mail headers
[11:01:29] though any reasonable ISP should be able to work it out, though doesn't surprise me that they don't
[11:02:08] Every ISP should be able to read the headers and find the right abuse contact if it gets out of hand
[11:02:20] they were about to block us
[11:02:39] do you have an actual email of these?
[11:02:41] What ISP is it? Name? AS?
[11:03:28] walla.co.il
[11:03:52] I asked the abuse to send it to me
[11:05:19] but these emails were filtered? Or had they been forwarded to you?
[11:06:00] filtered
[11:06:34] oh it was the same ISP filtering your port #25?
[11:07:13] no
[11:07:20] and they didn't at the end
[11:09:21] sDrewth: pm?
[11:09:40] k
[11:30:00] PROBLEM - Disk space on search1017 is CRITICAL: DISK CRITICAL - free space: /a 1432 MB (1% inode=99%):
[12:12:04] sDrewth: So, where did it come from for real? China? ;-)
[12:12:08] sDrewth: got the headers
[12:13:08] huh? I have been given nothing
[12:13:44] http://pastebin.com/0E9beUGc
[12:15:01] And China it is
[12:15:07] Oh, Korea
[12:15:10] I was close
[12:15:34] where do you see that?
[12:15:45] Received: from 5 ([58.65.120.87])
[12:15:46] and what is the right ip to block?
[12:15:53] oh, see it now
[12:15:57] whois 58.65.120.87 -> Korea
[12:16:17] Your ISP is an idiot if he wanted to start blocking @wikimedia.org emails
[12:17:05] matanya: the Foundation has been informed
[12:17:07] idiot is a very soft word
[12:17:16] thanks vito
[12:17:30] Vito: What would be the point of that? Domain names get faked all the time
[12:17:31] Vito: how exactly? pb?
[12:17:36] idiot is a bit harsh
[12:17:41] can I copypaste these irc log to them?
[12:17:45] totally clueless would be more accurate
[12:17:53] fine by me
[12:17:53] matanya, multichill, sDrewth: do you agree?
[12:18:01] I see thousands of emails with faked emailaddresses hitting spam filters every day
[12:18:05] absolutely, very easy one
[12:18:25] sDrewth: Ok. So his boss is an idiot for putting him in charge of this
[12:18:30] start at the bottom, look for a forged Received: line and follow the SMTP handover
[12:19:02] sDrewth: They didn't even bother adding forged headers
[12:19:09] learned a few new things, thanks guys.
[12:19:16] yep
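
The "start at the bottom" advice is the whole technique: each relay prepends its own Received: header, so the lowest one added by a server you trust names the real submitting host, and anything below that can be forged. A minimal sketch of checking a saved message (message.eml is a placeholder filename):

    # List the relay chain; read it bottom-up and follow the handover.
    grep -n '^Received:' message.eml

    # The suspect hop here was "Received: from 5 ([58.65.120.87])".
    # whois shows who owns the netblock (Korea, per the conversation):
    whois 58.65.120.87

    # A DNSBL lookup (IP octets reversed) shows whether it is a known
    # spam source; any answer at all means the address is listed:
    host 87.120.65.58.zen.spamhaus.org
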
[12:19:40] yay, I am glad I didn't have to do any work
[12:19:49] (I was away cooking when folks pinged me)
[12:20:03] * Bsadowski1 slaps matanya with Joan.
[12:20:08] =P
[12:20:10] j/k
[12:20:33] ouch, someone got abused there
[12:20:37] matanya: http://www.mxtoolbox.com/SuperTool.aspx?action=blacklist%3a58.65.120.87
[12:20:38] is matanya a wet trout?
[12:20:51] hi apergos, and thanks
[12:20:52] no, Joan is I guess
[12:21:13] hello, but I didn't do anything. so you're welcome but not sure for what :-D
[12:21:16] SpamAssassin or tool like that would have scored it really high and just ignored it
[12:21:54] I don't think they have heard of SpamAssassin
[12:21:59] That combined with grey listing keeps a lot crap out
[12:22:18] all they onlt know ms-exchange point and click
[12:22:23] *only
[12:22:23] ouch
[12:22:49] sadly it is very common here
[12:22:53] hm http://de.prototype.wikimedia.org/
[12:22:56] greylisting works
[12:23:16] this remembers me an university using a silly spamfilter which blocked almost every email
[12:23:23] that and blocking all of china
[12:24:07] and use of some good RBLs, either a local copy or an outside copy http://www.robtex.com/ip/58.65.120.87.html
[12:25:04] matanya: no only no mail defences, but clueless manning the keyboards :-/
[12:25:12] not only ...
[12:25:32] what can we do... :(
[12:25:45] apply their foreheads to the desk
[12:25:48] repeatedly
[12:25:53] pebkac
[12:26:10] apergos: I was about to see the same ;-)
[12:26:32] *say
[12:26:36] :-)
[12:28:30] I hate the people who just do whois and send it to *all* the emailaddresses they see.
[12:29:37] Anyway, this is starting to sound too much like my dayjob. Going to do something else now ;-)
[12:30:40] good! have a good day
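
Since SpamAssassin, greylisting and RBLs all come up above, here is a minimal sketch of what that combination looks like wired into Postfix, purely as an illustration (it assumes postgrey listening on its default port 10023, and says nothing about Wikimedia's actual mail setup):

    # Reject clients listed in the Spamhaus zen DNSBL, then hand surviving
    # mail to the postgrey policy server for greylisting.
    postconf -e 'smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination, reject_rbl_client zen.spamhaus.org, check_policy_service inet:127.0.0.1:10023'
    postfix reload

Greylisting works because legitimate MTAs retry after the temporary rejection, while most spam engines fire once and move on.
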
[12:31:50] You too apergos. Something else. Are you in Greece again? Does Greece have some sort of chapter/active community right now?
[12:31:55] multichill: nice, isn't it, it is almost worth an abuse complaint back
[12:32:38] multichill: have I got some pages for you to edit at English Wikisource! ;-)
[12:32:43] nothing like your day job
[12:32:51] Just a very sarcastic email back to make all the other abuse departments have a laugh
[12:33:09] I am in Greece... still
[12:33:18] for a year and a few months)
[12:33:39] there is an active but often fractious community. there have been off again on again discussions about a chapter
[12:33:49] there are external factors that make things harder too
[12:34:25] apergos: Greece is missing at https://commons.wikimedia.org/wiki/Commons:Wiki_Loves_Monuments_2012/Participating_countries . Lots of unemployed people, so they should have time on their hands :P
[12:35:21] Check out https://upload.wikimedia.org/wikipedia/commons/thumb/4/42/Participating_Countries_WLM_2012.svg/2000px-Participating_Countries_WLM_2012.svg.png btw :-)
[12:36:05] multichill, time, but cameras?
[12:37:27] I doubt that's the problem. apergos, does Greece have some sort of official system for heritage?
[12:37:56] they have an official system that borks freedom of panorama
[12:38:11] and a couple years ago we had mass deletions at commons because of it :-(
[12:38:27] Like Italy? That's non-copyright restrictions. We ignore them at Commons
[12:38:36] this is a copyright restriction
[12:38:54] in practice, would anyone sue? probably not. but legally they could
[12:39:03] So my pictures of the Akropolis, Delphi and that sort of places are copyrighted?
[12:39:10] no, those aren't
[12:39:12] although...
[12:39:15] *those* have another issue
[12:39:32] And that is? And what images would be copyrighted?
[12:39:54] that's the cultural protection laws
[12:40:03] protection of cultural heritage or something like that
[12:40:19] there's some fees you have to pay etc
[12:40:37] for noncommercial use on the web you can bypass them but only by an application to the govt
[12:40:52] which, maybe you would eventually get it granted after a few years of waiting
[12:41:13] Oh, right, a bit like Italy. Commons doesn't care about these rules.
[12:41:23] someone did a presentation about this recently, although it was in greek (since targeted at people likely to take those pictures)
[12:41:31] no, but we do
[12:41:44] since we live here, we would be the targets of any legal action
[12:42:06] That's why we didn't do it last year in Italy.
[12:42:11] yeah
[12:42:33] in practice again some people will go ahead and upload the pictures and I will not say anything about it
[12:42:35] Do you know if Turkey has similar rules?
[12:42:41] but it makes it harder to organize something...
[12:42:46] I don't know about Turkey, sorry
[12:43:23] Most ancient Greek sites are in Italy, Greece and Turkey. Two down, might be one left :P
[12:45:23] maybe
[12:45:39] there was a post about the project in the equiv to the village pump, so people do know about it
[12:57:25] apergos: Do you have a link?
[12:58:26] at the post? lemme see
[12:59:12] For our documentation. It's a challenge to keep track of 40+ countries :P
[12:59:16] http://el.wikipedia.org/wiki/%CE%92%CE%B9%CE%BA%CE%B9%CF%80%CE%B1%CE%AF%CE%B4%CE%B5%CE%B9%CE%B1:%CE%91%CE%B3%CE%BF%CF%81%CE%AC/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF_2012/%CE%A6%CE%B5%CE%B2%CF%81%CE%BF%CF%85%CE%AC%CF%81%CE%B9%CE%BF%CF%82#Wiki_Loves_Monuments_2012
[12:59:35] sorry but the url encoding borks the urls every time
[13:00:14] No, it works
[13:00:33] Seems rather inactive judging from the number of replies to topics
[13:00:42] hah
[13:00:46] it's very active
[13:00:53] there just has to be a hot topic
[14:11:38] RECOVERY - Lucene on search1008 is OK: TCP OK - 0.028 second response time on port 8123
[14:13:26] RECOVERY - Lucene on search1015 is OK: TCP OK - 0.027 second response time on port 8123
[14:27:32] PROBLEM - Puppet freshness on db1033 is CRITICAL: Puppet has not run in the last 10 hours
[14:31:35] PROBLEM - Puppet freshness on virt4 is CRITICAL: Puppet has not run in the last 10 hours
[14:35:38] PROBLEM - Puppet freshness on virt3 is CRITICAL: Puppet has not run in the last 10 hours
[14:40:35] PROBLEM - Puppet freshness on virt1 is CRITICAL: Puppet has not run in the last 10 hours
[14:50:38] PROBLEM - Puppet freshness on virt2 is CRITICAL: Puppet has not run in the last 10 hours
[16:15:33] PROBLEM - check_minfraud_secondary on payments3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:15:34] PROBLEM - check_minfraud_secondary on payments1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:15:34] PROBLEM - check_minfraud_secondary on payments4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:15:34] PROBLEM - check_minfraud_secondary on payments2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:22:03] PROBLEM - check_minfraud_secondary on payments1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:22:03] PROBLEM - check_minfraud_secondary on payments4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:22:03] PROBLEM - check_minfraud_secondary on payments3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:22:03] PROBLEM - check_minfraud_secondary on payments2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:25:30] PROBLEM - check_minfraud_secondary on payments2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:25:30] PROBLEM - check_minfraud_secondary on payments1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:25:30] PROBLEM - check_minfraud_secondary on payments3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:25:30] PROBLEM - check_minfraud_secondary on payments4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:30:25] RECOVERY - check_minfraud_secondary on payments4 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 118 bytes in 3.087 second response time
[16:30:34] PROBLEM - check_minfraud_secondary on payments3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:30:34] PROBLEM - check_minfraud_secondary on payments2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:30:34] PROBLEM - check_minfraud_secondary on payments1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:35:30] PROBLEM - check_minfraud_secondary on payments2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:35:30] PROBLEM - check_minfraud_secondary on payments1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:35:30] PROBLEM - check_minfraud_secondary on payments3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:40:28] PROBLEM - check_minfraud_secondary on payments4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:40:28] PROBLEM - check_minfraud_secondary on payments2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:40:28] PROBLEM - check_minfraud_secondary on payments3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:40:28] PROBLEM - check_minfraud_secondary on payments1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:45:33] PROBLEM - check_minfraud_secondary on payments3 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:45:33] PROBLEM - check_minfraud_secondary on payments1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:45:33] PROBLEM - check_minfraud_secondary on payments2 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:45:33] PROBLEM - check_minfraud_secondary on payments4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:50:30] RECOVERY - check_minfraud_secondary on payments2 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 118 bytes in 7.041 second response time
[16:50:30] RECOVERY - check_minfraud_secondary on payments3 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 118 bytes in 8.759 second response time
[16:50:30] PROBLEM - check_minfraud_secondary on payments1 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:50:30] PROBLEM - check_minfraud_secondary on payments4 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[16:55:18] RECOVERY - check_minfraud_secondary on payments1 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 118 bytes in 1.191 second response time
[16:55:18] RECOVERY - check_minfraud_secondary on payments4 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 118 bytes in 0.551 second response time
[17:26:50] Anyone else see service issues with Wikipedia in last hour?
[17:27:10] I am in United States, Phoenix, Arizona.
[17:27:26] I assume I'm routed to Tampa servers.
[17:27:49] I spoke with someone in New Hampshire who had no issues.
[17:28:18] And I spoke to someone in South Carolina who had the same problems as me, basically pages not loading properly at all, saying they were loading but hanging up, for at least half an hour.
[17:28:34] Two people in UK said their access was perfectly normal
[17:29:02] I was browsing Wikimedia status pages
[17:29:08] But found nothing that matched the pattern I saw.
[17:30:01] dendodge in Wikipedia channel referred me to this channel.
[17:31:41] I currently have fully working normal access to Wikipedia again, I was just wondering how that could have taken place from a technical standpoint, for one person in New Hampshire to be able to access normally, but a person in Arizona and a person in South Carolina to be unable to access.
[17:32:09] PROBLEM - Puppet freshness on mw1020 is CRITICAL: Puppet has not run in the last 10 hours
[17:32:09] PROBLEM - Puppet freshness on mw1110 is CRITICAL: Puppet has not run in the last 10 hours
[17:38:46] dendodge suggested that perhaps one of the servers in the cluster was down.
[17:39:09] the US-Tampa cluster.
[17:42:20] Do you have a basic understanding of how the Internet works?
[17:42:39] Someone accessing in New Hampshire goes through a nearly completely different set of tubes than someone accessing in Arizona.
[17:42:50] And if any of those connections has an issue, you'll experience problems.
[17:43:12] It's also about a million times more likely that it's a local issue than a data center being down.
[17:43:17] Not a million, but you get the idea.
[17:43:47] Yes I do have a pretty good understanding of how the internet works. But both me and the fellow in South Carolina had access to every other website on the internet besides Wikipedia.
[17:44:09] Well, it sounds like a conspiracy.
[17:44:12] sounds like a peering issue
[17:46:19] Well to be honest I had conspiracy theories floating through my mind during the time it was not loading, but when it came back online my faith in the goodness of the government and humanity in general was restored lol
[17:47:42] Reading up on peering now lol
[17:53:16] Dunno if this helps, but I was pinging www.wikipedia.org regularly during the issues, and at first maybe two of the four were timing out while the other two came back with replies, for a while. And then later on I was getting 4 timeouts for a while.
[17:54:22] You should learn to traceroute.
[17:54:39] It's more helpful. :-)
[17:55:31] Ok, I'll try that next time I have a server specific access issue.
[17:55:49] tracerouting can see where the peer has lost the connection
[17:56:03] Google had a peering issue out of TX about 2 months ago
[17:56:39] so if I traceroute, I can see where exactly the connection is failing, what step... whether it's closer to my end or the wikipedia end of things, etc?
[17:57:42] I just ran a tracert to www.wikipedia.org, obviously not helpful at this point since my "outage" is over but that would be the thing to do if it happens again with this or any other server but not others, right?
[17:58:07] yep
[17:58:26] Ok, I will remember that, thank you.
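
To make the traceroute advice concrete: a sketch of what to run, and what to look for, the next time one site hangs while the rest of the web works. www.wikipedia.org is the destination from the conversation above.

    # Unix/macOS; the Windows equivalent is "tracert www.wikipedia.org".
    traceroute www.wikipedia.org

    # Healthy output shows one router per hop with round-trip times.
    # If replies stop at, say, hop 7 and every later hop prints "* * *",
    # the path is dying inside that network: the problem is neither your
    # machine nor necessarily the destination.
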
[18:06:28] According to this diagram in the wikipedia article
[18:06:30] http://en.wikipedia.org/wiki/File:AS-interconnection.svg
[18:06:47] peering would seem to be designed either for purposes of redundancy, or shortcutting, or both
[18:12:03] and it is
[18:12:12] but sometimes a router will null route something
[18:12:18] and the entire network is affected
[18:13:57] so I've got 13 hops to the final destination to tracert wikipedia.org, you are maybe saying that at hop 7 a router goes nuts and null routes my request to communicate with wikipedia, so the tracert dies there?
[18:15:06] yep
[18:18:33] Joan: mtr > traceroute
[18:18:57] mzmcbride@gonzo:~$ man mtr
[18:18:57] No manual entry for mtr
[18:18:57] mzmcbride@gonzo:~$ mtr
[18:18:57] -bash: mtr: command not found
[18:19:10] So I guess not.
[18:19:12] Joan: does your OS have a package manager?
[18:19:23] Probably.
[18:19:28] I think I have MacPorts installed.
[18:19:28] ferenc79: you should check out the -t flag for ping. also mtr!
[18:19:47] Anyway, the point is that traceroute is more common.
[18:20:16] mtr is *much* better
[18:20:53] just copied down -t flag, jeremy, looking into mtr
[18:21:10] http://packages.debian.org/mtr https://trac.macports.org/browser/trunk/dports/net/mtr/Portfile
[18:22:12] * jeremyb runs away
[18:23:21] it looks like windows has a pathping utility that has similar functionality to mrt, jeremy
[18:23:29] (yes I have winblows lol)
[18:23:39] mtr
[18:24:19] http://en.wikipedia.org/wiki/PathPing
[18:24:22] does that look adequate
[18:25:43] how that functions
[18:25:44] jeremy?
[18:29:30] hmmm it traced my 13 hops but now it's not updating me with any info, I think it's gonna sit there with no displayed info for 325 seconds while it is "computing statistics for 324 seconds"
[18:29:40] 325 lol
[18:30:29] They're pinging out like flies lol
[18:31:08] I will try WinMTR and see if it's better
[18:34:12] yup took full 325 seconds to give me any more info, trying the other
[18:35:21] oh wow, WinMTR is beautiful, thanks a lot jeremy
[18:35:54] updates constantly as it works
[18:37:32] wow, that's a keeper
[18:44:09] ferenc79: ;)
[18:47:56] thanks again
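
And the mtr variant, since it is recommended so strongly above: mtr keeps re-probing every hop and aggregates per-hop loss, which is exactly what you want for an intermittent problem. A minimal sketch:

    # Install: "apt-get install mtr" on Debian/Ubuntu, "port install mtr"
    # via MacPorts; WinMTR is the Windows port mentioned in the log.
    # Interactive mode redraws its statistics continuously:
    mtr www.wikipedia.org

    # Report mode sends a fixed number of probes and prints one summary:
    mtr --report --report-cycles 100 www.wikipedia.org

    # Reading the table: a hop with high Loss% whose later hops also lose
    # packets marks the real trouble spot; loss that disappears downstream
    # is usually just a router deprioritizing ICMP replies.
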
[19:02:47] PROBLEM - Puppet freshness on db1004 is CRITICAL: Puppet has not run in the last 10 hours
[19:12:50] PROBLEM - Puppet freshness on owa3 is CRITICAL: Puppet has not run in the last 10 hours
[19:13:44] PROBLEM - Puppet freshness on amssq40 is CRITICAL: Puppet has not run in the last 10 hours
[19:13:44] PROBLEM - Puppet freshness on knsq23 is CRITICAL: Puppet has not run in the last 10 hours
[19:14:47] PROBLEM - Puppet freshness on amslvs2 is CRITICAL: Puppet has not run in the last 10 hours
[19:14:47] PROBLEM - Puppet freshness on amssq49 is CRITICAL: Puppet has not run in the last 10 hours
[19:14:47] PROBLEM - Puppet freshness on amssq56 is CRITICAL: Puppet has not run in the last 10 hours
[19:17:47] PROBLEM - Puppet freshness on knsq21 is CRITICAL: Puppet has not run in the last 10 hours
[19:17:47] PROBLEM - Puppet freshness on knsq24 is CRITICAL: Puppet has not run in the last 10 hours
[19:17:47] PROBLEM - Puppet freshness on ms6 is CRITICAL: Puppet has not run in the last 10 hours
[19:17:47] PROBLEM - Puppet freshness on ssl3003 is CRITICAL: Puppet has not run in the last 10 hours
[19:18:24] as I idle more and more in the wikimedia channels, I notice how many servers are spewing warnings or errors, I know I dont have the rights to fix any of them... but the urge is strong
[19:18:50] PROBLEM - Puppet freshness on amssq62 is CRITICAL: Puppet has not run in the last 10 hours
[19:18:59] At a certain size stuff breaks
[19:19:15] how do you measure size?
[19:20:47] PROBLEM - Puppet freshness on owa1 is CRITICAL: Puppet has not run in the last 10 hours
[19:20:47] PROBLEM - Puppet freshness on owa2 is CRITICAL: Puppet has not run in the last 10 hours
[19:25:50] Damianz: it just seems like half of something just shat itself
[19:30:50] PROBLEM - Puppet freshness on ms-be5 is CRITICAL: Puppet has not run in the last 10 hours
[19:35:43] JRWR: a warning is something other than an error and not all "CRITICAL"s by nagios are really critical – for example the CRITICALs above are completely harmless
[19:37:42] indeed
[19:37:55] it's a weekend too, so chances of it causing any issues is even lower
[19:41:09] Good point, but it is crying wolf then...
[19:41:56] Computers don't have the necessary discretion and judgment like humans do to tell the difference between important and unimportant errors.
[19:42:09] {{fact}}
[19:42:40] but it was humans that designed the system, and the warning profiles to match
[19:42:43] {{fact}}
[19:42:45] :P
[19:43:28] not all humans are perfect like you lol
[19:43:51] and I'm not saying I'm perfect
[19:43:59] * JRWR thinks this is going nowhere
[19:43:59] I was just pulling ur tail lol
[19:44:05] :P
[19:44:07] having some fun
[19:44:13] I know
[19:44:15] so was I
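
For the curious, the "Puppet freshness" alerts that dominate this log follow the standard Nagios pattern for "has X happened recently": the check is passive (each successful Puppet run submits a result) and Nagios only complains when no result arrives within a freshness window. A generic sketch of such a service using standard Nagios directives; the file path, host name and command name are placeholders, not Wikimedia's actual configuration:

    cat >> /etc/nagios/conf.d/puppet-freshness.cfg <<'EOF'
    define service {
        use                     generic-service
        host_name               amssq33
        service_description     Puppet freshness
        active_checks_enabled   0       ; results arrive passively
        passive_checks_enabled  1
        check_freshness         1
        freshness_threshold     36000   ; 10 hours, matching the alerts above
        ; run only when no fresh passive result exists; it should flag CRITICAL
        check_command           raise-freshness-critical
    }
    EOF
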
[19:45:09] I gotta bookmark this peering article and read more later, my brain is full lol
[19:46:41] ferenc79: http://en.wikipedia.org/wiki/File:AS-interconnection.svg is wrong. AS1-ASn are not connected to the internet, these are the internet ;-)
[19:47:39] yeah maybe the cloud part should be "backbone"
[19:47:53] not "internet"
[19:48:11] 'rest of the internet'
[19:48:29] Akoopal: Yep
[19:48:56] yeah that makes a little more sense that way
[19:49:17] PROBLEM - Disk space on srv219 is CRITICAL: DISK CRITICAL - free space: / 254 MB (3% inode=61%): /var/lib/ureadahead/debugfs 254 MB (3% inode=61%):
[19:50:04] I gotta stop uploading cat pictures to srv219
[19:50:12] geez!
[19:50:47] PROBLEM - Puppet freshness on amslvs1 is CRITICAL: Puppet has not run in the last 10 hours
[19:50:47] PROBLEM - Puppet freshness on amssq31 is CRITICAL: Puppet has not run in the last 10 hours
[19:55:17] RECOVERY - Disk space on srv219 is OK: DISK OK
[19:56:47] PROBLEM - Puppet freshness on amslvs3 is CRITICAL: Puppet has not run in the last 10 hours
[19:56:47] PROBLEM - Puppet freshness on amssq35 is CRITICAL: Puppet has not run in the last 10 hours
[19:56:47] PROBLEM - Puppet freshness on amssq38 is CRITICAL: Puppet has not run in the last 10 hours
[19:56:47] PROBLEM - Puppet freshness on amssq41 is CRITICAL: Puppet has not run in the last 10 hours
[19:56:47] PROBLEM - Puppet freshness on amssq50 is CRITICAL: Puppet has not run in the last 10 hours
[19:56:47] PROBLEM - Puppet freshness on amssq52 is CRITICAL: Puppet has not run in the last 10 hours
[19:56:48] PROBLEM - Puppet freshness on amssq58 is CRITICAL: Puppet has not run in the last 10 hours
[19:56:48] PROBLEM - Puppet freshness on knsq17 is CRITICAL: Puppet has not run in the last 10 hours
[19:56:49] PROBLEM - Puppet freshness on knsq25 is CRITICAL: Puppet has not run in the last 10 hours
[19:56:49] PROBLEM - Puppet freshness on cp3001 is CRITICAL: Puppet has not run in the last 10 hours
[19:56:50] PROBLEM - Puppet freshness on knsq29 is CRITICAL: Puppet has not run in the last 10 hours
[19:57:27] WE KNOW ALREADY
[20:00:23] welp, see yas
[20:05:47] PROBLEM - Puppet freshness on amslvs4 is CRITICAL: Puppet has not run in the last 10 hours
[20:05:47] PROBLEM - Puppet freshness on amssq36 is CRITICAL: Puppet has not run in the last 10 hours
[20:05:47] PROBLEM - Puppet freshness on amssq39 is CRITICAL: Puppet has not run in the last 10 hours
[20:05:47] PROBLEM - Puppet freshness on amssq33 is CRITICAL: Puppet has not run in the last 10 hours
[20:05:47] PROBLEM - Puppet freshness on amssq53 is CRITICAL: Puppet has not run in the last 10 hours
[20:05:47] PROBLEM - Puppet freshness on amssq51 is CRITICAL: Puppet has not run in the last 10 hours
[20:05:47] PROBLEM - Puppet freshness on amssq44 is CRITICAL: Puppet has not run in the last 10 hours
[20:05:48] PROBLEM - Puppet freshness on amssq54 is CRITICAL: Puppet has not run in the last 10 hours
[20:05:48] PROBLEM - Puppet freshness on amssq55 is CRITICAL: Puppet has not run in the last 10 hours
[20:05:49] PROBLEM - Puppet freshness on amssq59 is CRITICAL: Puppet has not run in the last 10 hours
[20:05:49] PROBLEM - Puppet freshness on amssq60 is CRITICAL: Puppet has not run in the last 10 hours
[20:05:50] PROBLEM - Puppet freshness on knsq26 is CRITICAL: Puppet has not run in the last 10 hours
[20:05:50] PROBLEM - Puppet freshness on maerlant is CRITICAL: Puppet has not run in the last 10 hours
[20:05:51] PROBLEM - Puppet freshness on ssl3004 is CRITICAL: Puppet has not run in the last 10 hours
[20:06:50] PROBLEM - Puppet freshness on amssq45 is CRITICAL: Puppet has not run in the last 10 hours
[20:06:50] PROBLEM - Puppet freshness on amssq47 is CRITICAL: Puppet has not run in the last 10 hours
[20:06:50] PROBLEM - Puppet freshness on amssq48 is CRITICAL: Puppet has not run in the last 10 hours
[20:06:50] PROBLEM - Puppet freshness on amssq32 is CRITICAL: Puppet has not run in the last 10 hours
[20:06:50] PROBLEM - Puppet freshness on knsq28 is CRITICAL: Puppet has not run in the last 10 hours
[20:06:50] PROBLEM - Puppet freshness on knsq27 is CRITICAL: Puppet has not run in the last 10 hours
[20:09:23] PROBLEM - Puppet freshness on amssq34 is CRITICAL: Puppet has not run in the last 10 hours
[20:09:23] PROBLEM - Puppet freshness on amssq37 is CRITICAL: Puppet has not run in the last 10 hours
[20:09:23] PROBLEM - Puppet freshness on cp3002 is CRITICAL: Puppet has not run in the last 10 hours
[20:09:23] PROBLEM - Puppet freshness on amssq57 is CRITICAL: Puppet has not run in the last 10 hours
[20:09:23] PROBLEM - Puppet freshness on amssq42 is CRITICAL: Puppet has not run in the last 10 hours
[20:09:23] PROBLEM - Puppet freshness on knsq18 is CRITICAL: Puppet has not run in the last 10 hours
[20:09:24] PROBLEM - Puppet freshness on knsq19 is CRITICAL: Puppet has not run in the last 10 hours
[20:09:24] PROBLEM - Puppet freshness on ssl3002 is CRITICAL: Puppet has not run in the last 10 hours
[20:09:25] PROBLEM - Puppet freshness on knsq16 is CRITICAL: Puppet has not run in the last 10 hours
[20:09:25] PROBLEM - Puppet freshness on ssl3001 is CRITICAL: Puppet has not run in the last 10 hours
[20:10:35] PROBLEM - Puppet freshness on amssq46 is CRITICAL: Puppet has not run in the last 10 hours
[20:11:38] PROBLEM - Puppet freshness on hooft is CRITICAL: Puppet has not run in the last 10 hours
[20:11:38] PROBLEM - Puppet freshness on knsq22 is CRITICAL: Puppet has not run in the last 10 hours
[20:12:32] PROBLEM - Puppet freshness on amssq61 is CRITICAL: Puppet has not run in the last 10 hours
[20:12:32] PROBLEM - Puppet freshness on nescio is CRITICAL: Puppet has not run in the last 10 hours
[20:12:32] PROBLEM - Puppet freshness on amssq43 is CRITICAL: Puppet has not run in the last 10 hours
[20:14:38] PROBLEM - Puppet freshness on knsq20 is CRITICAL: Puppet has not run in the last 10 hours
[23:11:45] gn8 folks