[01:02:55] Change on mediawiki a page Developer access was modified, changed by Jeremyb link https://www.mediawiki.org/w/index.php?diff=568571 edit summary: all are done [04:10:45] 08/03/2012 - 04:10:45 - Creating a home directory for burthsceh at /export/keys/burthsceh [04:15:47] 08/03/2012 - 04:15:47 - Updating keys for burthsceh at /export/keys/burthsceh [07:54:06] hello [07:55:24] yeahhh gluster 3!! [07:55:31] hashar@deployment-dbdump:~$ gluster --version [07:55:32] glusterfs 3.3.0 built on May 31 2012 10:40:41 [08:04:41] hashar is it better heh? [08:04:43] :P [08:04:53] petan: I need to test out git pull [08:05:01] andrewbogott_afk I suppose you didn't migrate bots yet [08:05:02] will update beta to latest master as well as all extensions [08:05:06] writing doc meanwhile [08:05:07] ok [08:05:18] hashar why do we even use gluster store for these files? [08:05:23] you moved the nfs to /data [08:05:24] ? [08:05:30] I thought we are going to use scap [08:05:39] also, I am not sure I told you, but I am on vacations for 3 weeks starting this evening [08:05:47] Reedy is taking backup [08:05:51] hashar I am on vacations 2 week starting sunday [08:05:57] cooool :-) [08:05:58] :D [08:06:21] so the reason for moving to /data was because nfs-memc instance added an another layer of abstration [08:06:34] and it started to be filling up heavily with all the files / thumbs [08:06:48] + we needed mooooar space for hosting transcoded video [08:07:07] that makes things a bit nicer / easier to understand too [08:10:39] http://en.wikipedia.beta.wmflabs.org/ : Forbidden! [08:10:40] yeahhh [08:11:28] !log deployment-prep Dist upgrading apache32 and 33 and rebooting them [08:11:30] Logged the message, Master. [08:12:18] I need to upgrade all instances [08:12:22] boo [08:12:42] !log deployment-prep dist-upgrading all instances to get the latest GlusterFS version (3.3.0) [08:12:44] Logged the message, Master. [08:13:47] petan: is deployment-feed still of any use ? I can't connect to it [08:13:59] hashar: yes it's for recovery [08:14:06] it contain some data that someone with shell need to recover [08:14:11] okkk [08:14:13] so that we can recreate it [08:15:07] oh that is https://bugzilla.wikimedia.org/show_bug.cgi?id=38611 [08:15:24] petan: can't we just redo the irc config? [08:20:27] petan: I am deleting deployment-nfs-memc , no more use for it, everything got copied / moved to /data/project [08:21:01] It has some weird irc config IIRC [08:26:03] we can't redo it because it's secret and on production [08:26:09] Ryan copied it to instance to /root [08:26:12] now it's corrupted [08:26:29] so I can't redo something I don't have access to [08:26:30] We should just puppetize it then push it up so prod can use it and labs can [08:26:34] [08:26:46] Damianz yes we wanted to do this for squid like 6 months ago, still waiting [08:27:00] Aren't we waiting for the squid configs from prod though?
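(The dist-upgrade being !log'd above boils down to a couple of stock apt commands per instance; a minimal sketch, assuming the GlusterFS 3.3.0 packages are already reachable from the instances' configured apt sources, which the log doesn't show:)

  $ gluster --version          # confirm what the instance is running now
  glusterfs 3.3.0 built on May 31 2012 10:40:41
  $ sudo apt-get update && sudo apt-get dist-upgrade
  $ sudo reboot                # as done for apache32/33 above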
[08:27:04] yes [08:27:05] We actually had the irc config [08:27:05] still [08:27:10] we did [08:27:15] then it corrupted [08:27:18] now we don't [08:27:23] Not sure why it's THAT hard to get the squid configs [08:27:29] Smells like someone doesn't buy into progression [08:27:29] neither I do [08:27:49] I think there are some weird secret data, like Jimbo's secret bank account numbers in squid [08:27:50] Needs some ass kicking to the point that people stop pushing broken changes to prod [08:28:03] they just needed a place no one would look for them [08:28:10] squid was ideal [08:28:39] Anything secret can go in private anyway then just get included, like any block rules etc - don't need to know the ips blocked on prod, just the config option and add a list to secret [08:28:47] It's better for prod long term anyway [08:34:40] stupid Apache does not start :( [08:35:14] petan: squid is going to be phased out I think [08:35:20] I have done some work on using varnish [08:35:28] bits.beta is served by varnish iirc [08:35:37] hm [08:35:38] ok [08:35:40] good news [08:35:55] !log deployment-prep running puppet on Apache32 / 33 [08:35:57] Logged the message, Master. [08:36:28] ohhh [08:36:35] apache is disabled by default great [08:45:19] !log deployment-prep On -dbdump /etc/init.d/udp2log stop && /etc/init.d/udp2log-mw start {{bug|38995}} [08:45:21] Logged the message, Master. [08:48:48] !log deployment-prep making sure perms are correct in /data/project/apache/common-local/php-master : chmod -R g+w . ; chown mwdeploy:svn -R * .* [08:48:50] Logged the message, Master. [09:12:13] !log deployment-dbdump running git gc --aggressive in /home/wikipedia/common/php-master [09:12:13] deployment-dbdump is not a valid project. [09:12:22] !log deployment-prep running git gc --aggressive in /home/wikipedia/common/php-master [09:12:23] Logged the message, Master. [09:12:25] !beta [09:12:50] petan: would it be possible to create a !beta bot alias for !log deployment-prep ? ;-D [09:13:16] !log deployment-prep deleting deployment-bastion, it lacks a DNS entry {{bug|38846}} [09:13:18] Logged the message, Master. [09:29:56] Compressing objects: 32% (122457/373618) [09:29:57] sooo slow [09:49:30] !log deployment-prep On extensions: git submodule foreach git checkout master [09:49:31] Logged the message, Master. [09:53:43] !log deployment-prep Updating mediawiki core to latest master: Updating 0c1471c..d47c1e9 (fast forwarded) [09:53:45] Logged the message, Master. [09:54:01] hashar yes [09:54:15] !beta is !log deployment-prep $* [09:54:16] Key was added [09:54:20] !beta hashar :P [09:54:20] !log deployment-prep hashar :P [09:54:22] Logged the message, Master. [09:54:23] danke [09:54:37] !beta Updating extensions to latest master: git submodule foreach git pull [09:54:37] !log deployment-prep Updating extensions to latest master: git submodule foreach git pull [09:54:39] Logged the message, Master. [09:56:08] !* is $* [09:56:08] Key was added [09:56:19] !* a b c d e f g df g h er hj rtj t jr hs rg s gsd hse gdf dsf e e w tger g [09:56:19] a b c d e f g df g h er hj rtj t jr hs rg s gsd hse gdf dsf e e w tger g [09:56:21] ok [09:56:23] good [09:56:27] !! [09:56:27] an awesome bash trick [09:56:33] :O [09:56:34] !del !! [09:56:40] I think it's some alias [09:56:40] oh no [09:56:44] !! del [09:56:44] Successfully removed ! [09:56:49] !! [09:56:52] :P [09:56:58] I like doing double exclamations marks [09:57:04] !! [09:57:21] I am wondering what bash trick is that [09:57:31] ! 
is used to redo last command [09:57:32] Key was added [09:57:37] so I guess !! produce a loop [09:57:40] ! [09:57:40] used to redo last command [09:58:05] ! del [09:58:05] Successfully removed [09:58:12] ! [09:58:12] There are multiple keys, refine your input: $realm, $site, *, :), access, account, account-questions, accountreq, addresses, afk, alert, amend, ask, b, bang, bastion, beta, blehlogging, blueprint-dns, bot, bots, broken, bug, bz, console, cookies, credentials, cs, damianz, damianz's-reset, db, demon, deployment-beta-docs-1, deployment-prep, docs, documentation, domain, epad, etherpad, extension, forwarding, gerrit, gerritsearch, gerrit-wm, ghsh, git, git-puppet, gitweb, group, hashar, help, hexmode, hyperon, info, initial-login, instance, instance-json, instancelist, instanceproject, keys, labs, labsconf, labsconsole, labsconsole.wiki, labs-home-wm, labs-morebots, labs-nagios-wm, labs-project, leslie's-reset, link, linux, load, load-all, logbot, mac, magic, manage-projects, meh, monitor, morebots, nagios, nagios.wmflabs.org, nagios-fix, newgrp, new-labsuser, new-ldapuser, nova-resource, openstack-manager, origin/test, os-change, osm-bug, pageant, password, pastebin, pathconflict, petan, ping, pl, pong, port-forwarding, project-access, project-discuss, projects, puppet, puppetmaster::self, puppet-variables, putty, pxe, queue, quilt, report, requests, resource, revision, rights, rt, Ryan, ryanland, sal, SAL, security, security-groups, sexytime, socks-proxy, ssh, start, stucked, sudo, sudo-policies, svn, terminology, test, Thehelpfulone, unicorn, whatIwant, whitespace, wiki, wikitech, wikiversity-sandbox, windows, wl, wm-bot, [09:58:15] :P [09:58:19] !! [09:58:33] !say is $* [09:58:34] Key was added [09:58:37] !say Damianz hi [09:58:38] Damianz hi [09:58:45] !say hi | Damianz [09:58:45] Damianz: hi [09:58:47] :D [09:58:53] !say IMMA EAT YOUR BRAINS | petan [09:58:54] petan: IMMA EAT YOUR BRAINS [09:58:56] !say how are you, dude | Damianz [09:58:56] Damianz: how are you, dude [09:59:19] On another note, going back to writing python :P [09:59:21] !say !say hello | wm-bot | wm-bot [09:59:21] wm-bot | wm-bot: !say hello [09:59:43] !sayloop is !say hi [09:59:43] Key was added [09:59:45] can't look it [09:59:46] !sayloop [09:59:46] !say hi [09:59:54] doesn't consider itself as a user [09:59:58] !sayloop del [09:59:58] Successfully removed sayloop [10:00:06] !sayloop2 is !sayloop2 [10:00:06] Key was added [10:00:09] !sayloop2 [10:00:09] !sayloop2 [10:00:16] it's idiot proof [10:00:18] :D [10:00:23] ahh it is smart enough to not react to itself [10:00:37] !sayloop2 del [10:00:37] Successfully removed sayloop2 [10:01:37] petan: since you like bots, would you be interested in setting up a bugzilla bot to replace wikibugs ? [10:01:49] Mozilla / Gnome have a nice one that comes as abugzilla plugin [10:01:54] ok [10:02:02] http://code.google.com/p/supybot-bugzilla/ [10:02:05] too many bots [10:02:09] that would be an interesting project for the bots labs [10:02:11] but I need to know how it's gonna work [10:02:25] well one would need to create a new instance, install bugzilla [10:02:29] Danny_B|backup I can make it a plugin for wm-bot [10:02:37] + install supybot and write doc about how to do so [10:02:41] then play with it here :-) [10:02:56] that would prepare us to install supybot in production and phase out wikibugs [10:03:10] hashar: once I get some information what that bot should do I can just make a plugin [10:03:25] or is it needed for it to be supybot? 
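(The "awesome bash trick" being riffed on here is bash history expansion: at an interactive prompt, `!!` expands to the previous command line, which is why petan guessed it would loop — wm-bot keys have no such expansion. A quick illustration; the apt command is just an arbitrary example:)

  $ apt-get install supybot        # fails: needs root
  $ sudo !!                        # !! expands to the previous command line
  sudo apt-get install supybot     # bash echoes the expanded command, then runs it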
[10:03:42] because I really don't like python [10:03:57] I hate languagues which parse code by indentation [10:04:22] which is there on purpose to improve the code legibility [10:07:23] ok I am just used to read code by brackets and when I see no brackets it's like if it was whole one function to me [10:07:38] !beta running massive git gc --aggressive on all extensions [10:07:38] !log deployment-prep running massive git gc --aggressive on all extensions [10:07:40] Logged the message, Master. [10:07:50] git garbage collector? [10:08:06] :P [10:08:06] petan: the proposal is to test out supybot :-) [10:08:13] that is actively maintained by an upstream [10:08:23] ok, as long as I don't have to modify the source code, I am fine with that [10:08:26] so guarantee to be better out of the box :D [10:08:29] * Danny_B|backup thinks bots should be written in non compiled langs [10:08:42] * petan disagrees [10:08:50] * Danny_B|backup knows ;-) [10:08:54] petan: well it is more about trying to install it and write some short report about what it is [10:08:56] the good points [10:08:59] the ease of installation [10:09:00] aha [10:09:04] well I can test it [10:09:08] well overall, just trying out supybost :) [10:09:11] c# is sort of far away from open source stuff though [10:09:12] ok [10:09:18] see if it could be a nice candidate for wikibugs replacement [10:09:21] Danny_B|backup it's a new language [10:09:26] hopefully it will change in future [10:09:31] mono is open source [10:09:32] fuck [10:09:35] I killed -dbdump [10:09:37] damn [10:09:38] o.o [10:09:43] poor box [10:09:47] git gc --aggressive with 4 threads sent the box to swap I guess [10:09:48] haha [10:09:50] :-( [10:09:55] !beta hashar is going to kill us all [10:09:55] !log deployment-prep hashar is going to kill us all [10:09:57] Logged the message, Master. [10:10:14] !beta Probably killed -dbdump by launching 4 instances of git gc --aggressive :-((((((((((( [10:10:14] hashar we should disable swap [10:10:14] !log deployment-prep Probably killed -dbdump by launching 4 instances of git gc --aggressive :-((((((((((( [10:10:16] Logged the message, Master. [10:10:21] it suck [10:10:28] on system with poort IO like labs [10:10:30] poor [10:10:50] swap == terminated [10:10:57] hehe [10:11:03] Sarah Connors ? [10:11:03] if we disable swap it at least die on OOM [10:11:53] !beta rebooting swapped -dbdump [10:11:54] !log deployment-prep rebooting swapped -dbdump [10:11:55] Logged the message, Master. [10:11:55] btw why linux die on OOM so ungracefully compared to windows [10:11:58] swapdeath ftl [10:12:04] windows never die on OOM [10:12:11] it just kill the low priority processes [10:12:22] yeah stuff like mysql [10:12:23] ;) [10:12:26] heh [10:12:27] it happened to us in production [10:12:37] I think one time it killed the master DB :-] [10:12:45] with a -9 signal!!!!!!!!!!!!!!!!!!!!! [10:13:03] caused several hours of downtime while the db was being reconstructed out of log files [10:13:08] and we did lost some transactions :-( [10:14:27] once it happened that typo (non-paired parenthesis) in template killed whole server farm so there was wikimedia blackout for about three hours [10:14:44] haha [10:15:21] I am back, danny crashed my client :D [10:15:32] because it's written by me :P [10:15:39] what? 
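(Since "git gc --aggressive with 4 threads sent the box to swap", a common mitigation is to cap the repack's thread count and per-thread delta window memory; a sketch using standard git config knobs — the 64m value is an arbitrary guess, not from the log:)

  # one-off: single-threaded repack with a capped delta window
  $ git -c pack.threads=1 -c pack.windowMemory=64m gc --aggressive
  # or persist the caps for this repository
  $ git config pack.threads 1
  $ git config pack.windowMemory 64m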
[10:15:55] oh if you guys are on linkedIn, I have created a "We are Anonymous" group to support the cause ;) http://www.linkedin.com/groups/We-are-Anonymous-4557326 [10:16:28] being a member of any group means i am no longer anonymous ;-) [10:17:51] ok I am back again [10:17:57] my client didn't like the | symbol [10:18:02] hopefully it works now [10:18:06] I lost context a bit [10:18:51] Danny_B|backup do you see my responses or private messages are broken as well [10:18:55] Oops | [10:19:04] Damianz in nickname [10:19:10] but it's fixed now [10:20:59] !beta running 'git submodule foreach git gc --aggressive' in screen 2042 [10:20:59] !log deployment-prep running 'git submodule foreach git gc --aggressive' in screen 2042 [10:21:01] Logged the message, Master. [10:21:09] hashar, should I create a project for that [10:21:10] bz [10:21:18] mutante hi [10:21:26] petan: well just create an instance in the bots project ? [10:21:30] I guess that will be enough [10:21:36] hashar we should install a bugzilla to test it? [10:21:42] and this way other members of the bots project can play with it too :) [10:21:46] I don't like idea of installing bugzilla to bots project [10:21:51] ohh [10:21:58] then create a new project [10:22:02] maybe we already have some in labs [10:22:03] maybe there is a bugzilla one [10:22:06] there is project bugzilla [10:22:10] dunno what is there [10:22:15] no idea either [10:22:15] neither who maintains it [10:22:19] mutante are you here :D [10:22:22] I guess it was hexmode [10:22:32] !say I need you to insert me to bugzilla project | mutante [10:22:32] mutante: I need you to insert me to bugzilla project [10:22:35] mutante is in SF I think [10:22:40] aha [10:22:58] no hexmode here [10:23:00] :o [10:23:03] a dedicated instance in bots project will save you the trouble of adding new users I guess [10:23:04] he left irc... [10:23:09] well it is up to you :-] [10:23:20] I don't want to mix bots with other project [10:23:39] I will run the bot on bots, but I don't want to ahve bugzilla there [10:23:50] some generic testing bugzilla would be nice [10:24:26] Hey dawg, I heard you like bugzilla so I put bugzilla in your bugzilla so you can report while you report. [10:35:42] !log incubator Switching both enwiki and devwiki to the master branch in the name of logic (aka to really do testing and not be stuck with 1.20wmf* branches). [10:35:43] Logged the message, Master. [11:11:15] !log dumps Having persistent problems with accessing the gluster storage, grrr... [11:11:16] Logged the message, Master. 
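(For reference, the supybot evaluation hashar proposes would look roughly like this on a fresh instance; a hedged sketch — the package name matches Debian/Ubuntu's, but the plugin-install step is inferred from the supybot-bugzilla project page linked above, not documented in the log:)

  $ sudo apt-get install supybot      # packaged in Debian/Ubuntu
  $ supybot-wizard                    # interactive setup: nick, network, channels
  # then drop the supybot-bugzilla plugin into the bot's plugins directory
  # and, as the bot's owner on IRC: load Bugzilla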
[11:25:47] 08/03/2012 - 11:25:46 - Creating a home directory for matmarex at /export/keys/matmarex [11:30:47] 08/03/2012 - 11:30:47 - Updating keys for matmarex at /export/keys/matmarex [12:39:31] Change on mediawiki a page Developer access was modified, changed by Planetenxin link https://www.mediawiki.org/w/index.php?diff=568701 edit summary: planetenxin [13:36:08] Change on mediawiki a page Developer access was modified, changed by Jeremyb link https://www.mediawiki.org/w/index.php?diff=568710 edit summary: archived some reqs [13:37:42] !beta cache-bits02 : cd /var/lib/git/operations/puppet sudo GIT_SSH=/var/lib/git/ssh git fetch origin refs/changes/04/13304/14 && git checkout -b 13304/14 FETCH_HEAD (aka deploying {{gerrit|13304}} patchset 14 [13:37:43] 13304}} patchset 14: !log deployment-prep cache-bits02 : cd /var/lib/git/operations/puppet sudo GIT_SSH=/var/lib/git/ssh git fetch origin refs/changes/04/13304/14 && git checkout -b 13304/14 FETCH_HEAD (aka deploying {{gerrit [13:38:00] !log deployment-prep cache-bits02 : cd /var/lib/git/operations/puppet sudo GIT_SSH=/var/lib/git/ssh git fetch origin refs/changes/04/13304/14 && git checkout -b 13304/14 FETCH_HEAD (aka deploying {{gerrit|13304}} patchset 14 [13:38:02] Logged the message, Master. [13:38:04] ;) [13:38:12] petan: the !beta alias does not work all the time hehe [13:38:21] yes, pipes do not [13:38:28] it's hard to make it do both things [13:38:43] people want to use pipes [13:38:44] !beta switching cache-bits02 from role::cache::bits::labs to role::cache::bits ( see {{gerrit|13304}} patchset 14 [13:38:44] 13304}} patchset 14: !log deployment-prep switching cache-bits02 from role::cache::bits::labs to role::cache::bits ( see {{gerrit [13:38:56] {{gerrit|blah [13:39:00] is problem [13:39:17] when you use a pipe bot use the string after is as a prefix for output [13:39:21] !say hi | hashar [13:39:21] hashar: hi [13:39:25] you see [13:39:36] !say {{gerrit|13 [13:39:36] 13: {{gerrit [13:40:29] hashar you either need to avoid pipes or I would need to put the bots to another channel, because here we need to use that pipe thingy [13:42:25] petan: na that is fine :-° [13:42:41] I will just fall back to the good old """ !log deployment-prep """ :-) [13:42:46] the beta alias is nice enough [13:47:01] !log deployment-prep deployed {{gerrit|13304}} PS 14 on cache-bits02 [13:47:03] Logged the message, Master. [14:13:16] paravoid: have you found any inspiration to get syslog-ng on deployment-dbdump ? :-D [14:15:46] 08/03/2012 - 14:15:46 - Creating a home directory for planetenxin at /export/keys/planetenxin [14:20:48] 08/03/2012 - 14:20:47 - Updating keys for planetenxin at /export/keys/planetenxin [14:25:47] 08/03/2012 - 14:25:47 - Updating keys for planetenxin at /export/keys/planetenxin [15:20:50] andrewbogott: I'm very impressed [15:20:57] how much time did it take you to do all these migrations? [15:21:21] (good morning) [15:21:27] Yesterday was lots of meetings so it was easy to keep half an eye on the migration terminal. [15:21:40] And there weren't all that many left anyway [15:21:44] good morning! [15:22:49] I've been expecting a dozen people to appear in this channel and ask "why doesn't my instance work anymore?" But so far things seem peaceful. [15:23:06] yeah :) [15:23:19] petan has sent me an email on how to restart some of the bots [15:23:39] Is he not going to be around to do it himself? [15:23:48] apparently no [15:23:56] I just fwd you the mail, I can help with that too [15:24:13] ok, I see it.
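(The cache-bits02 deploy commands above lean on gerrit's refs/changes naming scheme, which is worth spelling out: refs/changes/<last two digits of the change number>/<change number>/<patchset>:)

  # refs/changes/<NN>/<change>/<patchset>, NN = last two digits of the change
  $ git fetch origin refs/changes/04/13304/14     # change 13304, patchset 14
  $ git checkout -b 13304/14 FETCH_HEAD           # local branch named after it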
[15:31:33] !log deployment-prep updating cache-bits02 puppet to gerrit 13304/21 [15:31:35] Logged the message, Master. [15:53:07] paravoid: andrewbogott: i could maybe attempt to help restarting too [15:53:14] is it scheduled for a specific time? [15:53:27] a little after 18 UTC? [15:53:30] I'm going to start breaking things at 11pdt. [15:53:45] But it'll take ~30 minutes to move all the bots instances. [15:53:55] 11pdt = 18utc, I hope? [15:53:59] yes [15:55:34] jeremyb: I think the known set of things to restart is small; I'm not sure what to do about the unknown set except to let the bot-herders take care of their own. [15:55:49] * andrewbogott wonders if 'bot herder' is an epithet [15:56:48] andrewbogott: we could maybe make a process listing for each box before the boot to have a hit list of things to make sure are up after? [15:57:07] !log deployment-prep cache-bits02 would need 13304 to be merged in manually whenever the change is completed. Make sure to report any issue to mark :-] [15:57:09] Logged the message, Master. [15:57:11] I am off for vacations [15:57:16] ii [15:58:29] jeremyb: That's a good idea, although it sounds like a lot of work [15:58:45] andrewbogott: idk... [16:09:15] jeremyb: I think a translation of 'sounds like a lot of work' is: I would be happy for you to do this but would not be happy to do it myself :) [16:09:34] ldapsearch -v -x -y <(< /etc/ldap.conf awk '$1 == "bindpw" { print $2}' | perl -pe 's/\s//g;') -D 'cn=proxyagent,ou=profile,dc=wikimedia,dc=org' -b 'ou=hosts,dc=wikimedia,dc=org' 'puppetVar=instanceproject=bots' puppetVar 2>/dev/null | fgrep 'instancename=' | awk -F= '{print $2".pmtpa.wmflabs"}' [16:09:57] that's a list of hostnames that i should get process lists for [16:10:24] I would've gotten the list from nova, but it's probably the same list. [16:11:14] * aude reads that as wikimediadc.org ;) [16:11:56] aude: ;) [17:00:45] 08/03/2012 - 17:00:44 - Updating keys for burthsceh at /export/keys/burthsceh [17:01:45] REMINDER: There will be a brief bastion outage in one hour. Anyone who is logged into a labs instance at that time will be unceremoniously kicked off. [17:13:59] j^: what's wrong with it? how can I help? [17:16:18] yeah, looking [17:16:43] I was looking at your jobrunner stuff just now [17:16:57] I feel terrible for taking so long, lots of work this week [17:17:37] why did I remember that mark reviewed that at some point? [17:22:35] j^: is upload better now? [17:22:37] should be [17:24:42] grat [17:24:44] great* [17:48:54] REMINDER: There will be a brief bastion outage in ten minutes. Anyone who is logged into a labs instance at that time will be unceremoniously kicked off. [17:56:22] paravoid: I created a new instance this morning and it landed on virt1. Is that expected? [17:56:35] unfortunately yes [17:56:52] according to Ryan, the scheduler picks the emptiest node [17:57:01] which means that during the transition there's no way to fix that [17:57:07] but after a node is empty, we can stop nova-compute on it [17:57:27] ok [18:00:16] andrewbogott: warn me when you're starting [18:00:31] jeremyb: I'm starting! [18:00:56] That goes for everyone: bastion instances are about to reboot, so hit save now! [18:02:09] well i got the ps just in time ;) [18:04:11] Heh, I was tunneled into IRC via bastion [18:04:19] hahaha [18:05:28] * Damianz paws bogott [18:05:47] jeremyb: Can you reach bastion1 now? 
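(jeremyb's ldapsearch above emits one instance hostname per line; his "process listing for each box before the boot" idea could be scripted roughly as below — list_bots_hosts is a hypothetical stand-in for that one-liner, and working ssh to each instance is assumed:)

  # capture a pre-reboot process list for every bots instance
  $ list_bots_hosts | while read h; do
  >   ssh -n "$h" ps auxww > "ps.$h.before"    # -n: don't let ssh eat the host list
  > done
  # afterwards, diff a snapshot against a fresh listing to spot dead bots
  $ ssh bots-1.pmtpa.wmflabs ps auxww | diff "ps.bots-1.pmtpa.wmflabs.before" -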
[18:05:52] i'm on it [18:06:06] jeremyb@bastion1:~$ uptime 18:06:01 up 0 min, 1 user, load average: 0.23, 0.06, 0.02 [18:06:11] OK, great, bastion hosts are moved. [18:06:17] So, now, time to break bots. [18:06:43] andrewbogott: getting some food, back in ~20 mins [18:07:43] On the bright side my supervisord setup works from last time Ryan broke bots :D [19:04:07] when the bot dies on a !log command it was usually the missing cache directory [19:04:09] (or write perms to it) [19:05:25] mutante: I fixed that in the new version [19:05:41] we need to delete the bot off of the old box [19:07:00] cool [19:07:43] paravoid, still awake? I'm trying to restart 'Articles For Creation bot' as per petr's email. [19:10:15] paravoid: nevermind, sorted. [19:10:41] jeremyb: I've restarted all the bots that petr mentioned. So, over to you. [19:10:51] andrewbogott: send me the list? [19:11:20] !log bots restarted Log bot on bots-labs [19:11:21] Logged the message, Master [19:11:53] !log bots restarted wm-bot (bouncer.exe and wmib.exe) on bots-1 [19:11:54] Logged the message, Master [19:12:20] these !logs are not new restarts, right? [19:12:22] !log bots restarted 'Articles For Creation bot' on bots-1 [19:12:23] Logged the message, Master [19:12:43] jeremyb: They're bots that were stopped by the migration and that I restarted by hand. [19:12:48] And, that's all that I did. [19:13:20] andrewbogott: i'm saying they were done before you said you were done. not done as you !log'd them [19:13:25] right? [19:13:27] right. [19:13:33] good [19:21:17] Ryan_Lane: fyi, http://wikitech.wikimedia.org/index.php?title=Server_admin_log&diff=49825&oldid=49824 [19:21:56] hahaha [19:21:58] of course [19:22:05] stupid standard name [19:24:13] Ryan_Lane: btw, seen my query in the other channel? maybe not so important to worry about but i'll let *you* make that call [19:24:24] which channel? [19:27:48] Ryan_Lane: hrmmm, so then i can get netadmin/sysadmin status from LDAP? [19:27:58] Ryan_Lane: can we expose that to SMW queries too? [19:28:14] I need to add support for that [19:28:17] but yes, I'd like to [19:28:53] k [19:29:24] Ryan_Lane: the answer for the other day was just [[member::user:username]] btw. yaron helped me [19:29:34] ah [19:34:06] andrewbogott: so if i read your mail right then everything in it is up already, right? [19:34:26] Yeps. [19:34:42] k [19:34:44] At least, as far as I can tell. I'm not qualified to judge whether the bots are behaving properly. [19:35:24] wm-bot: ping [19:35:24] Hi jeremyb, there is some error, I am a stupid bot and I am not intelligent enough to hold a conversation with you :-) [19:35:29] bot authors were warned, if they aren't running properly, they can fix them [19:35:33] that's normal ;) [19:35:36] it would be nice to have nagios alerts for the bots [19:35:42] so that nagios would tell us [19:40:08] ugh, services running as non-existant users ;-( [19:40:12] 102, 107 [19:40:44] (or are not symbolic like the rest of the list for some reason...) [19:43:25] jeremyb: on which host? [19:43:30] bots-1 [19:43:38] the users must have been deleted [19:43:41] not the first time i've seen it i think [19:43:57] it's the exim4 and dbus services [19:44:26] how the users can have been deleted if the srevices are running after reboot i don't understand [19:44:46] maybe puppet forced the users to different uids. but even then it should be using the new ones [19:47:46] Did you finish moving servers yet? 
(just got home so don't have scrollback) [19:47:52] yes [19:48:02] Cool, didn't break my bots then [19:48:15] which bots? [19:48:24] cbng, cbIII [19:48:42] oh, you're the supervisord guy [19:48:44] Well when I say didn't break, I can see them making edits so they restarted at least :D [19:49:04] Yeah... I really should find something better than supervisord but it does the job [19:49:10] Config is less painful than monit [19:50:25] * Damianz wonders how painful upstart is these days.. assuming ubuntu still uses upstart and not systemd like fedora [19:50:35] upstart sucks a little, but I recommend it [19:50:48] ubuntu still uses upstart for now [19:50:55] they'll likely switch to systemd at some point [20:30:38] gah no ryan [20:30:46] Jeff_Green: back on monday? [20:31:11] jeremyb: ya [20:31:39] Jeff_Green: myisam -> innodb is done but RT didn't feel like sending a notification for some reason [20:31:46] Jeff_Green: ready for dump whenever you are [20:31:52] oh good--I was going to ask about that on monday [20:32:40] alrighty, running away again. see you monday! [20:33:00] k [20:33:21] andrewbogott: there is quite a bit that still needs starting (unless it was fixed since i took the ps) [20:34:15] jeremyb: OK. We've nagged the bot owners quite a bit, so I'm OK with just letting them trickle in and restart things. [20:34:28] An outage might encourage them to make their bots more reboot-resistant :) [20:34:45] or encourage them to join bots-l at all [20:34:50] Or just a bitch of "it's wose than the toolserver" again [20:34:51] err [20:34:53] labs-l* [20:41:14] andrewbogott: are you sure you got bots-sql3 ? or was it done earlier? [20:42:01] jeremyb: re RT sending mail: "comment" and "reply" are usually the difference. (reply = mail) [20:42:17] I'm pretty sure this is the first I've heard of bots-sql3. Did I overlook a bit in petr's email? [20:42:19] mutante: huh? if it's the ticket's been closed it should mail [20:42:47] jeremyb: true. at least to those added in "requestor" [20:42:52] andrewbogott: i mean migrated it. it does say uptime is 2:05 but the process list matches *too* perfectly. compared to the others [20:43:18] mutante: well i still don't even know what the ticket # is so i can't tell you how to look other than subject/date/name [20:43:44] mutante: afaik i'm the only person to have mailed RT about that ticket and i'm the original opener [20:44:05] jeremyb: it should have replied and given you the ticket number... hmmm [20:44:15] mutante: it never does that... i wish [20:44:29] jeremyb: looking... [20:44:54] jeremyb: whats in the subject? [20:45:04] tried "myisam" [20:45:16] jeremyb: It's running on virt6, so, definitely on new hardware. It's possible that it was there all along and that I didn't migrate. [20:45:26] jeremyb: But, sounds more like it's just properly configured :) [20:45:26] mutante: '[OTRS] change MyISAM tables -> InnoDB' [20:45:57] ohh.. OTRS != RT [20:46:03] andrewbogott: well it's been booted recently i guess (low uptime). but it's kinda spooky and it's not halloween ;) [20:46:08] mutante: no, RT [20:46:36] mutante: ops-requests [20:46:57] jeremyb: got it, the number is 3307 [20:47:07] hahaha, perfect [20:47:12] mysql is 3306 [20:47:15] jeremyb: it's been resolved by binasher [20:47:23] i used to run extra mysql instances on 3307 [20:47:39] and it does claim "outgoing email recorded" after you created it [20:47:41] (off an lvm snapshot) [20:48:35] mutante: i'm assuming it's because i don't have an account? 
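(The "supervisord setup" Damianz credits for his bots surviving the reboot is the usual autorestart pattern; a minimal sketch — the program name is borrowed from cbng above, but the paths and user are invented for illustration:)

  $ sudo tee /etc/supervisor/conf.d/cbng.conf >/dev/null <<'EOF'
  [program:cbng]
  command=/usr/bin/python /data/project/cbng/bot.py
  directory=/data/project/cbng
  user=bots
  autostart=true
  autorestart=true
  EOF
  $ sudo supervisorctl reread && sudo supervisorctl update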
[20:48:46] hehe @ port number, almost :) [20:52:09] jeremyb: hmm, there are others without accounts on a regular basis and they apparently do get mail if they are requestor [20:53:33] jeremyb: looking at that mail it created, it looks like it was what i said first "comment vs. reply" [20:53:37] mutante: the only time i ever got an automated response was when it rejected me because i didn't have an account (and didn't create the ticket at all) [20:53:59] mutante: and i've done more than a few tickets [20:54:05] * jeremyb is looking for that rejection msg now [20:55:24] mutante: i think this is before your time even. does core-operations still exist? [20:55:25] "On Correspond Notify Requestors and Ccs with template Correspondence " [20:55:39] Date: Sun, 12 Dec 2010 01:08:36 -0500 [20:56:13] To: jeremy@ ... [20:56:21] jeremyb: did you just receive mail from RT now? [20:56:34] "Testing mail. Tell me if you got this Jeremy... test test" [20:56:49] or maybe it was core-ops [20:56:53] hah [20:57:04] yes, that queue still exists [20:57:14] but your tickets is in ops-requests as normal [20:57:28] you has reply [20:58:05] did you reply to 3307@rt.wm ? [20:58:25] yes, you did. and ACK it worked [20:59:00] so yeah, you only get it when people hit Reply [20:59:11] not just for resolve all by itself [21:00:12] mutante: but i have in the past gotten notifies when gerrit resolves... [21:00:21] * aude hopes instance creation works [21:00:34] aude: why not? [21:00:36] gerrit? [21:00:41] new hardware! [21:00:43] mutante: yes [21:00:49] mutante: it has a hook to resolve on merge [21:01:03] jeremyb: well, thats unrelated to RT isnt it [21:01:13] to resolve in RT [21:01:16] jeremyb: cause it didn't work last weekend [21:01:23] everything breaks on the weekend [21:01:25] aude: try it now! [21:02:04] * aude trying [21:05:52] jeremyb: that's new to me, haven't heard of anything automatic between RT and gerrit [21:06:11] mutante: i saw it in action... let me look [21:12:11] oh, huh [21:12:23] this one actually was core-ops. maybe the rules are different in there [21:15:56] is empty console output normal? [21:16:30] Date: Mon, 9 Jul 2012 13:02:28 +0000 [21:16:32] RT-Originator: dzahn@... [21:16:38] !rt 2402 [21:16:39] http://rt.wikimedia.org/Ticket/Display.html?id=2402 [21:16:44] that's the email [21:16:58] looks automated but maybe you clicked a button [21:17:20] i did "Daniel Zahn - Status changed from 'open' to 'resolved' " [21:17:20] !g 14509 [21:17:20] https://gerrit.wikimedia.org/r/#q,14509,n,z [21:17:22] for that one [21:17:42] gerrit timestamp was 2012-07-09 12:56:02 UTC [21:17:47] jeremyb: hmm, yea, that just looked automatic but it wasnt [21:17:57] i just clicked stuff in both systems [21:18:07] well let me show you the hook ;) [21:18:58] jeremyb: https://labsconsole.wikimedia.org/wiki/Nova_Resource:I-00000395 (normal?) [21:19:21] i'm not in maps i think [21:19:28] we can add you [21:19:47] makes me think that adding people via IRC trigger would be kind of nice [21:19:52] added [21:20:26] aciton=consoleoutput is empty [21:20:28] action [21:20:33] mutante: to projects? 
[21:20:39] 08/03/2012 - 21:20:38 - Created a home directory for jeremyb in project(s): maps [21:20:46] aude: sysadmin role required ;) [21:20:52] bouncing from error to error ;) [21:20:53] me might just be impatient [21:20:57] jeremyb: ok [21:21:12] jeremyb: yea [21:21:14] done [21:22:30] mutante: https://gerrit.wikimedia.org/r/gitweb?p=operations/puppet.git;a=blob;f=files/gerrit/hooks/change-merged;h=2c5306e68f67ae987010b9bc2c12d3d9f0c7862f;hb=HEAD#l19 [21:22:36] mutante: https://gerrit.wikimedia.org/r/gitweb?p=operations/puppet.git;a=blob;f=files/gerrit/hooks/hookhelper.py;h=ece9d7d11e18748cddd402198a4f92f5e2e816dc;hb=HEAD#l148 [21:23:20] aude: nope [21:23:20] jeremyb: aha...interesting [21:23:35] mutante: i already knew that was there when i got the "automated" msg... [21:23:43] so i put 1+1 together [21:24:01] * jeremyb has made significant changes to the code just linked [21:24:04] i see now.. yea [21:24:14] so... [21:24:15] that's nice [21:24:26] I'll try it next time [21:24:27] did gerrit do something in taht case? [21:24:29] * aude had stuff in the console output immediately last weekend [21:24:30] that* [21:24:37] nada nothing this time [21:25:41] 08/03/2012 - 21:25:41 - User jeremyb may have been modified in LDAP or locally, updating key in project(s): maps [21:25:49] mutante: oh, maybe only works if it's in the merge comment [21:25:59] i thought commit msg would be enough [21:26:27] and it has to say resolve or resolves [21:26:32] * aude kicks my instance [21:28:06] aude: you're giving me the wrong admin. need sysadmin [21:28:44] jeremyb: you can be both :) [21:29:21] jeremyb: gotcha [21:29:21] hm [21:29:28] is the instance down? [21:29:35] was it on the list of corrupted instances? [21:29:43] Ryan_Lane: just created [21:29:47] ah [21:29:50] Ryan_Lane: https://labsconsole.wikimedia.org/w/index.php?title=Special:NovaInstance&action=consoleoutput&project=maps&instanceid=i-00000395 [21:30:12] nada nothing [21:30:23] checking [21:30:31] trying to setup some maps puppet stuff this weekend [21:30:38] it's in the "building" state [21:30:41] Ryan_Lane: thanks [21:30:44] on virt5.... [21:30:58] why do i see nothing in console output? [21:31:07] because it's not in the running state [21:31:07] * aude saw stuff last time (last weekend) [21:31:12] Ryan_Lane: how can we let people query states? almost no machine state should be secret even if you're not a member, right? [21:31:22] i saw stuff in the pending and building state [21:31:23] it shows on the manage instances page [21:31:33] * aude can be patient and wait [21:31:45] i thought the wiki's not guaranteed to be right? [21:31:56] purging helps but i wasn't entirely trusting it [21:32:17] virt5 was supposed to be disabled [21:32:24] ah, really? [21:32:27] yes [21:32:42] only virt6-8 are supposed to be up right now [21:32:49] that could be the problem [21:32:49] that's why it is failing [21:32:54] ok [21:32:54] we should delete/recreate it [21:32:57] can I delete it? [21:32:59] Ryan_Lane: btw, bug for you... let me copy from scrollback [21:33:00] yes you can [21:33:58] 03 14:13:30 < jeremyb> ^demon|away: ugh, he's in [[special:userlist]] but not [[special:log/newusers]]. i guess that needs filing [21:34:01] 03 14:14:03 < ^demon|away> Complain to Ryan? I just did the command like I'm supposed to. 
[21:34:16] Ryan_Lane: done on formey because he already had svn [21:34:17] that's notmal [21:34:21] *normal [21:34:27] * jeremyb calls it a bug [21:34:32] it's not [21:34:36] the wiki didn't create the user [21:34:52] it's always been a bug in mediawiki core [21:34:58] but it says on [[special:userlist]] that it was just created recently! [21:34:58] I have a bug open about that for like 6 years [21:35:07] * jeremyb searches for it [21:37:00] hah, @ocean.navo.navy.mil [21:37:06] yep [21:37:17] * jeremyb is still looking [21:37:25] you sure you reported? [21:37:25] aude: you can re-create it now [21:37:29] Ryan_Lane: ok [21:37:43] jeremyb: it's due to a hook not running [21:38:04] (not someone else) [21:38:15] positive [21:38:58] subject hints? [21:39:06] s/subject/summary/ [21:39:28] it has to do with a new user hook of some kind [21:39:40] ugh, need context! <^demon> Ok Reedy you need to stop doing everything I'm about to do. [21:39:44] (quip) [21:40:10] mmm [21:40:17] !b 11734 | Ryan_Lane ? [21:40:18] Ryan_Lane ?: https://bugzilla.wikimedia.org/11734 [21:40:46] that's the one [21:40:53] 2007! [21:41:24] hrmmm, no labs keyword? [21:41:30] meh [21:41:44] when we do self-register this problem goes away [21:42:00] huh? how so? [21:42:10] because then the wiki is creating the account [21:42:13] and ldap auth isnt [21:42:29] i don't see why that would be possible later but not now [21:42:32] which is why some users show up properly in the logs and others don't [21:42:50] svn users are auto-created by logging in when they already have an LDAP account [21:43:06] brand new users are created through the wiki [21:43:06] do svn users already have passwords? [21:43:15] no, but they have accounts [21:43:29] can't we just turn on autocreation now for svn and not for anon? [21:43:37] it doesn't work that way [21:43:49] if a user account already exists, the ldap extension won't create their account [21:44:51] i mean let all legacy svn users just log in immediately without even talking to us [21:45:00] and only require onwiki requests for people without svn [21:48:26] https://labsconsole.wikimedia.org/wiki/Special:Contributions/127.0.0.1 [21:59:22] aude: did you try again? [22:00:06] Ryan_Lane: did you delete? still there in the wiki [22:01:28] is it? [22:01:45] it probably needs to be deleted in ldap too [22:01:54] I did it directly from nova [22:02:41] jeremyb: mind deleting the wiki pages? [22:02:44] *page [22:03:07] ahhh [22:03:11] Ryan_Lane: can i? [22:03:22] oh [22:03:24] maybe you can't [22:03:25] * aude created https://labsconsole.wikimedia.org/wiki/Nova_Resource:I-00000396 [22:03:32] completely empty wiki page [22:03:40] i mean i'm not a sysop... [22:03:46] i just tried hitting delete [22:03:51] > The requested host does not exist. [22:03:55] and there should be 00000397 because i tried yet again, thinking i didn't wrong [22:04:02] yup, not deleting [22:04:14] jeremyb: yeah. sorry. I need to do it, it seems [22:04:24] Ryan_Lane: np [22:04:26] next version of openstack manager will allow me to open rights up [22:04:59] nice [22:05:18] * aude would like to see a few volunteers have more rights and be able to help on the weekend [22:05:53] Ryan_Lane: you mean 35655 ? [22:05:54] Ryan_Lane: I'm having trouble creating instances too; they seem to spend a long time 'pending'. [22:06:10] mine vanished into thin air [22:06:40] aude: Was that an existing instance, or one you just created? [22:06:45] new ones [22:07:00] mine from last weekend are there and work just fine now [22:07:23] Hm... 
[22:07:27] just that i have 2 different things in mind to work on so need 2nd instance :) [22:10:56] I wonder if we're hitting a limit threshold [22:10:59] sec [22:11:06] we have limits set [22:11:23] --max_cores=80 [22:11:32] I should double that at least [22:11:39] I'd imagine they have way more than that now [22:12:53] yep [22:12:57] they're all over limit [22:13:17] (nova.rpc): TRACE: NoValidHost: All hosts have too many cores [22:13:20] heh [22:13:22] fail [22:13:34] I guess we moved from four servers to three. [22:13:38] why can't the wiki tell us that msg? ;) [22:13:50] because my code doesn't handle it [22:13:58] andrewbogott: 5 to 3 [22:14:18] counting virt5? I thought it wasn't used before or after. [22:14:25] it was [22:14:36] paravoid migrated it first [22:15:00] is virt5 a cisco? Is it set aside for special future plans? [22:15:05] I'm upping it to 200 [22:15:11] ah, okay [22:15:23] andrewbogott: well, we'd put it in service, but it was built differently [22:15:29] Oh, ok. [22:15:53] So, total changes from 400 to 600? [22:16:00] Means we'll run out again pretty soon :( [22:16:03] glad to catch this before you all left for the weekend, at least [22:16:34] i deleted 2 instances just before i tried creating the new one.... if that matters at all [22:16:48] one was duplicate and one was the wrong type [22:18:15] aude: why do you think i said to do it now? [22:18:35] aude: ok, it'll work now [22:18:43] aude: you'll likely need to delete/create [22:18:43] jeremyb: right :) [22:18:50] Ryan_Lane: ok [22:19:23] JeroenDeDauw: did you get sorted yet? should we pull your data? [22:20:42] andrewbogott: well, the limit was put into place to avoid killing the smaller boxes [22:20:52] andrewbogott: these can likely handle 200 cores worth of vms [22:21:11] (small vms have 2 cores, medium have 4, large have 8) [22:21:21] so, it's at most 100 vms [22:21:52] Ryan_Lane: I may just be impatient, but it looks like I still can't create new instances. [22:22:53] Ryan_Lane: I think I understand the math -- I was making the mistake of thinking we were maxed out before (when the limit was 5 x 80) [22:23:07] i'm getting failed to create instance errors [22:23:08] JeroenDeDauw: if you don't need the data feel free to delete and recreate any time [22:23:12] aude: oh? [22:23:13] hm [22:23:28] i'm trying m1 instance type [22:23:32] is that a bad choice? [22:23:49] QuotaError: InstanceLimitExceeded: Instance quota exceeded. You cannot run any more instances of this type. [22:23:52] Ryan_Lane: i got that during the wikimania closing and showed leslie. idk if it was ever diagnosed [22:23:54] i tried a new instance name [22:24:02] aude: how many instances do you have in the project? [22:24:16] there's project quotas too [22:24:16] (she thought it was weird) [22:24:22] oooh, now i see pending ones [22:24:22] aude: I just need to up them for you [22:24:26] ah [22:24:28] delete the pending ones :) [22:24:33] they will never finish [22:24:49] there are 7 including the pending ones [22:24:49] jeremyb: there's two limits. one is per host, the other is per project [22:24:53] 5 real ones [22:24:57] * Ryan_Lane nods [22:24:57] jeremyb: still data on there - have not looked into how to best pull this yet, and am busy w/ actual coding for now [22:25:02] you likely hit a core or memory limit [22:25:11] JeroenDeDauw: *you* can't pull it [22:25:17] JeroenDeDauw: has to be done for you [22:25:25] there's a limit per project? 
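(The limit Ryan raises here is nova's per-host core cap; a sketch mirroring the flag style quoted above ("--max_cores=80"), with the capacity math from the conversation — the config file's location on the virt* hosts is an assumption:)

  # scheduler cap on vCPUs packed onto one compute host
  #   --max_cores=80      # old value, sized for the smaller boxes
  --max_cores=200
  # math from the conversation: 3 active hosts x 200 cores = 600 vCPUs;
  # small=2, medium=4, large=8 vCPUs, so one host tops out at 100 small VMs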
[22:25:30] oh [22:25:32] I have the data [22:25:38] JeroenDeDauw: should just get everything or is /etc enough or what? [22:25:43] JeroenDeDauw: I have the data, do you have a new instance to put it on? [22:26:06] I have the mysql database files [22:26:19] Ryan_Lane: no new instance yet [22:26:32] Ryan_Lane: stick it in project storage? [22:26:52] jeremyb: can't do that without an instancw [22:27:25] huh, i thought project exists was enough [22:27:28] kind of [22:27:43] but the project storage is only accessible to instances in the project [22:28:12] on the plus side, the glusterfs upgrade yesterday apparently fixed most of the gluster bugs [22:28:25] so it might be reliable enough to use for most things [22:28:32] not mysql, though [22:28:37] I already tested that [22:28:48] It's reliable for storing backups :D [22:28:52] yep [22:29:26] so... we're staying with gluster then? ;) [22:29:33] Ryan_Lane: did you use the debian pkg? [22:29:34] interesting..... [22:30:03] n [22:30:04] no [22:30:09] I used the upstream [22:30:30] precise is using the lucid version. it's compatible, but won't get any of the gains from the precise version of fuse [22:31:14] aude: did deleting the pending instances work? [22:31:18] if not I can up your project quota [22:31:38] the quotas are there as sanity checks more than anything [22:31:55] Ryan_Lane: Ok, this time the scheduler assigned me a host. So, looks promising. [22:32:03] andrewbogott: cool [22:32:11] i count 6 instances in maps in ldap now [22:32:29] labs is so much faster right now. only a matter of time till people swap death these too :) [22:32:41] hah [22:32:53] Ryan_Lane: the pending instances are gone but not sure they were ever there [22:33:08] is it still giving you an error? [22:33:09] Ryan_Lane: whatchya think about making instance deletion not delete the page but just change a property or state to deleted? [22:33:10] ok , i see my new one [22:33:17] greay [22:33:19] *great [22:33:29] jeremyb: hm. could do that [22:33:38] no ip address yet, still a red link [22:33:41] aude: what's the one? [22:33:45] no console output [22:33:46] aude: that's notmal [22:33:49] maps-osmmapnik [22:33:52] Ryan_Lane: ok [22:33:53] is it in the building state? [22:34:15] there's a job that runs that makes the page and adds the dns A record [22:34:46] speaking of A record... [22:34:51] ok [22:34:54] hi [22:34:56] * aude can be patient [22:35:12] success! :) [22:35:20] hey paravoid [22:35:46] !b 38846 | ops [22:35:46] ops: https://bugzilla.wikimedia.org/38846 [22:39:20] * andrewbogott is out, for now. [22:39:49] bye [22:42:04] I'm adding multi-region support into OpenStackManager while I'm making these changes [22:42:21] so, we'll be able to add the second datacenter pretty easily :) [22:42:46] Can you make it now show 2 options for one zone too? :D [22:42:53] zones are gone [22:42:58] it's now only regions [22:43:53] and that drop down will be totally gone [22:44:10] list images/addresses, etc. will show project, then zone, then resources [22:44:22] err [22:44:25] region. not zone [22:45:51] magic rainbow ponies [22:46:13] Ryan_Lane: is zone on the horizon? [22:46:34] (NDA) [22:46:41] I need to get around to puppetizing openstack up for work's testing server again [22:47:14] Damianz: there's a puppetlabs openstack module and it has it's own mailing list [22:47:17] haven't tried it [22:47:30] jeremyb: zones are gone [22:47:40] jeremyb: it's now regions [22:47:58] Ryan_Lane: NDA [22:48:04] nda? [22:48:09] yes [22:48:13] ...? [22:48:15] oh [22:48:18] private data? 
[22:48:19] ;) [22:48:20] yah [22:48:34] that's long term [22:48:42] we have a lot of discussion to do about that [22:48:45] so, below the horizon ;) [22:48:49] yes [22:49:09] I saw some news about that, couldn't find the module last time I flicked though... will look again [22:49:15] Also NDAs are stupid and annoying [22:49:28] Damianz: tell the lawyers [22:49:38] heh. I like how often we have to say that in this channel [22:49:41] They don't stop someone releasing info if they want to be an ass [22:49:54] Damianz: they would sue anyone that did so [22:49:59] And once the info is out if it's actually important wtf is the point [22:50:15] and the person sued would lose [22:50:23] Yeah but the point is to prevent leaking stuff, reactive measures are pretty pointless [22:50:23] it's a pretty good incentive to not release the info [22:50:33] it's preventative [22:50:39] Getting fired for being un-professional should be incentive enough [22:51:30] not everyone that signs an nda is an employee [22:52:35] * Damianz generally hates laywers [22:53:02] it usually isn't the lawyers that are the problem [22:53:26] Damianz: https://groups.google.com/a/puppetlabs.com/d/forum/puppet-openstack [22:54:18] Oooh shiny, tyvm [22:54:47] And yeah, the entire legal ecosystem is a POS. Like the about of disclaimers I have to waste time writing conditional code for just to cover ass is insane [22:55:38] I like the fact we have NDAs [22:56:06] my personal beef is with email footers [22:56:17] Email footers ARGH [22:56:18] the reason we have them is one of the few that warrants it imho [22:56:22] especially disclaimers on emails to public lists [22:56:24] I get emails with 1 line of email, 5 sig and 30 disclaimer [22:56:42] Damianz: right [22:57:30] oh, and the best one was [22:57:32] NDAs are fine in principle, but if you're working on something critical to be kept internal and it's leaked it doesn't matter how much you sue someone for that could be your whole company dead. Just keep people happy, motiviated and get stuff done with minimal red tape imo [22:57:36] the email for glam night out [22:57:55] it was from the same person it was to. my address wasn't on the msg i received [22:58:53] > This e-mail message is intended only for the designated recipient(s) named above. The information contained in this [22:58:56] > e-mail and any attachments may be confidential or legally privileged. If you are not the intended recipient, you may [22:59:00] > not review, retain, copy, redistribute or use this e-mail or any attachment for any purpose, or disclose all or any [22:59:03] > part of its contents. If you have received this e-mail in error, please immediately notify the sender by reply e-mail [22:59:06] > and permanently delete this e-mail and any attachments from your computer system. [22:59:10] i'm not named above!!!! [22:59:28] rofl [22:59:45] Irony is at the *bottom* of an email saying don't read it if it isn't for you. [23:00:35] would you prefer it on the top? :) [23:00:38] or the subject? :) [23:00:44] Damianz: right: http://www.economist.com/node/18529895 http://dltj.org/article/pointless-e-mail-disclaimers/ [23:00:57] paravoid is evil [23:01:26] paravoid: tl;dr user is an idiot :) [23:01:32] or like the top then a bunch of newlines so that the body doesn't fit on your screen! [23:04:24] some spam -Greek spam at least- had very footer saying "this mail cannot be considered spam, because according to European Directive 20../../EC ..." 
[23:04:29] it made a *great* spam filter [23:04:54] it's unlikely I'll receive legit mail containing that law's number, so I just have it on my spamassassin [23:13:06] gah, no demon [23:13:32] i just ran across this log (in google) with demon in it: http://echelog.com/logs/browse/gerrit/1340575200 [23:13:36] and blarson's there too! [23:13:47] i wonder if they've met [23:14:01] paravoid: remember from dc10? [23:14:35] remember what? blarson? [23:14:44] yeah, we've met at multiple debconfs [23:15:56] i can't remember if he was at dc11 [23:21:08] I've been to dc6-12, so... :-) [23:23:10] make sure "ddate" stays forever;) [23:26:42] mutante: erm? [23:26:49] ddate - converts Gregorian dates to Discordian dates [23:28:29] i was confusing with julina [23:28:32] julian* [23:28:57] $ date; ddate $(date +'%m %d %Y') [23:28:57] Fri Aug 3 23:28:48 UTC 2012 [23:28:57] Boomtime, Chaos 67, 3178 YOLD [23:31:55] haha, even the date on the man page is in discordian [23:32:09] but there doesn't seem to be any way to convert back to english [23:35:25] jeremyb: watch ddate +%. [23:38:31] uhhhhhhhhhh [23:39:26] heh, just an easteregg
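(paravoid's "I just have it on my spamassassin" translates to a simple body rule; a sketch — the rule name and score are invented, and the regex only approximates the "20../../EC" directive pattern quoted above, assuming the elided bits are a year and a number:)

  $ sudo tee -a /etc/spamassassin/local.cf >/dev/null <<'EOF'
  # mail citing the (anti-)spam directive is almost certainly spam
  body   EU_DIRECTIVE_DISCLAIMER   /according to European Directive 20\d\d\/\d+\/EC/i
  score  EU_DIRECTIVE_DISCLAIMER   4.0
  EOF
  $ spamassassin --lint     # syntax-check the rule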