[09:41:10] the PHP linting script is https://gerrit.wikimedia.org/r/#/c/29937/ [09:41:22] and the continuous integration stuff is https://gerrit.wikimedia.org/r/#/c/43429/ [09:41:42] I moved the PHP lint script under wikimedia as a short / easy proof of concept [09:41:55] but what I really want is migrate all the continuous integration stuff as a module [09:42:04] either under wikimedia or as an independent module ( contint ? ) [09:43:53] can someone take a look at the http://test.m.wikipedia.org/ breakage, please? [09:45:18] mark ^^ [10:02:44] MaxSem: looking [10:02:50] if only it wasn't a huge mess [10:11:44] PROBLEM - MySQL Slave Delay on db53 is CRITICAL: CRIT replication delay 184 seconds [10:12:56] PROBLEM - MySQL Replication Heartbeat on db53 is CRITICAL: CRIT replication delay 211 seconds [10:34:00] New patchset: Faidon; "Fix test.m.wikipedia.org" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47574 [10:34:15] hashar: want to review that? [10:34:21] yeah [10:34:25] missing the bug number :-D [10:34:27] jk [10:34:52] end a typo err: Could not parse for environment production: Syntax error at ','; expected '}' at /var/lib/jenkins/jobs/operations-puppet-validate/workspace/manifests/role/cache.pp:674 [10:34:57] ha [10:35:13] so hmm [10:35:41] yeah, fixed it [10:35:49] mark refactored the cache manifests to have the per realm / per dc configuration in role::cache::configuration [10:36:02] I know [10:36:12] New patchset: Faidon; "Fix test.m.wikipedia.org" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47574 [10:36:19] this is a quick hack to fix test.m for the mobile deployment [10:38:29] Change merged: Faidon; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47574 [10:41:10] RECOVERY - MySQL Replication Heartbeat on db53 is OK: OK replication delay 0 seconds [10:41:27] RECOVERY - MySQL Slave Delay on db53 is OK: OK replication delay 0 seconds [10:41:52] gr [10:41:53] cant remember [10:42:14] I think mark wanted the backends directives to be generated out of the directors blocks [10:42:30] PROBLEM - MySQL Replication Heartbeat on db33 is CRITICAL: CRIT replication delay 192 seconds [10:42:57] PROBLEM - MySQL Slave Delay on db33 is CRITICAL: CRIT replication delay 200 seconds [10:43:18] yes [10:43:18] again [10:43:20] temp hack :) [10:43:45] :D [10:44:18] merged and applied [10:44:31] seems to work, MaxSem can you confirm? [10:44:52] yes - thanks a lot, paravoid and hashar [10:45:30] replying [10:45:41] I haven't done anything!! ;D [10:51:26] well [10:51:28] shower lunch [10:51:30] RECOVERY - MySQL Replication Heartbeat on db33 is OK: OK replication delay 0 seconds [10:51:30] RECOVERY - MySQL Slave Delay on db33 is OK: OK replication delay 0 seconds [10:51:36] and I might head to the coworking place after [10:51:36] you shower your lunch? 
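The cache.pp failure pasted above is exactly the kind of error the operations-puppet-validate Jenkins job (visible in the workspace path of the error) exists to catch. A minimal sketch of that check, run by hand against a checkout of operations/puppet — the real job definition is not shown in this log:

```
# Exits non-zero on syntax errors such as the one quoted above.
puppet parser validate manifests/role/cache.pp

# Example failure output (exact wording varies by Puppet version):
#   err: Could not parse for environment production: Syntax error at ','; expected '}'
```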
[10:51:41] if it ever stop raining [10:51:43] :-) [10:51:46] ;D [10:52:04] well I could walk to the coworking place while eating a snack [10:52:10] that will actually shower the snack hehe [10:52:20] I guess I am going to stay home this afternoon [10:52:22] I came back from Brussels yesterday [10:52:44] only to find a super-shiny 19° C Athens [10:52:51] arhhhh [10:53:14] it is like 5°c in the morning, 10°c in the afternoon [10:53:19] cloudy and rainy [10:53:20] sigh [10:53:48] paravoid: also got a debian package for you to review : https://gerrit.wikimedia.org/r/#/c/44408/ ;-D [10:53:56] ouch [10:53:58] some random python module I need for a project [10:54:44] (though I can't remember which project) [10:54:59] ah Zuul [10:55:11] hmmm [10:55:13] the ITP is at http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=698354 [10:55:14] not bad [10:55:31] mind you, I have no idea how to actually build a package out of what I have produced [10:55:42] nor how to upload it to debian.org :D [10:56:11] iirc I have reused the debian directory of the python-yaml or python-jenkins package [10:56:13] did a a massive sed [10:56:19] then ran the debian lintian tool [10:56:26] + debuilder with some magic arguments [10:56:59] well hmm [10:57:02] https://wikitech.wikimedia.org/view/Debianize_python_package ;-D [10:58:23] so what's the procedure of getting a new package on WMF? create a repo, push the stuff, add paravoid as a reviewer?:) [10:58:55] yeah and hope the paravoid machine manage to epoll() your request :-D [10:59:04] it is a bug boxè [11:00:02] MaxSem: only ops/roots can upload on apt.wm.org [11:00:43] shower time brb [11:03:08] New review: Faidon; "See inline for a few comments. Besides those, you should preferrably switch to git-buildpackage form..." [operations/debs/python-voluptuous] (master); V: 0 C: -1; - https://gerrit.wikimedia.org/r/44408 [11:03:10] there you go [11:04:22] hasharShower: oh and btw, you should go through the Python modules team if you want me to upload this to Debian [11:04:28] that's what I tell all of my sponsorees [11:05:11] http://wiki.debian.org/Teams/PythonModulesTeam/ [11:05:21] unfortunately they still use SVN :-) [11:10:49] Change merged: Dzahn; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47047 [11:13:02] Change merged: Dzahn; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47518 [11:14:01] paravoid: OHMFG [11:14:06] Change merged: Dzahn; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47048 [11:14:08] I am never going to use svn [11:14:09] :-D [11:17:05] and hmm [11:22:14] bbl [11:38:08] PROBLEM - MySQL Replication Heartbeat on db32 is CRITICAL: CRIT replication delay 200 seconds [11:39:55] RECOVERY - MySQL Replication Heartbeat on db32 is OK: OK replication delay 0 seconds [11:44:34] PROBLEM - MySQL Replication Heartbeat on db53 is CRITICAL: CRIT replication delay 199 seconds [11:44:43] PROBLEM - MySQL Slave Delay on db53 is CRITICAL: CRIT replication delay 204 seconds [11:46:11] New patchset: Silke Meyer; "Replace deprecated pollForChanges.php on Wikidata client with dispatchChanges.php on repo" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47576 [11:53:16] RECOVERY - MySQL Replication Heartbeat on db53 is OK: OK replication delay 0 seconds [11:53:35] RECOVERY - MySQL Slave Delay on db53 is OK: OK replication delay 0 seconds [11:57:55] PROBLEM - Puppet freshness on ms1004 is CRITICAL: Puppet has not run in the last 10 hours [11:57:55] PROBLEM - Puppet freshness on msfe1002 is CRITICAL: Puppet 
has not run in the last 10 hours [11:57:55] PROBLEM - Puppet freshness on ocg3 is CRITICAL: Puppet has not run in the last 10 hours [11:57:55] PROBLEM - Puppet freshness on virt1004 is CRITICAL: Puppet has not run in the last 10 hours [11:57:55] PROBLEM - Puppet freshness on vanadium is CRITICAL: Puppet has not run in the last 10 hours [11:59:52] PROBLEM - Puppet freshness on professor is CRITICAL: Puppet has not run in the last 10 hours [12:02:16] New patchset: Hashar; "(bug 44061) initial release" [operations/debs/python-voluptuous] (master) - https://gerrit.wikimedia.org/r/44408 [12:28:53] New patchset: Dzahn; "install planet.wm SSL cert instead of star.wm (RT-4468/3481)" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47578 [12:39:00] PROBLEM - Puppet freshness on cp3020 is CRITICAL: Puppet has not run in the last 10 hours [12:43:30] New patchset: Dzahn; "add star.planet.wm SSL cert (RT-3481/4468)" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47579 [12:44:29] Change merged: Dzahn; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47579 [12:47:35] New patchset: Dzahn; "install star.planet.wm SSL cert instead of star.wm (RT-4468/3481)" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47578 [12:48:21] Change merged: Dzahn; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47578 [12:52:01] New patchset: Dzahn; "fix changed planet SSL cert name for relationship from 'Apache_module[rewrite]'" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47580 [12:52:34] Change merged: Dzahn; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47580 [13:15:13] mutante: you wake up too early :-D [13:22:39] PROBLEM - MySQL Replication Heartbeat on db53 is CRITICAL: CRIT replication delay 191 seconds [13:22:39] PROBLEM - MySQL Slave Delay on db53 is CRITICAL: CRIT replication delay 191 seconds [13:24:19] RECOVERY - MySQL Replication Heartbeat on db53 is OK: OK replication delay 0 seconds [13:24:19] RECOVERY - MySQL Slave Delay on db53 is OK: OK replication delay 0 seconds [13:47:04] New patchset: Silke Meyer; "Enable memcached in Wikidata's LocalSettings files" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47585 [13:53:04] New review: Dzahn; "nice" [operations/puppet] (production); V: 2 C: 2; - https://gerrit.wikimedia.org/r/47409 [13:53:05] Change merged: Dzahn; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47409 [13:56:00] New patchset: Hashar; "Jenkins job validation (DO NOT SUBMIT)" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47588 [13:57:41] New patchset: Hashar; "Jenkins job validation (DO NOT SUBMIT)" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47588 [13:58:17] Change abandoned: Hashar; "yeah it catches the typo" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47588 [14:39:39] New review: Reedy; "Lol, probably not needed, but whatever." [operations/mediawiki-config] (master); V: 2 C: 2; - https://gerrit.wikimedia.org/r/47558 [14:39:47] Change merged: Reedy; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/47558 [14:42:43] Change abandoned: Hashar; "Chad said "dont bother"" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/46931 [14:43:20] morning all! [14:43:27] !log gallium : removed mercurial and mercurial-common [14:43:27] RobH, are you west coast or east coast these days? 
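hashar's packaging workflow for python-voluptuous, described earlier (reuse an existing debian/ directory, run lintian, build with debuild), corresponds roughly to the commands below. The exact "magic arguments" are not given in the log, so the flags are only illustrative, as is the git-buildpackage invocation that Faidon's review asks to switch to:

```
sudo apt-get install devscripts lintian          # debuild, lintian and friends
debuild -us -uc                                  # build the package without signing
lintian ../python-voluptuous_*_amd64.changes     # policy/style checks on the result

# git-buildpackage route suggested in the review (options are a sketch):
git-buildpackage -us -uc      # newer versions spell this `gbp buildpackage`
```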
[14:43:38] (if west, I will leave you alone about analytics1001 until later :p ) [14:47:55] ottomata: west [14:48:03] hokay, thanks :) [14:48:23] Is it EQIAD physical work you need doing? [14:50:04] yes, analytics1001 is eqiad. servers with 10xx indicate eqiad [14:50:19] i dont think Rob is there atm [14:53:30] aye yeah, i don't thikn analytics1001 needs physical help [14:53:32] its online now [14:53:35] i'm just trying to reinstall it [14:53:37] and it won't PXE boot properly [14:53:42] analytics1007 needs physical help [14:53:45] but that is a different issue [14:54:56] do you think it changed its MAC address or is it more complex [14:56:23] i don't think so, i'm not sure, ahhh, but I haven't actually checked brewster logs while trying [14:56:29] so I think I have more I can do first [14:56:41] RobH just told me he'd look into it, so I'm bugging him [14:57:54] yea, checking brewster log for an error is a good idea [14:59:19] ok, another question for you while I have your attention [14:59:25] every messed with ldap + shell accounts before? [14:59:29] i have an issue on analytics nodes [14:59:32] in labs? [14:59:41] (nope) [14:59:42] where puppet can't add new user shell accounts [14:59:53] because the analytics nodes use ldap to authenticate for certain services [14:59:55] so [15:00:00] yes as in "i saw that error" [15:00:09] and noticed it conflicts [15:00:12] spetrea (stefan petrea, contractor) [15:00:12] yeah [15:00:19] we're trying to give him an account [15:00:26] but usermod uses ldap to check user stuff [15:00:33] i think [15:00:37] i'm not sure what's happening actually [15:00:40] i saw puppet was trying to change the UID for that user [15:00:41] puppet uses usermod [15:00:43] right [15:00:53] but it could not, because it has a different one in LDAP [15:00:55] it looks to puppet like he exists [15:01:00] but not in /etc/passwd [15:01:09] so puppet issues usermod commands to modify entries in /etc/passwd [15:01:12] but he's not there [15:01:13] so it borks [15:01:31] yea, one first thought was to manually add him to /etc/passwd [15:01:36] and then let puppet fix the UID as it wants to [15:01:46] but thats all i have [15:01:55] yeah, but that's pretty hacky, i'd have to do it for 27 nodes, and we'd have to do it for every new user [15:02:03] and, apparently we disabled a user account recently [15:02:05] and its trying to remove him [15:02:15] because it looks like he is in ldap [15:02:19] but he's not in /etc/passwd [15:02:20] so it borks [15:02:43] basically for user accounts, puppet xor ldap [15:02:56] yea, i dont think you can really mix it [15:03:08] pick one of them [15:03:41] that's fine [15:03:53] Ryan_Lane was suggesting I delete all the shell accounts at some point [15:04:02] and what, just manage ssh keys with puppet I guess? [15:04:17] as long as /home//.ssh/authorized_keys exists [15:04:20] logins should work? [15:04:26] yea, that's what we do in production [15:04:32] in production? [15:04:40] yea, well, exists and has the right permissions [15:04:45] are there other non labs nodes that use ldap? [15:05:15] hmm.. office [15:05:30] office? [15:05:35] office wiki? [15:05:40] office IT [15:05:42] oh [15:05:51] they're hooked into the same ldap? [15:05:57] hm [15:06:03] not puppetized, I suppose? 
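The conflict being described — puppet thinks the user already exists because the name resolves through LDAP, while /etc/passwd has no local entry for it — can be seen directly with getent, which consults every source listed in nsswitch.conf. A quick diagnostic sketch, using the username from the discussion above:

```
getent passwd spetrea          # returns an entry if any NSS source (here LDAP) knows the user
grep '^spetrea:' /etc/passwd   # returns nothing if there is no local entry

# /etc/nsswitch.conf decides which sources are consulted, e.g.:
#   passwd:  compat ldap
```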
[15:07:11] not thatm i am aware of but Yossie might be working on it rightr now [15:07:27] there is also ldap.pp in puppet manifests but that is not the same one [15:07:41] that is the one used on Labs [15:08:07] yeah, i think I am using that to set up the ldap connection [15:08:14] but since shell users are outside of ldap [15:08:16] puppet is angry [15:08:24] i mean, i could modify unixaccount to be smarter [15:08:26] probably [15:08:55] actually, I htink if I made it manage ssh keys without first requiring the unixaccount, it would be fine [15:09:41] i would do it with puppet just like we do on other production servers for now and then bring up the LDAP issue on a list for general discussion if/how that should be changed [15:10:52] hm [15:10:59] yeah totally, this is a bigger change than just me [15:11:01] hmmmm [15:11:08] i guess I can try to make spetrea's account manually like you suggested now [15:11:10] and see if that works [15:11:14] and then deal with other users later [15:11:29] good morning cmjohnson1! this is your friendly weekly poke about analytics1007 :) [15:11:58] (oo, I feel bad that I just typed that, I just saw that you entered the room just now and thought about it. You probably just opened up your compy for the morning. I hate being barraged when I first get online!) [15:12:08] hi ottomata: thx ...not going to get to it this week...have a bunch of new apaches [15:12:14] ottomata: i mean, would it hurt if you remove him from that LDAP for now.. and just let puppet finish the job? [15:12:21] ok that's cool! [15:12:26] but then labs wouldn't work for him, right? [15:12:36] i could maybe disable ldap temporarily [15:12:40] i am not sure which ldap you are using [15:12:48] cmjohnson1, thanks anyway, I'm just going to poke occasionally :) [15:12:50] ah..yea.. [15:12:54] hmm [15:13:20] lemme try just adding him manually first, i'll copy his /etc/passwd record from elsewhere and see [15:41:41] PROBLEM - MySQL Slave Delay on db32 is CRITICAL: CRIT replication delay 186 seconds [15:42:44] PROBLEM - MySQL Replication Heartbeat on db32 is CRITICAL: CRIT replication delay 195 seconds [15:52:02] RECOVERY - MySQL Slave Delay on db32 is OK: OK replication delay 28 seconds [15:53:23] RECOVERY - MySQL Replication Heartbeat on db32 is OK: OK replication delay 0 seconds [16:03:44] PROBLEM - Puppet freshness on db1047 is CRITICAL: Puppet has not run in the last 10 hours [16:11:41] PROBLEM - Puppet freshness on mw1128 is CRITICAL: Puppet has not run in the last 10 hours [16:12:23] !log dist-upgrades for payments systems [16:20:21] Jeff_Green: no morebots [16:20:38] sadness [16:20:38] what happened? [16:20:50] dunno, just noticed [16:20:59] oic [16:21:28] well, you can use the newly create feature of bot to notify you, when morebots come back lol [16:21:45] ha [16:22:43] @seenrx morebots [16:22:50] @seen-on [16:22:50] Seen is now enabled in the channel [16:22:53] @seenrx morebots [16:22:53] petan: Last time I saw morebots they were quitting the network with reason: Ping timeout: 245 seconds at 2/5/2013 6:43:53 AM (09:38:59.9480860 ago) (multiple results were found: labs-morebots, labs-morebots_) [16:23:09] 9 hours :/ [16:23:34] did it get a new autodeath feature? [16:23:49] dunno, but the version we have in labs dies a lot [16:26:57] hear hear [16:27:03] i don't seem to have an account on wikitech... [16:28:11] er? [16:28:16] on the wiki? seriously? 
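ottomata's idea of puppetizing only the ssh key, without first requiring the unixaccount resource, could look roughly like this — a sketch only, not the real unixaccount definition, and the key material is a placeholder:

```
# Assumes the account itself is resolvable (via LDAP or otherwise);
# only the authorized key is managed by puppet.
ssh_authorized_key { 'spetrea':
  ensure => present,
  user   => 'spetrea',
  type   => 'ssh-rsa',
  key    => 'AAAAB3...placeholder...',
}
```

"Exists and has the right permissions" for the key file amounts to mode 700 on ~/.ssh and mode 600 on authorized_keys, owned by the user.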
[16:28:40] according to wikitech the wiki, morebots is a flaky-ass python irc client running on the host wikitech [16:28:46] oh [16:28:52] so /home/w/docs for the root pwd for linode [16:29:01] for wikitech that is [16:29:05] remember how it's off site etc... [16:29:13] k [16:29:47] uhh [16:30:01] Jeff_Green: you want a wiki account? [16:30:04] or you want shell on linode? [16:30:12] shell. i've got a wiki account [16:30:22] i'm just gonna attempt to follow the directions to restart the bot [16:30:22] ahh, nm then, just reading backscroll [16:30:31] yeah it's easy [16:30:36] good to know how [16:31:13] login is failing me [16:31:21] I suppose it's sudo service adminbot restart [16:31:26] um [16:31:32] wikitech,wikimedia.org [16:31:38] ssh wikitech@wikitech.wikimedia.org [16:31:44] no [16:31:45] root [16:31:48] wikitech@wikitech.wikimedia.org's password: [16:32:08] well that's just confusing [16:32:16] you are using passwords on prod o.o [16:32:19] considering from the doc: Username: wikitech [16:32:24] we are using them for off site [16:32:32] hm [16:32:41] why not ssh keys? [16:32:57] root worked. thx [16:33:01] could set up some key not related to anything we have I guess [16:33:02] yw [16:33:48] we're concerned about people stealing key agents? [16:34:15] well it's on a third party hosted box so we don't want the key to be anything actually in use on prod [16:34:32] apergos you know you can have multiple keys :P [16:34:45] yes, Ihave multiple keys right now [16:34:50] I guess all of ops does [16:35:04] so it appears to be running [16:35:08] yes [16:35:15] killing... [16:35:16] so it's probably been netsplitted or something [16:35:20] yep, that's the way [16:35:22] unfortunatelly having too many of them will make logins on some systems fail [16:35:56] can use .ssh/config to choose the right key for the context [16:36:13] which I do [16:36:18] and again I expect most ops does [16:36:31] yawp [16:36:38] wheee! [16:36:48] !log restarted morebots :-P [16:36:51] Logged the message, Master [16:37:03] yay [16:37:12] !log dist-upgrades for payments systems [16:37:13] Logged the message, Master [16:37:15] so yeah it will restart via the wrapper script which checks once a minute or something [17:00:35] PROBLEM - check_mysql on payments1004 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:00:35] PROBLEM - check_mysql on payments1003 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:00:36] PROBLEM - check_mysql on payments1002 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:01:12] the payments replag is expected, due to reboots... 
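The ".ssh/config to choose the right key for the context" approach mentioned above looks roughly like this; the host pattern and key filename are illustrative. IdentitiesOnly also works around the "too many keys makes logins fail" problem, since ssh otherwise offers every loaded key until the server gives up:

```
# ~/.ssh/config
Host *.wikimedia.org
    IdentityFile ~/.ssh/id_rsa_wmf    # illustrative filename
    IdentitiesOnly yes                # offer only this key to these hosts
```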
[17:05:31] PROBLEM - check_mysql on payments1004 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:05:32] PROBLEM - check_mysql on payments1003 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:05:32] PROBLEM - check_mysql on payments1002 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:05:36] !log authdns update [17:05:37] Logged the message, Master [17:10:37] PROBLEM - check_mysql on payments1002 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:10:38] PROBLEM - check_mysql on payments1004 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:10:38] PROBLEM - check_mysql on payments1003 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:15:34] PROBLEM - check_mysql on payments1002 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:15:34] PROBLEM - check_mysql on payments1003 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:15:34] PROBLEM - check_mysql on payments1001 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:15:35] PROBLEM - check_mysql on payments1004 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:16:02] PROBLEM - check_apache2 on payments1 is CRITICAL: PROCS CRITICAL: 0 processes with command name apache2 [17:17:29] Jeff_Green: some payments are complaining :-) [17:17:34] lol thx [17:17:45] i should have muted them... [17:17:47] I guess you already received the page or are doing some maintenance [17:17:55] just making sure you know about them :-] [17:17:57] that's all from rebooting payments1 [17:18:44] ah [17:19:02] I know nagios has a concept of services dependencies, where you can skip some services check if the parent service/host is dead [17:19:20] rlly? 
[17:19:24] but that require to know about the service topology to generate the config files with the proper dependencies [17:19:25] yeah [17:19:40] so you can attach your servers to the switch serving them [17:19:51] right [17:19:53] if the switch goes down, the hosts are ignored and it only report about the switch being down [17:19:57] same goes on for services [17:20:07] the mysql master being dead could be the parent of the slave checks [17:20:22] PROBLEM - check_mysql on payments1002 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:20:23] RECOVERY - check_apache2 on payments1 is OK: PROCS OK: 8 processes with command name apache2 [17:20:23] PROBLEM - check_mysql on payments1004 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:20:23] PROBLEM - check_mysql on payments1001 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:20:23] PROBLEM - check_mysql on payments1003 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:20:24] right right [17:20:28] but that is not easy to properly set up cause you need to maintain a topology of all your hosts and services [17:20:36] :-D [17:20:53] right, and the timings of when it notices states changing would matter I imagine [17:21:19] yeah I can't remember how it is managed [17:21:45] I guess a service being down is in a SOFT state (not reporting), the parent is checked, if the parent is ok then the service goes to HARD state which trigger the notification [17:21:53] else if the parent is dead, the service notification is ignored [17:21:55] something like that [17:22:05] one day I will have a look at the nagios conf :-] [17:22:05] makes sense [17:22:12] it's pretty awful really [17:23:53] ah here it is http://nagios.sourceforge.net/docs/3_0/dependencies.html [17:23:58] garg. I think now it's paging on a bug in the test module actually [17:25:13] nope. replication choked on a query [17:25:37] PROBLEM - check_mysql on payments1004 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:25:38] PROBLEM - check_mysql on payments1001 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:25:38] PROBLEM - check_mysql on payments1002 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:25:38] PROBLEM - check_mysql on payments1003 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:25:41] grrr. [17:26:13] * Damianz gives green a cookie [17:26:36] Error 'You cannot 'ALTER' a log table if logging is enabled' on query. Default database: 'mysql'. 
Query: 'ALTER TABLE slow_log [17:26:40] sbernardin: rt4477 [17:26:51] would be nice if it gave me the full query :-$ [17:30:37] RECOVERY - check_mysql on payments1003 is OK: Uptime: 3766 Threads: 1 Questions: 14314 Slow queries: 12 Opens: 650 Flush tables: 1 Open tables: 48 Queries per second avg: 3.800 Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 0 [17:30:44] PROBLEM - check_mysql on payments1004 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:30:45] PROBLEM - check_mysql on payments1001 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:30:46] RECOVERY - check_mysql on payments1002 is OK: Uptime: 2662 Threads: 2 Questions: 12264 Slow queries: 12 Opens: 650 Flush tables: 1 Open tables: 47 Queries per second avg: 4.607 Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 0 [17:32:30] * Damianz wonders if http://devopsreactions.tumblr.com/post/35908831679/listening-to-someone-saying-that-he-skipped-a-record [17:34:09] Damianz: ya think? [17:34:57] puuuuug [17:35:02] puuugly [17:35:22] RECOVERY - check_mysql on payments1004 is OK: Uptime: 6947 Threads: 3 Questions: 14022 Slow queries: 11 Opens: 649 Flush tables: 1 Open tables: 47 Queries per second avg: 2.018 Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 0 [17:35:22] PROBLEM - check_mysql on payments1001 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:35:43] looks like the mysql package update produced a slavedb-choking query [17:40:19] PROBLEM - check_mysql on payments1001 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [17:40:20] New patchset: Nemo bis; "Add WMFI English blog" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47602 [17:40:30] RAGE. [17:41:02] Against the machine? [17:45:25] PROBLEM - check_mysql on payments1001 is CRITICAL: SLOW_SLAVE CRITICAL: Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 2174 [17:50:31] RECOVERY - check_mysql on payments1001 is OK: Uptime: 3112 Threads: 5 Questions: 9577 Slow queries: 13 Opens: 597 Flush tables: 1 Open tables: 64 Queries per second avg: 3.077 Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 0 [18:20:13] PROBLEM - check_mysql on payments1004 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [18:20:14] PROBLEM - check_mysql on payments1003 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [18:20:41] wtf. I set those not to notify! [18:20:51] well, i got notified :-p [18:21:05] http://devopsreactions.tumblr.com/post/39118334785/carefully-examining-nagios-emails [18:21:16] that describes my morning [18:21:23] hahah. [18:25:19] RECOVERY - check_mysql on payments1004 is OK: Uptime: 9947 Threads: 3 Questions: 25101 Slow queries: 77 Opens: 1366 Flush tables: 1 Open tables: 64 Queries per second avg: 2.523 Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 0 [18:25:20] PROBLEM - check_mysql on payments1003 is CRITICAL: Slave IO: Yes Slave SQL: No Seconds Behind Master: (null) [18:25:20] PROBLEM - MySQL Slave Delay on db53 is CRITICAL: CRIT replication delay 185 seconds [18:25:57] ha! that one isn't mine. 
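The dependency mechanism discussed above (and documented at the linked nagios.sourceforge.net page) is expressed with servicedependency objects. A rough sketch for the payments case, with illustrative host and service names — and, as noted in the conversation, it is only worth doing if the service topology is actually maintained:

```
define servicedependency {
    host_name                       payments1               ; the "parent": the master DB host
    service_description             check_mysql_master      ; illustrative service name
    dependent_host_name             payments1001,payments1002,payments1003,payments1004
    dependent_service_description   check_mysql
    notification_failure_criteria   w,u,c   ; suppress slave notifications while the master check is non-OK
}
```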
[18:28:47] PROBLEM - MySQL Slave Delay on db53 is CRITICAL: CRIT replication delay 187 seconds [18:29:40] PROBLEM - MySQL Replication Heartbeat on db53 is CRITICAL: CRIT replication delay 210 seconds [18:30:25] RECOVERY - check_mysql on payments1003 is OK: Uptime: 7366 Threads: 2 Questions: 26232 Slow queries: 40 Opens: 1587 Flush tables: 1 Open tables: 64 Queries per second avg: 3.561 Slave IO: Yes Slave SQL: Yes Seconds Behind Master: 0 [18:43:11] hiya paravoid, apt/.deb question if you are about: [18:43:18] i'm responding to this ticket [18:43:19] https://rt.wikimedia.org/Ticket/Display.html?id=4474 [18:43:25] 1. [18:43:29] is it ok if I just do that? [18:43:45] 2. reprepro doesn't seem like the distribution the package is built with: [18:59:37] RECOVERY - MySQL Replication Heartbeat on db53 is OK: OK replication delay 1 seconds [18:59:55] RECOVERY - MySQL Slave Delay on db53 is OK: OK replication delay 0 seconds [19:13:20] !log gallium : updating Zuul by cherry-picking a set of changes that let us fetch tags. ff79197..1854e32 [19:13:26] Logged the message, Master [19:17:12] !log gallium : Zuul updated! [19:17:13] Logged the message, Master [19:25:07] RECOVERY - swift-account-reaper on ms-be3 is OK: PROCS OK: 1 process with regex args ^/usr/bin/python /usr/bin/swift-account-reaper [19:30:23] PROBLEM - swift-account-reaper on ms-be3 is CRITICAL: PROCS CRITICAL: 0 processes with regex args ^/usr/bin/python /usr/bin/swift-account-reaper [19:50:18] window move up [19:58:20] ottomata: replied to the ticket [19:58:49] paravoid, which one? [19:58:58] python-jsonschema [19:59:24] (must take a sec...) [20:00:51] Why must wordpress update every other week [20:00:52] whyyyyy [20:01:09] (i know why, its hacky and secruity issue prone, but still.) [20:02:32] paravoid, got it, thanks [20:04:36] !log updating the blog software, here goes nothing [20:04:37] Logged the message, RobH [20:07:14] !log kaldari Started syncing Wikimedia installation... : [20:07:15] Logged the message, Master [20:11:46] PROBLEM - MySQL Replication Heartbeat on db33 is CRITICAL: CRIT replication delay 193 seconds [20:12:22] PROBLEM - MySQL Slave Delay on db33 is CRITICAL: CRIT replication delay 203 seconds [20:12:27] !log blog update sucessful, caching plugin update nonsuccessful, rolling it back to older plugin [20:12:28] Logged the message, RobH [20:15:24] paravoid: hi again, how do you want to proceed for the wikimedia module ? :-D [20:15:50] paravoid: should I just forgot about it and move my contint stuff under a contint module ? :-D [20:19:16] RECOVERY - Apache HTTP on mw1100 is OK: HTTP OK - HTTP/1.1 301 Moved Permanently - 0.094 second response time [20:21:54] !log kaldari Finished syncing Wikimedia installation... : [20:21:55] Logged the message, Master [20:31:39] hashar: hey [20:33:18] paravoid: yeah there :-] [20:34:46] yeah, let's do contint module now I think [20:35:07] then move to wikimedia as a submodule if we see a clear line on where goes what [20:35:15] okk [20:35:19] not just contint I'd say [20:35:31] but also jenkins for the jenkins parts, zuul for the zuul parts etc. [20:35:50] you mean one module per software ? 
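The "distribution the package is built with" remark above presumably refers to the Distribution: field in the package's .changes file, which reprepro checks against the distributions configured on the apt host before accepting an include. A sketch of the ops-side step — base directory, distribution name and package name here are illustrative, not values from the log:

```
reprepro -b /srv/wikimedia ls somepackage
reprepro -b /srv/wikimedia include precise-wikimedia somepackage_1.0-1_amd64.changes
# 'include' is refused if the Distribution: in the .changes file does not match
# a distribution defined in conf/distributions; --ignore=wrongdistribution is
# the usual (deliberate) override.
```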
[20:37:00] one generic module per software package preferrably, yes [20:37:09] plus contint that layers on top of them [20:37:27] plus a role class to instantiate them into gallium [20:38:27] okay leaving for now [20:38:39] bye [20:38:41] paravoid: will refactor again :-] [20:38:44] paravoid: *wave* [20:38:46] sorry [20:38:52] it is ok :-] [20:38:56] just need to know +1 or -1 hehe [20:39:05] I dotn care about refactoring over and over [20:39:20] ;-] [20:40:11] New review: Hashar; "So it turns out it is better to have one module per software with some role classes :-] Abandonning..." [operations/puppet] (production); V: 0 C: 0; - https://gerrit.wikimedia.org/r/43420 [20:47:36] New patchset: Ori.livneh; "Update EventLogging dependencies" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47621 [20:51:25] paravoid: yt? I'm wondering in which repo under operations/debs to put debs for EventLogging depedencies. Should I be requesting a new repository (or repositories)? [20:53:07] ori-l, i think you need to create a new one [20:53:12] for each package [20:57:28] heads up: soon we will perform a mobile deployment, and while we try to avoid it as hell we still could need a Varnish flush - will someone be around in 1-2 hours to do that? [21:00:22] ottomata: who/where should I ask for one? [21:00:31] hmm, i think I can create [21:00:46] ottomata: 'python-jsonschema', if so :) [21:00:52] shoudl I make one: operations/debs/python-jsonschema [21:00:53] ok [21:04:15] yup [21:04:29] done [21:04:35] i'm not sure about permissions stuff though [21:04:35] https://gerrit.wikimedia.org/r/gitweb?p=operations/debs/python-jsonschema.git;a=summary [21:04:41] check to see if you can push [21:04:50] it doesn't have a repo there yet [21:04:54] so you can push whatever you need [21:07:00] <^demon> ottomata: Generally speaking, new repositories not at the top level can be expected to inherit in a somewhat sane manner :) [21:07:08] <^demon> I've got most of the parents with sane acls these days. [21:07:14] cool [21:07:20] i had it inherit from operations/debs [21:07:23] which just inherits from all projects [21:07:29] <^demon> Yeah, operations/debs/* is fine. [21:07:42] <^demon> operations/* needs a parent acl though. Never got around to that. [21:09:20] ottomata, ^demon: thanks! [21:12:45] !log analytics1001 going down for troubleshooting/reinstallation [21:12:47] Logged the message, RobH [21:14:10] AHHHHH uh oh [21:15:39] phew, no prob Rob [21:15:49] i thought the other anlaytics guys were demoing some stuff they needed it for [21:15:49] PROBLEM - Host analytics1001 is DOWN: PING CRITICAL - Packet loss = 100% [21:15:50] but they are done [21:15:52] phew [21:15:53] thanks! [21:17:57] New patchset: RobH; "disable moodbar on enwiki per bz44688" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/47624 [21:20:20] ottomata: So analytics1001 is reinstalling via pxe now [21:21:05] cool! [21:21:08] did you do anything? [21:21:12] or did it just magically work? [21:21:40] RECOVERY - Host analytics1001 is UP: PING OK - Packet loss = 0%, RTA = 26.51 ms [21:23:59] ottomata: magic. [21:24:13] and the boot order was set to PXE only [21:24:19] so it looks like every reboot would reinstall... [21:25:09] ottomata: nope; can't push. 
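paravoid's layout — generic jenkins and zuul modules, a contint module layering WMF-specific glue on top, and a role class to instantiate it all on gallium — might look roughly like this. Every class name below is illustrative, not what eventually landed in operations/puppet:

```
# modules/jenkins, modules/zuul  -> generic, reusable modules
# modules/contint                -> WMF-specific CI glue on top of them
class role::ci::master {
    include ::jenkins
    include ::zuul
    include ::contint::packages
}

node 'gallium.wikimedia.org' {
    include role::ci::master
}
```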
[21:25:13] hmmmmmmmm, but i never got it to boot PXE at all [21:25:20] it would only reboot os [21:25:21] Change merged: RobH; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/47624 [21:25:28] hmm [21:25:52] PROBLEM - SSH on analytics1001 is CRITICAL: Connection refused [21:27:31] I think the repository needs to be initialized with a .gitreview file by someone in ops before I can git-review patches [21:27:59] hmmmm, i've never had tha tproblem [21:28:05] RoanKattouw_away: damn, yer away! [21:28:06] last week I created operations/debs/kafka [21:28:09] blehhh. [21:28:11] ottomata: because you're in ops :) [21:28:16] anyone know if sync-common-all is supposed to actually work? [21:28:30] yeahhhhh, but operations/debs inherits from all projects [21:28:38] maybe it was because I was project owner [21:28:40] hmm [21:28:53] !log robh synchronized wmf-config/InitialiseSettings.php [21:28:55] Logged the message, Master [21:29:28] ottomata: https://gerrit.wikimedia.org/r/#/admin/projects/operations/debs,access [21:29:42] yeah i'm looking at that [21:29:51] owner is ops? is that what that means [21:29:55] ^demon? [21:30:05] <^demon> Yep. [21:30:32] hm, so what should I do to make it so ori-l can push & git-review to this repo? [21:30:36] That's fine; I would need to have someone in ops review my commits anyway. But I think an initial commit with .gitreview is required. [21:30:50] <^demon> You can submit a .gitreview for review :) [21:31:00] hmmmm, i don't think so, i was able to push to ops/debs/kafka with no git-review [21:31:04] i've never pushed a .gitreview vile [21:31:06] buuut, i dunno [21:31:08] I always just type [21:31:10] git-review [21:31:11] and magic happens [21:31:20] <^demon> `git push origin HEAD:refs/for/master` ;-) [21:31:37] ^demon: that'll work even with no .gitreview? [21:31:46] <^demon> Yup, it's just standard git. [21:32:06] New patchset: Ori.livneh; "Initial commit" [operations/debs/python-jsonschema] (master) - https://gerrit.wikimedia.org/r/47651 [21:32:07] <^demon> "push HEAD to refs/for/master on origin" [21:32:13] Yes, yes it does. [21:32:42] Change merged: Demon; [operations/debs/python-jsonschema] (master) - https://gerrit.wikimedia.org/r/47651 [21:32:43] New patchset: Reedy; "Bug 44688 - Please disable MoodBar extension on English Wikipedia" [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/47656 [21:33:14] !log reedy synchronized wmf-config/InitialiseSettings.php 'Bug 44688 - Please disable MoodBar extension on English Wikipedia' [21:33:15] Logged the message, Master [21:33:17] Change merged: Reedy; [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/47656 [21:33:21] I just want to say how happy i am that we arent all making local shell changes to config files and stepping on one another anymore [21:33:28] yay gerrit [21:33:32] ^demon: ^ [21:33:37] haha [21:33:39] <^demon> ori-l, ottomata: I set "Automatically resolve conflicts" and "Require change-id" on that new repo. [21:33:39] indeed! [21:33:41] yay puppet! [21:33:44] <^demon> Generally, they should always be on. [21:33:46] ok [21:33:48] <^demon> (Can't wait until that inherits) [21:34:22] <^demon> RobH: Well, puppet was you guys, and Gerrit was all Ryan's fault ;-) [21:34:27] <^demon> I just keep the lights on. 
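The "just standard git" route ^demon describes, for a brand-new repository that has no .gitreview yet (the ssh user is a placeholder; 29418 is Gerrit's usual ssh port):

```
git clone ssh://<user>@gerrit.wikimedia.org:29418/operations/debs/python-jsonschema.git
cd python-jsonschema
# add files (debian/, .gitreview, ...), commit, then push for review:
git push origin HEAD:refs/for/master
# once a .gitreview is merged, plain `git review` does the same thing
```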
[21:34:31] :D [21:34:36] it's really awesome when you don't require ids, then turn on requiring ids and gerrit refuses any change (YAY) [21:34:36] riiiiiiiigt [21:34:44] *riiiiiiiiight [21:34:54] ^demon: as if you had nothing to do with it at all ;) [21:34:55] well, in this case im speaking to mediawikiconfig [21:35:00] so nonpuppet [21:35:12] ^demon: you do know I tried it because you mentioned it at some point, right? [21:35:22] <^demon> Damianz: This is why I turn it on at creation time. And also, "I can't wait until it inherits." [21:35:27] <^demon> Ryan_Lane: I did? Logs plz. [21:35:42] <^demon> Man, I totally thought this was your fault and then I became advocate #1. [21:35:44] it was quite a while before I tried it [21:35:51] you mentioned it in passing [21:36:21] you hadn't recommended it or anything, but I hadn't known about it before then [21:36:25] <^demon> I mention *lots* of things in passing. You have to learn to disregard most of those dude ;-) [21:36:31] :D [21:36:44] to be fair, I looked at a bunch of options [21:37:05] I still think the choice was correct :) [21:37:07] * RobH falls asleep waiting on cisco memory check [21:37:22] <^demon> You know what would rock? If Gitblit did CR and not just repo/acl management. [21:37:34] <^demon> Gitblit does *everything* except the CR bit. Which is kind of the most important bit. [21:37:39] heh [21:39:01] dammmit [21:39:03] New patchset: Ori.livneh; "Correct project ref in .gitreview" [operations/debs/python-jsonschema] (master) - https://gerrit.wikimedia.org/r/47661 [21:39:03] New patchset: Ori.livneh; "Initial commit of debian/" [operations/debs/python-jsonschema] (master) - https://gerrit.wikimedia.org/r/47662 [21:39:09] ottomata: so it reboots into installer only...wtf [21:39:24] New patchset: Hashar; "move contint packages under a submodule" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47663 [21:39:25] New patchset: Hashar; "Jenkins module created out of contint manifests" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47664 [21:39:25] New patchset: Hashar; "cleanout testswarm from the manifests" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47665 [21:40:09] New review: Demon; "Wow. You can tell I review .gitreview files without even looking." [operations/debs/python-jsonschema] (master); V: 2 C: 2; - https://gerrit.wikimedia.org/r/47661 [21:40:35] Change merged: Demon; [operations/debs/python-jsonschema] (master) - https://gerrit.wikimedia.org/r/47661 [21:40:43] RobH [21:40:43] yeah [21:40:46] ^demon: thanks :) [21:41:14] this is the devil machine. [21:41:19] <^demon> ori-l: yw. Fun fact: there's work been going on to one day eliminate those .gitreview files entirely, and making git-review smarter. Will be a nice day. [21:41:19] PROBLEM - Host analytics1001 is DOWN: PING CRITICAL - Packet loss = 100% [21:41:20] analytics1001 obstinate [21:41:22] is* [21:42:15] ^demon: oh, cool. [21:44:24] Ryan_Lane: I'm adding you to the review request since it looks like you recently debianized some Python deps, but don't worry about it if you're too busy. 
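With "Require change-id" turned on, pushes whose commit messages lack a Change-Id footer get rejected; the usual remedy is to install Gerrit's commit-msg hook and amend (again with a placeholder ssh user):

```
scp -p -P 29418 <user>@gerrit.wikimedia.org:hooks/commit-msg .git/hooks/
git commit --amend --no-edit    # re-runs the hook, which appends a Change-Id footer
```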
[21:47:37] New patchset: Ori.livneh; "Update EventLogging dependencies" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47621 [21:48:51] New review: Hashar; "going to split that in different modules ;-D" [operations/puppet] (production); V: -1 C: -1; - https://gerrit.wikimedia.org/r/43429 [21:49:24] im getting a 503 from varnish when trying to create an account through test.m.wikipedia.org - is anyone available to look into this? it's unclear to me whether or not this is a varnish problem or mobilefrontend problem [21:53:28] RECOVERY - SSH on analytics1001 is OK: SSH OK - OpenSSH_5.9p1 Debian-5ubuntu1 (protocol 2.0) [21:53:37] RECOVERY - Host analytics1001 is UP: PING OK - Packet loss = 0%, RTA = 26.45 ms [21:59:10] PROBLEM - Host analytics1001 is DOWN: PING CRITICAL - Packet loss = 100% [21:59:37] PROBLEM - Puppet freshness on ms1004 is CRITICAL: Puppet has not run in the last 10 hours [21:59:37] PROBLEM - Puppet freshness on msfe1002 is CRITICAL: Puppet has not run in the last 10 hours [21:59:37] PROBLEM - Puppet freshness on ocg3 is CRITICAL: Puppet has not run in the last 10 hours [21:59:37] PROBLEM - Puppet freshness on virt1004 is CRITICAL: Puppet has not run in the last 10 hours [21:59:37] PROBLEM - Puppet freshness on vanadium is CRITICAL: Puppet has not run in the last 10 hours [22:00:53] !log analytics1001 respects my authority. [22:00:56] Logged the message, RobH [22:01:05] RobH: {{cn}} [22:01:09] ahha [22:01:09] nice [22:01:22] !log analytics1001 reinstalled, not puppet signed, back online [22:01:23] Logged the message, RobH [22:01:30] so RobH, that was it: set boot order, full shutdown, then power on? [22:01:34] PROBLEM - Puppet freshness on professor is CRITICAL: Puppet has not run in the last 10 hours [22:01:53] set boot order, reinstall, reset boot order to disk first, power back up [22:02:03] hmm, i never got it to PXE boot anyway [22:02:03] the cisco seems to lack the 'one time boot' option the dells have. [22:02:09] i would do [22:02:12] oh, it was pxe booting by default when i took it over [22:02:12] set boot-order pxe [22:02:14] commit [22:02:21] power cycle [22:02:27] i would see it try to pxe boot, but it would never get installer [22:02:28] i have issues making boot order changes stick in cli [22:02:28] RECOVERY - Host analytics1001 is UP: PING OK - Packet loss = 0%, RTA = 26.45 ms [22:02:35] oh? hm [22:02:35] i have to set via web gui or it doesnt always take [22:02:38] hmmm, ok [22:02:45] which means ya gotta have a proxy to lan setup [22:02:50] yeah i've got that [22:02:55] or i've done it before [22:08:23] !log analytics1007 coming down for troubleshooting, disregard nagios errors [22:08:23] Logged the message, RobH [22:11:27] Change abandoned: Ori.livneh; "This is on hold for the moment." [operations/mediawiki-config] (master) - https://gerrit.wikimedia.org/r/42892 [22:12:47] PROBLEM - Host analytics1007 is DOWN: PING CRITICAL - Packet loss = 100% [22:14:20] sigh @ ori-l [22:14:37] * MaxSem runs scap [22:15:28] Nemo_bis: for the moment! [22:16:33] sure, all is temporary [22:16:37] till the sun explodes [22:26:40] the sun's not going to explode [22:28:26] !log maxsem Started syncing Wikimedia installation... : https://www.mediawiki.org/wiki/Extension:MobileFrontend/Deployments/2013-02-05 [22:28:27] Logged the message, Master [22:29:01] PROBLEM - NTP on analytics1001 is CRITICAL: NTP CRITICAL: No response from NTP server [22:30:31] TimStarling: orly? 
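RobH's recipe — PXE first for the reinstall, then disk back first so later reboots don't wipe the box — maps onto the management CLI commands quoted above roughly as follows. Exact scopes and option spellings vary between Cisco firmware versions, so treat this purely as a sketch:

```
# during reinstall:
set boot-order pxe,hdd
commit
# (power cycle, let the installer run)

# afterwards, so a reboot boots the OS instead of reinstalling:
set boot-order hdd,pxe
commit
```

As noted in the chat, if these changes don't stick from the CLI, setting the order through the web GUI (via a proxy to the management LAN) is the fallback.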
[22:30:48] I must have lost some Nature update [22:31:11] it will just expand into a red giant, engulfing the earth, then collapse into a white dwarf [22:31:27] this matches several definitions of "explosion" [22:31:51] * TimStarling checks dictionary [22:32:08] check, check ;) [22:36:14] New review: Tim Starling; "Has this got something to do with the long-term plan of running the whole of Wikipedia from Jimmy's ..." [operations/puppet] (production); V: 0 C: 0; - https://gerrit.wikimedia.org/r/46907 [22:37:21] New review: Demon; "I suppose in a hypothetical world where people have the entire production deployment system mirrored..." [operations/puppet] (production) C: 0; - https://gerrit.wikimedia.org/r/46907 [22:40:34] PROBLEM - Puppet freshness on cp3020 is CRITICAL: Puppet has not run in the last 10 hours [22:44:19] !log maxsem Finished syncing Wikimedia installation... : https://www.mediawiki.org/wiki/Extension:MobileFrontend/Deployments/2013-02-05 [22:44:20] Logged the message, Master [22:45:31] New patchset: Tim Starling; "Add a --verbose parameter to mw-update-l10n" [operations/puppet] (production) - https://gerrit.wikimedia.org/r/46907 [22:47:57] New review: Tim Starling; "THERE I FIXED IT" [operations/puppet] (production); V: 0 C: 0; - https://gerrit.wikimedia.org/r/46907 [22:50:06] New review: Tim Starling; "I think the mwscript wrapper should probably do an automatic sudo if the user is incorrect, for conv..." [operations/mediawiki-config] (master); V: 0 C: -1; - https://gerrit.wikimedia.org/r/44200 [22:55:50] New review: Tim Starling; "The cgroup should be on all MediaWiki servers, since shell processes such as texvc and lilypond are ..." [operations/puppet] (production); V: 0 C: -1; - https://gerrit.wikimedia.org/r/40784 [23:01:54] New review: Aaron Schulz; "Yeah I thought about that but couldn't think of crons or stuff that does not use sudo -u apache. Tho..." [operations/mediawiki-config] (master); V: 0 C: 1; - https://gerrit.wikimedia.org/r/44200 [23:08:27] RECOVERY - Host analytics1007 is UP: PING OK - Packet loss = 0%, RTA = 26.45 ms [23:16:29] Change merged: Andrew Bogott; [operations/puppet] (production) - https://gerrit.wikimedia.org/r/47576 [23:17:09] RECOVERY - MySQL Slave Delay on db33 is OK: OK replication delay 0 seconds [23:18:31] RECOVERY - MySQL Replication Heartbeat on db33 is OK: OK replication delay 0 seconds [23:26:29] !log maxsem synchronized php-1.21wmf9/extensions/MobileFrontend/javascripts/modules/mf-photo.js 'https://gerrit.wikimedia.org/r/#/c/47670/' [23:26:30] Logged the message, Master [23:29:51] !log maxsem synchronized php-1.21wmf8/extensions/MobileFrontend/javascripts/modules/mf-photo.js 'https://gerrit.wikimedia.org/r/#/c/47670/' [23:29:52] Logged the message, Master [23:31:48] TimStarling: lol [23:31:55] os x on scap, that will be the day [23:44:34] ottomata: re .gitreview, it works for you locally even if it's not in the repo yet. So you can copy it from another repo, edit it to suit your project, then git add, git commit, git review [23:51:36] https://gerrit.wikimedia.org/r/#/c/45599/ https://gerrit.wikimedia.org/r/#/c/45598/ - Some gitignore/gitreview additions if someone could please submit them ;) [23:56:39] I also have an easy merge to ask, a very simple trivial pseudo-urgent planet URL addition :) https://gerrit.wikimedia.org/r/#/c/47602/
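Tim's suggestion that the mwscript wrapper "do an automatic sudo if the user is incorrect" could be sketched like this; it is not the real multiversion wrapper, and the apache user name is simply the one mentioned in Aaron's reply:

```
#!/bin/bash
# Re-exec as the apache user when invoked as someone else.
if [ "$(id -un)" != "apache" ]; then
    exec sudo -u apache -- "$0" "$@"
fi
# ... normal mwscript behaviour would follow here ...
```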