[00:00:46] ** now I'm in ** [00:00:57] thanks for your assistance [00:01:01] I have learnt a lot! [00:01:26] and hopefully saved your time... [00:01:32] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [00:02:02] Just for info: I did not re-run puppet [00:02:23] Ryan_Lane: do _you_ have further question to _me_? [00:02:33] questions [00:06:52] bye bye & good night [00:07:11] tschüss ("bye") [00:16:32] PROBLEM Current Load is now: WARNING on parsoid-roundtrip3.pmtpa.wmflabs 10.4.0.62 output: WARNING - load average: 7.08, 7.02, 5.64 [00:18:53] PROBLEM Current Load is now: CRITICAL on andrewistesting.pmtpa.wmflabs 10.4.1.85 output: Connection refused by host [00:19:32] PROBLEM Disk Space is now: CRITICAL on andrewistesting.pmtpa.wmflabs 10.4.1.85 output: Connection refused by host [00:20:12] PROBLEM Free ram is now: CRITICAL on andrewistesting.pmtpa.wmflabs 10.4.1.85 output: Connection refused by host [00:21:42] PROBLEM Total processes is now: CRITICAL on andrewistesting.pmtpa.wmflabs 10.4.1.85 output: Connection refused by host [00:22:22] PROBLEM dpkg-check is now: CRITICAL on andrewistesting.pmtpa.wmflabs 10.4.1.85 output: Connection refused by host [00:24:32] RECOVERY Disk Space is now: OK on andrewistesting.pmtpa.wmflabs 10.4.1.85 output: DISK OK [00:25:12] RECOVERY Free ram is now: OK on andrewistesting.pmtpa.wmflabs 10.4.1.85 output: OK: 91% free memory [00:26:42] RECOVERY Total processes is now: OK on andrewistesting.pmtpa.wmflabs 10.4.1.85 output: PROCS OK: 83 processes [00:27:22] RECOVERY dpkg-check is now: OK on andrewistesting.pmtpa.wmflabs 10.4.1.85 output: All packages OK [00:28:52] RECOVERY Current Load is now: OK on andrewistesting.pmtpa.wmflabs 10.4.1.85 output: OK - load average: 0.05, 0.54, 0.51 [00:28:53] PROBLEM Current Load is now: WARNING on ve-roundtrip2.pmtpa.wmflabs 10.4.0.162 output: WARNING - load average: 9.13, 8.44, 6.09 [00:31:32] PROBLEM host: 
deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [00:37:52] RECOVERY Free ram is now: OK on integration-jobbuilder.pmtpa.wmflabs 10.4.0.21 output: OK: 21% free memory [00:39:52] RECOVERY Free ram is now: OK on conventionextension-trial.pmtpa.wmflabs 10.4.0.165 output: OK: 22% free memory [00:41:33] RECOVERY Free ram is now: OK on nova-precise2.pmtpa.wmflabs 10.4.1.57 output: OK: 22% free memory [00:53:37] Ryan_Lane: Hey, do you know if the 'no getting actual text from revision' limitation will be in force for the labs as it was for the TS? IIRC that was a technological/disk space limitation? [00:53:59] it's because we don't replicate external store [00:55:20] Ah. I'm getting the question already. Is it something that could conceivably be put on the roadmap or not (i.e.: some way to access it) or would it give direct access to suppressed revisions? [00:56:13] (The alternative generally used is to get it from the API through HTTP) [00:56:14] Personally I would much rather have a good way to get page text... [00:56:16] I really don't know [00:56:21] it's not on the current roadmap [00:56:31] it would take a lot of storage, for sure [00:56:56] and I'm not sure that even replicating external store would be the best way of getting it [00:57:21] * Coren puts asking the right questions to the right people on his to-do list. [00:57:52] PROBLEM Free ram is now: WARNING on conventionextension-trial.pmtpa.wmflabs 10.4.0.165 output: Warning: 13% free memory [00:57:54] Thanks Coren [00:57:58] I'm off for the night [00:58:04] Night sumanah [00:58:04] bye all! 
[00:59:32] PROBLEM Free ram is now: WARNING on nova-precise2.pmtpa.wmflabs 10.4.1.57 output: Warning: 17% free memory [01:00:43] Change on mediawiki a page Wikimedia Labs/Tools Lab was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=649866 edit summary: [+143] +revision text [01:01:46] hi, can someone tell me how to scp a file while logged in on instance openid-wiki to openid-wiki2:/tmp? [01:01:53] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [01:01:54] this does not work: scp mw-20130221.tgz wikinaut@openid-wiki.instance-proxy.wmflabs.org:/tmp [01:02:21] ssh: connect to host openid-wiki.instance-proxy.wmflabs.org port 22: Connection timed out [01:02:52] openid-wiki.pmtpa.wmflabs.org ? [01:03:53] RECOVERY Current Load is now: OK on ve-roundtrip2.pmtpa.wmflabs 10.4.0.162 output: OK - load average: 4.40, 3.74, 4.84 [01:04:27] ok, solved [01:04:41] "scp mw-20130221.tgz wikinaut@openid-wiki.pmtpa.wmflabs:/tmp" [01:04:56] a great "self service" here [01:05:52] PROBLEM Free ram is now: WARNING on integration-jobbuilder.pmtpa.wmflabs 10.4.0.21 output: Warning: 17% free memory [01:06:32] RECOVERY Current Load is now: OK on parsoid-roundtrip3.pmtpa.wmflabs 10.4.0.62 output: OK - load average: 4.51, 3.75, 4.86 [01:08:15] Wikinaut: The instance-proxy is meant to be used from the /outside/ -- you're inside. :-) [01:08:25] yes yes yes [01:08:33] danke ("thanks"), ty [01:08:42] merci, bedankt ("thanks") [01:16:52] PROBLEM Current Load is now: WARNING on ve-roundtrip2.pmtpa.wmflabs 10.4.0.162 output: WARNING - load average: 8.07, 6.41, 5.57 [01:18:39] Ryan_Lane: the openid-wiki2 is now a perfect clone of the openid-wiki. 
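The scp exchange above boils down to a rule of thumb: inside the labs network, address instances by their internal FQDN; the instance-proxy name is for outside use. A minimal sketch (the `labs_host` helper name is hypothetical; the hostname patterns are taken from the log):

```shell
# Rule of thumb from the log: from another labs instance, use the internal
# FQDN (<instance>.pmtpa.wmflabs); the instance-proxy address only works
# from outside the labs network.
labs_host() {
  instance="$1"   # e.g. openid-wiki2
  vantage="$2"    # "inside" or "outside" the labs network
  if [ "$vantage" = "inside" ]; then
    echo "${instance}.pmtpa.wmflabs"
  else
    echo "${instance}.instance-proxy.wmflabs.org"
  fi
}

# From a labs instance, for example:
#   scp mw-20130221.tgz "wikinaut@$(labs_host openid-wiki2 inside):/tmp"
```

The same rule explains the later wget timeout: fetching an `instance-proxy.wmflabs.org` URL from inside the network goes nowhere.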
Both running current core and extension:OpenID code [01:19:16] for those who want to try http://openid-wiki2.instance-proxy.wmflabs.org/wiki/Main_Page and http://openid-wiki.instance-proxy.wmflabs.org/wiki/Main_Page [01:19:32] PROBLEM Current Load is now: WARNING on parsoid-roundtrip3.pmtpa.wmflabs 10.4.0.62 output: WARNING - load average: 7.78, 6.58, 5.66 [01:25:53] PROBLEM Current Load is now: WARNING on parsoid-roundtrip6-8core.pmtpa.wmflabs 10.4.0.222 output: WARNING - load average: 6.35, 6.04, 5.42 [01:32:02] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [01:33:52] PROBLEM Current Load is now: CRITICAL on nova-salt-minion1.pmtpa.wmflabs 10.4.1.86 output: Connection refused by host [01:34:32] PROBLEM Disk Space is now: CRITICAL on nova-salt-minion1.pmtpa.wmflabs 10.4.1.86 output: Connection refused by host [01:35:12] PROBLEM Free ram is now: CRITICAL on nova-salt-minion1.pmtpa.wmflabs 10.4.1.86 output: Connection refused by host [01:35:52] RECOVERY Current Load is now: OK on parsoid-roundtrip6-8core.pmtpa.wmflabs 10.4.0.222 output: OK - load average: 2.18, 3.97, 4.83 [01:36:42] PROBLEM Total processes is now: CRITICAL on nova-salt-minion1.pmtpa.wmflabs 10.4.1.86 output: Connection refused by host [01:37:22] PROBLEM dpkg-check is now: CRITICAL on nova-salt-minion1.pmtpa.wmflabs 10.4.1.86 output: Connection refused by host [01:44:32] PROBLEM Disk Space is now: CRITICAL on nova-salt-minion3.pmtpa.wmflabs 10.4.1.88 output: Connection refused by host [01:45:12] PROBLEM Free ram is now: CRITICAL on nova-salt-minion3.pmtpa.wmflabs 10.4.1.88 output: Connection refused by host [01:45:52] PROBLEM Current Load is now: CRITICAL on nova-salt-minion3.pmtpa.wmflabs 10.4.1.88 output: Connection refused by host [01:46:42] PROBLEM Total processes is now: CRITICAL on nova-salt-minion3.pmtpa.wmflabs 10.4.1.88 output: Connection refused by host [01:47:22] PROBLEM dpkg-check is now: CRITICAL on 
nova-salt-minion3.pmtpa.wmflabs 10.4.1.88 output: Connection refused by host [02:02:33] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [02:04:23] RECOVERY Current Load is now: OK on parsoid-roundtrip3.pmtpa.wmflabs 10.4.0.62 output: OK - load average: 2.07, 2.32, 4.18 [02:33:22] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [02:39:32] RECOVERY Disk Space is now: OK on nova-salt-minion3.pmtpa.wmflabs 10.4.1.88 output: DISK OK [02:40:12] RECOVERY Free ram is now: OK on nova-salt-minion3.pmtpa.wmflabs 10.4.1.88 output: OK: 89% free memory [02:40:52] RECOVERY Current Load is now: OK on nova-salt-minion3.pmtpa.wmflabs 10.4.1.88 output: OK - load average: 0.19, 0.61, 0.51 [02:41:42] RECOVERY Total processes is now: OK on nova-salt-minion3.pmtpa.wmflabs 10.4.1.88 output: PROCS OK: 84 processes [02:42:22] RECOVERY dpkg-check is now: OK on nova-salt-minion3.pmtpa.wmflabs 10.4.1.88 output: All packages OK [02:51:51] andrewbogott_afk: I can't seem to access labs? [02:52:56] andrewbogott_afk: I should still have access as a volunteer [03:03:22] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [03:33:22] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [03:58:08] preilly: Lemme check. [03:59:29] preilly: What's your labs console username? 
[04:03:23] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [04:34:44] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [04:37:53] RECOVERY Free ram is now: OK on conventionextension-trial.pmtpa.wmflabs 10.4.0.165 output: OK: 21% free memory [04:39:33] RECOVERY Free ram is now: OK on nova-precise2.pmtpa.wmflabs 10.4.1.57 output: OK: 23% free memory [04:40:14] RECOVERY Free ram is now: OK on sube.pmtpa.wmflabs 10.4.0.245 output: OK: 22% free memory [04:40:54] RECOVERY Free ram is now: OK on integration-jobbuilder.pmtpa.wmflabs 10.4.0.21 output: OK: 21% free memory [04:41:44] RECOVERY Free ram is now: OK on mediawiki-bugfix-kozuch.pmtpa.wmflabs 10.4.0.26 output: OK: 27% free memory [04:50:52] PROBLEM Free ram is now: WARNING on conventionextension-trial.pmtpa.wmflabs 10.4.0.165 output: Warning: 13% free memory [05:03:53] PROBLEM Free ram is now: WARNING on integration-jobbuilder.pmtpa.wmflabs 10.4.0.21 output: Warning: 17% free memory [05:04:43] PROBLEM Free ram is now: WARNING on mediawiki-bugfix-kozuch.pmtpa.wmflabs 10.4.0.26 output: Warning: 19% free memory [05:06:02] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [05:07:32] PROBLEM Free ram is now: WARNING on nova-precise2.pmtpa.wmflabs 10.4.1.57 output: Warning: 19% free memory [05:08:12] PROBLEM Free ram is now: WARNING on sube.pmtpa.wmflabs 10.4.0.245 output: Warning: 14% free memory [05:36:02] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [05:41:53] PROBLEM Free ram is now: WARNING on bots-nr1.pmtpa.wmflabs 10.4.1.2 output: Warning: 19% free memory [05:56:52] RECOVERY Free ram is now: OK on bots-nr1.pmtpa.wmflabs 10.4.1.2 output: OK: 20% free memory [06:06:02] PROBLEM host: 
deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [06:14:52] PROBLEM Free ram is now: WARNING on bots-nr1.pmtpa.wmflabs 10.4.1.2 output: Warning: 19% free memory [06:23:53] PROBLEM Current Load is now: WARNING on parsoid-roundtrip6-8core.pmtpa.wmflabs 10.4.0.222 output: WARNING - load average: 9.66, 8.50, 6.18 [06:30:42] PROBLEM Total processes is now: WARNING on parsoid-roundtrip7-8core.pmtpa.wmflabs 10.4.1.26 output: PROCS WARNING: 152 processes [06:31:52] PROBLEM Total processes is now: WARNING on parsoid-roundtrip5-8core.pmtpa.wmflabs 10.4.0.125 output: PROCS WARNING: 151 processes [06:32:32] PROBLEM Current Load is now: WARNING on parsoid-roundtrip3.pmtpa.wmflabs 10.4.0.62 output: WARNING - load average: 9.48, 9.11, 6.80 [06:34:52] PROBLEM Current Load is now: WARNING on ve-roundtrip2.pmtpa.wmflabs 10.4.0.162 output: WARNING - load average: 10.29, 9.59, 6.96 [06:36:02] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [06:40:43] RECOVERY Total processes is now: OK on parsoid-roundtrip7-8core.pmtpa.wmflabs 10.4.1.26 output: PROCS OK: 147 processes [06:43:51] Ryan_Lane: time for a short chat? [06:45:29] Someone here who can explain to me why: [06:45:51] On instance openid-wiki I cannot "wget http://openid-wiki2.instance-proxy.wmflabs.org/wiki/Main_Page" [06:46:11] Connecting to openid-wiki.instance-proxy.wmflabs.org (openid-wiki.instance-proxy.wmflabs.org)|208.80.153.147|:80... failed: Connection timed out [06:46:15] Why? 
[06:46:52] RECOVERY Total processes is now: OK on parsoid-roundtrip5-8core.pmtpa.wmflabs 10.4.0.125 output: PROCS OK: 146 processes [06:49:48] ok, I must use the external address [06:49:50] solved [07:06:03] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [07:16:12] PROBLEM Total processes is now: WARNING on bastion1.pmtpa.wmflabs 10.4.0.54 output: PROCS WARNING: 152 processes [07:26:12] RECOVERY Total processes is now: OK on bastion1.pmtpa.wmflabs 10.4.0.54 output: PROCS OK: 148 processes [07:36:02] PROBLEM host: deployment-cache-upload-test2.pmtpa.wmflabs is DOWN address: 10.4.1.55 CRITICAL - Host Unreachable (10.4.1.55) [07:44:52] RECOVERY Free ram is now: OK on bots-nr1.pmtpa.wmflabs 10.4.1.2 output: OK: 22% free memory [07:52:53] PROBLEM Free ram is now: WARNING on bots-nr1.pmtpa.wmflabs 10.4.1.2 output: Warning: 19% free memory [08:07:52] RECOVERY Free ram is now: OK on bots-nr1.pmtpa.wmflabs 10.4.1.2 output: OK: 20% free memory [08:08:52] PROBLEM Current Load is now: CRITICAL on deployment-cache-upload-test5.pmtpa.wmflabs 10.4.1.86 output: Connection refused by host [08:09:32] PROBLEM Disk Space is now: CRITICAL on deployment-cache-upload-test5.pmtpa.wmflabs 10.4.1.86 output: Connection refused by host [08:09:52] RECOVERY Current Load is now: OK on ve-roundtrip2.pmtpa.wmflabs 10.4.0.162 output: OK - load average: 2.52, 2.59, 4.56 [08:10:14] PROBLEM Free ram is now: CRITICAL on deployment-cache-upload-test5.pmtpa.wmflabs 10.4.1.86 output: Connection refused by host [08:11:44] PROBLEM Total processes is now: CRITICAL on deployment-cache-upload-test5.pmtpa.wmflabs 10.4.1.86 output: Connection refused by host [08:12:24] PROBLEM dpkg-check is now: CRITICAL on deployment-cache-upload-test5.pmtpa.wmflabs 10.4.1.86 output: Connection refused by host [08:12:24] RECOVERY Current Load is now: OK on parsoid-roundtrip3.pmtpa.wmflabs 10.4.0.62 output: OK - load average: 3.58, 3.59, 4.94 
[08:13:54] RECOVERY Current Load is now: OK on deployment-cache-upload-test5.pmtpa.wmflabs 10.4.1.86 output: OK - load average: 1.04, 1.05, 0.58 [08:14:34] RECOVERY Disk Space is now: OK on deployment-cache-upload-test5.pmtpa.wmflabs 10.4.1.86 output: DISK OK [08:15:13] RECOVERY Free ram is now: OK on deployment-cache-upload-test5.pmtpa.wmflabs 10.4.1.86 output: OK: 94% free memory [08:16:43] RECOVERY Total processes is now: OK on deployment-cache-upload-test5.pmtpa.wmflabs 10.4.1.86 output: PROCS OK: 90 processes [08:17:24] RECOVERY dpkg-check is now: OK on deployment-cache-upload-test5.pmtpa.wmflabs 10.4.1.86 output: All packages OK [08:37:32] RECOVERY Free ram is now: OK on nova-precise2.pmtpa.wmflabs 10.4.1.57 output: OK: 23% free memory [08:38:12] RECOVERY Free ram is now: OK on sube.pmtpa.wmflabs 10.4.0.245 output: OK: 22% free memory [08:38:52] RECOVERY Free ram is now: OK on integration-jobbuilder.pmtpa.wmflabs 10.4.0.21 output: OK: 22% free memory [08:39:42] RECOVERY Free ram is now: OK on mediawiki-bugfix-kozuch.pmtpa.wmflabs 10.4.0.26 output: OK: 27% free memory [08:40:52] RECOVERY Free ram is now: OK on conventionextension-trial.pmtpa.wmflabs 10.4.0.165 output: OK: 22% free memory [08:49:02] RECOVERY Current Load is now: OK on parsoid-roundtrip6-8core.pmtpa.wmflabs 10.4.0.222 output: OK - load average: 1.82, 2.04, 4.75 [09:00:33] PROBLEM Free ram is now: WARNING on nova-precise2.pmtpa.wmflabs 10.4.1.57 output: Warning: 18% free memory [09:00:53] PROBLEM Free ram is now: WARNING on bots-nr1.pmtpa.wmflabs 10.4.1.2 output: Warning: 19% free memory [09:01:53] PROBLEM Free ram is now: WARNING on integration-jobbuilder.pmtpa.wmflabs 10.4.0.21 output: Warning: 17% free memory [09:03:53] PROBLEM Free ram is now: WARNING on conventionextension-trial.pmtpa.wmflabs 10.4.0.165 output: Warning: 13% free memory [09:04:50] I think there is a problem with ram [09:06:00] what if I move the nagios out of this channel? [09:06:07] someone against? 
[09:06:12] PROBLEM Free ram is now: WARNING on sube.pmtpa.wmflabs 10.4.0.245 output: Warning: 14% free memory [09:11:33] !log nagios moved bot to nagios channel [09:11:35] Logged the message, Master [09:12:43] PROBLEM Free ram is now: WARNING on mediawiki-bugfix-kozuch.pmtpa.wmflabs 10.4.0.26 output: Warning: 19% free memory [09:13:43] screwed my instance again :( [09:13:58] who [09:14:09] ano ("yes") [09:14:13] forgot to load my ssh key ahah [09:14:18] mhm [09:18:40] omg [09:19:07] so yeah [09:19:08] hmm [09:19:25] I am trying to get the upload cache to be a varnish box instead of a squid one :-] [09:20:31] ok [09:20:40] you think it will speed up the cluster? [09:20:49] hopefully :D [09:24:36] nop :-] [09:24:53] the root cause is the PHP files being on Gluster which is kind of slow [09:25:04] the solution is to have the PHP files on /mnt on the apache boxes [09:25:07] git-deploy solved that nicely [09:25:16] but we are apparently not going to have git-deploy anytime soon [09:25:32] so I guess I will have to adapt the existing deployment tools [09:30:58] why [09:31:48] @seen sumanah [09:31:48] petan: Last time I saw sumanah they were leaving the channel #wikimedia-dev at 2/21/2013 12:58:07 AM (08:33:41.5197140 ago) [09:31:56] @notify sumanah [09:31:56] I will notify you, when I see sumanah around here [09:55:23] hashar are you coming to Amsterdam? [09:55:25] :o [09:55:32] didn't see you on a list [09:57:18] I don't know yet [09:57:23] depends on the WMF budget [09:57:54] since I am not going to Wikimania and I have a cheap/direct flight from my city to AMS, I hope to be attending [10:08:36] !log wiktionary-tools apt-get install p7zip [10:08:37] Logged the message, Master [10:08:44] Hi [10:09:50] !log wiktionary-tools apt-get install p7zip-full [10:09:51] Logged the message, Master [10:09:56] That's better [10:11:26] !log account-creation-assistance All instances need reboots after `apt-get` upgrade. Initiating now. 
[10:11:27] Logged the message, Master [10:11:30] Thank you, labs-morebots. [12:35:21] @notify Platonides [12:35:21] This user is now online in #huggle so I will let you know when they show some activity (talk etc) [13:14:27] Platonides how is work on tool labs going? [13:15:14] Coren: ^ [13:15:33] Coren starts on Monday [13:16:24] I tried to puppetize the basic apache config [13:16:31] although it hasn't been reviewed yet [13:16:42] and there are no per-tool uids [13:16:54] I suppose that's one of the first things Coren will do [13:17:17] Platonides: petan have you seen Coren's TODO list? [13:17:31] https://www.mediawiki.org/wiki/Wikimedia_Labs/Tools_Lab [13:17:52] Platonides ok if you want I could help with some of these, if you give me sysadmin there... [13:20:08] you are already a project admin there [13:20:16] oh really? [13:20:17] ok [13:20:35] I will start clearing that list out then, but some of these can't be solved right now [13:20:37] I was going to add you when I saw that you were in the list [13:21:06] btw I disagree with merging tools and bots - both projects are completely different [13:21:22] or, I don't see any advantage [13:21:57] for tools with both bots and a web interface [13:22:07] petan: there is an advantage [13:22:09] gluster [13:22:13] we already have web interface on bots [13:22:27] the vms where bots run would have to be different than those where webapps run, of course [13:22:53] sumanah I don't understand [13:23:01] gluster is in any project where they want it [13:23:13] Platonides ok, but what is that advantage? [13:23:28] the bots with web interface such as cluebot already work in bots project [13:23:40] I can't imagine a bot which would need to have direct access to any of the tools instances [13:23:45] petan: Biggest problem with 2 separate Labs projects -- gluster volumes are [13:23:46] per-project. Which means you can't cross Labs projects. 
We can put in [13:23:46] manual changes to make that work, but easier to combine them [13:23:51] (from some notes from a meeting last week) [13:24:03] sumanah why would you want to have same storage for both projects? [13:24:22] in my point of view it's more secure to have them separated, then if some bot breaks, it wouldn't be able to break tools [13:24:48] it's very easy to set up some local interface in case some of bots would need to access resources in the other project [13:24:49] Then please say that on labs-l or someplace where people can have a reasoned discussion about it, including Silke & Coren [13:25:08] I have never seen such a discussion on labs-l I could respond to [13:25:15] if I did I would have [13:25:17] Then start it, please [13:25:39] Change on mediawiki a page Wikimedia Labs/Tools Lab was modified, changed by Petrb link https://www.mediawiki.org/w/index.php?diff=650093 edit summary: [+140] comment [13:26:16] petan: or you could state your reasoning https://www.mediawiki.org/w/index.php?title=Talk:Wikimedia_Labs/Tools_Lab&action=edit&redlink=1 :-) [13:27:58] petan: Whether they are separate or a single project, the tools will be compartmentalized anyways. Having exactly one set of management tools and subsystems to manage, however, doubles MTBF and halves downtime. [13:28:38] * sumanah leaves this discussion to petan, Platonides, & Coren :-) [13:28:55] no madman :( [13:29:07] * Coren is on first coffee, and has high latency atm. [13:29:13] Coren what is MTBF, I see advantage of management tool for tools project, but still... why merge them? [13:29:23] btw jeremyb_ check out http://www.opentech2013.org/ on 30 March in case you want to tell your pals [13:29:26] Mean Time Between Failure. [13:29:44] between failure and what? recovery? [13:29:44] sumanah: are you in SF this weekend? coming to NYU? [13:30:02] I will see whether I can come to NYU. [13:30:14] (Between Failures, I think) [13:30:29] petan: Between failure and failure. 
It's the primary criterion of reliability; "how often does something break" [13:30:30] hrmmm, did i know about opentech already? idk [13:30:43] interesting that they chose to put a year in the domain name [13:31:04] sumanah: i see your name! [13:31:16] Coren I think we should discuss this on labs-l but I still see very little advantages if any... even if you really wanted to use this management tool on bots project, you could do that while having 2 projects - you don't need to merge them in order to let the instances access each other [13:31:20] yes, I'm a speaker [13:31:39] imagine the number of instance in 1 project, that would be a mess [13:31:43] * instances [13:31:51] we already have almost 20 of them in bots now [13:31:56] and many others will be launched [13:32:02] huh, it's not free :/ [13:32:25] (not too crazy priced either but...) [13:32:38] petan: Yes, that's why this will be clustered so that how many instances is both variable, scalable, and not relevant to the users. :-) [13:33:06] > Sumana will introduces you to your Open Tech neighbors [13:33:17] something's wrong with that sentece [13:33:21] but there is a huge difference between instances which will be in tools project and bot project [13:33:21] sentence* [13:33:23] yep. feel free to report the bug. [13:33:40] what's the point of mixing entirely different things together [13:33:48] petan: Put another way. CorenSearchBot currently live{d|s} on bots-3. That I needed to know this is a design bug. :-) [13:34:03] bots-3 is a testing instance [13:34:04] do people actually say ECT (engineering community team) [13:34:05] petan: Because I don't agree that they are "entirely different" things to begin with. :-) [13:34:06] ? [13:34:44] Coren wait a moment what is the problem with corenbot what bug you mean? [13:35:00] petan: The webtool <-> bot distinction is artificial; different tools lie on a continuum between them. 
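Coren's "doubles MTBF and halves downtime" argument above can be made concrete with the standard reliability identities. A sketch (the constant-failure-rate model is my assumption, not something stated in the channel):

```latex
% Steady-state availability in terms of MTBF and MTTR (mean time to repair):
A = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}},
\qquad
\mathrm{MTBF} = \frac{1}{\lambda}
\quad \text{(constant failure rate } \lambda \text{)}.
```

Under that model, maintaining one set of management tools and subsystems instead of two independently failing sets roughly halves the combined failure rate (λ instead of 2λ), which doubles the effective MTBF and halves the expected downtime per unit time, λ · MTTR.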
[13:35:17] also, apparently the friendly space policy only applies if you have javascript enabled??! [13:35:40] oh, it's 10gen people, interesting [13:35:53] okay, but why same project? projects only create hierarchy - instances in different projects can see each other if you let them [13:36:10] you don't need to merge projects in order to let them access same resources [13:36:21] projects were created to make some structure [13:36:28] petan: CSBot lived on a specific instance. I shouldn't have a need to know that, nor indeed have it lie on a /specific/ instance. :-) [13:36:43] of course because bots project is not yet finished [13:36:49] there should be some scheduling in future [13:36:53] petan: Yes, but the structure here doesn't actually reflect any real division. [13:37:23] there is a division - from technical point of view instances for bots will be different from instances for web tools [13:37:29] petan: Why? [13:37:45] because web tools need different software and hardware than bots [13:37:50] petan: CSBot is a bot. It has a web interface. [13:37:56] many of bots do [13:38:11] it could have a web interface in webtools and core of bot could be on bots project [13:38:14] that is easily possible [13:38:18] petan: Lots/most webtools have bot-like tasks attached. [13:38:35] ok, then they can launch these tasks in bots project [13:38:39] from webtools project [13:38:52] projects are not isolated from each other, they just create structure [13:39:10] webtools need a lot of small apaches, such as beta cluster [13:39:12] petan: Sure, and then have to synchronize management between two projects with different members, permissions, structure? You're just adding something that can break for no discernable benefit. 
[13:39:19] bot tools need a number of heavy boxes [13:39:24] such as toolserver [13:39:34] not really [13:39:41] every user already exists in every project [13:39:55] only difference is access [13:40:10] if you are not in a project, your public key is not uploaded to shared gluster keys volume [13:40:14] petan: Not with project-specific UIDs. Or group memberships. [13:40:24] petan: Or a dozen other things. [13:40:25] there are no project-specific UIDs [13:40:30] it's all in ldap [13:40:42] your user exists in all projects with same groups and UID [13:40:43] petan: There will be, in different OUs. [13:40:50] petan: Not human users [13:41:00] petan: Sure, you /can/ share data between the instances [13:41:13] what? [13:41:17] petan: s/instances/projects/ [13:41:33] there is only one kind of account, that is one in ldap, no matter if it's for humans or not and that will exist on all projects since creation [13:41:33] I think I'm not clear on why petan believes it's actively beneficial to have two separate Labs projects [13:41:34] petan: But why separate projects to then have to merge everything back again? [13:41:53] If there are specific real benefits then I'd like to know them :) [13:42:00] it's more secure [13:42:11] bots project has own shared storage which can be only broken by bots [13:42:12] petan: Err, no, there will be service-users and service-groups added. [13:42:18] it can't affect the other project [13:42:27] instance list will be less mess [13:42:38] Coren service users are already in ldap as well [13:42:49] petan: They won't be soon. Too much pollution. [13:42:56] ok, so, there's a potential security benefit, and the discoverability & readability of the list of instances is an issue as well. What else, petan? [13:43:13] petan: There certainly won't be 500-odd new service users added to LDAP just because of the Tools Labs. 
(If the list of instances is currently hard to read, browse, filter, etc., then that's a UX question we should try to address) [13:43:33] sumanah the structure is easier to understand, both projects would have separate documentation which they need and separate SAL log [13:43:53] Coren did you discuss that with Ryan? [13:44:04] he didn't have this opinion last time I talked to him regarding that [13:44:10] petan: And why are you bringing instance discovery up again? Why would anyone care about tools-exec21 or webtools-exec17? [13:44:31] petan: Yes, that was discussed with Ryan at length. None of this comes out of a vacuum. :-) [13:44:52] petan: Endusers will interact with, maybe, at most 2 or 3 instances. [13:45:06] I don't care about discovery I am talking about hierarchy [13:45:17] a lot of instances in one project will be a huge mess [13:45:26] petan: could you please go into more detail regarding the necessity of separate SAL logs and why the documentation should be separated? [13:45:34] structure is useful, makes it easier for people to understand how things work [13:45:36] petan: also, when you say "a huge mess" please be more specific [13:45:48] petan: however, simplicity is also simple [13:45:51] I mean, useful [13:46:10] sumanah because both projects are about something entirely different? that's why we have separate logs on labs and not one huge log for everything [13:46:10] "things that used to be in the Toolserver are all now in this Labs project" -- very easy to understand [13:46:12] petan: What /is/ there to structure? [13:46:38] petan: perhaps you should address Coren's point -- he believes that there are many codebases & initiatives currently on Toolserver that combine aspects of "bots" and "webtools" [13:46:42] petan: Again, you assert "entirely different". I see no difference at all, let alone a significant one. 
sumanah one of disadvantages of toolserver was that it was a huge mess, I was hoping for making labs more structured and easier to understand [13:47:04] petan: I really think you should go into more detail regarding what use case you're trying to optimize [13:47:22] "huge mess" and "try to understand" are a bit vague. :-) [13:47:35] how are bots the same as webtools? a webtool is a tool letting you display something in a browser, it's a script [13:47:38] petan: Yes. Amongst the things that will make it simpler and easier to understand will be the fact that things aren't needlessly duplicated. [13:47:43] a bot is an application running in the background doing some task [13:47:49] long time or short time... or periodic [13:48:01] how can you even compare that? [13:48:23] Coren you don't need to duplicate anything [13:48:23] For the use case of a person moving something from the Toolserver, for that usecase, it is simpler to say "here's where you put it" -- 1 Labs project. [13:48:36] petan: Yes, and there are very, /very/ few tools that do not have both aspects on a continuum. It goes from "just a webpage" to "just heavy computing" with a continuum in between where most tools lie. [13:48:45] ok, it's easier for people who are already familiar with the mess that toolserver is [13:49:16] look, for example labs-morebots is in bots project [13:49:17] petan: you haven't actually explained your assertion "both projects would have separate documentation which they need" -- if you're talking about the information architecture of the documentation, perhaps you could explain what the tradeoffs are there? [13:49:27] what does it have in common with any of webtools? [13:49:30] petan: Perhaps, before you describe my initial architecture as "a mess", you should first wait to see it? [13:49:46] Coren I am describing toolserver architecture [13:49:47] not yours [13:49:56] petan: In the final architecture, "labs-morebots" has no reason to exist. 
[13:50:11] ok, if it didn't exist how would you log into sal from irc? [13:50:50] petan: What, what? [13:51:04] if the bot didn't exist, how would you log the messages there? [13:51:18] I think Coren doesn't mean that the functionality itself would go away. [13:51:25] Although I could be wrong. [13:51:26] like you have a project X with instance B and you want to !log X I just rebooted B for a patch of packages [13:51:47] how would you do that in your final architecture? [13:52:05] labs-morebots is a bot for labs in general not for bots project [13:52:05] I am a logbot running on i-0000015e. [13:52:06] Messages are logged to labsconsole.wikimedia.org/wiki/Server_Admin_Log. [13:52:06] To log a message, type !log . [13:52:13] o.o [13:52:23] By the way, petan, Coren, it really might be best to get this conversation onto Labs-l [13:52:32] so that Coren can respond on Monday, when he is actually being paid to do so [13:52:38] Heh. :-) [13:52:57] sumanah: I've been a volunteer for ~10 years, sumanah, I'm not going to worry about a few days. :-) [13:52:58] I am not paid at all and yet I am responding... [13:52:58] petan: you have a chance here to write something persuasive and change people's minds, but in order to do that you have to back up your claims with reasoned arguments [13:53:46] petan: and I appreciate your contributions. I just wanted to check whether, for instance, Coren might need to spend more of today wrapping up his last contract or whatever [13:54:09] I know the limiting factor isn't you, petan (please speak up if you don't know what the chemistry term "limiting factor" means) [13:54:13] petan: I know you're describing the toolserver mess. I intend to have something orders of magnitude cleaner (because I have the opportunity to plan). I am saying that having two projects is neither helpful nor necessary for that objective. 
:-) [13:54:43] that's where I disagree [13:54:49] I find it very helpful [13:55:13] petan: Clearly you do, but you don't have sufficient information to disagree yet, since I haven't posted my WIP architecture plan. [13:55:35] petan: Don't say "two projects is required" before you've seen what I do with one, okay? [13:55:52] I don't say required - I say it's better to have them [13:56:17] petan: And, again, you can't say "better than X" when you don't know what "X" is. [13:56:32] fair [13:56:56] Remember: I was a ts user. I know what the status quo looks like. :-) [13:58:43] I have four objectives. (a) reliable, (b) secure, (c) low-maintenance and (d) simple for the end users. [13:59:02] (roughly in that order) [13:59:26] but c and d imply it won't offer some stuff - making an interface simpler consists of removing advanced options [13:59:28] Give me until the middle of next week to finish drafting up the architectural plan before you pan it. :-) [14:00:17] petan: No, I don't do simplistic* Simple means that there is a low barrier to entry with reasonable defaults for someone with no desire to do complicated stuff; not that you can't do the fancy stuff at need. :-) [14:00:33] ok [14:03:08] petan: Amongst my plans is that the end users will interact with, at most, 2-3 instances total, one project, and one "unified" way of "I need a new tool" that doesn't need to know what kind of tool it is from a design standpoint (though the implementation will obviously vary) [14:03:47] that I can imagine for the webtools project [14:03:50] but not for bots [14:04:03] I.e.: All the tools will have a web component and a way to run scheduled or continuous tasks. For bots, the web component will be nothing but a 'tool is up' default web service. [14:04:16] ok, what about bots with high uptimes [14:04:18] like 100 days? [14:04:33] there is no scheduling for such [14:04:43] Like all other continuous tasks, they are shuffled to the exec cluster - as restartable tasks. 
[14:04:59] but what if they can't be restarted - restarting means loss of data [14:04:59] (checkpointing optional, but cool if they can support it) [14:06:00] I'm not sure I see the distinction. Tasks that do not need restarting won't be restarted barring a failure of the infrastructure. That doesn't change regardless of project structure? Or did you mean something else? [14:06:00] there are not only wikibots on the bots project [14:06:34] well, if someone wants to do maintenance on a bot that is running for many days, they will need to access the machine where the bot runs [14:06:48] your webinterface may be cool for newbies, but insufficient for some people [14:06:50] petan: Why access to the machine and not just the filesystem? [14:07:02] because you need to interact with the process somehow? [14:07:10] maybe the bot consists of multiple processes [14:07:19] which need to run on the same machine [14:07:28] petan: Do you expect many people will want to attach a GDB to the running process? Because that's the only scenario I can think of that has need of access to the actual exec node. [14:07:42] I can think of others [14:07:52] petan: Of course. That can also be dealt with through the same mechanism. [14:08:21] over a webinterface? [14:08:24] And, by the way, you can always ssh to "the exec host that is running job XXX" without having to know where that is. [14:08:49] a webinterface could be nice, though I doubt I can make one generic enough to replace a good ol' shell. [14:09:10] seriously, giving a webinterface to skilled unix users as an alternative to a terminal is like giving a soldier a bow as an alternative to a sniper rifle [14:09:36] not all tech guys prefer simple interfaces [14:09:41] petan: I've been a sysadmin for 20+ years. Do you *really* think I'd impose a web interface on *anyone*? :-) [14:10:03] Again, you're confusing "simple" with "simplistic" [14:10:12] Simplistic isn't in my goals. 
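The "restartable tasks with optional checkpointing" model Coren describes above could be sketched roughly as below. This is only an illustration of the idea, not actual Labs tooling; the file name, state layout, and function names are all invented for the example:

```python
import json
import os

CHECKPOINT = "checkpoint.json"  # hypothetical state file name

def load_state():
    # Resume from the last checkpoint if one exists, else start fresh.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"next_item": 0}

def save_state(state):
    # Write to a temp file and rename, so being killed mid-write
    # cannot leave a corrupt checkpoint behind.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def run(items, process):
    # A bot that checkpoints after each unit of work can be stopped
    # and rescheduled on any exec node without losing data: a restart
    # just calls run() again and skips everything already processed.
    state = load_state()
    for i in range(state["next_item"], len(items)):
        process(items[i])
        state["next_item"] = i + 1
        save_state(state)
```

A bot without such checkpointing is exactly the "can't be restarted, restarting means loss of data" case petan raises.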
:-) [14:10:24] ok [14:10:56] * addshore takes over bnr1 [14:11:05] addshore no way [14:11:08] it's too huge :D [14:11:14] petan: s/guys/people/ :-) [14:11:36] Hell, with the current draft design, if you write your heavy processing bot with parallel processing in mind, it will even be able to spread your task across compute nodes. That's hardly simplistic. :-) [14:11:41] i know :/ now that I fixed my memory leak problem I barely make a dent on the resources :/ [14:12:26] I'm going to go so that I can attend to my intern & prep the rest of my day. I presume the rest of this conversation will start after Coren shows his proposal and on labs-l where it belongs :) [14:12:29] yes fixing memory leaks is always cheaper than buying more memory [14:12:54] sumanah: Heh. The serious version, yes. :-) [14:12:59] later! [14:14:18] petan: Seriously though, if you have use cases where you actually need to be on the box running the processes beyond (a) filesystem access and (b) sending signals, I really want to hear about them so that I can plan accordingly. [14:15:36] petan: I also want to hear about use cases of tools that need several processes that live on the same box but aren't started/stopped as a unit. [14:15:46] imagine a case where the system consists of multiple processes [14:15:48] such as wm-bot [14:15:54] it has a bouncer, a core, some plugins [14:16:08] petan: How do the processes communicate with each other? [14:16:21] using sockets [14:16:27] petan: unix? [14:16:29] yes [14:16:41] they all live on the same machine [14:16:44] kk. So clearly they need to live on the same box unless they are tweaked. [14:16:50] they could of course use the network for that [14:16:56] but that wouldn't likely be that fast [14:17:17] Yeah, I'd rather require as few changes to the tools as possible; if the use case can be supported without changes = better for everyone. [14:17:35] So, okay, those all need to be running on the same box. 
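The reason unix sockets pin wm-bot's processes to one box can be illustrated with a minimal sketch. The socket path and message format here are invented for the demo and are not wm-bot's actual protocol:

```python
import os
import socket
import threading

SOCK_PATH = "/tmp/wm-bot-demo.sock"  # invented path for the demo

def core(ready):
    # The "core" listens on a Unix domain socket. Such sockets are
    # addressed through the local filesystem, which is exactly why
    # the bouncer, core, and plugins must share one machine; a TCP
    # socket would lift that restriction (at some latency cost).
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    conn.sendall(b"core got: " + conn.recv(1024))
    conn.close()
    srv.close()

def plugin(msg):
    # A "plugin" process connects via the same filesystem path.
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.connect(SOCK_PATH)
    c.sendall(msg)
    reply = c.recv(1024)
    c.close()
    return reply

def demo():
    ready = threading.Event()
    t = threading.Thread(target=core, args=(ready,))
    t.start()
    ready.wait()
    reply = plugin(b"hello")
    t.join()
    return reply
```

Switching `AF_UNIX`/`SOCK_PATH` to `AF_INET` and a host:port pair is the "tweak" Coren alludes to that would let the pieces spread across machines.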
[14:17:57] But there's no reason why they have to be running on a /specific/ box, just all on the same one. [14:18:50] of course [14:18:55] (Of course, I'd *recommend* they switch to network sockets because then it becomes more scalable) [14:19:11] btw, using any machine means using gluster storage instead of local storage [14:19:21] which is much slower and very unstable at the moment [14:19:27] there were more than enough troubles with it [14:19:39] many bots in the bots project are using local storage now for stability reasons [14:19:40] petan: That's a given anyways. Gluster needs either fixing or replacing. [14:20:13] ok but until that is fixed, you can't let these services use some corrupted or unstable fs [14:20:32] petan: Clearly. [14:21:41] petan: Remember -- it's my job to make sure the devs' needs are met. If gluster doesn't work, I'll rip it out and just NFS the darn filesystems. :-) [14:22:51] But Ryan has made good progress there, I have high hopes it'll meet our requirements of stability before we go live, or that a replacement will be put in place. [14:37:36] <^demon> Can anyone here give me a public IP for a project? [14:43:30] ^demon: Depends which project [14:44:34] ^demon: No it doesn't. *someone* can give you a public IP for a project. Depends which to see whether *I* can. :-) [14:44:42] <^demon> The gerrit project. I've already got one for gerrit-dev, but I'm testing some new stuff with gerrit on a new VM that needs a public IP. [14:44:58] ^demon: I'm not it, then. Sorry. [14:45:25] <^demon> andrew seems away and Ryan's not up yet. I'll bug one of them later. 
[14:45:27] <^demon> Thanks :) [15:01:20] Change on mediawiki a page Wikimedia Labs/Tools Lab was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=650118 edit summary: [-139] Moving to talk page [15:04:35] Change on mediawiki a page Wikimedia Labs/Tools Lab was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=650121 edit summary: [+193] A note on storage [15:04:49] holy f**** shit [15:05:06] labs is pissing me off [15:05:21] my instance has all traffic destined for port 80 redirected to the local port [15:06:51] because of the LVS configuration [15:06:55] ho wonderful :-] [15:50:57] ^demon, you could use proxy-instance [15:51:28] <^demon> It only proxies apache. [15:52:33] <^demon> Or web stuff, rather. Doesn't do me any good for testing ssh with port 29418. And last I checked, it didn't do ssl. [15:52:48] <^demon> Thanks for the suggestion tho :) [15:53:15] you could use an ssh tunnel for that [15:53:22] as you do for normal ssh [17:22:22] [bz] (NEW - created by: T. Gries, priority: Unprioritized - normal) [Bug 45214] Suggestion: when installing instances, starting puppet runs etc.: ping the developer by mail about the status - https://bugzilla.wikimedia.org/show_bug.cgi?id=45214 [17:24:56] Platonides, it's sending emails when instance creation finishes? [18:46:40] Ryan_Lane: hi. read your mail, need to discuss with you [18:47:03] so, there still may be an issue with using multiple domains [18:47:04] but when using all public domains it works [18:47:09] Hi [18:47:17] Give me time to breathe... [18:47:21] puh [18:47:34] just came in by bike. -4° outside [18:47:37] Berlin [18:47:56] Well, this morning I found a "problem" which can be solved in that [18:48:23] the Server denies (of course) requests, when the claimed_id and local identity differ [18:48:29] due to the different urls [18:48:32] inside / outside [18:48:42] pmtpa vs. 
instance-proxy [18:48:51] I think, I can fix this, too [18:49:03] now you: pls. explain, what is working now for you [18:49:12] and how I can try [18:49:14] ? [18:50:07] Ryan_Lane: you wrote "so, there still may be an issue with using multiple domains" this is what I call "Inter-Instance Authentication" [18:50:17] this is what I also think I can fix... [18:50:36] cool [18:50:46] that would be nice, for testing purposes [18:50:55] it would also be nice for cloud users [18:51:14] since this is an issue on any cloud infrastructure that uses floating IP addresses [18:51:40] Here's a first bug: [18:51:45] OpenID permissions error [18:51:47] Jump to: navigation, search [18:51:48] The OpenID you provided is not allowed to login to this server. [18:52:02] I tried to add my Google OpenID to my Labsconsole account [18:52:08] this failed immediately [18:52:43] Or, have you disabled that ? [18:53:00] then it is "only" a problem of not showing OpenID icons, which are not working [18:53:07] Tyler has a patch for this, too [18:53:14] and I need to code-review it [18:53:56] Ryan_Lane: What OpenID do you want to allow for login to Labsconsole ? [18:53:59] Ryan_Lane: ping [18:54:18] s/OpenID/OpenIDs/ [18:56:10] Ryan_Lane: What OpenIDs are allowed or disallowed currently for logins on Labsconsole ? 
[18:56:24] none are allowed [18:56:28] ah [18:56:38] openid as a consumer won't work on labsconsole just yet [18:56:39] Uh [18:56:42] YES [18:56:43] it requires password auth [18:56:50] sorry, I overlooked [18:56:51] for unrelated reasons [18:56:53] you said it above [18:57:01] "as Server" [18:57:03] sorry [18:57:04] yeah [18:57:06] no worries [18:57:13] Then software should NOT show the provider screen [18:57:17] I'd like to eventually allow OpenID as a consumer [18:57:18] software = E:OpenID [18:57:24] ok [18:57:37] I need OAuth before I can do that, though [18:57:37] now, we come closer to the point [18:57:55] and I need OAuth in mediawiki and OpenStack Keystone [18:57:56] Yes [18:57:57] so, it's a long road to that ;) [18:58:12] Ryan_Lane: so at the moment, you are happy with E:OpenID ? [19:01:48] Ryan_Lane: was there any need or wish for you to have instance-wikis as i) OpenID providers, or ii) consumers ? [19:04:21] Wikinaut: yes. I'd like to solve the issue where user pages need content [19:04:24] but otherwise, yes [19:04:41] ah. this issue has the following background: [19:05:22] the consumer, which tries to fetch the User:xyz Open gets an error 401 (or what was it?) [19:05:35] it requires a patch in the core [19:05:37] or hook [19:05:47] to avoid the "page does not exist" [19:05:57] it only requires this [19:06:00] patch [19:06:22] was that clear ? [19:06:48] you can easily see the error when you wget a User page without content [19:07:10] <^demon> Ryan_Lane: Could I get a second public ip for the gerrit project? [19:07:55] ^demon: I can do it, hang on... [19:08:12] <^demon> andrewbogott: Thanks. [19:09:07] ^demon: done [19:10:23] <^demon> And I'm all set now. Thanks again. 
[19:11:58] Ryan_Lane: ^https://bugzilla.wikimedia.org/show_bug.cgi?id=45241 [19:12:12] As OpenID server, allow wiki/User:Username OpenIDs also when the userpage has no content [19:15:04] Change on mediawiki a page Wikimedia Labs/Tools Lab was modified, changed by MPelletier (WMF) link https://www.mediawiki.org/w/index.php?diff=650245 edit summary: [+234] Authentication of end users [19:16:19] non-existing user pages give "ERROR 404: Not Found" on the consumer [19:16:26] how to fix this ? [19:16:43] if you solves this, it's solved ! [19:16:52] s/solves/solve/ [19:16:54] arrrrg [19:17:28] Ryan_Lane: ^https://bugzilla.wikimedia.org/show_bug.cgi?id=45241 [19:17:30] Wikinaut As OpenID server, allow wiki/User:Username OpenIDs also when the userpage has no content [19:17:39] non-existing user pages give "ERROR 404: Not Found" on the consumer [19:17:48] if you solve this, it's solved !! [19:18:12] Wikinaut: yeah [19:23:37] Ryan_Lane: something different: regarding E:RSS . I want to submit this patch now. [19:23:50] Anything against submitting it ? [19:24:10] Reedy has not answered yet [19:24:15] I pinged on all channels [19:33:57] Wikinaut: which patch? [19:34:00] oh [19:34:06] I have no idea regarding that extension [19:34:10] I'm not a maintainer [19:34:14] I am [19:34:48] I just wanted to know the policy of submitting to gerrit [19:34:58] re. self-code reviewed [19:35:09] in this case, patch sets were code reviewed [19:35:15] by C.Steipp and Ori [19:35:36] Wikinaut: oh, btw [19:35:49] Wikinaut: if you wanted to add back in labs as a provider in the extension, the url is: https://labsconsole.wikimedia.org/wiki/Special:OpenIDServer/id [19:37:31] works like a charm [19:38:01] I mean, logging in an instance [19:38:12] or, what do you mean? [19:38:12] yeah [19:38:24] Wikinaut: I mean the change I reverted previously [19:39:23] hm, I don't know what change you mean (the conditional $wgServer in orig/LocalSettings ?) 
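The 404 failure mode behind bug 45241 is easy to see from the consumer side: OpenID discovery starts with a plain HTTP fetch of the claimed identifier URL, so a wiki that answers 404 for an empty User: page stops the login before the provider is ever contacted. A rough sketch of that first step (the URL below is a placeholder, and real consumers go on to parse the response for provider links):

```python
import urllib.error
import urllib.request

def discover(claimed_id):
    # Step one of OpenID discovery: fetch the claimed identifier.
    # A real consumer would then scan the response for provider
    # <link> elements; a 404 on an empty User: page aborts discovery
    # right here, which is the "wget a User page without content"
    # error described above.
    try:
        with urllib.request.urlopen(claimed_id) as resp:
            return resp.status, resp.read()
    except urllib.error.HTTPError as e:
        return e.code, None

# e.g. discover("https://example.org/wiki/User:SomeUser")
```

This is why the fix has to live on the server side (core patch or hook): the wiki must return a 200 with discovery markup even when the user page has no content.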
[19:40:16] Wikinaut: https://gerrit.wikimedia.org/r/#/c/48329/ [19:42:20] ah, I understand. [19:43:30] I understand; you meant: if I wish, I could now think of re-adding this patch (perhaps as a kind of installation option, but this isn't possible at the moment!) [19:43:41] Tyler prepared something [19:44:04] a refactored "provider" page. Will work together with him [19:44:05] Wikinaut: yeah, re-add, but with the new url [19:44:18] you will be quicker in doing this, sure [19:44:22] sure [19:44:23] Please go on [19:44:28] +1 [19:44:28] will do [19:44:36] fine for me to push it in, then merge it? [19:44:46] let me code review :-) [19:44:55] ok [19:44:57] no, it's ok [19:45:03] if YOU do [19:45:15] otherwise we waste time [19:48:33] Wikinaut: actually. I'll leave it out [19:49:01] when we enable this for the wikimedia project, then I'll add that in [19:49:23] ok [19:49:25] people will force labs as a provider, in our use-case, I think [19:49:26] better [19:49:34] Yes. I 100% agree [19:49:48] let me fix this as explained above [19:49:53] the idea is... [19:50:02] to have a kind of provider list [19:50:07] as array or JSON [19:50:17] and then admins can add providers, urls, logos [19:50:35] at their discretion and w/o modifying the code [19:51:00] it's already in gerrit, but I haven't had time yet to test it [19:51:14] and wanted to wait until the issue was solved [19:51:34] the issue of E:OpenID on labs [19:51:37] . [20:39:08] Wikinaut: yeah, that would be great [20:39:19] thanks so much for working on this [20:39:23] :-)) [20:39:25] :-) [20:39:42] need to als the Foundation for a present [20:39:45] ask [20:39:56] heh [20:40:03] I can send you a wikimedia labs shirt ;) [20:40:14] yes something like this Size XL [20:42:29] bye for today ! [20:42:39] I would get an XL white russian please [21:44:47] Ryan_Lane: YOU HAVE SHIRTS!? 
[21:44:52] heh [21:44:56] I can make shirts [21:44:58] lol [21:45:07] Damianz: I'll bring you one at the ams hackathon [21:45:12] if you are going [21:45:21] :D [21:45:24] I think I am [21:45:29] I should really just print a whole bunch in different sizes [21:45:49] Finally got the beast into the garage so I can buy a new bike soon which justifies driving around europe for a few days heh [21:55:23] ha Damianz !! [21:55:24] ! [21:55:24] There are multiple keys, refine your input: !log, $realm, $site, *, :), access, account, account-questions, accountreq, addresses, addshore, afk, alert, amend, ask, b, bang, bastion, beta, blehlogging, blueprint-dns, bot, botrestart, bots, botsdocs, broken, bug, bz, cmds, console, cookies, credentials, cs, damianz, damianz's-reset, db, del, demon, deployment-beta-docs-1, deployment-prep, docs, documentation, domain, epad, etherpad, extension, -f, forwarding, gerrit, gerritsearch, gerrit-wm, ghsh, git, git-branches, git-puppet, gitweb, google, group, hashar, help, hexmode, home, htmllogs, hyperon, info, initial-login, instance, instance-json, instancelist, instanceproject, keys, labs, labsconf, labsconsole, labsconsole.wiki, labs-home-wm, labs-morebots, labs-nagios-wm, labs-project, labswiki, leslie's-reset, link, linux, load, load-all, logs, mac, magic, mail, manage-projects, meh, mobile-cache, monitor, morebots, msys, msys-git, nagios, nagios.wmflabs.org, nagios-fix, newgrp, new-labsuser, new-ldapuser, nova-resource, op_on_duty, openstack-manager, origin/test, os-change, osm-bug, pageant, password, pastebin, pathconflict, petan, ping, pl, pong, port-forwarding, project-access, project-discuss, projects, puppet, puppetmaster::self, puppetmasterself, puppet-variables, putty, pxe, python, q1, queue, quilt, report, requests, resource, revision, rights, rt, Ryan, ryanland, sal, SAL, say, search, security, security-groups, sexytime, single-node-mediawiki, socks-proxy, ssh, sshkey, start, stucked, sudo, sudo-policies, sudo-policy, svn, 
terminology, test, Thehelpfulone, tunnel, unicorn, whatIwant, whitespace, wiki, wikitech, wikiversity-sandbox, windows, wl, wm-bot, [21:55:25] :-} [21:55:40] Damianz: aren't you maintaining the nagios.wmflabs.org ? [21:55:57] hashar: Kinda... I know puppet checks are broken [21:55:59] Damianz: I might need some new nagios checks for beta :-] [21:56:05] ooh [21:56:26] such as making sure some web services are running. Like upload serving files, enwiki being served etc. [21:56:40] if I got you a list of URLs to check is that something you could implement ? [21:56:54] Hmm [21:57:02] Are all beta apache servers using a puppet class? [21:57:28] yeah [21:57:33] That should be pretty easy then [21:57:40] but I would need some requests to be made on the cache servers [21:57:45] turns out they die from time to time [21:57:52] or stop serving the proper files :-] [21:58:23] The way it's currently designed is different puppet classes get different monitoring applied - sure we can figure something out though :D [21:58:31] Be nice to get more things into monitoring [21:58:50] can't we add custom checks ? 
[21:58:58] though maybe we could have them in puppet hehe [21:59:08] like nagios_monitoring_labs( … ) [22:01:23] I will poke you about it later :-] [22:07:08] Nah - we have collection turned off on labs so can't do puppet - you can send me a pr for https://gerrit.wikimedia.org/r/gitweb?p=labs/nagios-builder.git;a=blob;f=labsnagiosbuilder/templates/host.cfg;h=73d64cca63b53dd32d858d3885206223a578321d;hb=HEAD [22:07:26] {% if 'misc::ircecho' in host.puppet_classes -%} stuff basically is how we stay in line with puppet classes [22:07:48] Might fix puppet checks after I eat tea [22:08:25] ahh [22:08:27] that is neat [22:08:33] just include the class, get the monitoring :-] [22:08:45] I will have a look at the files in that project, might end up crafting something for you :-] [22:17:11] I'm thinking maybe I should split these out to mirror the puppet structure and then include them in the host file.... would make sense longer term [22:33:31] I can't imagine how slow labs puppet would be if we allowed exported resources ;) [22:49:22] Ryan_Lane: It's too slow already :( [23:21:49] [bz] (UNCONFIRMED - created by: Damian Z, priority: Unprioritized - minor) [Bug 38792] Thumbnails are broken - https://bugzilla.wikimedia.org/show_bug.cgi?id=38792 [23:23:10] Can anyone remember what the gerrit gitweb alternative chad mentioned was? [23:35:56] gitblit maybe Damianz? [23:36:17] Ah yeah that was it, thanks chrismcmahon!
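The template gate Damianz quotes (`{% if 'misc::ircecho' in host.puppet_classes -%}`) boils down to: a service check is emitted only for hosts carrying the matching puppet class, so "just include the class, get the monitoring". A pure-Python sketch of that idea follows; the host, class, and check names are made up for illustration and are not from the real labs/nagios-builder repository:

```python
# Map of puppet classes to the nagios check each one should enable.
# All names here are illustrative only.
CHECKS = {
    "misc::ircecho": "check_ircecho",
    "webserver::apache2": "check_http",
}

def render_host_cfg(hosts):
    # hosts: {hostname: [puppet classes applied to that host]}
    # Mirrors the template's "{% if class in host.puppet_classes %}"
    # gate: applying a class to a host is what turns its check on.
    lines = []
    for host, classes in sorted(hosts.items()):
        lines.append(f"define host {{ host_name {host} }}")
        for cls in classes:
            if cls in CHECKS:
                lines.append(
                    f"define service {{ host_name {host} "
                    f"check_command {CHECKS[cls]} }}"
                )
    return "\n".join(lines)
```

Under this scheme, hashar's beta URL checks would reduce to putting the beta cache/apache hosts behind a class that maps to an HTTP check with the right URLs.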