[03:09:21] anomie: if you've got a minute, can you comment on what could possibly cause OAuth to return a blank response when running /identify? My IABot GUI is trying to access the korean wikipedia API, but when it attempts to identify to it to get local permissions all it gets is an empty response. It's not even timing out, the response is immediate. I've looked and looked, but with OAuth being such a complicated system, [03:09:21] I'm not sure if I'm overlooking a bug on my application or a potential bug in MW. [03:25:05] anomie: the problem is occurring at https://github.com/cyberpower678/Cyberbot_II/blob/test/IABot/www/Includes/OAuth.php#L256. CONSUMERKEY and the access key index is set and has the correct info. [03:25:27] When I take the same credentials and identify on enwiki, I get a JSON object. [06:29:02] (03CR) 10jenkins-bot: Localisation updates from https://translatewiki.net. [labs/tools/heritage] - 10https://gerrit.wikimedia.org/r/436452 (owner: 10L10n-bot) [09:48:08] (03PS1) 10Giuseppe Lavagetto: Add hieradata for mcrouter's ca_secret [labs/private] - 10https://gerrit.wikimedia.org/r/436492 [10:11:26] (03CR) 10Giuseppe Lavagetto: [V: 032 C: 032] Add hieradata for mcrouter's ca_secret [labs/private] - 10https://gerrit.wikimedia.org/r/436492 (owner: 10Giuseppe Lavagetto) [11:08:21] zhuyifei1999_: ping me if you need me to merge this https://gerrit.wikimedia.org/r/#/c/433101/ [11:09:03] arturo: yeah, but not urgent [11:09:28] zhuyifei1999_: will it cause any disruption to any service? [11:10:49] it should not. for toolforge the file already exists with exact same contect, and for toolsbeta it's already cherry-picked. [11:11:06] ok merging then [11:11:23] restart is also set to false so theoretically it should not restart docker either [11:11:58] the patch will be needed if we want the next k8s worker build to be successful [11:12:26] shall I force a puppet run and see if anything goes wrong? [11:12:35] yes, let me merge it [11:13:18] zhuyifei1999_: merged [11:13:27] !log tools force puppet run on tools-worker-1001 to check the impact of https://gerrit.wikimedia.org/r/#/c/433101 [11:13:29] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL [11:14:39] this would require to rebuild the docker image: https://gerrit.wikimedia.org/r/#/c/435662/ will keep back it until tomorrow, so I can merge & rebuild in the same run [11:14:54] cc legoktm [11:16:15] https://phabricator.wikimedia.org/P7195 <= don't think I see anything wrong [11:18:40] I don't see any mention of the docker systemd override though [11:19:04] yeah, also did it produce staleness for any other files? [11:20:28] oh right, might need a while before tools-puppetmaster-01 gets the change [11:20:37] oh [11:20:55] pulling now [11:21:31] zhuyifei1999_: it is now in the tools puppetmaster [11:21:43] ok [11:21:54] (running again) [11:23:53] https://phabricator.wikimedia.org/P7195 <= got the file mentioned, but no mention of file being changed... which is good :) [11:25:46] great zhuyifei1999_ thanks for the extra checking [11:25:58] arturo: for legoktm's patch, I can do the docker building. shall I do that or shall I leave it to you? [11:26:00] np [11:27:23] zhuyifei1999_: ok, I merge and you rebuild? for the record, docs are https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Admin/Kubernetes#Docker_Images [11:27:34] k [11:27:38] do you have all the required credentials? [11:27:54] yeah, done it once when I rebuild the webservice package [11:28:12] zhuyifei1999_: patch merged! 
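Picking up Cyberpower678's /identify question from the top of the log: IABot's client is PHP, but the request flow can be sketched language-neutrally. A minimal, hedged Python example with requests_oauthlib and PyJWT — every key and secret below is a placeholder, and the "JWT signed HS256 with the consumer secret" detail is how the MediaWiki OAuth extension's identify response is normally verified. Keeping redirect-following off makes a 302 visible instead of looking like an instant empty response.

```python
# Sketch only: a signed /identify request, shown in Python (requests_oauthlib +
# PyJWT). All keys/secrets below are placeholders, not real IABot config.
import jwt  # PyJWT
from requests_oauthlib import OAuth1Session

CONSUMER_KEY = "consumer-key"        # placeholder
CONSUMER_SECRET = "consumer-secret"  # placeholder
ACCESS_KEY = "access-key"            # placeholder
ACCESS_SECRET = "access-secret"      # placeholder

session = OAuth1Session(
    CONSUMER_KEY,
    client_secret=CONSUMER_SECRET,
    resource_owner_key=ACCESS_KEY,
    resource_owner_secret=ACCESS_SECRET,
)

url = "https://ko.wikipedia.org/w/index.php?title=Special:OAuth/identify"
# Don't follow redirects automatically: a 302 carries an empty body, which is
# easy to mistake for a "blank response" if the client swallows it.
resp = session.get(url, allow_redirects=False)
if resp.status_code == 302:
    print("redirected to", resp.headers["Location"])
else:
    # /identify answers with a JWT signed (HS256) with the consumer secret and
    # addressed to the consumer key.
    claims = jwt.decode(resp.text, CONSUMER_SECRET,
                        algorithms=["HS256"], audience=CONSUMER_KEY)
    print(claims.get("username"))
```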
[11:28:13] I'll ssh in as root, so should be available [11:28:25] k [11:31:49] !log tools building & pushing python/web docker image T174769 [11:31:51] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL [11:31:51] T174769: Make it less cumbersome to bootstrap and update python webservices - https://phabricator.wikimedia.org/T174769 [11:42:30] is k8s cluster full? getting a working shell pod is ridiculously difficult for me [11:42:52] 'Pod is not ready in time' [12:01:35] there should be metrics about that, right? [12:01:42] hopefully :-P [12:43:55] Cyberpower678: what is the HTTP response code? [12:59:01] I guess I have to download dumps individually if I need to work with xml dumps? [12:59:08] (to toolforge) [13:00:18] download? [13:01:40] revi: /mnt/nfs/dumps-labstore1006.wikimedia.org/xmldatadumps/public [13:01:45] thanks [14:11:11] It seems quarry breaks when if the query contains an emoji [14:11:17] It just cuts off the query string from that point. [14:15:46] that would indicate it's utf8 and not utf8mb4 [14:34:00] Krinkle: could you file a patch? [14:34:26] I don't think it's an encoding issue as paladox indicates. [14:34:39] The query itself is not stored correctly even within Quarry because it is trimmed. [14:34:52] If it was stored correctly but caused an error within MySQL, the it would be encoding issue. [14:35:04] The problem is that MySQL never gets it, it is seeing an incomplete query. [14:35:12] I do not know where the problem is in Quarry forthis. [14:35:20] which query? [14:35:33] I'll check the database [14:35:47] whether mariadb has the full query [14:40:30] Krinkle: ^ [15:28:34] tgr: let me check [15:31:29] zhuyifei1999_: two clues 1) When refreshing the page, the quarry interface itself shows the query cut off, which means it was lost in ways unrelated to MySQL, 2) The error from MySQL is about unexpected end of query, not about invalid characters. [15:32:24] For example, try to run the following: [15:32:28] ` SELECT '😂'; ` [15:32:39] Pressing submit and refreshing shows the text area containing [15:32:39] SELECT ' [15:32:45] Error: [15:32:46] You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ''' at line 1 [15:33:13] check what is the client connection collation [15:33:38] some php defaults configs force an incompatible one [15:33:45] or other connectors [15:34:14] The fact that quarry is able to save (without error) the incomplete string to its own database of queries, suggests to me it gets cut off before sql collation gets involved. [15:34:23] labsdb uses binary like production, so in terms of capabilities it should work [15:34:26] But maybe the insertion itself is cutting it off? [15:34:42] but connection collation is purely client-selected [15:35:21] one quick way to fix it is running "set names 'utfbmb4'" [15:35:25] like, is it possible to do an INSERT to queries_table setting sql_text=" ... " in a way that .. contains an emoji and other characters after it, and SQL will succeed in adding to row but crop to only before the emoji? I woudl expect it to either fail, or corrupt that one character. [15:35:26] *utf8mb4 [15:35:32] The trimming is unusual to me. [15:36:04] I can't change the collation, the content I type within the textarea is itself a string saved by Quarry to its own database tables. 
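On the Quarry emoji issue above: MySQL's 3-byte `utf8` connection charset is consistent with the "query cut off from that point" symptom — under a non-strict sql_mode, a 4-byte character isn't rejected, the stored value is silently truncated at that character. A small pymysql illustration (host, credentials and table are made up, not Quarry's):

```python
# Illustration only: placeholder host/credentials/table, not Quarry's schema.
import pymysql

def insert_and_read(charset):
    conn = pymysql.connect(host="localhost", user="test", password="test",
                           database="test", charset=charset)
    try:
        with conn.cursor() as cur:
            cur.execute("CREATE TABLE IF NOT EXISTS q (t TEXT) CHARACTER SET utf8mb4")
            cur.execute("DELETE FROM q")
            cur.execute("INSERT INTO q (t) VALUES (%s)", ("SELECT '😂';",))
        conn.commit()
        with conn.cursor() as cur:
            cur.execute("SELECT t FROM q")
            return cur.fetchone()[0]
    finally:
        conn.close()

# Over a 3-byte "utf8" connection with a non-strict sql_mode, everything from
# the emoji onwards is dropped: the stored text comes back as "SELECT '".
print(repr(insert_and_read("utf8")))
# Over utf8mb4 the full query text round-trips intact.
print(repr(insert_and_read("utf8mb4")))
```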
[15:36:36] The actual query submitted to replicas is fine (in so far, it never gets the complete string) [15:43:49] jynus: quarry has its own database [15:44:00] Krinkle: what is the query id? [15:44:10] https://quarry.wmflabs.org/query/27329 [15:44:16] * zhuyifei1999_ looks [15:46:09] https://www.irccloud.com/pastebin/CYO18oHV/ [15:46:20] yep, the emoji was never saved [15:46:51] * zhuyifei1999_ checks [15:48:19] jynus: could this be this culprit? https://github.com/wikimedia/analytics-quarry-web/blob/master/quarry/web/connections.py#L14 [15:49:03] * zhuyifei1999_ is gonna live patch [15:51:02] yeah, that should probably be utf8mb4 rather than just utf8. Confusingly the "utf8" encoding in MySQL/MariaDB is only a 3-byte encoding so most emojis break it. [15:51:08] it could be, emojis don't fit into utf8 [15:51:38] *but* if the problem is inserting, it may need table changes or other changes [15:52:01] (e.g. not sure what is the charset of the field it is inserting too) [15:52:22] also a non-strict sql mode could cause bad things to be inserted [15:52:30] !log quarry live-patch `/srv/quarry/quarry/web/connections.py` on `quarry-main-01` and restart uwsgi [15:52:31] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Quarry/SAL [15:54:11] now it's a question mark... https://quarry.wmflabs.org/query/27333 need table changes indeed [15:57:10] "CONVERT TO CHARACTER SET" is what you want [16:02:19] Krinkle, jynus: https://gerrit.wikimedia.org/r/#/c/436576/ [16:04:53] actually, weird, the query_revision text is `text TEXT BINARY` [16:09:42] it is ok [16:10:01] binary allows all kind of stuff [16:10:16] and if the connection config is configured correctly, it gets tanslated automatically [16:10:56] but both writes and readers must agree that they are writing, eg. utf8 (4 bytes) [16:11:15] binary is what it is being used on wikipedias, for example [16:11:31] but make sure you are not inserting latin1 but configuring utf8 [16:13:03] I'll run it in vagrant tomorrow. the connection string should determine the charset as far as I understand [16:13:15] it should, yes [16:13:22] but then there is the code, etc. [16:13:28] it gets complicated :-) [16:13:29] what code? [16:13:41] the code storing things in memory [16:13:54] also there is a collation for queries and for results [16:13:59] *charset [16:14:09] there are many variables [16:14:21] only one gone bad will break things :-) [16:14:35] :( [17:03:32] zhuyifei1999_: arturo: yay, thanks :) [17:07:26] legoktm: could you check how well `webservice shell` is working for you? [17:07:57] it's very difficult for me to get a shell [17:16:04] zhuyifei1999_: I think something is a bit overloaded in the k8s cluster, but I haven't figured out what part yet [17:16:44] * zhuyifei1999_ neither [17:17:25] its possible that it is just the local Docker image caches being cold too I guess. [17:19:36] right now I'm actually having good luck spawning new k8s shell sessions &shrug; [17:19:45] ¯\_(ツ)_/¯ [17:20:15] oh really? /me tries [17:20:48] wow got it in < 10 seconds [17:26:17] bd808: is there auto image building for the k8s cluster? :P [17:26:45] * chicocvenancio doesn't think there is [17:27:01] not sure though [17:27:07] addshore: not yet :( The cache is just having all the layers on the local exec node from the registry [17:27:29] is building images for the registry automatic either? 
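To make the fix that emerges above concrete — utf8mb4 on the client connection (the connections.py live patch) plus, where a column is still declared with the 3-byte `utf8` charset rather than binary, jynus's CONVERT TO CHARACTER SET — a hedged sketch; the table and credential names are illustrative and may not match Quarry's actual schema or the gerrit change:

```python
# Sketch of the two halves of the fix; names are illustrative, not necessarily
# what Quarry's schema or the gerrit change actually use.
import pymysql

# 1. Connection side (what the live patch to connections.py changes): ask the
#    server for utf8mb4 rather than MySQL's 3-byte "utf8".
conn = pymysql.connect(host="localhost", user="quarry", password="secret",
                       database="quarry", charset="utf8mb4")

# 2. Table side, per jynus: any column still declared with the 3-byte utf8
#    charset (binary columns are fine as-is) needs converting, otherwise
#    4-byte characters degrade to '?' on the way in. Placeholder table name.
with conn.cursor() as cur:
    cur.execute("ALTER TABLE some_utf8_table CONVERT TO CHARACTER SET utf8mb4")
conn.close()
```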
[17:27:41] * addshore is reading the docs now and thinking about maybe moving some things to k8s [17:27:43] its scripted, but manually run [17:27:48] ack [17:27:50] it's scripted, but needs someone to run the script [17:28:00] who runs it? / is technically able to? [17:28:09] tools roots [17:28:37] ack! so to build a new image and get it in the registry I have to poke people :( [17:29:22] yes, and convince us that its a good thing to have and maintain ;) [17:29:34] hmmmmmm [17:30:04] https://wikibase-registry.wmflabs.org ;) was one candidate [17:31:14] addshore: fyi https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Admin/Kubernetes#Image_building [17:31:18] I remember crossing this bridge before I think and thinking #1 not being able to somehow automatically have images pushed to the registry kind of sucks, and also not being able to use images form other registries [17:31:55] although I guess I could just do From wikibase/wikibase:1.30-bundle in an image and push that to the tools k8s registry? or would not also not be allowed? does it have to be the tool registry all the way down? [17:32:04] today our k8s cluster is not a general purpose Docker runtime. Its purpose built for migrating things off of grid engine [17:32:41] the `webservice` command assumes that there are a lot of things available in the container that are not normal [17:32:52] zhuyifei1999_: is it possible to pass commands to run inside the webservice shell? like $ webservice --backend=kubernetes python shell -- webservice-python-bootstrap [17:33:29] I don't think so.looking at the code [17:33:40] addshore: the biggest one being LDAP NSS support so that the container runs as the tool and can interact with the NFS mount(s) [17:33:47] legoktm: you could do that with kubectl directly though [17:34:03] well, My end game is to show wikibase and services working on kubernetes, and have the necessary configurations to make it all work to be able to show people, and using wikibase-registry as an example case [17:34:11] legoktm: https://github.com/wikimedia/operations-software-tools-webservice/blob/master/toollabs/webservice/backends/kubernetesbackend.py#L433 [17:34:23] addshore: *nod* its a nice idea for sure [17:34:40] * addshore is also perfectly happy setting up his own k8s cluster for that reason though :) [17:34:45] ok, I'm going to re-open the ticket for now [17:34:50] just trying to figure out if it would work on the tools one [17:35:01] since that was part 2 of my "make it less cumbersome" request [17:35:01] so not as simple as command line args. but yes we should have that [17:35:08] k [17:35:50] I wonder if we should increase that timeout a bit as well, bd808 [17:36:11] maybe I just have too much bad luck too often :( [17:37:26] addshore: this is why for wikibox i wanted to set up my own docker registry [17:37:29] addshore: the "bring your own container" use case is not something that Toolforge's Kubernetes cluster is likely to support in the foreseeable future [17:37:45] hare: ack and bd808 ack! [17:38:27] It would be interesting to discuss a community managed Cloud VPS project that did support that though [17:38:48] If my team had more people we'd be looking into how to do it too :) [17:39:07] At the moment, we have resources to think a lot about it but not really do much [17:39:55] legoktm: you wanna do the patch for the second part? [17:40:13] zhuyifei1999_: I wouldn't know where to start [17:40:21] is role::puppet::self dead? 
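On legoktm's question a few messages up about passing a command into `webservice shell`: until the webservice tool grows that option, zhuyifei1999_'s "kubectl directly" suggestion looks roughly like the sketch below, run as the tool. The namespace and pod name are placeholders — the actual pod name depends on what webservice created, so check `kubectl get pods` first.

```python
# Rough equivalent of a hypothetical "webservice ... shell -- <command>",
# going through kubectl directly as suggested above. Namespace and pod name
# are placeholders.
import subprocess

TOOL_NAMESPACE = "mytool"   # placeholder: the tool's k8s namespace
POD = "interactive"         # placeholder: pod spawned by `webservice shell`
CMD = ["webservice-python-bootstrap"]

subprocess.run(
    ["kubectl", "--namespace", TOOL_NAMESPACE, "exec", POD, "--"] + CMD,
    check=True,
)
```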
standalone puppetmaster seems like a lot of effort for testing a puppet patch [17:40:32] Well, I guess you did just link me the code, but I looked at it, and had no idea what to do [17:40:46] or I guess we could leave it for Neha16 [17:42:42] oh this is scheduled for the week of July 13 https://phabricator.wikimedia.org/T190638 [17:43:15] legoktm: I just found https://phabricator.wikimedia.org/T169695 [17:43:52] I guess part 2 is a dup now :) [17:44:01] I am guessing we are talking about T174769 [17:44:03] T174769: Make it less cumbersome to bootstrap and update python webservices - https://phabricator.wikimedia.org/T174769 [17:44:26] yeah [17:44:44] tgr yep [17:44:52] tgr https://gerrit.wikimedia.org/r/436600 [17:45:20] yeah, found that in the meanwhile [17:45:22] bd808: well, I can probably setup and manage a cluster doing just that... maybe [17:45:25] zhuyifei1999_: lol, same exact use case [17:45:39] bd808: although of course my cluster isn't using any of the puppetized stuff [17:45:52] but it probably could, i just havn't tried using any of it yet [17:46:41] maybe that is something I should look at and try out, as it would probably help me get to where I want to be anyway... [17:46:54] maybe I should request a project? ;) [17:47:31] although, maybe that is taking on too much for what I actually want to do... [17:47:45] addshore: you could try to pick chicocvenancio's brain about how the cluster that PAWS uses is setup. I believe it is scripted, but not puppetized after the last time y.uvi moved it [17:48:07] if you just want to run one project using Docker its probably huge overkill [17:48:56] the biggest building block we are missing today for nice k8s runtimes is a good way to make durable storage containers to attach to pods [17:49:09] hehe, indeed, this is specifically about using k8s though :) so I can provide examples for other people to be able to take wikibase and surrounding services to 'whatever cloud they want' #orchestration [17:49:48] interesting, that's actually one thing I haven't got to yet with my throw up and tear down k8s test environment [17:50:08] addshore: the tools k8s cluster is puppetized and you can see https://phabricator.wikimedia.org/T190893 for some of me documenting how I replicated it in toolsbeta [17:51:14] interesting that you chose flannel, i also chose flannel [17:51:35] bd808: will toolsadmin automatically update diffusion ACL policies if I add a new tool maintainer or do I need to do that manually? [17:52:15] yu.vi chose flannel. idk about these stuffs and what they do :) [17:54:44] if you want to see my slap dash version of setting one up on labs take a look at https://addshore.com/2018/04/from-0-to-kubernetes-cluster-on-custom-vms/ I'd like to think I did a reasonable write up [17:55:55] * addshore will request a project for now to continue his task, just gotta think of a name.... [17:57:56] (I did it manually) [18:06:56] legoktm: Its manual today. I never put in the work to have a "refresh permissions" option for the repos. It could be done, but would probably need a UI to let you decide how to resolve conflicts. [18:07:57] bd808: would "wbaas" be an okay project name, or would you like something with more actual words in it? [18:08:10] standing for wikibase as a service [18:09:12] mhhhm, or maybe i should just request a generic k8s one.... or maybe both... 
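For the self-managed-cluster route addshore is weighing (not the Toolforge cluster, which as bd808 says won't take arbitrary images), the bring-your-own-container case is small once a cluster exists. A hedged sketch with the official kubernetes Python client, using the wikibase/wikibase:1.30-bundle image mentioned earlier; namespace, labels, port and kubeconfig are all assumptions:

```python
# Sketch: run the wikibase/wikibase:1.30-bundle image mentioned above on a
# self-managed cluster via the official kubernetes Python client.
# Namespace, labels and port are assumptions for illustration.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig

labels = {"app": "wikibase"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="wikibase", labels=labels),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="wikibase",
                    image="wikibase/wikibase:1.30-bundle",
                    ports=[client.V1ContainerPort(container_port=80)],
                ),
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```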
[18:09:22] addshore: *shrug* wikibase-as-a-service is pretty long, but you would probably tab complete it from your ssh client :) [18:09:55] i remmeber there being some issue with longer fqdns before though :P [18:11:11] https://phabricator.wikimedia.org/T178409 xD [18:12:47] * addshore will write tickets later or something [18:13:47] hare: I'll request both projects, one is for use by me in the short term, and one will attempt to serve as a home for both your and my project in the longer term? how does that sound? [18:14:11] hare: could you email me a short description of your project so that I can put it in a ticket later? [18:14:14] not my project addshore :) I haz no time [18:14:18] addshore: I already have a project request for wikibox [18:14:34] * bd808 see hare now and nods sagely [18:14:55] hare: okay! in that case I'll still request 2 I think, one aiming for a longer term generic k8s solution for us :) [18:15:13] Sure, I think in the long term we need to figure out Generic Kubernetes [18:15:50] hare: indeed [18:16:00] hare: we could call it general-k8s ;) [18:16:50] hare: the logo can have something to do with a salute [18:16:54] * addshore is off for now [18:27:05] Cyberpower678: I don't know of anything that would cause that offhand. An error should at least return an error page. [18:27:30] anomie: I'll give it another run through to see what the HTTP code is. [18:41:58] hare: https://phabricator.wikimedia.org/T196094 :) [18:45:43] One final question to the cloud crew.... If a project currently has too many resources and probably doesn't need as a many should I file a ticket to reduce them or do you not really care? :) [18:49:01] addshore: as in too much quota, or too many resources in actual use? [18:49:18] too much quota for the number of resources actually used [18:49:28] just curious :) [18:52:35] anomie: I get an empty response and a 302 HTTP code [18:52:52] The URL is https://ko.wikipedia.org/w/index.php?title=Special:OAuth/identify [18:53:26] Cyberpower678: 302 is a redirect. Follow it, or adjust your URL to whatever it's trying to redirect you to. [18:53:31] Oh and it's trying to redirect to https://ko.wikipedia.org/wiki/%ED%8A%B9%EC%88%98:MWO%EC%9D%B8%EC%A6%9D/identify [18:53:43] It should be set to follow it. [18:54:59] anomie: ummm, when I follow it, it invalidates my signature. [18:55:22] So now I get an invalid signature error. [18:56:14] * Cyberpower678 doesn't know what to do right now. [18:56:47] The interface connects to different language Wikipedias so it uses "w/index.php?title=Special:OAuth/identify" consistently. [18:58:07] anomie: would it not be possible to have OAuth still accept the signature after just redirecting the client to a URL that doesn't match up with the signature's? [19:00:22] Cyberpower678: No, that's not possible. [19:00:49] So why even redirect the korean Wikipedia, all the other languages IABot runs on don't redirect. [19:01:12] Just out of curiosity. [19:06:02] Cyberpower678: I'm investigating. [19:06:47] In the meantime, I'm adding code to detect and handle the redirects by regenerating the signature and trying again. 
:-) [19:16:20] !log preparing gerrit-test3 for upgrade to 2.15 (testing T174034 and T177201) [19:16:21] paladox: Unknown project "preparing" [19:16:21] T177201: Update gerrit to 2.15.2 - https://phabricator.wikimedia.org/T177201 [19:16:22] T174034: Migrate to NoteDb - https://phabricator.wikimedia.org/T174034 [19:16:29] !log git preparing gerrit-test3 for upgrade to 2.15 (testing T174034 and T177201) [19:16:32] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Git/SAL [19:21:35] anomie: okay new problem. With the code now in place to take on the new URL, it retries an identify attempt from scratch with the new URL but even with the signature encoded correctly based on your OAuth code, I still get invalid signature. :-( [19:22:08] Methinks non-ASCII characters may be encoding wrong. [19:22:59] Cyberpower678: No, your code is probably correct. I think it's on the MediaWiki side. I'm just trying to figure out why this never came up before. [19:26:32] meh power outage [19:28:37] Cyberpower678: File a task in Phabricator so I can track this more easily, please. [19:36:41] anomie: what's the reason for not using the API for /identify, anyway? [19:37:23] tgr: The JWT response is some standard, or was a proposed standard at the time. [19:37:46] it is, but a custom API formatter could deal with it [19:38:31] (for some value of standard... we basically replace OAuth 2 with Oauth 1 in the OpenID Connect standard and use that) [19:42:07] ko is one of the very few languages in which Special:OAuth is localized: https://github.com/wikimedia/mediawiki-extensions-OAuth/blob/master/frontend/language/MWOAuth.alias.php [19:42:29] I wonder how the namespace localization redirect gets bypassed, though [19:43:23] !log git cherry picking https://gerrit.wikimedia.org/r/#/c/436607/ onto phab-tin for scap deploy [19:43:24] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Git/SAL [19:43:27] anyway, would be nice to expose /identify as an API module [19:44:07] another gotcha with the current URL is using nice URL vs. title parameter [19:44:51] tgr: That's what I concluded. The redirect is happening from SpecialPageFactory::executePath(). We could avoid it by either making "OAuth" the canonical local name for the special page in the alias file, or just override SpecialPage::getLocalName() in SpecialMWOAuth to return 'OAuth' when OAuth headers are present. [19:45:30] one idea I have been playing around with is to have a fallback chain of possible original URLs, and trying all of them for the signature check [19:45:40] tgr: There's not much point in putting /identify in the API when you can get all the same information from the existing meta=userinfo module (and maybe other modules, I haven't checked exactly what's included). All you miss is the JWT signing. [19:45:55] overriding getLocalName sounds like the best short term solution, in any case [19:46:14] And then every client currently using /identify would have to change to the API module, etc. [19:46:37] yeah, that would have to be preserved [19:52:19] anomie: tgr https://phabricator.wikimedia.org/T196102 [19:57:31] addshore: i guess my next question would be, is this a project that was given a very generous allowance but now no longer needs that allowance? 
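Cyberpower678's workaround — detect the 302 and re-sign against the redirect target — sketched in the same hedged Python terms as the earlier /identify example (placeholder credentials; this mirrors the approach described above, not IABot's PHP code). Note tgr's caveat still applies: the redirect target is a pretty URL, so a client signing `index.php?title=...` style URLs can still hit a signature mismatch until the getLocalName fix lands.

```python
# Sketch of "detect the redirect, re-sign, retry": placeholder credentials and
# a bounded hop count. This mirrors the workaround described above, not IABot's
# actual PHP implementation.
from urllib.parse import urljoin

from requests_oauthlib import OAuth1Session

def identify(session, url, max_hops=3):
    for _ in range(max_hops + 1):
        resp = session.get(url, allow_redirects=False)
        if resp.status_code in (301, 302, 303, 307, 308):
            # Re-issue against the new URL so the OAuth signature is computed
            # over the URL actually being requested, instead of letting the
            # HTTP library follow the redirect with a now-stale signature.
            url = urljoin(url, resp.headers["Location"])
            continue
        return resp
    raise RuntimeError("too many redirects while identifying")

session = OAuth1Session("consumer-key", client_secret="consumer-secret",
                        resource_owner_key="access-key",
                        resource_owner_secret="access-secret")
resp = identify(session,
                "https://ko.wikipedia.org/w/index.php?title=Special:OAuth/identify")
print(resp.status_code, resp.text[:80])
```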
[19:57:46] If you're just a bit below the quota I'm not sure it matters that much [20:18:41] !log git sudo service gerrit stop on gerrit-test3 [20:18:42] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Git/SAL [20:19:37] !log git java -jar gerrit.war init -d review_site on gerrit-test3 [20:19:38] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Git/SAL [20:22:40] !log git sudo service gerrit start after upgrade to 2.15 [20:22:41] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Git/SAL [20:33:44] chicocvenancio: I don't think I'm going to get to your PAWS ingress stuff today. :( [21:16:18] hi -cloud, does wmf openstack support vlan tagging? https://wiki.openstack.org/wiki/VlanNetworkSetup [21:18:01] i am looking at options for getting some fundraising beta type stuff off my laptop [21:18:52] cwd: at the provider level yes but nothing like that is exposed to end users [21:19:04] what are you trying to do? [21:19:39] chasemp: i have a large amount of virtualboxes locally and am worried about starting a fire [21:20:19] I mean more like, what do you need vlan tagging for? [21:20:25] i can't effectively test a lot of changes without copying the vlans we have live [21:20:51] there are 5 or so zones [21:21:19] so i think it would be within one "project" in openstack terms? [21:21:29] i used it years ago but i am sure things have changed [21:21:43] projects and networks are not 1:1 [21:22:07] I think what you want isn't sanely possible in the existing setup, but potentially is post some migrations that will happen this year hopefully [21:22:21] ah ok [21:22:37] yeah jeff green said he had looked into it a while ago and it wasn't really feasible [21:23:03] but i have been dealing with some VB insanity today so i thought i'd ask [21:23:32] it would be cool if you could persist the ask to a task bc I think an approximation of it is possible in the near future [21:23:38] but I'm not sure of all the specifics [21:27:35] chasemp: sounds good, thanks :) [21:27:43] i will check back at some point [21:29:16] and will file a task now [21:29:43] cwd: nice [23:04:26] hare: no, default allowance [23:05:16] Hmm, I don't think you necessarily have to go out of your way to give back quota. [23:05:42] bd808 may have more concrete opinions on this [23:07:03] not hanging on to unused quota is nice. it makes it less likely that you will randomly decide to spin up a bunch of new VMs on some day when we are in a squeeze for resources [23:07:48] turning off and deleting unused vms is even better of course because that gives back to the global pool [23:08:43] How does quota work in practice? If my project is assigned a quota, do I own that chunk of resources, meaning there is no risk of overallocation? [23:08:53] nope [23:09:21] its just a ceiling on how much you could use if it was available [23:10:20] so less like the size of a box given to you and more like the max volume of a balloon that you could inflate [23:27:28] Is there a particular reason paws-public doesn't serve HTTPS? [23:27:56] If you manually specify HTTPS it works, but it doesn't do HTTPS by default as I'm accustomed to. [23:28:13] nothing in Cloud does https by default... yet [23:28:48] * bd808 has a pending patch for http->https redirects for Toolforge [23:31:06] beta cluster does bd808 [23:31:10] it enforces https [23:31:50] I made an absolute statement didn't I? 
;) Neither of the dynamicproxy services enforce https [23:32:10] yeah :P [23:44:57] right I think it's up to individual VPS projects to enforce HTTPS in their own world [23:45:03] and Toolforge, as one of the largest ones... does not :( [23:47:54] hare: working on it -- https://gerrit.wikimedia.org/r/#/c/432935/ [23:48:22] I'd like to roll that out next week sometime. Brandon gave it a general thumbs up [23:49:41] closing the POST loophole will be a longer term thing. It took us something like 9 months to do it in production [23:59:17] I'm trying to remember what the problem with HTTPS on novaproxy was [23:59:20] was it sub-sub domains? [23:59:26] stuff outside wmflabs.org ?
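The redirect bd808 is rolling out lives in the proxy layer, but a tool that wants HTTPS enforced today can do roughly the same thing per-application. A minimal Flask sketch, assuming the front proxy reports the original scheme in X-Forwarded-Proto (worth verifying for your own tool before relying on it):

```python
# Minimal per-tool HTTP -> HTTPS redirect, assuming the front proxy sets
# X-Forwarded-Proto to the original scheme. An application-level stopgap,
# not the dynamicproxy change under review above.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    if request.headers.get("X-Forwarded-Proto", "https") == "http":
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.route("/")
def index():
    return "hello over https"
```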