[06:40:01] AaronSchulz: hmm I don't see your username yet on Element (to add to perf and releng staff channels).
[06:40:29] I think you were looking at this earlier, but maybe it didn't work?
[18:29:38] hey :) just noticed this comment: https://phabricator.wikimedia.org/T234455#6075575 - if I understand it correctly, the goal is to optimize the old PHP memcached client to handle operations more efficiently than the C one and then ditch the C extension?
[18:40:55] We're considering that option indeed. But I'm still open to hearing other options as well. The PECL ext is well known and popular, it just seems like it might not be worth salvaging.
[18:41:25] I've reached out to Etsy to also get a sense of what their current approach and direction is in this area
[18:51:22] awesome, thanks for the info
[18:53:13] we just made the switch to the PECL ext with the upgrade, so that might have been bad timing :P
[18:54:47] mszabo: Aye, perhaps, but perhaps not. If we undertake this, we will standardize on the PECL behaviour from a public standpoint.
[18:54:59] So it'll be a transparent/smooth upgrade going forward
[18:55:26] e.g. switching to/from PECL today requires a memc wipe as value encoding is not compatible
[18:55:28] that's good to hear :) since right now it's not possible to easily switch
[18:55:29] but that won't be the case
[18:55:29] yeah that
[18:55:34] yep :)
[18:55:48] think of it as pecl memcached 2.0 in pure PHP
[18:56:13] mszabo: I'm curious what motivated the switch this time around, anything in particular?
[18:56:31] for what it's worth, I would still pick pecl today for new projects, the current state of the php client is quite old indeed.
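The "pecl memcached 2.0 in pure PHP" idea could be sketched roughly as below: a pure-PHP class mirroring a small slice of the PECL `Memcached` API (get/set plus the native multi-ops), so that callers see PECL-compatible behaviour regardless of transport. This is a hypothetical illustration only, not the actual MediaWiki client: the `$store` array stands in for the real memcached server, and plain `serialize()` stands in for whatever PECL-compatible value encoding the real project would standardize on.

```php
<?php
// Hypothetical sketch of a pure-PHP client exposing a PECL-Memcached-like
// surface. The $store array stands in for the memcached server; a real
// client would speak the memcached protocol over a socket.
class PurePhpMemcachedClient {
	/** @var array<string,string> serialized values, keyed by cache key */
	private $store = [];

	public function set( $key, $value, $ttl = 0 ) {
		// Plain serialize() for illustration; a drop-in replacement would
		// have to match the PECL extension's value encoding exactly so
		// that switching clients needs no memc wipe.
		$this->store[$key] = serialize( $value );
		return true;
	}

	public function get( $key ) {
		return array_key_exists( $key, $this->store )
			? unserialize( $this->store[$key] )
			: false;
	}

	// Native multi-ops, one of the PECL features that motivated the switch
	// discussed here; a real implementation would batch these into a
	// single network round trip instead of looping.
	public function setMulti( array $items, $ttl = 0 ) {
		foreach ( $items as $key => $value ) {
			$this->set( $key, $value, $ttl );
		}
		return true;
	}

	public function getMulti( array $keys ) {
		$found = [];
		foreach ( $keys as $key ) {
			$value = $this->get( $key );
			if ( $value !== false ) {
				$found[$key] = $value;
			}
		}
		return $found;
	}
}
```

Keeping the method names and return conventions aligned with the PECL extension (e.g. `false` on miss, key-filtered arrays from `getMulti()`) is what would make the upgrade path transparent for callers.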
[18:56:53] yeah, as far as I know it's some very old third-party client that was grandfathered in at an early stage
[19:50:46] sorry, I missed the question - IIRC the main reason was its native support for get/setMulti() and that it seemed to have received more attention in general during recent MediaWiki development
[20:11:12] mszabo: I was just wondering if a particular problem or improvement attracted you to the switch, or if it was just for alignment with wmf and general practices - but it sounds like it behaves the same before and after
[20:11:39] yeah, we haven't noticed any big difference so far
[20:12:21] ok
[20:12:42] mszabo: btw, something else, I noticed a reference to php-mustache in the presentation - is that this one? https://pecl.php.net/package/mustache
[20:13:02] did you adopt that before LightNCandy or later? Curious how it compares
[20:13:39] I'd like to reconsider our use of lightncandy due to its security issues, but it's hard to beat its performance, esp. on-demand.
[20:13:53] We could potentially switch to a better maintained library and pre-compile some templates
[20:14:04] but maybe the C extension performs well enough that we don't need to?
[20:15:07] Yea this is the one
[20:15:27] We've been using it in 1.19 and I think we used the bobthecow library before that, so we never had a stint with lightncandy
[20:15:58] I found some benchmarks from 2013 or so in JIRA, but they are likely outdated now :D
[20:16:05] I see, so I guess you use both now, with one for your own code and the other for upstream?
[20:16:58] right now we compile in PHP, then cache with HMAC in memcached, and eval() at runtime. which is a very strange way of doing it :D
[20:17:54] upstream lightncandy doesn't currently have a separate AST that we can cache as a compromise without eval(). so I could contribute that upstream, but php-mustache seems to already have a cacheable AST, so it might be better to invest toward that
[20:18:10] then MW could use that when available and fall back to bobthecow on stock installs without the pecl extension
[20:18:22] Yup, pretty much
[20:20:26] and yeah, AST caching seems possible with the native extension, but IIRC we never explored that
[20:20:50] https://github.com/jbboehr/php-mustache/issues/2#issuecomment-19983070
[20:21:02] parsing is probably faster anyway.
[20:21:48] I mean if you're in PHP and you have a mustache template string, you have two choices: fetch another string from APCu, e.g. a json array of tokens, then parse/deserialise/allocate that and then iterate it to execute, or parse/execute the mustache string directly
[20:22:02] I can see how that might not even be faster in some cases
[20:22:58] true, you have the serialization overhead and the locking overhead in apcu
[20:31:45] looks like they added support for caching anyway: there's a toBinary and fromBinary method, and they also use this automatically if you serialize/unserialize, such as apcu store/fetch would do.
[20:31:54] I'll run the benches later
[20:32:13] awesome!
[20:32:43] do you need to support full handlebars though, or only mustache?
[20:33:02] just mustache
[20:33:49] cool, for some reason I thought MediaWiki was also using HBS now
[22:13:48] we use mustache.js client-side and lightncandy in mustache mode server-side
[22:13:52] sometimes reusing the same tpl files
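The compile-in-PHP / HMAC-in-memcached / eval()-at-runtime pipeline described above could look roughly like this. Everything here is a hedged sketch, not the actual MediaWiki code: `compileTemplate()` is a stub standing in for a real template compiler such as lightncandy, and the `$cache` array stands in for memcached. The HMAC check is the key point: cached PHP source is only eval()'d if it verifies against a secret the cache cannot forge.

```php
<?php
// Hypothetical sketch of the compile-once / HMAC-verify / eval() pipeline.
// $cache stands in for memcached; compileTemplate() stands in for running a
// real compiler (e.g. lightncandy) over a mustache template.

/** Stub "compiler": returns PHP source evaluating to a renderer closure. */
function compileTemplate( $templateName ) {
	// Pretend this PHP source came from a real template compiler.
	return 'return function ( $data ) { return "Hello, " . $data["name"] . "!"; };';
}

function getRenderer( $templateName, array &$cache, $secret ) {
	$entry = $cache[$templateName] ?? null;

	// Recompile on cache miss, or when the HMAC no longer matches, i.e.
	// the cached code cannot be trusted enough to eval().
	if ( $entry === null
		|| !hash_equals( hash_hmac( 'sha256', $entry['code'], $secret ), $entry['mac'] )
	) {
		$code = compileTemplate( $templateName );
		$entry = [
			'code' => $code,
			'mac' => hash_hmac( 'sha256', $code, $secret ),
		];
		$cache[$templateName] = $entry; // a real setup would memcached-set here
	}

	// The HMAC verification above is what makes eval() of cached bytes
	// tolerable: a tampered memcached value fails the check and is recompiled.
	return eval( $entry['code'] );
}

$cache = [];
$render = getRenderer( 'greeting', $cache, 'example-secret-key' );
echo $render( [ 'name' => 'mszabo' ] ), "\n";
```

A cacheable AST (as with php-mustache's toBinary/fromBinary) would remove the eval() step entirely: the cache would hold inert token data rather than executable PHP source, which is the compromise being weighed in the discussion above.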