[01:25:02] wikimedia should natively support forking and federation
[01:25:25] break wikipedia down into instances, and have them fork each other, which then fork each other, and so on, infinitely forever
[01:30:18] we already live in a world where ppl don't care about the truth so maybe having instances run by ppl they trust would help
[01:30:51] like, at the end of the day the only thing that matters is who you trust, not what's true
[01:31:05] so... turns out forking and federation might be helpful here
[01:31:31] and y'all fucked up by prioritizing some messed up notion of "truth" (btw, truth isn't real. sorry to break it to you.)
[01:32:22] I'd quote wikipedia on this one but eh, at this point, I'm too anti-wikipedia for that :v
[01:32:26] Soni: Have you seen http://fed.wiki.org/view/welcome-visitors ? I think it's the sort of thing you are talking about
[01:33:05] that didn't go far
[01:33:28] Anyways, wikipedia doesn't care about "truth". It cares about summarizing the literature of secondary sources in a balanced way. Those are two different things :P
[01:33:51] Soni: I mean the technology and forking model of Smallest Federated Wiki, not their website itself
[01:33:58] so like, unless it live-mirrors wikipedia (or, more specifically, *any* similar wiki) with a cache in front, it's useless
[01:34:52] I think the content fork model was also tried by Wikinfo at one point way in the past
[01:34:58] (ideally, *many* wikipedia-like wikis, at the same time, and presenting the results as a wikipedia-like wiki so that it can form chains and cycles and networks of wikis)
[01:35:30] But in any case, what you're suggesting is a very significant departure from the current wikipedia model. Nothing wrong with that, but you'll get further trying to make something new on that model than trying to convince wikipedia to change
[01:35:46] maybe I only need to convince wikipedia's software to change
[01:36:40] As a developer, I can assure you, we're not totally upending the entire model of MediaWiki without the wikipedians asking for it
[01:36:46] you spin up a wiki, install a small extension, and bam, you can fork/federate any combination of wikis, in whatever way you like
[01:37:08] either whole pages, parts of pages, or parts of pages combined into a new page, etc
[01:37:40] this wouldn't benefit just wikipedia, or just ppl who wanna fork wikipedia, but also any other wiki using the same software
[01:37:47] even SLIGHT changes break all sorts of things - MAJOR changes ... well ... to say all hell breaks loose is an understatement
[01:38:07] Soni: So basically https://en.wikipedia.org/wiki/Project_Xanadu ?
[01:38:27] is that web-browsable?
[01:38:36] it doesn't really exist
[01:38:42] it's more a proposal
[01:38:43] well that's nice
[01:38:55] That was very important historically
[01:39:01] in the history of the internet
[01:39:05] no, I mean literally write a wiki-scraper for mediawiki
[01:39:24] capable of scraping multiple wikis in various different ways, with local caching
[01:39:40] By all means, yes you can do that. Wikipedia is probably not going to change, but you can certainly make an extension to do that
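(For illustration, a minimal sketch of the scraper being described, assuming nothing beyond the standard MediaWiki action API (api.php); the wiki list, cache directory, and TTL below are invented placeholders, not anything from the conversation.)

```python
# Sketch of the multi-wiki scraper discussed above: fetch wikitext from any
# MediaWiki installation's action API, with a local file cache in front.
import hashlib
import json
import pathlib
import time
import urllib.parse
import urllib.request

WIKIS = {
    "enwiki": "https://en.wikipedia.org/w/api.php",  # any api.php endpoint works
    "mirror": "https://example.org/w/api.php",       # hypothetical mirror
}
CACHE = pathlib.Path("wiki-cache")
TTL = 3600  # serve cached copies for an hour; also hides repeat reads from the target

def fetch_wikitext(wiki: str, title: str) -> str:
    """Return a page's current wikitext, going to the network only on cache miss."""
    path = CACHE / hashlib.sha256(f"{wiki}:{title}".encode()).hexdigest()
    if path.exists() and time.time() - path.stat().st_mtime < TTL:
        return path.read_text()
    params = urllib.parse.urlencode({
        "action": "query", "prop": "revisions", "rvslots": "main",
        "rvprop": "content", "titles": title,
        "format": "json", "formatversion": "2",
    })
    req = urllib.request.Request(f"{WIKIS[wiki]}?{params}",
                                 headers={"User-Agent": "wiki-scraper-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    text = data["query"]["pages"][0]["revisions"][0]["slots"]["main"]["content"]
    CACHE.mkdir(exist_ok=True)
    path.write_text(text)
    return text

# "Combining parts of pages into a new page" is then just string assembly, e.g.
# fetch_wikitext("enwiki", "Cat") + fetch_wikitext("mirror", "Dog").
```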
[01:39:45] (local caching helps to hide its usage from the targets)
[01:40:02] And use it on non-wikipedia wikis
[01:40:16] I'll definitely use it on wikipedia and wikipedia mirrors
[01:40:25] ideally wikipedia mirrors would also use it on wikipedia and wikipedia mirrors
[01:40:42] well, ideally wikipedia wouldn't exist, so it'd only be mirrors mirroring each other :v
[01:41:23] I mean, you can use it to mirror wikipedia if you want
[01:41:35] you're not going to convince wikipedia to mirror other people with your thing (probably)
[01:42:01] probably not
[01:42:06] all I can hope for is to kill wikipedia with it
[01:42:17] if I can make it better than wikipedia
[01:42:54] But anyways, most of that is a simple matter of programming. It gets mildly hard if you want live mirrors (someone edits in one place, and it updates everywhere else), or "pull requests" (to merge back upstream). That's certainly all doable, but it starts to get mildly more complicated, especially if you are building on existing choices mediawiki has made
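(Again purely as a sketch: the "live mirror" half of that could be approximated by polling list=recentchanges on the upstream wiki. The endpoint, poll interval, and starting timestamp are placeholders, and the merge-back "pull request" direction is left out entirely.)

```python
# Sketch of a live mirror: poll the upstream wiki's recent changes and
# re-fetch anything that was edited.
import json
import time
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"  # upstream being mirrored

def changes_since(since: str) -> list:
    """List edits and page creations newer than `since` (an ISO 8601 timestamp)."""
    params = urllib.parse.urlencode({
        "action": "query", "list": "recentchanges",
        "rcprop": "title|timestamp", "rctype": "edit|new",
        "rcdir": "newer", "rcstart": since, "rclimit": "500",
        "format": "json", "formatversion": "2",
    })
    req = urllib.request.Request(f"{API}?{params}",
                                 headers={"User-Agent": "wiki-mirror-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["query"]["recentchanges"]

since = "2024-01-01T00:00:00Z"  # placeholder starting point
while True:
    for rc in changes_since(since):
        # a real mirror would re-fetch rc["title"] here and update its local copy
        since = rc["timestamp"]
    time.sleep(60)  # plain polling; Wikimedia wikis also offer a push feed (EventStreams)
```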
[01:43:20] Soni: Sounds good. Wikipedia could use some competition. Competition spurs innovation. Wikipedia has had a monopoly too long
[01:43:21] wikipedia is too big and too easily filtered. having highly specialized "subwikipedias" and combining them together in all sorts of ways would benefit everyone. it'd be truly censorship-resistant, unlike this wikipedia thing that we currently have.
[01:44:03] On the censorship front, I do wish we had "one-button mirrors", where you could basically apt-get install something and get a full mirror of some wikipedia site
[01:44:45] Some people tried to set up a Turkish mirror back when Turkey was blocking, and they had to put a lot more effort into getting it up and running than makes sense, given that everything is supposed to be forkable
[01:45:18] I'd rather have a mix of specialized and general-purpose wikipedias that feed on each other and become a giant blend. one thing that is many things.
[01:45:36] (a "singularity" of wikipedias, so to speak)
[01:46:06] The black hole of knowledge :P
[01:46:28] effectively, yes
[01:46:39] I actually disagree somewhat. I think wikipedia being a single administrative zone, and a single source of "truth", is a major part of its success
[01:46:41] have you tried the fediverse
[01:46:52] there's no such thing as "truth"
[01:47:19] Single source of "truth" does not mean The "Truth" with a capital T, just that there is only 1 truth (correct or not)
[01:47:48] To a certain extent, it's not about being true so much as agreeing on a truth, and not having a bunch of conflicting stuff
[01:47:52] if there's no The "Truth" with a capital T, then maybe there shouldn't *be* a single source of "truth"
[01:48:16] Without being a single source of truth, you might as well just be the world wide web circa the mid-90s
[01:48:29] instead, a network of trust (a "web of trust", as they call it) seems more optimal
[01:48:47] more like the early 90s
[01:49:12] when IRC started popping up, etc
[01:49:18] ppl used to trust each other
[01:49:33] then we started to get monoliths, and the erosion of trust
[01:50:08] mastodon/the fediverse emphasises trust, and maybe you should too
[01:50:19] There is a reason we got monoliths... I don't know if you remember what searching for actual information on the internet was like in the mid-90s, but there wasn't a lot of info
[01:50:51] you had to know someone who knew someone who knew what you wanted, yes... that's still largely true
[01:51:12] especially with the way search engines have been poisoned lately...
[01:51:17] Like, I think if your model worked, then it would have worked in geocities and whatnot
[01:51:44] and like, who knows, maybe wikipedia has been poisoned too? how would you know?
[01:51:50] It's not like, before wikipedia came along, people didn't know how to create a page on the internet
[01:52:02] there are ppl trying to put things on wikipedia and not disclose paid affiliations
[01:52:15] sometimes, they even manage to do it right under your noses
[01:52:24] I think the success of wikipedia is in forcing people to work together, and not try and "own" their page
[01:52:39] Soni: I mean, I think everyone is keenly aware of the paid editing problem
[01:52:50] remember that company that edited a bunch of pictures with product placement or something?
[01:53:19] that doesn't happen if there's mutual trust
[01:53:25] just take a look at the fediverse
[01:53:37] Soni: There's a list of suspected people at https://en.wikipedia.org/wiki/Wikipedia:List_of_paid_editing_companies (I imagine there are many more unknown)
[01:53:42] you don't get companies making posts on the fediverse
[01:53:55] Because you don't have anybody reading the fediverse :P
[01:53:59] they try to pay admins
[01:54:11] they could just... make the posts themselves
[01:54:13] With popularity comes malicious actors
[01:54:14] but they don't
[01:54:19] because... who knows why
[01:55:09] anyway, the fedi has been steadily growing and we don't have half the problems y'all deal with
[01:55:59] and the main difference is that we work with frameworks of trust, not "truth", not "algorithms"
[01:58:05] Wikipedia also did not have the same problems when it was the size of the fediverse...
[01:58:24] Soni: Anyways, I have to go, but this was an interesting conversation. :)
[01:59:15] and it kinda had a framework of trust when it was the size of the fediverse...
[01:59:36] that got eroded :v
[13:06:14] does anyone know what the current status of MediaWiki API end-to-end tests (T219873) is?
[13:06:15] T219873: Create a suite of end-to-end API test for MediaWiki core - https://phabricator.wikimedia.org/T219873
[13:06:21] can we add them in extensions already, or is this still work in progress?
[13:13:59] looks like https://www.mediawiki.org/wiki/MediaWiki_API_integration_tests is the right documentation, I think
[20:30:27] Does anyone know what edits to make for Apache in the MediaWiki docker image to increase the LimitRequestLine parameter?
[20:31:06] Tried to edit /etc/apache2/apache2.conf to include the LimitRequestLine field but it doesn't seem to work
[20:31:56] I need to edit it to make a large API URL request
[20:32:03] Did you restart apache?
[20:32:17] You can presumably send the stuff in POST rather than GET to get around this
[20:32:33] Oh I didn't know you could POST it
[20:32:52] definitely :)
[20:37:23] Okay that seems to work. Thanks
[20:37:52] Sweet
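(On that last exchange: Apache's LimitRequestLine default is 8190 bytes, and raising it means adding the directive to the server config and restarting Apache; in a Debian-based image that would typically be a snippet under /etc/apache2/conf-enabled/. The POST workaround needs no config change at all, though. A sketch in Python, with a hypothetical local endpoint and a deliberately oversized parse request:)

```python
# Sketch of the POST workaround: the same action-API parameters that would
# blow past LimitRequestLine in a GET URL travel in the request body instead.
import json
import urllib.parse
import urllib.request

API = "http://localhost:8080/w/api.php"  # placeholder docker-hosted wiki

params = {
    "action": "parse",
    "text": "lorem ipsum " * 2000,  # ~24 KB of wikitext, far over the 8190-byte line limit
    "contentmodel": "wikitext",
    "format": "json",
    "formatversion": "2",
}
body = urllib.parse.urlencode(params).encode()  # body, not query string, so no URL limit
req = urllib.request.Request(API, data=body,    # passing data= makes this a POST
                             headers={"User-Agent": "post-example/0.1"})
with urllib.request.urlopen(req) as resp:
    html = json.load(resp)["parse"]["text"]  # rendered HTML of the submitted wikitext
print(html[:80])
```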