[00:05:23] 🤔
[12:51:41] hello
[12:52:13] if I set up a private wiki, can I set it as open afterwards? Or do I have to ask the mods to do it for me?
[12:52:47] You can use ManageWiki to change wiki settings
[12:52:54] ok, thank you!
[12:53:29] Special:ManageWiki/core www.yourwiki.miraheze.org/wiki/Special:ManageWiki/core
[12:56:28] Jakeukalane: ^
[12:56:47] I don't have the wiki yet
[12:56:58] but I'll write down these links. Thank you
[13:50:33] Jakeukalane, did you want your wiki to be public or private? To be clear, you can add anyone to the `member` group and they will be able to view your wiki.
[13:53:38] * BurningPrincess1 waves hello to dmehus
[13:54:09] hi, BurningPrincess1
[13:59:38] I selected it to be private. I want it to be public after I finish setting it up
[14:00:04] (it will take a while, though)
[14:00:44] By the way, how can I set a repository that is already linked as the repository of that wiki? I didn't do that on the other wiki, so I don't know how it is done
[14:01:17] Let me rephrase, there were many mistakes in that sentence. How can I set a repository that I already have linked as the repository in ANOTHER wiki?
[14:08:13] Jakeukalane, I believe there are settings for that in ManageWiki/settings. I believe on your new wiki, you'd have to enter either the subdomain or database name (ending in `wiki`) in the WikiBase repository field.
[14:08:34] and on the other wiki, you'd want to ensure that that wiki is designated as a WikiBase repository
[14:09:02] You may also want/need interwiki prefixes set up to either wiki, which any `interwiki-admin` or `steward` can assist you with
[14:09:52] I'll search for it; I just wanted to know whether it was in the options or whether I needed admin help. The other wiki's repository is already set up. Thank you.
[14:10:25] Jakeukalane, oh ok. No problem. Yep, it's definitely possible. Let us know if you need further assistance. :)
[14:12:59] What is WikiBase?
[14:13:44] BurningPrincess1, it's basically a wiki database. The best example of it is Wikidata, www.wikidata.org
[14:13:52] Oh
[14:14:22] It provides for linked data to be used on pages, in templates, and so forth, across multiple wikis and even wiki farms in the case of Wikidata
[14:15:22] Hello puma! If you have any questions, feel free to ask and someone should answer soon.
[14:17:36] I think Wikibase is a bit more complicated to set up
[14:20:20] Yeah, I agree... maybe Cargo can't do as much as Wikibase cross-wiki, but from what you've described, for a simpler wiki database, that's what I'd go with.
[14:21:03] yeah, although there is a way to export data from Cargo to be used as an external API
[14:21:11] I don't know how it works, but Nookipedia does that
[14:22:04] https://nookipedia.com/wiki/Main_Page
[14:22:04] [ Nookipedia, the Animal Crossing wiki ] - nookipedia.com
[14:22:07] Oh, yeah, that's interesting... not exactly the same, but I can see that being used.
[14:22:41] Wow, there's a wiki for Animal Crossing? heh
[14:23:00] yes, they're part of NIWA
[14:23:12] (a group of wikis dedicated to (mainly) Nintendo games)
[14:23:23] at the bottom there's a link to all of them
[14:25:10] ah
[14:25:19] oh cool looks
[14:25:45] Nintendo Independent Wiki Alliance, cool
[14:27:06] @Doug: check -sre
[14:27:17] and ARMS wiki was originally hosted on Miraheze, by the way
[14:27:19] RhinosF1, seen and ty ❤️
[14:27:45] oh, neat, and Super Mario Bros wiki is the one your wiki is a part of, right?
[14:28:24] yeah, originally it was supposed to be kinda like the non-English version, heh. But I will see how that will go
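For reference on the Cargo-as-external-API idea mentioned above: Cargo exposes its tables through the `action=cargoquery` API module, so an outside tool can pull structured data over plain HTTP. A minimal sketch, assuming that module is enabled on the target wiki; the endpoint path and the table/field names are placeholders, not Nookipedia's actual schema:

```python
import requests

# Hypothetical endpoint and table/field names -- adjust for the target wiki.
API = "https://nookipedia.com/w/api.php"  # the api.php path is an assumption

params = {
    "action": "cargoquery",        # API module provided by Extension:Cargo
    "tables": "villager",          # placeholder Cargo table name
    "fields": "name,species",      # placeholder fields
    "limit": 10,
    "format": "json",
}

resp = requests.get(API, params=params, timeout=30)
resp.raise_for_status()
for row in resp.json().get("cargoquery", []):
    # each result row is a JSON object keyed by the requested field aliases
    print(row["title"])
```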
[14:31:32] Hm, namespaces for worlds or wikis....
[14:32:57] ah, cool
[17:50:47] hello
[17:50:57] I think auto-creating categories requires ManageWiki permissions
[18:02:28] Jakeukalane, yes, do you want that enabled on your wiki? If so, which wiki?
[18:02:43] I can enable it for you, if you want
[18:02:59] I'm getting an invalid signature error for https://meta.miraheze.org/wiki/Special:OAuthListConsumers/view/91409d445b28606bb2d839d45bfaa70b
[18:03:01] [ List OAuth applications - Miraheze Meta ] - meta.miraheze.org
[18:03:40] Given that it used to work, I'm wondering what changed.
[18:03:49] Skynet, have you updated its signature to comply with the new DiscussionTools-forced signature requirements?
[18:04:05] dmehus: The what now?
[18:04:18] With the upgrade to MediaWiki 1.35 and installation of DiscussionTools, there's now a signature requirement to post to talk pages.
[18:04:26] Let me get the link
[18:04:38] Huh???
[18:04:49] Skynet, https://www.mediawiki.org/wiki/New_requirements_for_user_signatures/Help
[18:04:50] [ New requirements for user signatures/Help - MediaWiki ] - www.mediawiki.org
[18:04:55] how are talk page signatures relevant to OAuth application logins?
[18:04:58] It'll likely require a change to your bot's code
[18:05:03] What does this have to do with OAuth?
[18:05:05] Do you mean the basic sig, like when you do this: ~~~~ ?
[18:05:35] Or does it have to be custom?
[18:05:50] Majavah, well, it's not, but Miraheze hasn't done anything that should've required changes. That's up to Skynet to resolve the issues with his bot
[18:06:06] Just wondering if maybe his bot couldn't post to talk pages
[18:06:19] I can't log in.
[18:06:19] Skynet: Pinging Reception123, Zppix, PuppyKun, Voidwalker, or RhinosF1 who might be able to help you.
[18:06:39] err what?
[18:06:45] dmehus: hey, can you make me a custom sig?
[18:07:04] Not sure how I just triggered MirahezeBot lol.
[18:07:32] I think it is scripted to say things when people say certain things
[18:08:01] dmehus: I think Skynet is talking about signature as in RFC 5849 section 3.4, not talk page signatures
[18:08:08] dmehus: In any event my problem is not with posting to talk pages, it's with logging in through OAuth. My consumer appears to be broken?
[18:08:19] Majavah, ah that makes sense, thanks :)
[18:08:39] Skynet, would redoing your OAuth consumer token help?
[18:08:52] An SRE can approve your updated OAuth token request
[18:09:04] Probably, but consumers can't be deleted, so I want to be sure before I needlessly create more consumers.
[18:09:11] Oh
[18:09:21] you can't update the existing OAuth token request?
[18:09:39] Skynet: what does the URL used for the signature look like? IIRC MW is picky about whether it contains the port number or not; I can't remember which way it went
[18:09:42] Something that paladox or SPF|Cloud can help me with.
[18:10:20] Majavah: Oh yes, I know. But the same code is running fine on WMF wikis and other wikis.
[18:10:29] Majavah, oh, good point. Could this also be related to the bot's cookies? Can Skynet clear the cookies on the bot's server?
[18:11:14] * dmehus wonders if this is related to our cookie configuration changes and now it's hitting session hijacking errors that could be resolved by resetting/deleting the bot's cookies
[18:11:23] Cookies are generally meaningless with OAuth, as the session only exists for the duration of an OAuth-headered request.
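For context on why cookies don't matter here: with an owner-only OAuth consumer, every API call is individually signed and carries an Authorization header, so there is no persistent cookie session to keep alive between requests. A minimal sketch using requests-oauthlib with placeholder credentials; this is not IABot's actual client code:

```python
import requests
from requests_oauthlib import OAuth1

# Placeholder credentials from an approved owner-only OAuth consumer.
auth = OAuth1(
    client_key="consumer_key",
    client_secret="consumer_secret",
    resource_owner_key="access_token",
    resource_owner_secret="access_secret",
)

# Every request carries the signed OAuth Authorization header; there is no
# cookie-based login session to maintain between calls.
resp = requests.get(
    "https://meta.miraheze.org/w/api.php",
    params={"action": "query", "meta": "userinfo", "format": "json"},
    auth=auth,
)
print(resp.json())  # shows which account the signed request authenticated as
```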
[18:11:30] oh
[18:11:44] One always needs to pass an OAuth header to stay logged in.
[18:11:50] ah
[18:12:39] I guess just copy your existing approved OAuth token request into a new request and then reference (i.e., link to) the previously approved request so an SRE can expire the old request and approve the new one?
[18:12:48] Reception123, ^
[18:13:08] or maybe paladox, ^
[18:13:14] Logging on as InternetArchiveBot...Failed!!
[18:13:14] ERROR: Invalid identify response: {"error":"mwoauth-oauth-exception","message":"An error occurred in the OAuth protocol: Invalid signature"}
[18:13:15] I'm still curious about the URLs used when signing requests.
[18:13:27] yeah
[18:13:40] I need some sysadmin to confirm whether my consumer got corrupted somehow. If it did, I will create a new one.
[18:14:03] https://tools.ietf.org/html/rfc5849#section-3.4.1.2 states that the default port must be omitted, but the UTRS OAuth code has a hack to include it because otherwise MediaWiki does not like it
[18:14:04] [ RFC 5849 - The OAuth 1.0 Protocol ] - tools.ietf.org
[18:14:16] oh
[18:14:48] SPF|Cloud, around to look at ^?
[18:15:09] or https://phabricator.wikimedia.org/T59500
[18:15:10] [ ⚓ T59500 Impossible to use https://www.mediawiki.org/wiki/Special:OAuth/initiate?format=&oauth_callback= style URL ] - phabricator.wikimedia.org
[18:18:47] dmehus: any system admin can view OAuth
[18:18:50] Just FYI
[18:20:08] Skynet: can you try resetting the secret first?
[18:20:29] RhinosF1 how would I go about that?
[18:20:51] Skynet: I think it's in the "manage your own consumer" section
[18:21:36] My bot lacks the permissions to manage its own consumer
[18:22:24] Skynet: whoever created it should be able to. Is there an error?
[18:22:44] RhinosF1: InternetArchiveBot created the consumer.
[18:22:55] Looking
[18:22:55] But it doesn't have permission now.
[18:23:17] Zppix, yeah... I know I did ping Reception123 as well, but thought it should be someone with expertise in the matter
[18:23:44] Skynet, if you need permissions, let me grant it `confirmed`
[18:24:14] Added back Skynet
[18:24:21] dmehus: already done
[18:24:52] only (auto)confirmed and sysop have mwoauthupdateownconsumer
[18:25:11] Oh
[18:25:17] Yeah, someone removed said right Majavah
[18:25:28] I could add that to `confirmed` if you want, RhinosF1?
[18:25:48] dmehus: just stop talking
[18:26:37] Resetting the keys worked. So the consumer was broken for some reason.
[18:26:40] RhinosF1, why? And actually, I just checked, Majavah, and `confirmed` has `mwoauthupdateownconsumer`
[18:26:59] that's exactly what I said
[18:27:04] Skynet: if it's been a while since it was used then it might have broken during the widgets incident at Christmas
[18:27:24] RhinosF1: More like it's been a while since I last checked if it worked.
[18:27:30] dmehus: because you've been talking nonsense for this entire conversation and I don't want you to look stupid
[18:27:37] Majavah, oh, okay, you meant both autoconfirmed and confirmed with (auto)confirmed
[18:27:43] Didn't know it was broken this entire time.
[18:27:59] Skynet: yeah, it'll be from when we changed all the secret keys then
[18:28:11] If I meant autoconfirmed only I'd have said autoconfirmed, not (auto)confirmed :P
[18:28:56] RhinosF1, just trying to help, and I did help when I suggested adding `confirmed`, which you did after I suggested that
[18:29:07] RhinosF1: Thank you. :D
[18:29:15] dmehus: I know you mean well
[18:29:23] Skynet: no problem.
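The port detail Majavah raised comes from RFC 5849 §3.4.1.2: when building the signature base string, the scheme and host are lowercased and the default port (80 for http, 443 for https) must be omitted, so a client and server that disagree on that rule compute different signatures and the request fails with "Invalid signature". A rough illustration of the normalisation rule only, not the UTRS or MediaWiki implementation:

```python
from urllib.parse import urlsplit

def base_string_uri(url: str) -> str:
    """Normalise a URL per RFC 5849 section 3.4.1.2 (sketch).

    Scheme and host are lowercased, the default port (80/443) is omitted,
    and the query string is excluded here -- query parameters are signed
    separately in the parameter-normalisation step.
    """
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = (parts.hostname or "").lower()
    default = {"http": 80, "https": 443}.get(scheme)
    authority = host if parts.port in (None, default) else f"{host}:{parts.port}"
    return f"{scheme}://{authority}{parts.path or '/'}"

# ":443" is dropped, so both spellings of the URL sign identically:
print(base_string_uri("https://meta.miraheze.org:443/w/index.php?title=X"))
# -> https://meta.miraheze.org/w/index.php
```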
[18:30:04] RhinosF1, okay, just maybe try and be a bit less direct, or DM me :)
[18:35:41] RhinosF1: I am, however, having trouble accessing the Miraheze DB. Seems to be timing out.
[18:36:48] Skynet: that's an SPF|Cloud question
[18:37:01] I remember talk about those grants
[18:37:15] But the DB name will have changed as well
[18:37:19] As it's new infra
[18:37:26] paladox might also know
[18:38:39] That would probably explain that as well.
[18:39:12] RhinosF1: so I guess I relentlessly ping them without mercy until they cater to my whims?
[18:39:36] Or create a task
[18:39:38] But yes
[18:39:48] That's something for someone on infra
[18:39:56] I only deal in MediaWiki servers
[18:41:31] RhinosF1, by "DB name," do you mean the database server's hostname rather than the wiki database name? Assuming you must mean that, as the wiki database name wouldn't have changed in the migration
[18:42:48] I'll just wait on paladox or SPF|Cloud to comment here before going to Phabricator.
[18:42:57] hmm?
[18:43:42] paladox can you fix my DB access?
[18:44:40] paladox I understand you made some changes to it.
[18:47:41] Requires approval from SPF|Cloud/John, I think. I removed access here https://github.com/miraheze/puppet/pull/1381 (at the request of SPF|Cloud?), but I only vaguely remember why.
[18:47:41] [ db: Remove some grants by paladox · Pull Request #1381 · miraheze/puppet · GitHub ] - github.com
[18:48:25] paladox why? I had received prior approval from John
[18:48:45] SPF|Cloud: Why did you remove my grant?
[18:51:03] Wow, IABot has been broken since May??
[18:51:35] 🤔
[18:52:12] I believe there were some issues regarding one of the fields, but I can't 100% remember. Wish I had a better commit message. It wasn't my decision; I was just asked to remove it.
[18:52:30] but almost a year is a long time to remember :)
[18:55:48] Skynet: my guess would be it now needs an NDA or something
[18:56:03] But wait for SPF|Cloud
[18:57:15] I mean, I can vouch for Skynet, he is highly trustworthy, so if it's a trust thing...
[18:58:39] Yeah, trust-wise no issue
[19:00:04] :-)
[19:56:14] RhinosF1 I can't seem to reset my personal consumer keys. Probably because it's not approved?
[19:56:15] https://meta.miraheze.org/wiki/Special:OAuthConsumerRegistration/update/f7fb64ca60ab4865734e9449ffaf5121
[19:56:17] [ Permission error - Miraheze Meta ] - meta.miraheze.org
[19:56:47] Skynet: it's expired
[19:56:57] You'll need to request again
[19:57:22] Ugh.
[20:06:08] RhinosF1: reproposed
[20:07:17] Ah, glad to see Miraheze working again at iabot.toolforge.org
[20:09:24] Skynet: link to it?
[20:09:29] Skynet: your DB access was removed because the grants gave you access to more information than intended
[20:09:59] SPF|Cloud oh.
[20:10:02] RhinosF1 https://meta.miraheze.org/wiki/Special:OAuthListConsumers/view/c9bf76560a908b8491f33f03a70a6c0f
[20:10:03] [ List OAuth applications - Miraheze Meta ] - meta.miraheze.org
[20:10:25] you had access to deleted revisions as well
[20:10:25] SPF|Cloud: Like what? I only had access to text, revision, and page
[20:11:58] dmehus: do you have access to approve consumers or is that sysadmin-only?
[20:12:02] I can never remember
[20:12:39] SPF|Cloud: can we grant more restrictive access?
[20:13:14] without refactoring the MediaWiki database architecture, I doubt it
[20:17:48] SPF|Cloud: I don't understand the problem. Text doesn't have any deleted content.
[20:18:07] to my knowledge, it does
[20:18:12] Neither do the page and revision tables. All deleted content goes into the archive table.
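Background for the exchange that follows: under the pre-1.35 schema in use at the time, deleting a page moves revision rows into the archive table, but the revision text itself stays in the text table, and archive.ar_text_id keeps pointing at text.old_id, which is why a grant on text alone can expose deleted content. A sketch of that relationship with placeholder connection details (this is not the query from the Quarry link below):

```python
import pymysql

# Pre-MediaWiki-1.35 schema sketch: deleted revisions' text remains in `text`,
# and archive.ar_text_id still references text.old_id, so SELECT on `text`
# reaches deleted content. Host, user, password, and database are placeholders.
conn = pymysql.connect(host="db.example", user="bot", password="password",
                       database="examplewiki")
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT ar_title, ar_timestamp, old_id
        FROM archive
        JOIN text ON ar_text_id = old_id
        LIMIT 10
        """
    )
    for row in cur.fetchall():
        print(row)  # titles/timestamps of deleted revisions, plus their text row IDs
```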
[20:18:18] if you can prove otherwise, please tell me
[20:19:07] The replication DB on WMF once accidentally provided unrefactored access to archive. It's literally the table of deleted revisions. Text is a public archive of all live revisions.
[20:19:56] SPF|Cloud: what do WMCS do for their replicas, as that would pose the same issues for deleted content?
[20:20:29] no idea, but good question
[20:20:38] I'm looking at the database tables now to gather more information
[20:22:33] SPF|Cloud: text isn't replicated
[20:22:38] for every revision in the archive table, https://www.mediawiki.org/wiki/Manual:Archive_table#ar_text_id is a reference to https://www.mediawiki.org/wiki/Manual:Text_table#old_id
[20:22:38] [ Manual:archive table - MediaWiki ] - www.mediawiki.org
[20:22:38] [ Manual:text table - MediaWiki ] - www.mediawiki.org
[20:22:41] https://quarry.wmflabs.org/query/52841
[20:22:42] [ Untitled query #52841 - Quarry ] - quarry.wmflabs.org
[20:23:03] in MediaWiki 1.35, things have changed, but obviously we weren't running that back in May 2020
[20:23:47] Ah, SPF|Cloud was right. I was wrong, but either way, I'm not abusing the privilege of the access.
[20:24:18] As a matter of fact, IABot is literally only capable of grabbing live revisions, as it does a join with the revision table.
[20:24:24] that's not sufficient to get access though
[20:24:35] I can sign an NDA
[20:24:39] SPF|Cloud: could we cover it with an NDA?
[20:25:18] signing an NDA is one of the steps, yes
[20:25:27] The others being?
[20:25:42] SPF|Cloud: WMCS uses filtered database views
[20:26:34] Majavah: yes, but I wasn't sure how they would filter a text table, because you would need to write a mechanism looking up the state of a revision associated with an old_id :)
[20:26:43] apparently the text table isn't replicated
[20:27:01] It doesn't exist according to Quarry
[20:27:06] I think an NDA is the best bet
[20:27:16] and one of the other steps is explaining the necessity of access
[20:27:27] It is not, but IABot is living next to their API and can get the results as quickly with an API binary sweep.
[20:27:52] in this context: what is the use case that cannot be fulfilled using the MediaWiki API
[20:29:13] WMCS has a nice replica service, although running sanitised wiki replicas is a huge task
[20:29:47] The 0 latency there vs the high latency here.
[20:30:18] I get instant API results and can make many requests without much of a time penalty.
[20:30:52] is your service that time-sensitive?
[20:31:25] Given the load and amount of stuff it has to do, optimizing for time is a priority, yes.
[20:32:01] If you do not want to grant access to the text table, IABot can work around it while making use of the remaining tables.
[20:32:22] But it's just more optimized to work with a DB when working with remote servers.
[20:33:04] what kind of operations do you perform?
[20:33:16] Skynet: I approved the OAuth grant
[20:33:37] Stewards don't seem to have access to that, only sysadmins, so that seems fine
[20:33:47] SPF|Cloud: live revision history search to assess when specific URLs were added.
[20:33:48] in the case of a service using the API, I presume you fetch revision numbers per page, then revision information (including content) in batches
[20:34:19] I optimized API searching by using a parallelized binary sweep.
[20:35:23] Since the API imposes restrictions on retrieval.
[20:36:00] I think the API should be faster than the last time you tried.
[20:36:15] do you have a link to the source code? I would like to review the code
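For readers unfamiliar with the "parallelized binary sweep" mentioned above: because a URL, once added to a page, normally stays present in later revisions, the earliest revision containing it can be found with a binary search over the revision list instead of fetching every revision. A single-threaded sketch against the standard MediaWiki API with a placeholder endpoint; IABot's real implementation also parallelizes these probes and batches requests:

```python
import requests

API = "https://meta.miraheze.org/w/api.php"  # any MediaWiki api.php endpoint

def revision_ids(title):
    """Fetch revision IDs for a page, oldest first (single batch for brevity)."""
    r = requests.get(API, params={
        "action": "query", "prop": "revisions", "titles": title,
        "rvprop": "ids|timestamp", "rvlimit": "max", "rvdir": "newer",
        "format": "json",
    }).json()
    page = next(iter(r["query"]["pages"].values()))
    return page.get("revisions", [])

def revision_text(revid):
    """Fetch the wikitext of one revision by its ID."""
    r = requests.get(API, params={
        "action": "query", "prop": "revisions", "revids": revid,
        "rvprop": "content", "rvslots": "main", "format": "json",
    }).json()
    page = next(iter(r["query"]["pages"].values()))
    return page["revisions"][0]["slots"]["main"]["*"]

def first_revision_with(title, url):
    """Binary-search the history for the earliest revision containing `url`.

    Assumes the URL stays in the page once added; that monotonicity is what
    makes a binary sweep valid instead of a linear scan of every revision.
    """
    revs = revision_ids(title)
    lo, hi, found = 0, len(revs) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if url in revision_text(revs[mid]["revid"]):
            found, hi = revs[mid], mid - 1   # keep looking for an earlier hit
        else:
            lo = mid + 1
    return found
```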
[20:37:02] I'm reworking the DB text search, but yes.
[20:37:06] One sec...
[20:37:14] Multitasking right now
[20:39:41] SPF|Cloud
[20:39:42] https://github.com/internetarchive/internetarchivebot/blob/master/app/src/Core/APII.php#L4048
[20:39:43] [ internetarchivebot/APII.php at master · internetarchive/internetarchivebot · GitHub ] - github.com
[20:39:48] thank you
[20:40:13] It's being reworked right now, as not all DBs like the CONTAINS syntax, and as I came to discover, text is gzipped
[20:44:57] https://github.com/internetarchive/internetarchivebot/blob/master/app/src/Core/APII.php#L4063 this returns the rev_timestamp value of a revision where text.old_id 'CONTAINS' URLs?
[20:44:58] [ internetarchivebot/APII.php at master · internetarchive/internetarchivebot · GitHub ] - github.com
[20:45:08] I'm confused
[20:46:39] Yes, why the confusion?
[20:47:04] old_id is an auto-incrementing integer
[20:47:33] Okay?
[20:47:34] old_text is the actual text (or a gzipped variant, if you're using compression)
[20:47:44] Yes
[20:48:04] Oh. Wow. There's a bug
[20:48:16] I never even knew that was there.
[20:48:25] I wonder how many requests failed because of that.
[20:48:40] glad I have helped you
[20:49:29] I ditched the query in favor of a less taxing query because of gzip.
[20:49:46] But why did you need to look at the code?
[20:50:55] because I am trying to see how we could help you by generating files containing the content you need
[20:52:03] I CAN use the API, but it's not desired.
[20:52:13] IABot is flexible in that sense.
[20:53:37] You work for the Internet Archive?
[20:53:52] you said it's not desired due to the latency
[20:54:40] therefore, I would like to know if we can reduce the load by pregenerating certain files with content every X days/weeks/months
[20:55:15] BurningPrincess1: it's just sponsored as far as I recall, rather than actual employment
[20:55:21] Hm ok
[20:55:43] RhinosF1: I work for them
[20:55:58] History of IABot:
[20:56:02] I would prefer my wiki not to be archived
[20:56:07] Skynet: oh, maybe I misread somewhere
[20:56:18] Volunteer bot -> sponsored -> paid employee
[20:56:34] Skynet: ah
[20:56:53] We should have asked you when someone kept asking how to get stuff taken off IA
[20:57:14] I would rather my wiki not be on the IA
[20:57:20] BurningPrincess1: maybe ask Skynet how to go about it
[20:57:35] Skynet: how can I have my wiki not on the IA?
[20:57:45] But we upload public wiki dumps anyway
[20:57:46] BurningPrincess1 ??
[20:57:51] Gah
[20:57:58] IABot is an invited bot.
[20:58:08] I don't want my wiki on the IA
[20:58:13] So Reception123 will need to take you off the dump list
[20:58:17] It will not run on a wiki that it is not invited to.
[20:58:21] K
[20:58:28] Skynet: I think they mean like archive.org as a whole, maybe?
[20:58:31] BurningPrincess1: ^
[20:58:34] Ok
[20:58:46] BurningPrincess1: do you?
[20:58:50] Yes
[20:59:03] Unless you mean archiving pages; I have no control over that.
[20:59:07] I would rather people not upload dumps of my wiki
[20:59:20] Or have it be archived by your bot
[20:59:32] BurningPrincess1: Skynet's bot doesn't do the archiving
[20:59:33] The Wayback Machine records snapshots of pages, not dumps
[20:59:39] IKA
[20:59:46] It just fixes links to broken pages if you ask
[21:00:07] k.
[21:00:13] They take their own snapshots and Reception123 handles public dumps
[21:00:35] But as far as I recall, they don't take it down unless there are copyright issues
[21:02:32] k.
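The bug discussed above is that the query matched URLs against text.old_id (an integer key) rather than old_text, and even a corrected SQL LIKE cannot match once old_text is stored compressed. One way to express the intended lookup against the pre-1.35 schema, filtering client-side; the connection details and page ID are placeholders, the raw-deflate handling assumes MediaWiki's historical gzdeflate storage, and this is not IABot's actual fix:

```python
import zlib
import pymysql

# Sketch: find timestamps of live revisions whose text (old_text, not old_id)
# contains a URL. Pre-1.35 schema assumed (revision.rev_text_id -> text.old_id).
# Because old_text may be stored deflate-compressed (old_flags contains 'gzip'),
# the substring match happens client-side after inflating.
conn = pymysql.connect(host="db.example", user="bot", password="password",
                       database="examplewiki")
url = b"https://example.org/dead-link"  # placeholder URL to search for

with conn.cursor() as cur:
    cur.execute(
        """
        SELECT rev_timestamp, old_text, old_flags
        FROM revision
        JOIN text ON rev_text_id = old_id
        WHERE rev_page = %s
        """,
        (12345,),  # placeholder page ID
    )
    for rev_timestamp, old_text, old_flags in cur.fetchall():
        if b"gzip" in old_flags:
            # MediaWiki's 'gzip' flag historically means raw deflate (gzdeflate)
            old_text = zlib.decompress(old_text, -zlib.MAX_WBITS)
        if url in old_text:
            print(rev_timestamp)
```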
[21:03:03] @dmehus on bovedawiki
[21:09:09] so, there was an automatic IA bot doing snapshots and it has been broken since May *_*
[21:09:38] No
[21:10:06] * BurningPrincess1 archives RhinosF1 in the Wayback thingy
[21:10:48] Jakeukalane: https://meta.miraheze.org/wiki/User:InternetArchiveBot#What_is_InternetArchiveBot_and_what_does_it_do?
[21:10:48] [ User:InternetArchiveBot - Miraheze Meta ] - meta.miraheze.org
[21:10:56] BurningPrincess1: how would that even work
[21:11:22] It's a snapshot of your mind
[21:12:03] BurningPrincess1: that would be creepy
[21:12:16] I do think one day we will have "snapshots" of brains uploaded to computers
[21:12:26] * RhinosF1 doesn't even want to know what a snapshot of my mind looks like
[21:14:08] @Jakeukalane, looking
[21:16:15] @Jakeukalane, ✅ {{Done}}
[21:46:09] that sounds a bit too much like Philip K. Dick
[21:46:19] thank you @dmehus
[21:53:09] No problem, Jakeukalane :)
[23:38:37] * hispano76 loves Miraheze without ads