[12:00:32] I've tried to reimage 2 ms-be nodes (Dell) in codfw to move them to a new VLAN, and in both cases the systems are failing to DHCP. Is there a known issue here?
[12:01:09] CLIENT MAC ADDR 00 62 0B 74 EA 40 for ms-be2074 and 00 62 0B 75 4A 80 for ms-be2076
[12:02:28] These systems were originally installed in October 2023 (T349839), this is the first reimage since.
[12:02:29] T349839: Q2:rack/setup/install ms-be refresh - https://phabricator.wikimedia.org/T349839
[12:30:58] phabricator just gave me an error "Unable to establish a connection to any database host (while trying "phabricator_policy"). All masters and replicas are completely unreachable. AphrontConnectionLostQueryException: #2006: MySQL server has gone away This error may occur if your configured MySQL "wait_timeout" or "max_allowed_packet" values are too small. This may also indicate that something used the MySQL "KILL " command to
[12:30:58] kill the connection running the query."
[12:32:06] is this expected?
[12:32:45] it shouldn't be
[12:32:47] let me check
[12:33:27] Something hit the DB
[12:33:45] https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&from=now-3h&to=now&timezone=utc&var-job=$__all&var-server=db1250&var-port=9104&refresh=1m&viewPanel=panel-2
[12:35:31] It seems the values are back to normal
[12:36:01] Thanks, phab seems happier now
[12:36:10] andre: is there anything similar to "recentchanges" in phabricator? I can inspect the binlogs but that's going to be messy, so maybe there's a better way to scan for that weird activity
[12:39:30] https://phabricator.wikimedia.org/feed/ ?
[12:40:26] jynus: yeah, I was just checking that
[12:40:36] but nothing stands out there
[12:41:35] Feed is just user-visible activity
[12:42:25] For DB connectivity, not really I'd say. There are generally things like https://phabricator.wikimedia.org/daemon/ or https://phabricator.wikimedia.org/config/cluster/databases/ but they are not helpful at all in this case
[12:43:00] marostegui: I think the only place to see DB issues listed is https://logstash.wikimedia.org/app/dashboards#/view/AWt2XRVF0jm7pOHZjNIV (may be filtered out by default) or the error log on phab1004
[12:43:14] in the past high activity came from repo imports, maybe have a look at Diffusion
[12:43:34] andre: I don't have access to those first two links :(
[12:44:09] it seems to have started around Jan 12 https://grafana.wikimedia.org/goto/6u7xzaSvg?orgId=1
[12:45:02] https://phabricator.wikimedia.org/config/cluster/repositories/ lists only some Diffusion pulling errors, and still not very helpful
[12:45:17] maybe /var/log/phd/daemons.log on phab1004 comes closest? but it's always been quite noisy
[12:46:15] arnaudb: I cannot correlate that increase with a database activity increase. Also, from that graph only throughput seems to increase but not requests?
[12:47:26] perhaps the database latency is a side effect?
I don't see any significant increase activity-wise on the db graphs
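On the "better way to scan for that weird activity" point above: one low-tech option is to summarise what the spike window actually wrote, straight from the binlogs. A minimal sketch, assuming the stock mysqlbinlog tool on the database host; the binlog path, file name and time window below are placeholders, not the real ones.

    # Count which tables the writes in the spike window touched, grouped by statement type.
    sudo mysqlbinlog --base64-output=decode-rows --verbose \
         --start-datetime="2026-01-20 12:25:00" \
         --stop-datetime="2026-01-20 12:40:00" \
         /srv/sqldata/db1250-bin.002345 \
      | grep -oE '(INSERT INTO|DELETE FROM|UPDATE) `[^`]+`\.`[^`]+`' \
      | sort | uniq -c | sort -rn | head -20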
[12:48:00] On the database I do see an increase on writes, so that must be coming from somewhere
[12:48:16] I just wanted to check whether it is a legit increase
[12:56:14] fwiw: https://grafana.wikimedia.org/goto/Y8B7iaSDg?orgId=1 all write activity is added up here, there seems to be an increase but zooming out to 90 days shows that October was more intense
[12:56:16] I don't have a good explanation for that increase
[12:57:25] Maybe then it was just a spike, let's see if it happens again
[12:57:27] Thank you all
[12:57:35] there were a few phabricator updates in between so that might be the explanation for the November rate variation
[12:58:25] marostegui: I'm not sure this is a blip, yesterday we had a similar yet slightly different blip: https://wikimedia.slack.com/archives/C05H0JYT85V/p1768906413997449
[12:58:40] (which also resolved itself)
[12:59:07] arnaudb: mmm, then that's a thing indeed
[12:59:29] I didn't find anything obvious yesterday, thinking it might be nothing, but I'm doubting now
[12:59:34] I don't know if we get some of those "normally" or we are simply seeing some more reports?
[13:00:10] we had an alert about http probes being laggy at the same time so that looks fairly new to me
[13:00:19] we usually don't have those frequently
[13:01:42] I can try to narrow the writes from that spike I posted and check binlogs, but I probably won't be able to tell if it is normal traffic or not unless it is very obvious
[13:03:12] lmk if you see something obvious, I'll try and see if I find something in the logs
[13:03:18] roger, thanks
[13:07:36] I've opened T415189 about the DHCP/PXE failures
[13:07:37] T415189: DHCP failing for at least 2 ms-be servers in codfw - https://phabricator.wikimedia.org/T415189
[13:09:12] arnaudb andre there are lots of deletes on `phabricator_cache`.`cache_markupcache`, `phabricator_cache`.`cache_general` and `repository_statusmessage` WHERE repositoryID = 3533 AND statusType = 'needs-update' and similar inserts like: INSERT INTO `cache_general`
[13:09:17] But I have no idea if this is normal or not
[13:12:25] That doesn't sound suspicious to me but I'd still have no idea why that's suddenly a higher rate than usual :-/
[13:13:55] there is _something_ that makes some mysql queries slower or fail: `rg mysql -i *error.log |rg -i AH01071 -c` returns `229`, timestamps start at 0:05 and run to now
[13:15:43] maybe not all but a fair number: `rg "#2006: MySQL server has gone away" -c *error.log` → `187` (out of 229)
[13:16:16] I think that's probably because the query dies
[13:16:37] because of the query exec time on mariadb's side?
[13:16:57] Probably times out, because we don't have any query killers there
[13:17:13] And I guess the thread tries to get reused and that's why the connection gets mysql server has gone away
[13:17:58] some might be very broad queries by crawlers; Phab itself is supposed to time out with "Maximum execution time of 30 seconds exceeded"
[13:18:22] that could explain the httpd volume increase
[13:18:41] andre: are those supposed to be logged to the dashboard you pasted earlier in logstash?
[13:18:47] yes
[13:19:06] I don't see any in the last 3 hours if that's the case
[13:19:25] you may want to re-enable some more filters on that Logstash dashboard, there are a few "DB" ones
[13:29:54] I don't see anything obvious in the error logs, httpd access rate looks steady over time (https://grafana.wikimedia.org/goto/lhSZGaIvg?orgId=1=)
[17:35:30] topranks and I are going to deploy turning on IPv6 on the authdns servers. this is strictly on our end and does not mean that the actual glue records will be updated
[17:35:48] but it is a big change so we will be careful. oncallers, please note, if we break it, we will (try to?) fix it
[17:36:06] tappof: Amir1: we will handle the pages, if any, related to this
[17:36:23] bblack: ^ not turning it on today as discussed
[17:37:28] \o/
[17:44:42] ack sukhe and thanks
[17:45:13] topranks: running on dns7001, 195.200.68.4
[17:46:06] ouch
[17:46:38] bird doesn't seem to be running on dns7001
[17:46:53] can't even connect to dns7001 :)
[17:47:11] yeah it's down
[17:47:19] I depooled it thankfully before starting
[17:47:23] looking at https://puppetboard.wikimedia.org/report/dns7001.wikimedia.org/9450ae7b1ac7e2ff90d5f0ac74d13333185398b7
[17:47:32] I was able to ssh in fine
[17:47:39] try now?
[17:48:13] can't even do ssh -4, responds to ping
[17:49:08] I wonder if it's the ferm rules that got updated for the v6 addresses
[17:49:13] that are somehow causing this connection issue
[17:50:26] I can ssh fine
[17:50:33] that's interesting
[17:50:54] I guess from bast3007, so yes could be some filtering thing?
[17:51:06] that bird startup issue is a race condition between anycast-healthchecker and bird starting, I believe
[17:51:18] taavi: yeah that should be it so not too worried about that
[17:51:19] https://www.irccloud.com/pastebin/u2MBwdEX/
[17:51:41] taavi: yeah I couldn't quite work out what was wrong tbh
[17:52:46] ah ok I think I know what it is
[17:52:49] topranks: yeah, fails from bast1003
[17:52:54] some policy we are missing?
[17:53:12] no.... so we "include" this file in the bird conf: /etc/bird/anycast6-prefixes.conf
[17:53:31] I think the anycast-healthchecker role is what makes that file though
[17:53:46] and I suspect if we update bird.conf _first_, and immediately restart, before we create that file, it causes this error
[17:53:59] I couldn't understand the problem as when I first connected that file was there, and looked ok
[17:54:05] ah that's ok
[17:54:12] I am trying to see for example why I can't connect from 1003
[17:54:16] but it works from 3007
[17:54:18] bastion
[17:54:18] but a restart of bird works fine, so I'm guessing when it first tried to reload after bird.conf changed that file had not yet been created
[17:54:26] we should look at the puppetization and fix that
[17:54:53] '/etc/bird/anycast-prefixes.conf':
[17:54:54] replace => false; # The content is managed by anycast-healthchecker
[17:54:56] '/etc/bird/anycast6-prefixes.conf':
[17:54:59] replace => false; # The content is managed by anycast-healthchecker
[17:55:28] this suggests to me that anycast-hc creates the file from what I remember (I think jbon.d added this)
[17:55:42] topranks: but I guess first let's see why we can't SSH from bast1003?
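On the bird.conf / anycast6-prefixes.conf race described above: the usual Puppet-level fix is to make sure the included file exists before bird is started or restarted, so a freshly templated bird.conf can never reference a file anycast-healthchecker has not written yet. A minimal sketch only, with assumed resource titles; the real bird puppetization may arrange this differently.

    # Ensure the include target exists before bird (re)starts.
    file { '/etc/bird/anycast6-prefixes.conf':
      ensure  => file,
      replace => false,            # content is managed by anycast-healthchecker
      content => "# placeholder, populated by anycast-healthchecker\n",
      owner   => 'root',
      group   => 'root',
      mode    => '0644',
      before  => Service['bird'],  # assumed service resource title
    }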
[17:55:59] telnet -4 dns7001 22 works
[17:56:02] -6 fails
[17:56:03] sukhe: agreed
[17:56:05] yeah
[17:56:18] packets are making it, so I expect it's an iptables thing
[17:57:00] it's not just TCP though, even ping is failing
[17:58:14] explicit rule for bast3007 is there, no similar one for bast1003:
[17:58:17] https://www.irccloud.com/pastebin/yfUgZEev/
[17:58:59] well that's something
[18:00:29] no it's me being dumb again
[18:00:33] the rule is there
[18:00:50] https://www.irccloud.com/pastebin/kUJusQN7/
[18:03:23] PCC and even the Puppet run indicate nothing has changed on that end
[18:03:40] https://puppetboard.wikimedia.org/report/dns7001.wikimedia.org/9450ae7b1ac7e2ff90d5f0ac74d13333185398b7
[18:06:26] oooo
[18:06:29] sukhe: dns7001 is depooled is that right?
[18:06:31] yep
[18:06:33] topranks: see this
[18:06:37] + up ip addr add 2620:0:861:53::1/32 dev lo
[18:06:41] Augeas[lo_2620:0:861:53::1/32](provider=augeas)
[18:07:02] hahahahahaa
[18:07:04] lol
[18:07:07] /32
[18:07:09] ffs
[18:07:15] interface::ip { $alabel:
[18:07:15] address => $adata['address'],
[18:07:15] interface => 'lo',
[18:07:15] }
[18:07:15] ...
[18:07:18] define interface::ip($interface, $address, $prefixlen='32', $options=undef, $ensure='present') {
[18:07:55] good spot
[18:07:56] basically actually since this references authdns_addrs
[18:08:00] https://www.irccloud.com/pastebin/s4I7ZjOS/
[18:08:04] and then carries over everywhere else
[18:08:16] it goes with the default /32
[18:08:21] lol
[18:08:24] @ the paste
[18:08:27] sigh ok
[18:08:34] let's revert I guess and we need to do some more work here
[18:08:36] including on the bird side
[18:08:48] our puppetization clearly is done with the v4 assumption and that's fair
[18:09:03] what's the fault here, that augeas thing you guys use to add the loopback?
[18:09:10] yeah
[18:09:21] # Skip loopbacks if bird sets up the loopbacks in a given site.
[18:09:22] $authdns_addrs.each |$alabel,$adata| {
[18:09:22] unless $adata['skip_loopback'] or $adata['skip_loopback_site'] == $::site {
[18:09:24] interface::ip { $alabel:
[18:09:27] address => $adata['address'],
[18:09:29] interface => 'lo',
[18:09:32] }
[18:09:34] }
[18:09:37] }
[18:09:39] on doh it gets set up right
[18:09:43] root@doh1001:~# ip -br -6 addr show scope global dev lo
[18:09:44] lo UNKNOWN 2001:67c:930::1/128
[18:10:27] so we should have skip_loopbacks = true right?
[18:11:16] skip_loopback: true # bird::anycast takes care of this one
[18:11:46] it's more confusing obviously
[18:11:57] but on the others we don't have that, we have this:
[18:11:58] skip_loopback_site: eqiad
[18:12:02] for ns01, you will see a per-site skip
[18:12:09] yeah
[18:12:48] or $adata['skip_loopback_site'] == $::site
[18:12:58] ^^ so yeah this should prevent it, but something is wrong?
[18:13:10] that's how it is being prevented for the v4 case if you see
[18:13:13] ns2-v4:
[18:13:13] address: '198.35.27.27'
[18:13:13] skip_loopback: true # bird::anycast takes care of this one
[18:13:22] basically the idea was that since bird sets it up, we don't have to
[18:13:27] or rather, repeat the setup
[18:14:17] yeah that makes sense
[18:14:28] but we have the same now for both ??
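The root cause spotted above is interface::ip's default prefixlen of '32': fine for the v4 loopback VIPs, but applied unchanged to a v6 address it turns a host address into a 2620::/32 on lo. One possible shape of a fix, sketched with the parameter names quoted above and assuming puppetlabs-stdlib's address types are available; taavi's actual patch may well do this differently.

    # Derive the prefix length from the address family instead of relying on the '32' default.
    $authdns_addrs.each |$alabel, $adata| {
      unless $adata['skip_loopback'] or $adata['skip_loopback_site'] == $::site {
        $prefixlen = $adata['address'] ? {
          Stdlib::IP::Address::V6 => '128',
          default                 => '32',
        }
        interface::ip { $alabel:
          address   => $adata['address'],
          interface => 'lo',
          prefixlen => $prefixlen,
        }
      }
    }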
[18:14:48] https://www.irccloud.com/pastebin/hyOW9qMt/
[18:15:11] yep
[18:15:33] not sure where the distinction is, we are iterating over the addresses anyway
[18:16:07] but even then it goes to augeas to set up the loopback and there it gets it wrong
[18:16:39] ok let's revert I guess and then take it from there, we will need to figure this out and then clean up dns7001
[18:16:45] and we will also fix the bird thing
[18:17:07] ok yeah
[18:17:29] taavi's current patch is also the right fix
[18:17:35] for interface::ip at least
[18:20:19] still at a loss to see why augeas is called but ok anyway, reverting for now
[18:24:35] on the bird side we have
[18:24:45] if $do_ipv6 and $vip_params['address_ipv6'] {
[18:24:48] both conditions met
[18:24:52] the PCC on that looks as expected now at least, but I also wonder whether some of the `ip` commands in there need a `-6` flag in them
[18:24:52] +profile::bird::do_ipv6: true
[18:24:58] + address_ipv6: '2a02:ec80:53::1' # ns2 v6 IP, anycsat
[18:25:09] interface::ip { "lo-vip-${vip_fqdn}-ipv6":
[18:25:12] prefixlen => '128',
[18:26:29] one more interesting thing from /e/n/i
[18:26:29] up ip addr add 2620:0:861:53::1/32 dev lo
[18:26:30] up ip addr add 2620:0:860:53::1/32 dev lo
[18:26:30] up ip addr add 2a02:ec80:53::1/128 label lo:anycast dev lo
[18:27:06] 2a02:ec80:53::1 is the anycast one and that looks fine with the correct prefix and label
[18:27:35] taavi: none of those commands are jumping out at me as needing a '-6', were any in particular jumping out to you?
[18:27:49] however, it also added the ns0-1 /128s with the incorrect label and prefixes
[18:29:20] seems to have added the ns2 correctly, but ns0 and ns1 with /32 netmask
[18:29:25] root@dns7001:~# ip -br -6 addr show dev lo scope global
[18:29:25] lo UNKNOWN 2a02:ec80:53::1/128 2620:0:860:53::1/32 2620:0:861:53::1/32
[18:29:33] yep
[18:29:56] digging
[18:30:18] there is definitely a race condition too
[18:30:46] you can't add the same IP twice, so if one or other part of the config tries to add it, the one that does it first (and its netmask) is what will be on the interface
[18:31:11] topranks: but in theory, the other one should bail out, irrespective of when it runs?
[18:31:25] what do you mean?
[18:31:45] the ip command would fail yes, I tested manually given it'd do no harm:
[18:31:48] we are adding it in two places, in modules/profile/manifests/dns/auth/config.pp and then in the bird one
[18:31:49] root@dns7001:~# ip addr add 2620:0:860:53::1/128 dev lo
[18:31:49] RTNETLINK answers: File exists
[18:32:01] but the DNS one should not add it unless it meets the condition
[18:32:04] unless $adata['skip_loopback'] or $adata['skip_loopback_site'] == $::site {
[18:32:20] in this case, we do set skip_loopback so it doesn't matter when it runs, it simply won't go to the interface::ip bit
[18:32:28] yeah why that conditional doesn't seem to work for the ipv6 I have no idea
[18:32:36] yeah it clearly fails there
[18:32:40] well it clearly is going to the interface::ip bit
[18:32:47] but that's the thing that should prevent the race condition bit
[18:32:48] yep
[18:33:10] in terms of the race I just meant it might explain the diff between ns2 and the others
[18:33:25] yep definitely
[18:33:28] one interesting bit is ns2 has the "skip_loopback: true"
[18:33:50] the ones that it isn't skipping the interface::ip bit for have "skip_loopback_site: "
[18:34:09] yeah
[18:34:41] /e/n/i on dns7001 and 7002
[18:34:45] are fine for the v4s
[18:34:48] I don't quite get why we have those two different ways to do it tbh
[18:35:07] topranks: do you mean why we are setting this up in two different places?
[18:36:11] no... more why we have both "skip_loopback" and "skip_loopback_site"
[18:36:29] I just don't know why we have those two vars set up in different ways, I'm sure there is a good reason
[18:36:46] in this case on the face of it the skip_loopback_site evaluation is not working as it needs to
[18:36:59] even more odd because the v4 is working as intended :P
[18:37:11] as to the why, I don't recall but I know I worked on it so I am running git blame
[18:37:22] yeah but the v4 won't have a conflict on the netmask
[18:37:25] assuming /32 works
[18:37:33] so perhaps it's broken for both?
[18:38:26] yeah we can double check
[18:39:22] anyway, I am cleaning up dns7001 to get it to a working state
[18:39:26] then we can repool
[18:39:31] yeah the logic is flawed
[18:39:35] think about it
[18:39:39] unless $adata['skip_loopback'] or $adata['skip_loopback_site'] == $::site
[18:39:43] this is magru
[18:39:55] so won't $::site be 'magru' ?
[18:41:06] skip_loopback_site is only eqiad or codfw
[18:41:10] even in the hiera
[18:41:13] for the unicast
[18:41:38] right but we add the eqiad and codfw IPs to the loopback everywhere
[18:42:45] for v4 and v6
[18:42:57] https://www.irccloud.com/pastebin/KqtOn50K/
[18:43:18] what is the idea behind the "skip_loopback_site" var?
[18:43:40] in the sense that I take it there is something bad we are trying to avoid that would happen if we just had "skip_loopback: true" on them all?
[18:46:58] I am trying to recall but I think I have to go through the logs to get the commit on why we did this separation since I can't recall offhand
[18:47:39] this distinction was made when we separated the ns01 unicasts from the ns2 anycast
[18:47:59] since ns2 anycast will be everywhere including the ns01 unicast hosts but not the other way around
[18:48:38] anyway let's clean up dns7001 first and then come back to this
[18:51:04] 2620::/32 dev lo proto kernel metric 256 pref medium
[18:51:04] 2a02:ec80:53::1 dev lo proto kernel metric 256 pref medium
[19:03:48] ok I manually cleaned up the routing table and /e/n/i
[19:03:50] will check and pool
[19:04:51] basically we set up the IPs at all sites
[19:05:37] but we only skip running ip::interface for ns0/ns1 on the eqiad/codfw dns boxes
[19:06:10] at the POPs we are running that for ns0/ns1
[19:07:36] you can see the ns0/ns1 address addition in e/n/i has been done by ip::interface
[19:07:45] https://www.irccloud.com/pastebin/ixP9MPb8/
[19:08:16] whereas the ns2 IP (where we have skip_loopback: true so it's skipped everywhere) is done by the bird role:
[19:08:27] https://www.irccloud.com/pastebin/3sFrgOG2/
[19:09:30] To fix I guess we can either change the logic so we don't run ip::interface for the NS IPs anywhere
[19:09:49] the historic reason for this: once upon a time, none of these were being handled by bird
[19:10:05] and then we set up bird on the DNS boxes to do authdns announcements via bird
[19:10:11] and then the skip things came into the picture
[19:10:41] 14:09:30 < topranks> To fix I guess we can either change the logic so we don't run ip::interface for the NS IPs anywhere
[19:10:50] given that all authdns_addrs are now on bird, this can work yep
[19:10:56] that logic is flawed I think though. we only skip for ns0/ns1 in eqiad and codfw, yet we configure the IPs everywhere
[19:11:38] I'm worried there may be some other reason we made it like that that we're not seeing
[19:11:53] topranks: I have to go through the git logs and blame stuff to convince ourselves but yeah
[19:12:00] yeah
[19:12:26] but yeah on the face of it to me it seems like skip_loopback_site is redundant
[19:12:42] we can scrap that var and put skip_loopback on all the addresses
[19:14:13] T348041
[19:14:18] T348041: Remove static routes for ns[01] and replace their announcements with bird - https://phabricator.wikimedia.org/T348041
[19:14:18] https://phabricator.wikimedia.org/T348041
[19:14:23] > We will need to set skip_loopback to ns0-v4 and ns1-v4 as bird will create the loopback IPs. (Or since already created, we can skip those? I will confirm when we work on it.)
[19:15:37] so yeah, I think it's safe to say that at this stage given all authdns_addrs are on bird
[19:15:56] and bird will take care of this, we can simply let it do its job
[19:15:58] I will confirm via PCC
[19:23:07] yeah seems like the change to config.pp in this one made a bad assumption I think
[19:23:08] https://gerrit.wikimedia.org/r/c/operations/puppet/+/964918
[19:24:22] topranks: yeah that's certainly me and I am trying to recall why and I can't.
I just pinged bblack to ask another thing, which is if you look at dns7002 for example, for DoTLS, we are doing this:
[19:24:27] sukhe@dns7002:~$ sudo cat /etc/haproxy/haproxy.cfg
[19:24:31] listen dns_ns0-v4 bind 208.80.154.238:853 ssl tfo allow-0rtt curves X25519:X448:P-256 crt /etc/acmecerts/dotls-for-authdns/live/ec-prime256v1.chained.crt.key server gdnsd 127.0.0.1:535 send-proxy-v2
[19:24:35] listen dns_ns1-v4 bind 208.80.153.231:853 ssl tfo allow-0rtt curves X25519:X448:P-256 crt /etc/acmecerts/dotls-for-authdns/live/ec-prime256v1.chained.crt.key server gdnsd 127.0.0.1:535 send-proxy-v2
[19:24:51] this also doesn't add up for me -- why are we listening for ns0-1 IPs in magru
[19:25:00] we listen to them everywhere
[19:25:15] why is that?
[19:25:40] need to check the rest of the gdnsd automation to work it out tbh
[19:25:59] the automation bit I understand, this is again authdns_addrs
[19:26:14] but what I don't recall is if we are carrying that bit forward, or if there is some other actual reason why we are doing this that I am missing
[19:28:23] dns7001 is back online
[19:30:15] since at least this patch we've been adding all the authdns_addrs to the loopback of every dns box:
[19:30:16] https://gerrit.wikimedia.org/r/c/operations/puppet/+/556447
[19:30:56] I'm not sure how we configure gdnsd to bind to them, and if we ever had a scenario where we added the IPs but only listened on the "correct" ones for that site
[19:31:05] I suspect we just added all IPs and listened on all always
[19:31:15] and then we just route the specific ones we want in each site
[19:31:56] yeah but that doesn't add up to me unless I am missing something with this setup. are the ns0-1s even routable to magru for example? like is there any scenario in which someone asking for ns0-1 will reach magru!?
[19:32:58] nope
[19:35:11] if you remove "skip_loopback_site" and set all the IPs to "skip_loopback" then I don't think the ns0/ns1 IPs will be added at the POPs
[19:35:28] the question then is if gdnsd is configured to specifically bind to those IPs, and will crash/not start if they aren't configured
[19:35:42] or if it'll work fine and just listen on the configured (correct for that site) IPs
[19:35:45] but the DoTLS ones will be added still because they don't depend on the skip*
[19:36:14] <% @authdns_addrs.each do |label, data| -%>
[19:36:18] bind <%= data['address'] %>:853
[19:36:27] so that makes me wonder if there is a correlation between this
[19:36:28] and
[19:36:32] 14:35:11 < topranks> if you remove "skip_loopback_site" and set all the IPs
[19:36:36] yeah
[19:37:09] probably the skip_loopback_site was added to ensure the IPs are added to the loopback
[19:37:19] and that bind :853 would work
[19:37:20] most likely, that adds up
[19:37:46] but the real question is whether there is a reason we did so, or simply that adding it everywhere was easier and it does not matter since those IPs are not routable anyway
[19:38:02] I pinged Brandon to clarify this and then we can clean that up as well
[19:38:18] anyway I am glad that something adds up at least
[19:38:19] gdnsd is also not listening on 0.0.0.0:53, so we need to be careful it won't also fail, though I can't see in the config what controls what IPs it listens on
[19:38:20] :P
[19:38:34] just easier, I would bet the house on it
[19:38:54] but yes - much cleaner if we only configure the IPs in use at each site
[19:38:58] only route those IPs on site
[19:39:00] yep
[19:39:11] /etc/gdnsd/config-options is the one which dictates where gdnsd listens
[19:39:19] and that again is dictated by our friend authdns_addrs :)
[19:39:20] it's at best confusing to set up ns0/ns1 everywhere when traffic for them never hits those boxes
[19:39:49] sukhe: ah ok cool
[19:40:16] topranks: amazing how this all even works out tbh
[19:40:21] * sukhe gives up
[19:40:24] so what you did by adding skip_loopback_site makes sense, you had to do it that way to ensure the ns0/ns1 IPs were still added in the non-bird way in the places bird shouldn't announce them
[19:40:39] because of the config in /etc/gdnsd/config-options
[19:41:04] yeah. and perhaps Brandon told me there is a good reason but I don't remember that at all. or perhaps we didn't discuss it and I based it on the PCC output to carry those IPs to PoPs
[19:41:14] but I don't think it matters if those IPs are not routable anyway
[19:41:21] so let's clean it up perhaps before our next round
[19:41:40] yeah
[19:41:54] you could just fix it by making sure the netmask is set correctly for the v6 ones
[19:42:11] but overall it may be a good opportunity to clean this up and stop configuring IPs and listening on them when they aren't ever going to be used
[19:42:33] yep
[19:42:34] it's confusing, leading to things like this, and potentially if something else went wrong and they were announced by Bird (some other error) we could mess up our DNS
[19:42:54] so better the boxes only have the IPs they need
[19:43:03] yep +1.
[20:04:47] Is there a special slack/irc/signal/whatever channel for summit communication? Or will we just use this one?
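A closing note on the authdns cleanup discussed above: once skip_loopback is set on all the addresses and the POPs stop configuring the ns0/ns1 IPs, a quick way to confirm a box only carries and listens on the VIPs it should serve is sketched below. The hostname is only an example and the ss filter syntax assumes stock iproute2; adjust for whichever site is being checked.

    # e.g. on dns7001 (magru): lo should only carry the anycast VIPs,
    # and the :53 / :853 listeners should match them.
    ip -br -6 addr show dev lo scope global
    ip -br -4 addr show dev lo scope global
    ss -tlnp '( sport = :853 )'    # haproxy DoTLS listeners
    ss -ulnp '( sport = :53 )'     # gdnsd listeners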