[07:23:11] duesen: I see. Technically speaking redioscope can totally produce to kafka. We'd need to know which cluster it will target (guessing jumbo), double check that jumbo allows connections from aux pods and use the 'external-services-networkpolicy' helm chart module to create/maintain network policies for accessing kafka
[07:24:25] jayme: ok, thank you! Tbh I was hoping I could just http PUT somewhere. I guess I have to look into writing a Kafka producer now :D
[07:35:51] duesen: there are gateways to kafka (like eventgate), maybe that can be used (sorry, I'm not very knowledgeable here). I would try to clarify these questions with data engineering/platform/whatever the name currently is before starting an implementation
[08:39:43] duesen: you could in theory output from redioscope to fluentbit - dumping a file all at once might be a bit harder though. See fb95e175dcf111b69add501ea77d54d4add612e6 in deployment-charts for how the api-gateway used to emit logs to eventgate using it
[08:40:51] all that envoy did was output to a file, and fluentbit tails the same file
[10:03:29] jayme: yes, of course, I'm talking to them. at this point I'm just collecting options and ideas.
[10:04:01] hnowlan: I hadn't heard about fluentbit before, I'll look into it
[10:10:38] duesen: I think the fluentbit idea is useful when you dump data to "disk" anyways. But since you control the code that produces the data, sending it directly might be preferable.
[10:12:32] it probably depends on the data you're actually sending since (IIRC) events are supposed to be small. Sending the data to produce the CSV in a log-style way (line by line) might circumvent that
[10:41:42] duesen: do you have an idea of the on-disk size of the dumps?
[11:34:20] hnowlan: not big. about 1MB.
[11:34:50] it's a csv with about 8k rows
[11:35:13] and that's tweakable. it's a top-n dump. we can always change n.
[11:37:22] that sounds fine to me
[11:37:37] fluentbit can buffer stuff anyway
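
(And the fluentbit side of that setup, as described at 08:40:51 for the old api-gateway/envoy arrangement, would be a tail input following the same file. A minimal illustrative config fragment; the path, tag, and eventgate endpoint details are assumptions, not taken from the referenced deployment-charts commit.)

```ini
# Sketch of a fluent-bit config that tails the dump file and forwards
# each line over HTTP (e.g. to a gateway such as eventgate).
# Path, Tag, Host, Port and URI below are placeholders.
[INPUT]
    Name    tail
    Path    /var/log/redioscope/topn.log
    Tag     redioscope.topn

[OUTPUT]
    Name    http
    Match   redioscope.topn
    Host    eventgate.example.internal
    Port    8192
    URI     /v1/events
```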
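
(For context, the "log-style way (line by line)" idea mentioned above could look roughly like the following sketch: instead of dumping the whole top-n CSV at once, each row is serialized as a single CSV line and appended to a log file that fluentbit tails. The function name, the sample data, and the file path are hypothetical, not actual redioscope code.)

```python
# Hypothetical sketch: emit a top-n dump one CSV line at a time, so each
# line can be treated as a small, independent event by a tailing agent
# like fluent-bit. Only the Python standard library is used.
import csv
import io

def rows_to_log_lines(rows):
    """Serialize each record as one CSV line (no trailing newline),
    suitable for appending to a file that fluent-bit tails."""
    lines = []
    for row in rows:
        buf = io.StringIO()
        # lineterminator="" so we control line endings when appending
        csv.writer(buf, lineterminator="").writerow(row)
        lines.append(buf.getvalue())
    return lines

# Example: a tiny two-row "top-n" dump emitted line by line.
top_n = [("page/A", 1234), ("page/B", 987)]
for line in rows_to_log_lines(top_n):
    print(line)  # in practice: append to the tailed log file instead
```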