@codebuzz@www.codebuzz.nl I have some shell scripts that handle some of the log formatting details, but I mostly write my messages by hand. Lately I’ve been browsing twtxt.net since they aggregate most of the known network. I have a couple of demo aggregators sitting around, but I’m in the middle of some infra rebuilds, so a lot of my services are offline rn. They’re both built on a simple social graph analysis that extracts URLs from your direct follows and from the follows listed on each of those feeds (friend-of-a-friend replication). Certain formatting operations are awkward with my setup, so I may write an app of some kind in the future. Likely gemini-based, but I have a number of projects ahead of that one in the queue.
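The friend-of-a-friend replication mentioned above could be sketched roughly like this, assuming feeds advertise their follows via the twtxt metadata comment `# follow = nick url`; the `fetch` callback is a placeholder for whatever HTTP client or cache the crawler actually uses:

```python
import re

# Matches twtxt metadata comments of the form:
#   # follow = nick https://host/twtxt.txt
FOLLOW_RE = re.compile(r"^#\s*follow\s*=\s*(\S+)\s+(https?://\S+)", re.IGNORECASE)

def follows_in_feed(feed_text: str) -> dict:
    """Extract {nick: url} pairs from a feed's follow metadata comments."""
    follows = {}
    for line in feed_text.splitlines():
        m = FOLLOW_RE.match(line.strip())
        if m:
            follows[m.group(1)] = m.group(2)
    return follows

def foaf_urls(my_feed: str, fetch) -> set:
    """One-hop friend-of-a-friend expansion: your direct follows plus
    the follows listed on each of those feeds. fetch(url) -> str is
    supplied by the caller."""
    direct = set(follows_in_feed(my_feed).values())
    expanded = set(direct)
    for url in direct:
        try:
            expanded |= set(follows_in_feed(fetch(url)).values())
        except Exception:
            continue  # unreachable feed: skip it, don't fail the whole pass
    return expanded
```

This is only a sketch of the idea, not the actual scripts; a real aggregator would also deduplicate by canonical feed URL and respect fetch limits.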
@cuaxolotl@sunshinegardens.org what problem does building a social graph solve?
@cuaxolotl@sunshinegardens.org This is largely by accident rather than on purpose:
Lately I’ve been browsing twtxt.net since they aggregate most of the known network
@cuaxolotl@sunshinegardens.org The reason I ask is that I maintain the Twtxt search engine and crawler service that basically does exactly this, so I’m curious what you’re trying to solve by doing this yourself? Not that that’s a bad idea. I just want to understand what you are trying to achieve. 🤗
@cuaxolotl@sunshinegardens.org Good enough 😅 LMK if I can help in any way then. What I built isn’t perfect, but the crawler is able to crawl the entire space in ~15m or so (every day).
@Codebuzz@www.codebuzz.nl It currently takes my yarnd pod here ~2m on average to fetch, process, and cache ~700 feeds.