And regarding those broken URLs: I once speculated that these bots operate on an old dataset, because I thought my redirect rules had actually been broken at some point and produced loops. But a) I cannot reproduce this today, and b) I cannot find anything related to it in my Git history, either. Then again, it’s hard to tell, because I have switched operating systems and web servers since then …
But the thing is: I’m seeing new URLs constructed in this pattern, so this can’t just be an old crawling dataset.
I am now wondering if those broken URLs are bot bugs as well.
They look like this (zalgo is a new project):
https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/
When you request that URL, you get redirected to /git/:
$ curl -sI https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/
HTTP/1.0 301 Moved Permanently
Date: Sat, 22 Nov 2025 06:13:51 GMT
Server: OpenBSD httpd
Connection: close
Content-Type: text/html
Content-Length: 510
Location: /git/
And on /git/, there are links to my repos. So if a broken client requests https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/, follows the redirect, sees a bunch of relative links there, and simply appends them to the URL it started from instead of resolving them against /git/, it ends up in an endless loop of ever-longer URLs.
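To illustrate what I suspect is happening (purely a guess, I have no idea what these crawlers actually do internally, and I’m just assuming relative links like slinp/ on /git/ for the sake of the sketch):

from urllib.parse import urljoin

# Purely hypothetical sketch of the bug I suspect in those clients.
start = "https://www.uninformativ.de/projects/"
redirect_target = "/git/"                              # every such request 301s here
links = ["slinp/", "zalgo/", "scksums/", "bevelbar/"]  # assumed relative links on /git/

# Correct behaviour: resolve each link against the page the redirect
# actually landed on. You always end up at the same short URLs.
for link in links:
    print(urljoin(urljoin(start, redirect_target), link))
# https://www.uninformativ.de/git/slinp/ and so on

# Broken behaviour: ignore where the redirect took you and tack each link
# onto the URL you started from. Every round trip adds another segment.
url = start
for link in links:
    url = urljoin(url, link)
    print(url)
# https://www.uninformativ.de/projects/slinp/
# https://www.uninformativ.de/projects/slinp/zalgo/
# https://www.uninformativ.de/projects/slinp/zalgo/scksums/
# https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/

That would also explain why those URLs are just chains of valid project names.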
Is that what’s going on here, or are my redirects actually still broken …?
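If I wanted to rule out a genuine loop on my side, something rough like this could walk the Location headers by hand and complain if the chain ever revisits a URL (just a sketch, nothing I rely on):

import http.client
from urllib.parse import urljoin, urlsplit

# Rough sketch: follow the redirect chain manually and bail out if a URL
# repeats, which would point to an actual server-side loop.
def redirect_chain(url, limit=10):
    chain = [url]
    for _ in range(limit):
        parts = urlsplit(url)
        conn = http.client.HTTPSConnection(parts.netloc)
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()
        status, location = resp.status, resp.getheader("Location")
        conn.close()
        if status not in (301, 302, 303, 307, 308) or location is None:
            break
        url = urljoin(url, location)
        if url in chain:
            print("server-side loop detected at", url)
            break
        chain.append(url)
    return chain

print(redirect_chain(
    "https://www.uninformativ.de/projects/slinp/zalgo/scksums/bevelbar/"))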