I have configured my twtxt.txt to be as simple as possible. I have set up a publish_command in jenny. Hopefully everything works fine, and I am good to go. Next will be setting announce_me to true. Here we go!

​ Read More

With a SHA1-based hash, the probability of a collision at various k (number of twts) becomes:

>>> import math
>>>
>>> def collision_probability(k, bits):
...     n = 2 ** bits  # Total unique hash values based on the number of bits
...     probability = 1 - math.exp(- (k ** 2) / (2 * n))
...     return probability * 100  # Return as percentage
...
>>> # Example usage:
>>> k_values = [100000, 1000000, 10000000]
>>> bits = 44  # Number of bits for the hash
>>>
>>> for k in k_values:
...     print(f"Probability of collision for {k} hashes with {bits} bits: {collision_probability(k, bits):.4f}%")
...
Probability of collision for 100000 hashes with 44 bits: 0.0284%
Probability of collision for 1000000 hashes with 44 bits: 2.8022%
Probability of collision for 10000000 hashes with 44 bits: 94.1701%
>>> bits = 48
>>> for k in k_values:
...     print(f"Probability of collision for {k} hashes with {bits} bits: {collision_probability(k, bits):.4f}%")
...
Probability of collision for 100000 hashes with 48 bits: 0.0018%
Probability of collision for 1000000 hashes with 48 bits: 0.1775%
Probability of collision for 10000000 hashes with 48 bits: 16.2753%
>>> bits = 52
>>> for k in k_values:
...     print(f"Probability of collision for {k} hashes with {bits} bits: {collision_probability(k, bits):.4f}%")
...
Probability of collision for 100000 hashes with 52 bits: 0.0001%
Probability of collision for 1000000 hashes with 52 bits: 0.0111%
Probability of collision for 10000000 hashes with 52 bits: 1.1041%
>>>

If we adopted this scheme, we would have to increase the number of characters (first N) from 11 to 12 and eventually 13 as the total number of Twts across the space grows large enough. I think the last full crawl/scrape found around ~500k (maybe)? https://search.twtxt.net/ says only ~99k
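To put rough numbers on that, here is a small sketch using the same birthday-bound formula as above, and assuming 4 bits per character (matching the 11 → 44, 12 → 48, 13 → 52 bit runs), that estimates how many twts stay under a given collision-probability budget:

import math

def max_twts(chars, max_prob, bits_per_char=4):
    # Invert p = 1 - exp(-k^2 / (2n)) to find the largest k (number of twts)
    # that keeps the collision probability below max_prob.
    n = 2 ** (chars * bits_per_char)
    return int(math.sqrt(-2 * n * math.log(1 - max_prob)))

for chars in (11, 12, 13):
    print(f"{chars} chars: ~{max_twts(chars, 0.01):,} twts at a 1% collision budget")

By that estimate, 11 characters stays under a 1% collision chance up to roughly 590k twts, so a ~500k-twt space is already close to the point where moving to 12 characters would be worthwhile.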

​ Read More

I’ve been using Codeium too for the last week or so! It’s pretty good and, like @xuu said, it’s a pretty decent junior assistant; it helps me write good docs, and the tab completion is amazing!

It of course completely sucks at doing anything “intelligent” or complex, but if you just use it as a fancier auto-complete it’s actually halfway decent 👌

​ Read More

One thing that’s been on my mind over the last few days, amid all the hot debates we’ve been having about Twt editing and identity, is this:

I don’t really have a problem with editing twts, or someone changing their feed’s URL.

Personally, I think the folks that do are rightfully pedantic and like a good user experience, and I don’t blame ‘em; I would expect the same too. Anyway, just wanted to get that out there: I believe we can support editing and identity in a way that is still simple, as long as we bring clients along for the ride with us. The old/legacy original client, though, will have to remain... well, ya know 😅

​ Read More

Can anyone recommend a decent Android ROM that strips out as much of the spyware as possible? Is GrapheneOS a good option? I need to get a new phone anyway so I don’t mind buying within a supported device list as long as I can get one on the used market for $300-$400 or less.

If anyone could recommend some learning resources for this stuff I’d really appreciate it.

​ Read More

The bug in jenny that @aelaraji@aelaraji.com found:

Jenny has to look for the metadata fields; in particular, it must find the # prev = ... line. To do so, I naively wrote something along these lines:

for line in content.splitlines():
    if line.startswith('# prev = '):
        ...

Problem is, we use \u2028 a lot in twtxt feeds and Python interprets those as line separators as well. That’s not what we want here. Jenny must only split at a \n.
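A minimal sketch of the kind of fix described, splitting only at \n so that \u2028 inside a twt’s text is left alone (the actual change is in the commit linked below):

for line in content.split('\n'):
    # split('\n') does not treat \u2028 as a line break, unlike splitlines()
    if line.startswith('# prev = '):
        ...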

Now @prologic@twtxt.net had a quote/copy of some of his metadata fields in a twt. Like so:

# prev = foo bar

Perfectly legitimate, but now jenny found the # prev = twice (once in the actual header, once in a twt), didn’t know what to do, and thus did not fetch the archived feeds. 🤦

Should be fixed in this commit: https://www.uninformativ.de/git/jenny/commit/6e8ce5afdabd5eac22eae4275407b3bd2a167daf.html

​ Read More

been rather uninterested in technology lately for some reason. it’s probably the US Election’s fault, since I live in the US and all

​ Read More

I’m looking for a space to publish a sort of blog. Just text. Something like what rawtext.club or midnight.pub used to do, but that accepts new sign-ups. Any suggestions? #smolweb

​ Read More

Introduction to JuiceFS | JuiceFS Document Center – Thinking about using JuiceFS to solve a long-standing problem I’ve always had:

  • Be able to run services on any node in my cluster and let Docker Swarm pick whatever node it likes (instead of the current situation, where I have to pin some workloads to specific nodes because that’s where their local storage volumes live).
  • Manage data growth and scalability over time, instead of what I do now, which is extending the EXT4 filesystems on my Docker Swarm nodes every few years.

​ Read More

Suddenly, VLC crashes when I jump forward in videos. It’s 100% reproducible, and a reboot didn’t fix it. Starting it from the shell, I see:

Assertion !p->parent->stash_hwaccel failed at src/libavcodec/pthread_frame.c:649

Turns out, it’s this: https://forum.mxlinux.org/viewtopic.php?t=81068

Before I even went online, I assumed that turning off hardware acceleration might help. And it does. Phew!

​ Read More

Alors j’ai vĂ©rifiĂ© : toujours pas de ministre de l’éducation nationale Ă  ce jour. La rentrĂ©e se fera donc sans. Cela illustre malheureusement tout l’intĂ©rĂȘt que porte E Macron Ă  l’instruction de nos enfants.

​ Read More

Saw Le Comte de Monte Cristo last night, with the brilliant Pierre Niney. I didn’t see the three hours go by; this film is a work of art. There’s a bit of every genre in it, the actors are excellent, and I bet we’ll be seeing some of these young actors again soon. Bravo! Don’t hesitate to go see it if you haven’t already.

​ Read More