prologic

twtxt.net

"Problems are Solved by Method" 🇦🇺👨‍💻👨‍🦯🏹♔ 🏓⚯ 👨‍👩‍👧‍👧🛥 -- James Mills (operator of twtxt.net / creator of Yarn.social 🧶)

Recent twts from prologic
In-reply-to » Can someone much smarter than me help me figure out a couple of newly discovered deadlocks in yarnd that I think have always been there, but only recently uncovered by the Go 1.23 compiler.

Never mind; I think this might be down to some internal changes in Go 1.23 and a dependency I needed to update 🤞

In-reply-to » no my fault your client can't handle a little editing ;)

Location Addressing is fine in smaller or single systems. But when you’re talking about large decentralised systems with no single point of control (kind of the point), things like independently verifiable integrity become quite important.

In-reply-to » no my fault your client can't handle a little editing ;)

What is being proposed as a counter to content-addressing is called location-addressing. Two very different approaches, both with pros/cons of course. But a location cannot be verified; the content cannot be guaranteed to be authentic in any way, you just have to implicitly trust that the location points to the right thing.
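
To make the distinction concrete, here’s a toy sketch (not the actual Twt Hash algorithm): with content addressing, the identifier is derived from the content itself, so anyone can verify what they fetched without trusting the source.

import hashlib

def verify(address, content):
    # The address *is* a (truncated) digest of the bytes, so verification
    # needs no trusted third party -- unlike a location (URL), which you
    # simply have to trust to serve the right thing.
    return hashlib.sha256(content).hexdigest()[:11] == address

content = b"Hello World"
address = hashlib.sha256(content).hexdigest()[:11]
print(verify(address, content))           # True
print(verify(address, b"Tampered text"))  # False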

In-reply-to » no my fault your client can't handle a little editing ;)

For example, without content-addressing, you’d never have been able to find, let alone pull up, that ~3yr old Twt of mine (my very first). Hell, I even thought I’d lost my first feed file, or that it became corrupted or something 🤣 – If that were the case, it would actually be possible to reconstruct the feed and verify every single Twt against the caches of all of you 🤣


Speaking of AI tech (sorry!): just came across this really cool tool built by some engineers at Google™ (currently completely free to use without any signup) called NotebookLM 👌 Looks really good for summarizing and talking to documents 📃

In-reply-to » Getting a little sick of AI this, AI that. Yes I'll be left behind while everyone else jumps on the latest thing, but I'm not sure I care.

@eldersnake@we.loveprivacy.club Yeah I’m looking forward to that myself 🤣 It’ll be great to see the technology grow to a level of maturity and efficiency where you can run the tools on your own PC or device and use them for what, so far, I’ve found them to be somewhat decent at: auto-complete, search and Q&A.

In-reply-to » no my fault your client can't handle a little editing ;)

@sorenpeter@darch.dk I really don’t think we can ignore the last ~3 years and a bit of this threading model working quite well for us as a community, across a very diverse set of clients and platforms. We cannot just drop something that “mostly works just fine” for the sake of “simplicity”; we have to weigh up all the options. There are very real benefits to using content addressing here that IMO shouldn’t be disregarded so lightly, as it provides a lot of implicit value that users of various clients just don’t get to see. I’d recommend reading up on the ideas behind content addressing before dismissing the Twt Hash spec entirely; it wasn’t even written or formalised by me, but I understand how it works quite well 😅 The guy that wrote the spec was (is?) way smarter than I was back then, probably still is now 🤣

In-reply-to » @movq going a little sideways on this, "*If twtxt/Yarn was to grow bigger, then this would become a concern again. But even Mastodon allows editing, so how much of a problem can it really be? 😅*", wouldn't it preparing for a potential (even if very, very, veeeeery remote) growth be a good thing? Mastodon signs all messages, keeps a history of edits, and it doesn't break threads. It isn't a problem there.😉 It is here.

@xuu I don’t think this is a lextwt problem tbh, just the Markdown parser that yarnd currently uses. twtxt2html uses Goldmark and appears to behave better 🤣

In-reply-to » An alternate idea for supporting (properly) Twt Edits is to denoate as such and extend the meaning of a Twt Subject (which would need to be called something better?); For example, let's say I produced the following Twt:

@xuu Long while back, I experimented with using similarity algorithms to detect if two Twts were similar enough to be considered an “Edit”.

In-reply-to » Current Twt Hash spec and probability of hash collision:

Right, I see what you mean @xuu – Can you maybe come up with a fully fleshed out proposal for this? 🤔 This would help solve the problem of hash collisions that result from the Twt/hash space growing larger over time, without us having to change anything about the way we construct hashes in the first place. We’d just assume spec-compliant clients will dynamically handle this as the space grows.

In-reply-to » @prologic the basic idea was to stem the hash.. so you have a hash abcdef0123456789... any sub string of that hash after the first 6 will match. so abcdef, abcdef012, abcdef0123456 all match the same. on the case of a collision i think we decided on matching the newest since we archive off older threads anyway. the third rule was about growing the minimum hash size after some threshold of collisions were detected.

@xuu I think we never progressed this idea further because we weren’t sure how to tell whether a hash collision would occur in the first place, right? In other words, how does Client A know to expand a hash vs. Client B in a 100% decentralised way? 🤔
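
For reference, a minimal sketch of the stemming idea from the quoted Twt, assuming each client keeps an index of full hashes (the names here are my own, not spec):

def match_stem(index, stem, min_len=6):
    # index: full Twt hash -> timestamp. Any prefix of >= min_len
    # characters matches; on collision the newest Twt wins, since
    # older threads get archived off anyway.
    if len(stem) < min_len:
        return None
    hits = [(ts, h) for h, ts in index.items() if h.startswith(stem)]
    return max(hits)[1] if hits else None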

In-reply-to » Getting a little sick of AI this, AI that. Yes I'll be left behind while everyone else jumps on the latest thing, but I'm not sure I care.

Plus these so-called “LLMs” have a pretty good grasp of the “shape” of language, so they appear to be quite intelligent or produce intelligible responses (when they’re actually quite stupid really).

In-reply-to » Getting a little sick of AI this, AI that. Yes I'll be left behind while everyone else jumps on the latest thing, but I'm not sure I care.

@eldersnake@we.loveprivacy.club You don’t get left behind at all 🤣 It’s hyped up so much, it’s not even funny anymore. Basically at this point (so far at least) I’ve concluded that all this GenAI / LLM stuff is just a fancy auto-complete and indexing + search reinvented 🤣

In-reply-to » @movq going a little sideways on this, "*If twtxt/Yarn was to grow bigger, then this would become a concern again. But even Mastodon allows editing, so how much of a problem can it really be? 😅*", wouldn't it preparing for a potential (even if very, very, veeeeery remote) growth be a good thing? Mastodon signs all messages, keeps a history of edits, and it doesn't break threads. It isn't a problem there.😉 It is here.

@bender@twtxt.net This is the different Markdown parsers being used. Goldmark vs. gomarkdown. We need to switch to Goldmark 😅

In-reply-to » @movq going a little sideways on this, "*If twtxt/Yarn was to grow bigger, then this would become a concern again. But even Mastodon allows editing, so how much of a problem can it really be? 😅*", wouldn't it preparing for a potential (even if very, very, veeeeery remote) growth be a good thing? Mastodon signs all messages, keeps a history of edits, and it doesn't break threads. It isn't a problem there.😉 It is here.

@quark@ferengi.one I’m guessing the quoted text should’ve been emphasized?

In-reply-to » LinkedIn Is Training AI on User Data Before Updating Its Terms of Service An anonymous reader shares a report: LinkedIn is using its users' data for improving the social network's generative AI products, but has not yet updated its terms of service to reflect this data processing, according to posts from various LinkedIn users and a statement from the company to 404 Media. Instead, the company says it ... ⌘ Read more

@slashdot@feeds.twtxt.net NahahahahHa 🤣 So glad I don’t use LinkedIn 🤦‍♂️

In-reply-to » An alternate idea for supporting (properly) Twt Edits is to denoate as such and extend the meaning of a Twt Subject (which would need to be called something better?); For example, let's say I produced the following Twt:

@xuu you mean my original idea of basically just automatically detecting Twt edits from the client side?

In-reply-to » So.. basically a rehash of the email "unsend" requests? What if i was to make a (delete: 5vbi2ea) .. would it delete someone elses twt?

@xuu This is where you would need to prove that the edit or delete request actually came from that feed’s author. Hence why integrity is much more important here.

In-reply-to » @prologic I wouldn't want my client to honour delete requests. I like my computer's memory to be better than mine, not worse, so it would bug me if I remember seeing something and my computer can't find it.

@falsifian@www.falsifian.org Without supporting deletes properly though, you’re running into GDPR issues and the right to be forgotten 🤣 We’ve had pretty lengthy discussions about this years ago as well, but we never came to a conclusion we’re all happy with.

In-reply-to » An alternate idea for supporting (properly) Twt Edits is to denoate as such and extend the meaning of a Twt Subject (which would need to be called something better?); For example, let's say I produced the following Twt:

Finally, @lyse@lyse.isobeef.org’s idea of updating metadata changes in a feed “inline”, where the change happened (with respect to other Twts, in whatever order the file is written in), could be used to drive things like: “Oh, this feed now has a new URI, let’s use that from now on as the feed’s identity for the purposes of computing Twt hashes.” This could extend to # nick = as a preferential indicator to clients, as well as other updates such as # description = – not just # url =.
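
For example, a hypothetical feed excerpt (URIs invented for illustration):

# url = https://example.com/twtxt.txt
2024-09-18T23:08:00+10:00	This Twt hashes against example.com
# url = https://example.org/twtxt.txt
2024-09-19T10:00:00+10:00	From here on, Twts hash against example.org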

In-reply-to » An alternate idea for supporting (properly) Twt Edits is to denoate as such and extend the meaning of a Twt Subject (which would need to be called something better?); For example, let's say I produced the following Twt:

Likewise, we could also support delete:229d24612a2, which would indicate to clients that fetch the feed to delete any cached Twt matching the hash 229d24612a2, if the author wishes to “unpublish” that Twt permanently, rather than just deleting the line from the feed (which does nothing for clients, really).
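
For example, a Twt that unpublishes 229d24612a2 might look like this (hypothetical syntax, mirroring the edit example below):

2024-09-18T23:12:00+10:00	(delete:#229d24612a2)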


An alternate idea for supporting (properly) Twt Edits is to denote them as such and extend the meaning of a Twt Subject (which would need to be called something better?). For example, let’s say I produced the following Twt:

2024-09-18T23:08:00+10:00	Hllo World

And my feed’s URI is https://example.com/twtxt.txt. The hash for this Twt is therefore 229d24612a2:

$ echo -n "https://example.com/twtxt.txt\n2024-09-18T23:08:00+10:00\nHllo World" | sha1sum | head -c 11
229d24612a2

You wish to correct your mistake, so you make an amendment to that Twt like so:

2024-09-18T23:10:43+10:00	(edit:#229d24612a2) Hello World

Which would then have a new Twt hash value of 026d77e03fa:

$ echo -n "https://example.com/twtxt.txt\n2024-09-18T23:10:43+10:00\nHello World" | sha1sum | head -c 11
026d77e03fa

Clients would then take this edit:#229d24612a2 to mean: this Twt is an edit of 229d24612a2, and should replace it in the client’s cache, or be indicated to the user as the intended content.
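
A minimal sketch of how a client might honour both edit:# and delete:# subjects; the cache shape and regex are my own assumptions, not part of the proposal:

import re

SUBJECT = re.compile(r"^\((edit|delete):#([0-9a-f]+)\)\s*(.*)$")

cache = {}  # hypothetical client-side cache: Twt hash -> content

def apply_twt(twt_hash, content):
    m = SUBJECT.match(content)
    if not m:
        cache[twt_hash] = content  # an ordinary Twt
        return
    action, target, body = m.groups()
    cache.pop(target, None)        # drop the superseded/unpublished Twt
    if action == "edit":
        cache[twt_hash] = body     # amended content lives under its new hash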


With a SHA-1 based encoding, the probability of a hash collision becomes, at various k (number of Twts):

>>> import math
>>>
>>> def collision_probability(k, bits):
...     n = 2 ** bits  # Total unique hash values based on the number of bits
...     probability = 1 - math.exp(- (k ** 2) / (2 * n))
...     return probability * 100  # Return as percentage
...
>>> # Example usage:
>>> k_values = [100000, 1000000, 10000000]
>>> bits = 44  # Number of bits for the hash
>>>
>>> for k in k_values:
...     print(f"Probability of collision for {k} hashes with {bits} bits: {collision_probability(k, bits):.4f}%")
...
Probability of collision for 100000 hashes with 44 bits: 0.0284%
Probability of collision for 1000000 hashes with 44 bits: 2.8022%
Probability of collision for 10000000 hashes with 44 bits: 94.1701%
>>> bits = 48
>>> for k in k_values:
...     print(f"Probability of collision for {k} hashes with {bits} bits: {collision_probability(k, bits):.4f}%")
...
Probability of collision for 100000 hashes with 48 bits: 0.0018%
Probability of collision for 1000000 hashes with 48 bits: 0.1775%
Probability of collision for 10000000 hashes with 48 bits: 16.2753%
>>> bits = 52
>>> for k in k_values:
...     print(f"Probability of collision for {k} hashes with {bits} bits: {collision_probability(k, bits):.4f}%")
...
Probability of collision for 100000 hashes with 52 bits: 0.0001%
Probability of collision for 1000000 hashes with 52 bits: 0.0111%
Probability of collision for 10000000 hashes with 52 bits: 1.1041%
>>>

If we adopted this scheme, we would have to increase the number of characters (first N) from 11 to 12 and eventually 13 as the global number of Twts grows large enough. I think the last full crawl/scrape found around ~500k (maybe)? https://search.twtxt.net/ says only ~99k.
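
A sketch of how a spec-compliant client might pick N dynamically under this scheme (the 1% collision budget is an assumption for illustration; 4 bits per hex character, as in the numbers above):

import math

def required_hash_chars(num_twts, max_collision_prob=0.01, bits_per_char=4):
    # Smallest N (starting at the current 11 hex chars = 44 bits) keeping
    # the birthday-bound collision probability under the chosen budget.
    n = 11
    while True:
        space = 2 ** (bits_per_char * n)
        if 1 - math.exp(-(num_twts ** 2) / (2 * space)) <= max_collision_prob:
            return n
        n += 1

print(required_hash_chars(100_000))    # 11
print(required_hash_chars(1_000_000))  # 12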

In-reply-to » Taking the last n characters of a base32 encoded hash instead of the first n can be problematic for several reasons:

I think it was a mistake to take the last n Base32-encoded characters of the blake2b 256-bit hash value. It should have been the first n, where n >= 7.


Taking the last n characters of a base32 encoded hash instead of the first n can be problematic for several reasons:

  1. Hash Structure: A well-designed hash distributes entropy uniformly across the raw digest, but that guarantee applies to the digest’s bits, not necessarily to every character of its encoded form; the encoding can leave the final characters less varied than the rest.

  2. Collision Resistance: When truncating hashes, the goal is to minimize the risk of collisions (different inputs producing the same output). Taking the first few characters uses the full distribution of the underlying bits; the last characters may not distribute the same way, potentially increasing the likelihood of collisions.

  3. Encoding Characteristics: Base32 encodes 5 bits per character, and 256 is not a multiple of 5, so the final character of an encoded 256-bit digest carries only 1 data bit plus 4 zero padding bits. Taking the last 7 characters therefore yields only 31 bits of entropy, versus the full 35 bits for the first 7 (see the quick check after this list).

  4. Use Cases: In many applications (like generating unique identifiers), the prefix of a hash is the conventional truncation, so tooling and expectations alike treat the beginning as the informative part; relying on the end can reduce the uniqueness of generated identifiers.

In summary, using the first n characters preserves the full entropy and collision resistance of the truncated hash, making it the safer choice in most cases.
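
Point 3 is easy to check empirically. Since the final Base32 character of a 256-bit digest carries just 1 data bit, only ‘A’ (0b00000) or ‘Q’ (0b10000) can ever appear there:

import base64
import collections
import hashlib

counts = collections.Counter()
for i in range(10_000):
    digest = hashlib.blake2b(str(i).encode(), digest_size=32).digest()
    encoded = base64.b32encode(digest).decode().rstrip("=")
    counts[encoded[-1]] += 1

print(sorted(counts))  # ['A', 'Q'] -- 2 possible values vs. 32 anywhere else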


Current Twt Hash spec and probability of hash collision:

The probability of a Twt Hash collision depends on the size of the hash and the number of possible values it can take. For the Twt Hash, which uses a Blake2b 256-bit hash, Base32 encoding, and takes the last 7 characters, the space of possible hash values is significantly reduced.

Breakdown:
  1. Base32 encoding: Each character in the Base32 encoding represents 5 bits of information (since 2^5 = 32).
  2. 7 characters: With 7 characters, the total number of possible hashes is:

    32^7 = 2^35 = 34,359,738,368

 This gives about 34.4 billion possible hash values.

Probability of Collision:

The probability of a hash collision depends on the number of hashes generated and can be estimated using the Birthday Paradox. The paradox tells us that collisions are more likely than expected when hashing a large number of items.

The approximate formula for the probability of at least one collision after generating n hashes is:

    P(collision) ≈ 1 − e^(−n² / (2M))

Where:

  • n is the number of generated Twt Hashes.
  • M = 32^7 = 34,359,738,368 is the total number of possible hash values.

For practical purposes, here are some example probabilities for different numbers of hashes (n):

  • For 1,000 hashes: P(collision) ≈ 0.0000146 (0.0015%)
  • For 10,000 hashes: P(collision) ≈ 0.00145 (0.15%)
  • For 100,000 hashes: P(collision) ≈ 0.135 (13.5%)
  • For 1,000,000 hashes: P(collision) ≈ 1.0 (near-certain)

Conclusion:
  • For small to moderate numbers of hashes (up to around 10,000), the collision probability is quite low.
  • However, as the number of Twts grows towards and beyond 100,000, the likelihood of a collision increases significantly (about 13.5% at 100,000 and a near-certainty by 1,000,000) due to the relatively small hash space of about 34.4 billion values.
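
For reference, these figures can be reproduced with a few lines of Python using the same formula:

import math

M = 32 ** 7  # 2^35 = 34,359,738,368 possible 7-character Base32 hashes

for n in (1_000, 10_000, 100_000, 1_000_000):
    p = 1 - math.exp(-(n ** 2) / (2 * M))
    print(f"{n:>9,} Twts -> collision probability {p:.4%}")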


Just experimenting…

$ echo -n "https://twtxt.net/user/prologic/twtxt.txt\n2020-07-18T12:39:52Z\nHello World! 😊" | sha256sum | base64 | tr -d '=' | tail -c 12
NWY4MSAgLQo

In-reply-to » Could someone knowledgable reply with the steps a grandpa will take to calculate the hash of a twtxt from the CLI, using out-of-the-box tools? I swear I read about it somewhere, but can't find it.

It would appear that the blake2b 256-bit digest algorithm is no longer supported by the openssl tool, however blake2s256 is; I’m not sure why 🤔

$ echo -n "https://twtxt.net/user/prologic/twtxt.txt\n2020-07-18T12:39:52Z\nHello World! 😊" | openssl dgst -blake2s256 -binary | base32 | tr -d '=' | tr 'A-Z' 'a-z' | tail -c 7
zq4fgq

Obviously this produces the wrong hash, which should be o6dsrga as indicated by the yarnc hash utility:

$ yarnc hash -u https://twtxt.net/user/prologic/twtxt.txt -t 2020-07-18T12:39:52Z "Hello World! 😊"
o6dsrga

But at least the shell pipeline is “correct”.
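
For anyone wanting to reproduce this without openssl, here’s a sketch in Python, assuming the spec as described above (blake2b 256-bit digest, Base32, ‘=’ padding stripped, lowercased, last 7 characters):

import base64
import hashlib

def twt_hash(url, timestamp, content):
    payload = f"{url}\n{timestamp}\n{content}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    b32 = base64.b32encode(digest).decode("ascii").rstrip("=").lower()
    return b32[-7:]

print(twt_hash("https://twtxt.net/user/prologic/twtxt.txt",
               "2020-07-18T12:39:52Z",
               "Hello World! 😊"))  # o6dsrga, matching yarnc above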
