In-reply-to » @doesnm What do you think of when you say "decentralized"?

@prologic@twtxt.net currently? It wouldn’t :D.

We would need to come up with a way of registering with multiple brokers that can, I guess, forward to a reader broker; something that will retry if needed. I need to read into how SimpleX handles multiple brokers.

⤋ Read More
In-reply-to » (#eqgicaq) @3r1c I think I’m gonna like that blog. 😅 https://unixdigest.com/articles/is-the-madness-ever-going-to-end.html

I am reminded of this when I look at entire forks of VSCode just to add an LLM code-completion assistant.

⤋ Read More
In-reply-to » @prologic I’m sure you can somehow install something that calculates blake2b on OpenBSD. But it’s not part of the base system as a standalone CLI tool, there only appear to be Perl modules for it. The other SHA tools do exist.

@movq@www.uninformativ.de I’m sorry if I sound too contrarian. I’m not a fan of using an obscure hash either. The problem is one of forward and backward compatibility. If we change to sha256 or another algorithm, we don’t just need to support sha256; we now need to support both sha256 AND blake2b. Or we divide the community: users of some clients will still use the old algorithm and get left behind.

Really, we should all think hard about how changes will break things and whether those breakages are acceptable.

⤋ Read More
In-reply-to » @prologic I wanted to wait for things to settle down. It’s still unclear to me in which direction we’re going – and if that new/different stuff is even possible to implement in jenny. That said, I’ve been really busy with private stuff these last few days, I’ve lost track of most of what you’re discussing. 🥴

I swear I did write up an algorithm for it at some point; I think it is lost in a git comment someplace. I’ll put together some pseudo/Go code this week.

Super simple:

Making a reply:

  1. If the yarn already has a subject, use that. (Maybe do a collision check?)
  2. Make a hash of the raw twt with no truncation.
  3. Check the local cache for the shortest prefix without a collision
    • in SQL: select len(subject) where head_full_hash like subject || '%'

Threading:

  1. Get the full hash of the head twt.
  2. Search for twts that reply to it
    • in SQL: head_full_hash like subject || '%' and created_on > head_timestamp

The assumption being that replies will be to the most recent head. If replying to an older one, it will use a longer hash.
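Here’s a rough Go sketch of rules 2 and 3 above, purely illustrative: an in-memory slice stands in for the local cache that the SQL above would query, the sha256/base32 construction and the 7-character minimum are assumptions, not the spec.

```go
package main

import (
	"crypto/sha256"
	"encoding/base32"
	"fmt"
	"strings"
)

// fullHash hashes the raw twt with no truncation (rule 2).
// sha256/base32 here is only for illustration, not the spec's construction.
func fullHash(rawTwt string) string {
	sum := sha256.Sum256([]byte(rawTwt))
	return strings.ToLower(base32.StdEncoding.WithPadding(base32.NoPadding).EncodeToString(sum[:]))
}

// shortestSubject returns the shortest prefix of headHash that matches
// exactly one hash in the local cache (rule 3).
func shortestSubject(headHash string, cache []string, minLen int) string {
	for n := minLen; n < len(headHash); n++ {
		prefix := headHash[:n]
		matches := 0
		for _, h := range cache {
			if strings.HasPrefix(h, prefix) {
				matches++
			}
		}
		if matches <= 1 {
			return prefix
		}
	}
	return headHash // worst case: use the full hash
}

func main() {
	head := fullHash("2024-09-17T12:00:00Z\thello world")
	cache := []string{head, fullHash("2024-09-16T08:00:00Z\tsomething else")}
	subject := shortestSubject(head, cache, 7) // 7 is an assumed starting length
	fmt.Printf("(#%s) my reply\n", subject)
}
```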

⤋ Read More
In-reply-to » @prologic I’m sure you can somehow install something that calculates blake2b on OpenBSD. But it’s not part of the base system as a standalone CLI tool, there only appear to be Perl modules for it. The other SHA tools do exist.

I mean, sure, if I want to run it on my toothbrush, why not use something that is accessible everywhere, like md5? crc32? It was chosen a long while back, and the only benefit in changing now is “I can’t find an implementation for X”, while the downside is that it breaks all existing threads. So…

⤋ Read More
In-reply-to » Had to build a list of all feeds (that I follow) and all twts in them and there are two collisions already:

These collisions aren’t important unless someone tries to fork. So, for the vast majority, it’s not a big deal. Using the growing-hash algorithm, the client could be told to add another character when it forks.

⤋ Read More
In-reply-to » I'd like to see them fine me 2% of zero dollars

Article 83(4) GDPR sets forth fines of up to 10 million euros, or, in the case of an undertaking, up to 2% of its entire global turnover of the preceding fiscal year, whichever is higher.

Though I suppose it has to be the greater of the two. But I don’t even have one euro to start with.

⤋ Read More
In-reply-to » So I'm a location based system, how exactly do I reply to one of these two Twts from @Yarns ? 🤔

I demand full 9-digit nanosecond timestamps and the full TZ identifier as documented in the tz 2024b database! I need to know if there was a change in daylight saving time for the locality in question as of the provided date.

⤋ Read More
In-reply-to » @movq going a little sideways on this, "*If twtxt/Yarn was to grow bigger, then this would become a concern again. But even Mastodon allows editing, so how much of a problem can it really be? 😅*", wouldn't it preparing for a potential (even if very, very, veeeeery remote) growth be a good thing? Mastodon signs all messages, keeps a history of edits, and it doesn't break threads. It isn't a problem there.😉 It is here.

I feel like we should isolate a subset of Markdown that makes sense and build it into lextwt. It already has support for links and images. Maybe basic formatting: bold, italic. Possibly block quotes and bullet lists. No tables or footnotes.

⤋ Read More
In-reply-to » Current Twt Hash spec and probability of hash collision:

The stem matching is the same as how Git abbreviates its commit hashes. I think you can stem it down to 2 or 3 SHA bytes.

If a client sees someone in a yarn using a longer hash, it can lengthen its own to match, since it can assume the other client may have hit a collision that it doesn’t know about.

⤋ Read More
In-reply-to » Current Twt Hash spec and probability of hash collision:

@prologic@twtxt.net The basic idea was to stem the hash… so you have a hash abcdef0123456789... and any prefix of at least the first 6 characters will match, so abcdef, abcdef012, and abcdef0123456 all match the same twt. In the case of a collision, I think we decided on matching the newest, since we archive off older threads anyway. The third rule was about growing the minimum hash size after some threshold of collisions was detected.
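A tiny illustrative sketch of those rules (the types and names here are made up, not from any spec): stems of 6+ characters prefix-match the full hash, and a collision resolves to the newest twt.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

type Twt struct {
	Hash    string
	Created time.Time
}

// resolve returns the twt a subject stem refers to; if several full hashes
// share the stem, the most recently created one wins.
func resolve(subject string, cache []Twt) (Twt, bool) {
	var best Twt
	found := false
	if len(subject) < 6 {
		return best, false // stems shorter than 6 characters never match
	}
	for _, t := range cache {
		if strings.HasPrefix(t.Hash, subject) {
			if !found || t.Created.After(best.Created) {
				best, found = t, true
			}
		}
	}
	return best, found
}

func main() {
	cache := []Twt{
		{Hash: "abcdef0123456789", Created: time.Now().Add(-time.Hour)},
		{Hash: "abcdef9876543210", Created: time.Now()},
	}
	if t, ok := resolve("abcdef", cache); ok {
		fmt.Println("collision resolved to newest:", t.Hash)
	}
	if t, ok := resolve("abcdef012", cache); ok {
		fmt.Println("longer stem picks:", t.Hash)
	}
}
```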

⤋ Read More
In-reply-to » An alternate idea for supporting (properly) Twt Edits is to denoate as such and extend the meaning of a Twt Subject (which would need to be called something better?); For example, let's say I produced the following Twt:

There is nothing wrong with how we currently run a diff to see what has been removed. If I build a Merkle tree of all the twt hashes in a feed, I can use that to verify whether a twt should be in the feed or not, and gossip that to my peers.
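Not part of any spec, but roughly what that could look like: fold a feed’s twt hashes into a Merkle root that peers can gossip and compare. Everything below (sha256 for node hashing, the sample hashes) is just for illustration.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// merkleRoot folds a list of twt hashes (the leaves) up into a single root.
func merkleRoot(leaves [][]byte) []byte {
	if len(leaves) == 0 {
		return nil
	}
	level := leaves
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) { // odd node carries up to the next level
				next = append(next, level[i])
				continue
			}
			pair := append(append([]byte{}, level[i]...), level[i+1]...)
			h := sha256.Sum256(pair)
			next = append(next, h[:])
		}
		level = next
	}
	return level[0]
}

func main() {
	twtHashes := []string{"abcdef012345", "bcdefa123456", "cdefab234567"}
	var leaves [][]byte
	for _, h := range twtHashes {
		sum := sha256.Sum256([]byte(h))
		leaves = append(leaves, sum[:])
	}
	fmt.Printf("feed merkle root: %x\n", merkleRoot(leaves))
	// A peer recomputes the root from the feed it fetched and gossips just the
	// root; if the roots differ, some twt was added, edited, or removed.
}
```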

⤋ Read More
In-reply-to » An alternate idea for supporting (properly) Twt Edits is to denoate as such and extend the meaning of a Twt Subject (which would need to be called something better?); For example, let's say I produced the following Twt:

So… basically a rehash of email “unsend” requests? What if I were to make a (delete: 5vbi2ea)… would it delete someone else’s twt?

⤋ Read More
In-reply-to » Current Twt Hash spec and probability of hash collision:

Isn’t the benefit of blake2b that it is a more efficient algorithm than SHA-1 and has the same or similar entropy to SHA-3? I thought we had partially solved this with some type of expanding hash size? Additionally, we could increase bit density by using base36 or base64/URL-safe…
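For what it’s worth, a quick illustration of the density point: the same blake2b-256 digest rendered in base32 vs URL-safe base64 (via golang.org/x/crypto/blake2b; the string being hashed is made up and not the spec’s exact construction).

```go
package main

import (
	"encoding/base32"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/blake2b"
)

func main() {
	twt := "https://example.com/twtxt.txt\n2024-09-17T12:00:00Z\nhello world"
	sum := blake2b.Sum256([]byte(twt))

	b32 := base32.StdEncoding.WithPadding(base32.NoPadding).EncodeToString(sum[:])
	b64 := base64.RawURLEncoding.EncodeToString(sum[:])

	// base64 packs 6 bits per character vs base32's 5, so the same digest
	// needs fewer characters, and a short prefix carries more entropy.
	fmt.Println("base32:   ", len(b32), b32)
	fmt.Println("base64url:", len(b64), b64)
}
```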

⤋ Read More
In-reply-to » @bender Yes, they do 🤣 Implicitly, or threading would never work at all 😅 Nor lookups 🤣 They are used as keys. Think of them like a primary key in a database or index. I totally get where you're coming from, but there are trade-offs with using Message/Thread Ids as opposed to Content Addressing (like we do) and I believe we would just encounter other problems by doing so.

@prologic@twtxt.net A signature IS encryption in reverse. If my private key becomes compromised, then they can impersonate me. Being able to manage the promotion and revocation of keys is needed even in a system where they’re used just for signatures.

⤋ Read More
In-reply-to » @bender Yes, they do 🤣 Implicitly, or threading would never work at all 😅 Nor lookups 🤣 They are used as keys. Think of them like a primary key in a database or index. I totally get where you're coming from, but there are trade-offs with using Message/Thread Ids as opposed to Content Addressing (like we do) and I believe we would just encounter other problems by doing so.

Key rotation is a very important feature in a system like this.

⤋ Read More
In-reply-to » @bender Yes, they do 🤣 Implicitly, or threading would never work at all 😅 Nor lookups 🤣 They are used as keys. Think of them like a primary key in a database or index. I totally get where you're coming from, but there are trade-offs with using Message/Thread Ids as opposed to Content Addressing (like we do) and I believe we would just encounter other problems by doing so.

The right way to solve this is to use public/private key(s), where you actually have a public-key fingerprint as your feed’s unique identity that never changes.

I would rather it be a random value signed by a key. That way the key can change but the value stays the same.
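Something like this, sketched with ed25519 standing in for whatever we’d actually pick (all names here are invented): the stable random value is the identity, and rotating keys just means the new key re-signs the same value.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

func main() {
	// The stable identity: 16 random bytes generated once, never changed.
	identity := make([]byte, 16)
	if _, err := rand.Read(identity); err != nil {
		panic(err)
	}

	// Key #1 signs the identity.
	pub1, priv1, _ := ed25519.GenerateKey(rand.Reader)
	sig1 := ed25519.Sign(priv1, identity)

	// Later, rotate: key #2 signs the very same identity value.
	pub2, priv2, _ := ed25519.GenerateKey(rand.Reader)
	sig2 := ed25519.Sign(priv2, identity)

	fmt.Println("identity:  ", hex.EncodeToString(identity))
	fmt.Println("key1 valid:", ed25519.Verify(pub1, identity, sig1))
	fmt.Println("key2 valid:", ed25519.Verify(pub2, identity, sig2))
	// The identity readers follow never changes; only the key attesting to it does.
}
```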

⤋ Read More

Interesting… QUIC isn’t very quick over fast Internet.

QUIC is expected to be a game-changer in improving web application performance. In this paper, we conduct a systematic examination of QUIC’s performance over high-speed networks. We find that over fast Internet, the UDP+QUIC+HTTP/3 stack suffers a data rate reduction of up to 45.2% compared to the TCP+TLS+HTTP/2 counterpart. Moreover, the performance gap between QUIC and HTTP/2 grows as the underlying bandwidth increases. We observe this issue on lightweight data transfer clients and major web browsers (Chrome, Edge, Firefox, Opera), on different hosts (desktop, mobile), and over diverse networks (wired broadband, cellular). It affects not only file transfers, but also various applications such as video streaming (up to 9.8% video bitrate reduction) and web browsing. Through rigorous packet trace analysis and kernel- and user-space profiling, we identify the root cause to be high receiver-side processing overhead, in particular, excessive data packets and QUIC’s user-space ACKs. We make concrete recommendations for mitigating the observed performance issues.

https://dl.acm.org/doi/10.1145/3589334.3645323

⤋ Read More
In-reply-to » On the Subject of Feed Identities; I propose the following:

So, this is a great thread. I have been thinking about this too… and what if we are coming at it from the wrong direction? Identity being tied to a given URL has always been a pain point. If I get a new URL, it’s almost as if I have a new identity, because not only am I serving from a new location, but all my previous communications are broken because the hashes are all wrong.

What if instead we used this idea of signatures to thread the URLs together into one identity? We keep the URL-to-hash scheme in place; changing that now is basically a no-go. But we can create a signature chain that links identities together. So if I move to a new URL, I update the chain hosted by my primary identity to include the new URL. If I have an archived feed whose old URL is now dead, we can point to where it is now hosted and keep the current convention of hashing based on the first URL.

The signature chain can also be used to rotate to new keys over time. Just sign a new key in or revoke an old one. The prior signatures remain valid within the window of time in which they were made and the keys were active.

The signature file can be hosted anywhere, as long as it can be fetched by a reasonable protocol. So, say, we could use WebFinger to point to the signature file? You’d have an identity like frank@beans.co that discovers a feed at some URL and a signature chain at another URL. Maybe even include the most recent signing key?

From there the client can auto-discover old feeds and link them together into one complete timeline. And the signatures can validate that it’s all correct.

I like the idea of maybe putting the chain in the feed preamble and keeping the single self-contained file… but I wonder if that would cause lots of clutter? The signature chain would be something like a log of what is changing (new key, revoke key, add URL) plus a signature over the change and the previous signature:

# chain: ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w 
# sig: BEGIN SALTPACK SIGNED MESSAGE. ... 
# chain: ADDURL https://txt.sour.is/user/xuu
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: REVKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: ...
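A rough sketch of how a client might build and check such a chain, using ed25519 in place of saltpack purely for illustration (the entry names and types here are invented, and a real chain would also handle key rotation across entries):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

type ChainEntry struct {
	Change  string // e.g. "ADDURL https://txt.sour.is/user/xuu"
	PrevSig []byte // signature of the previous entry (nil for the first)
	Sig     []byte
}

// appendEntry signs the change plus the previous entry's signature, so the
// history can't be rewritten without detection.
func appendEntry(chain []ChainEntry, change string, priv ed25519.PrivateKey) []ChainEntry {
	var prev []byte
	if len(chain) > 0 {
		prev = chain[len(chain)-1].Sig
	}
	msg := append([]byte(change), prev...)
	return append(chain, ChainEntry{Change: change, PrevSig: prev, Sig: ed25519.Sign(priv, msg)})
}

// verify walks the chain, checking both the link to the previous signature
// and each entry's own signature.
func verify(chain []ChainEntry, pub ed25519.PublicKey) bool {
	var prev []byte
	for _, e := range chain {
		if string(e.PrevSig) != string(prev) {
			return false // link broken
		}
		if !ed25519.Verify(pub, append([]byte(e.Change), e.PrevSig...), e.Sig) {
			return false
		}
		prev = e.Sig
	}
	return true
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)
	var chain []ChainEntry
	chain = appendEntry(chain, "ADDKEY kex14zwrx68...", priv)
	chain = appendEntry(chain, "ADDURL https://txt.sour.is/user/xuu", priv)
	fmt.Println("chain valid:", verify(chain, pub))
}
```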

⤋ Read More
In-reply-to » Google's James Manyika: 'The Productivity Gains From AI Are Not Guaranteed' Google executive James Manyika has warned that AI's impact on productivity is not guaranteed [Editor's note: the link may be paywalled], despite predictions of trillion-dollar economic potential. From the report: "Right now, everyone from my old colleagues at McKinsey Global Institute to Goldman Sachs are putting out these extra ... ⌘ Read more

Anything with McKinsey on it just means finding reasons to fire staff.

⤋ Read More
In-reply-to » Telegram founder Pavel Durov arrested at French airport Article URL: https://www.theguardian.com/media/article/2024/aug/24/telegram-app-founder-pavel-durov-arrested-at-french-airport

@bender@twtxt.net And I saw some conspiracy theory that he knew he was going to be arrested. He was working with French intelligence on a plea deal to defect. And now Russia is freaking out that Ukraine’s allies could get access to its war comms.

Yikes! If only they had salty.im!

⤋ Read More