@aelaraji@aelaraji.com Can you give me examples of hashes that you have detected as wrong between the Emacs client and twtxt.net?
Perhaps there is some character, some space, that is creating the discrepancy.
@prologic@twtxt.net Agreed! But clients can hallucinate and generate wrong hashes
aka Lies
🤣 Also, if you check your own twt on twtxt.net, it looks like a root twt instead of a reply.
[ ↳ Reply to twt ] button?
I don't think so; at least the tests I did passed. If you're pretty sure it's a bug, please create an issue in the repository with the specific case and I'll investigate it.
There are 2 buttons for making replies: one replies within the thread where the twt is located (this is the one that should be used the most, as it keeps a thread going), the other creates a reply to a specific twt.
I'll let you know a bit about the status: I'm just now implementing the thread screen. There you can be sure where you are. It's a bit confusing right now, sorry. I think the client is still in alpha. When I've finished what I'm doing, and the direct message system, I'll freeze development and focus on creating more tests, looking for bugs and making small visual adjustments.
@arne@uplegger.eu Hi! I love that you're implementing it! Maybe, when we're both done, we could test the clients by having them talk to each other.
I don't think I'm going to be able to help you much; my knowledge of OpenSSL and PHP is not as high as I'd like it to be.
Maybe the OpenSSL version uses SHA-1 by default in PHP. Or the IV is derived together with the key (not generated separately). But I'm not able to answer your questions, sorry.
I'm invoking the commands directly, without any libraries in between. Maybe that would help you?
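For comparing notes, here is a minimal sketch (in Go, not PHP) of OpenSSL's passphrase-based key+IV derivation, EVP_BytesToKey: each digest round hashes the previous block plus passphrase plus salt, concatenated until there are enough bytes for both key and IV. The MD5 digest shown is an assumption that depends on the OpenSSL version (newer ones default to SHA-256), which is exactly the kind of mismatch that produces different ciphertexts on each side.

```go
package main

import (
	"crypto/md5"
	"fmt"
	"hash"
)

// evpBytesToKey mimics OpenSSL's EVP_BytesToKey with one round:
// D_1 = H(pass||salt), D_i = H(D_{i-1}||pass||salt), concatenated
// until keyLen+ivLen bytes are available.
func evpBytesToKey(h func() hash.Hash, pass, salt []byte, keyLen, ivLen int) (key, iv []byte) {
	var d, out []byte
	for len(out) < keyLen+ivLen {
		hh := h()
		hh.Write(d)
		hh.Write(pass)
		hh.Write(salt)
		d = hh.Sum(nil)
		out = append(out, d...)
	}
	return out[:keyLen], out[keyLen : keyLen+ivLen]
}

func main() {
	// AES-256-CBC sizes: 32-byte key, 16-byte IV.
	key, iv := evpBytesToKey(md5.New, []byte("secret"), []byte("12345678"), 32, 16)
	fmt.Printf("key=%x\niv=%x\n", key, iv)
}
```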
Some satisfying icicle-breaking in our backyard: photos.falsifian.org/video/sM7G3vfS6yuc/VID_20250217_203250.mp4
I couldn't resist taking home a prize:
Itās been snowy here in #Toronto.
(I tried formatting the images in markdown for the benefit of yarn and any other clients that understand it.)
Well, Gemini clients like Lagrange allow showing inline images when you click on an image link. Text-based clients, like Amfora, usually allow viewing the image in another "window".
For example here: gemini://text.eapl.mx/en-making-a-tic-tac-toe-variant and there https://text.eapl.mx/en-making-a-tic-tac-toe-variant
I agree that some topics require images to make it easier to explain.
tt
rewrite in Go and quickly implemented a stack widget for tview. The builtin Pages is similar but way too complicated for my use case. I would have to specify a mandatory name and some additional options for each page. Also, it allows me to randomly jump around between pages using names, but only gives me direct access to the first page, not the last. Weird. I don't wanna remember names. All I really need is a classic stack: you open a new fullscreen dialog and maybe another one on top of that. Closing the uppermost brings you back to the previous one and so on.
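For what it's worth, a classic stack can be built as a thin wrapper over Pages that hides the names entirely. A minimal sketch, assuming the stock github.com/rivo/tview API; the actual tt widget may look different:

```go
package main

import (
	"strconv"

	"github.com/rivo/tview"
)

// Stack wraps tview.Pages and exposes only Push/Pop; page names are
// generated internally so callers never deal with them.
type Stack struct {
	*tview.Pages
	count int
}

func NewStack() *Stack { return &Stack{Pages: tview.NewPages()} }

// Push shows p fullscreen on top of the current primitive.
func (s *Stack) Push(p tview.Primitive) {
	s.count++
	s.AddPage(strconv.Itoa(s.count), p, true, true)
}

// Pop removes the topmost primitive, revealing the previous one.
func (s *Stack) Pop() {
	if s.count == 0 {
		return
	}
	s.RemovePage(strconv.Itoa(s.count))
	s.count--
}

func main() {
	s := NewStack()
	s.Push(tview.NewTextView().SetText("base screen"))
	s.Push(tview.NewModal().SetText("dialog on top"))
	if err := tview.NewApplication().SetRoot(s, true).Run(); err != nil {
		panic(err)
	}
}
```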
Thinking about trying tt. If it's really usable, I will abandon twtxtdon (a service to read twtxt feeds from a Mastodon client), which currently has only authorization implemented.
@prologic@twtxt.net Of course you don't notice it when yarnd only shows at most the last n messages of a feed. As an example, check out mckinley's message from 2023-01-09T22:42:37Z. It has "[Scheduled][Scheduled][Scheduled]"… in it. This text in square brackets is repeated numerous times. If you search his feed for a closing square bracket followed by an opening square bracket (][), you will find a bunch more of these. It goes without question he never typed that in his feed. My client saves each twt hash I've explicitly marked read. A few days ago, I got plenty of apparently years-old, yet suddenly unread messages. Each and every single one of them contains this repeated bracketed text thing. The only conclusion is that something messed up the feed again.
@eapl.me@eapl.me Yeah, you need some kind of storage for that. But chances are that there's already a cache in place. Ideally, the client remembers ETags or Last-Modified timestamps in order to reduce unnecessary network traffic when fetching feeds over HTTP(S).
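A minimal sketch of what that could look like for a client fetching feeds over HTTP, using a conditional GET with the remembered ETag / Last-Modified values (in-memory map here; a real client would persist it):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

type cacheEntry struct{ etag, lastModified string }

var cache = map[string]*cacheEntry{} // keyed by feed URL

// fetchFeed returns the feed body, or nil if the server says 304 Not Modified.
func fetchFeed(url string) ([]byte, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	if c, ok := cache[url]; ok {
		req.Header.Set("If-None-Match", c.etag)
		req.Header.Set("If-Modified-Since", c.lastModified)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusNotModified {
		return nil, nil // nothing new since last fetch
	}
	cache[url] = &cacheEntry{
		etag:         resp.Header.Get("ETag"),
		lastModified: resp.Header.Get("Last-Modified"),
	}
	return io.ReadAll(resp.Body)
}

func main() {
	body, err := fetchFeed("https://example.com/twtxt.txt")
	fmt.Println(len(body), err)
}
```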
A newsreader without read flags would be totally useless to me. But I also do not subscribe to firehose feeds, so maybe that's a different story with these. I don't know.
To me, filtering read messages out and only showing new messages is the obvious solution. No need for notifications in my opinion.
There are different approaches with read flags. Personally, I like to explicitly mark messages read or unread. This way, I can think about something and easily come back later to reply. Of course, marking messages read could also happen automatically. All decent mail clients I've used in my life offered even more advanced features, like delayed automatic marking.
All I can say is that I've been super happy with that for years. It works absolutely great for me. The only downside is that I see heaps of new, yet years-old, messages when a bug causes a feed to be incorrectly updated (https://twtxt.net/twt/tnsuifa). ;-)
That's a fair point.
Perhaps, since Twitter in 2006 never implemented read flags, every derivative microblogging system never saw that as an expected feature. This is curious because Twitter started with SMS, where on our phones we can mark messages as read or unread.
I think it all comes from the difference between reading an email (directed to you) vs. reading public posts (like a blog or a "wall", where you don't mark posts as read). It's not necessary to mark it as "read", you just jump over it.
Reading microblogging posts in an email program is not common, I think, and I haven't really used it, so I cannot say how it works, or whether it would be better for me or not.
However, I've used Thunderbird as a feed reader, and I understand the advantages when reading blog posts.
About read flags being simple, well… we just had a discussion this morning about how tracking read messages would require a lot of rethinking for clients such as timeline, where no state is stored. Even considering some kind of "notification of unread messages or mentions" is not expected for those minimalist clients, so it's an interesting compromise to think about.
@eapl.me@eapl.me Read flags are so simple, yet powerful in my opinion. I really don't understand why this is not a thing in most twtxt clients. It's completely natural in e-mail programs and feed readers, but it hasn't made the jump over to this domain.
You write too much for my client.
@andros@twtxt.andros.dev How about putting the whole encrypted conversation into a separate twtxt file, just like the archive feature(?). That way, the general clients don't have to cope with the decryption stuff, and it won't break the general public conversations.
@prologic@twtxt.net @lyse@lyse.isobeef.org First, please leave me your comments on the repository! Even if it's just to give your opinion on what shouldn't be included. The more variety, the better.
Second, I'm going to try to do tests with elliptic keys and base64. Thanks for the advice @eapl@eapl.me
Finally, I'd like to give my opinion. Secure direct messages are a feature that ActivityPub and Mastodon don't have, to give an example. By including it as an extension, we're already taking a significant leap ahead of the competition. Does it make sense to include it in a public feed? In fact, we're already doing that: when we reply to a user, mentioning them at the beginning of the message, it's already a direct message. The message is within a thread, perhaps breaking the conversation. Direct messages would help isolate conversations between 2 users, as well as keeping a thread cleaner and maintaining privacy. I insist, it's optional, it doesn't break compatibility with any client, and implementing it isn't complex. If you don't like it, you're free to not use it. If you don't have a public key, no one can send you direct messages.
Another one would be to allow changing public keys over time (as it may be a good practice [0]). A syntax like the following could help to know which public key was used to encrypt the message, and which private key the client should use to decrypt it:
!<nick url> <encrypted_message> <public_key_hash_7_chars>
Also, I'd remove support for storing the message as hex, only allowing base64 (more compact, aiming for a minimalistic spec, etc.)
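For illustration, a sketch of how a client might parse that proposed line format; the exact delimiters and the assumption that the 7-character key hash is lowercase hex come from the proposal above, not a settled spec:

```go
package main

import (
	"fmt"
	"regexp"
)

// dmRe matches: !<nick url> <encrypted_message> <public_key_hash_7_chars>
var dmRe = regexp.MustCompile(`^!<(\S+) (\S+)> (\S+) ([0-9a-f]{7})$`)

func parseDM(line string) (nick, url, payload, keyHash string, ok bool) {
	m := dmRe.FindStringSubmatch(line)
	if m == nil {
		return "", "", "", "", false
	}
	return m[1], m[2], m[3], m[4], true
}

func main() {
	nick, url, payload, keyHash, ok := parseDM(
		"!<andros https://twtxt.andros.dev/twtxt.txt> aGVsbG8= 1a2b3c4")
	fmt.Println(nick, url, payload, keyHash, ok)
}
```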
@arne@uplegger.eu nice work with the client.
I also see you are using the Yellow CMS for your website.
@sorenpeter@darch.dk Yes, it works, thx: https://doesnm.cc/mentions.txt . I'm deleting HTML tags because my client does not support HTML rendering.
@prologic@twtxt.net I say we should find a way to support mentions with only url, no nick, as per the original spec.
- For `@<nick url>` we already got support.
- For `@<nick>`, the posting client should expand it to `@<nick url>`; if not, then the reading client should just render it as `@nick` with no link.
- For `@<url>`, the sending client should try to expand it to `@<nick url>`; if not, then the reading client should try to find or construct a nick based on (see the sketch below):
  - Look in the twtxt.txt for a `nick =`
  - Use the (sub)domain from the URL
  - Use a folder or file name from the URL
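A sketch of that fallback order for constructing a nick when none is given; the `nick =` metadata regex and preferring the hostname over path segments are assumptions:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
	"regexp"
)

var nickRe = regexp.MustCompile(`(?m)^#\s*nick\s*=\s*(\S+)`)

// deriveNick follows the fallback order above: explicit nick = metadata,
// then the (sub)domain, then a folder or file name from the URL path.
func deriveNick(feedURL, feedBody string) string {
	if m := nickRe.FindStringSubmatch(feedBody); m != nil {
		return m[1]
	}
	u, err := url.Parse(feedURL)
	if err != nil {
		return feedURL
	}
	if host := u.Hostname(); host != "" {
		return host
	}
	// Relative or host-less URL: fall back to a path element.
	return path.Base(path.Dir(u.Path))
}

func main() {
	fmt.Println(deriveNick("https://akkartik.name/twtxt.txt", ""))
	fmt.Println(deriveNick("https://example.org/twtxt.txt", "# nick = alice"))
}
```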
I'm sharing new developments on the client. I now have a more stable timeline. The first version will appear in the next few weeks.
#emacs #twtxt Trying to set up @movq@www.uninformativ.de's jenny client… currently trying to find where twtxt files are stored on the server so I can set up the scp script I have for this.
My client is twet which i grabbed at https://github.com/jdtron/twet
Lol, am I the only one using a discontinued client? (With patches, but I've lost the sources, so they're "proprietary".)
I'm still making progress with the Emacs client. I'm proud to say that the code responsible for reading the feeds is almost finished, including: Twt Hash Extension, Twt Subject Extension, Multiline Extension and Metadata Extension. I'm fine-tuning some tests and will soon do the first buffer that displays the twts.
I found 2 active Registries: tilde.institute and twtxt.envs.net. I think what is missing is a repository or system for them to find each other. It is easy to share registry users. Your work is awesome! Maybe you are supporting twtxt with the pod and software around them. I am very busy with the Emacs client, but I'd like to work on creating my own version of a Registry using Django.
`nick = _@domain.tld` in the twtxt.txt?
What should the advantage of `nick = _` be, compared to just not defining a nick and letting the client use the domain as the handle?
What is not intuitive is that you put something in the nick field that is not to be taken literally. The special meaning of `_` is only clear if you read the documentation, compared to having something in the nick that makes sense in the current context of the twtxt.txt.
If NICK = DOMAIN then only show @DOMAIN
So instead of @eapl.me@eapl.me it will just be @eapl.me
@doesnm@doesnm.p.psf.lt So the user should then set `nick = _@domain.tld` in the twtxt.txt?
It seems more intuitive and user-friendly to just use `nick = domain.tld` and have the convention for clients to render the handle as @domain.tld instead of @domain.tld@domain.tld.
For a feed with no nick defined (e.g. https://akkartik.name/twtxt.txt) it will also be simpler and make more sense to just use the domain as the nick and render it as @domain.tld, and going back to a handle you could input in your client to look for the user/file, like @nick@domain.tld.
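A sketch of that rendering convention (no nick, or nick equal to the domain, collapses to just @domain):

```go
package main

import (
	"fmt"
	"net/url"
)

// renderHandle renders @domain when the nick is missing or redundant,
// otherwise the full @nick@domain handle.
func renderHandle(nick, feedURL string) string {
	u, err := url.Parse(feedURL)
	if err != nil {
		return "@" + nick
	}
	domain := u.Hostname()
	if nick == "" || nick == domain {
		return "@" + domain
	}
	return fmt.Sprintf("@%s@%s", nick, domain)
}

func main() {
	fmt.Println(renderHandle("", "https://akkartik.name/twtxt.txt"))      // @akkartik.name
	fmt.Println(renderHandle("eapl.me", "https://eapl.me/twtxt.txt"))     // @eapl.me
	fmt.Println(renderHandle("sorenpeter", "https://darch.dk/twtxt.txt")) // @sorenpeter@darch.dk
}
```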
I think WebFinger is the way to go. It has enough information to know where to find that nick's URL.
@prologic@twtxt.net Does that WebFinger fork made by darch work OK with yarn as it is now? (I've never used it, so I'm researching it.)
https://darch.dk/.well-known/webfinger/
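I haven't used it either, but the lookup itself is simple enough; a sketch of a WebFinger (RFC 7033) query, scanning the returned links for a feed URL (how a real descriptor marks the twtxt feed is an assumption here):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// jrd is the subset of a JSON Resource Descriptor we care about.
type jrd struct {
	Subject string `json:"subject"`
	Links   []struct {
		Rel  string `json:"rel"`
		Href string `json:"href"`
	} `json:"links"`
}

func lookupFeed(handle string) (string, error) {
	parts := strings.SplitN(handle, "@", 2)
	wf := fmt.Sprintf("https://%s/.well-known/webfinger?resource=%s",
		parts[1], url.QueryEscape("acct:"+handle))
	resp, err := http.Get(wf)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var doc jrd
	if err := json.NewDecoder(resp.Body).Decode(&doc); err != nil {
		return "", err
	}
	for _, l := range doc.Links {
		if strings.HasSuffix(l.Href, "twtxt.txt") {
			return l.Href, nil
		}
	}
	return "", fmt.Errorf("no twtxt link in WebFinger response")
}

func main() {
	feed, err := lookupFeed("sorenpeter@darch.dk")
	fmt.Println(feed, err)
}
```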
Thanks @bender@twtxt.net for the feedback. I fixed and expanded the article. I'm sorry for my poor interaction; furthermore, I'm reading and writing while programming a client in Emacs.
@prologic@twtxt.net What IRC client is that?
Yes, it works: 2024-12-01T19:38:35Z twtxt/1.2.3 (+https://eapl.mx/twtxt.txt; @eapl)
:D
The .log is just a simple append of each request. The idea with the .csv is to have it tally up how many requests there have been from each client, as a way to avoid having the log file grow too big. And you can open the .csv as a spreadsheet and have an easy overview and filtering options.
Access to those files is closed to the public.
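A sketch of that tally step, folding the append-only log into per-client counts written as CSV (the one-user-agent-per-line log format is an assumption):

```go
package main

import (
	"bufio"
	"encoding/csv"
	"os"
	"strconv"
)

func main() {
	in, err := os.Open("requests.log")
	if err != nil {
		panic(err)
	}
	defer in.Close()

	// Count requests per user agent line.
	counts := map[string]int{}
	sc := bufio.NewScanner(in)
	for sc.Scan() {
		counts[sc.Text()]++
	}

	// Emit as CSV, ready for a spreadsheet.
	out := csv.NewWriter(os.Stdout)
	defer out.Flush()
	out.Write([]string{"user_agent", "requests"})
	for ua, n := range counts {
		out.Write([]string{ua, strconv.Itoa(n)})
	}
}
```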
Maybe implement WebMention in my feed? (Receiving.) Sending maybe will be implemented in the WIP client…
@sorenpeter@darch.dk On #4, for Gemini: if your TLS client certificate contains your nick@host, could that work for discovery?
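Possibly; a sketch of the server side, reading nick@host from the optional client certificate's CommonName (the field choice is an assumption, and cert.pem/key.pem are placeholder paths):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
	if err != nil {
		log.Fatal(err)
	}
	cfg := &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientAuth:   tls.RequestClientCert, // Gemini-style: cert optional
	}
	ln, err := tls.Listen("tcp", ":1965", cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(c *tls.Conn) {
			defer c.Close()
			if err := c.Handshake(); err != nil {
				return
			}
			if certs := c.ConnectionState().PeerCertificates; len(certs) > 0 {
				// e.g. a CommonName of "nick@host" identifies the follower
				fmt.Println("client cert subject:", certs[0].Subject.CommonName)
			}
		}(conn.(*tls.Conn))
	}
}
```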
@eapl.me@eapl.me here are my replies (somewhat similar to Lyse's and James')
Metadata in twts: key=value is too complicated for non-hackers and hard to write by hand. So if there is a need, then we should just use #NSFW or the alt-text field in markdown image syntax if something is NSFW.
IDs besides datetime: When you edit a twt, you should preserve the datetime if location-based addressing is to have any advantage over content-based addressing. If you change the timestamp, it's a new post, just like in any other blog CMS.
Caching: Yes, all good ideas, but that is more a task for the clients, not for the serving of the twtxt.txt files.
Discovery: User-agent-based discovery can become better. I'm working on a wrapper script in PHP, so you don't need to go into Apache's log files to see who fetches your feed. But for Gemini and gopher you need to rely on something else. That could be using my webmentions-for-twtxt suggestion, or simply defining an email metadata field for letting a person know you follow their feed. There is an interesting read about why WebMentions might be a bad idea; twtxt being much simpler than a full-featured IndieWeb site, a lot of those concerns do not apply here. But that's the issue with any open inbox. This is hard to solve without some form of (centralized or community) spam moderation.
Support more protocols besides http/s: Yes, why not, if we can make clients that merge or differentiate between the same feed served from multiple URLs.
Languages: If the need is big, then make a separate feed. I don't mind seeing stuff in other languages as the volume is low. You have translation tools if you need to know what's going on. And again, when there is a need for easier switching between posting to several feeds, then it's about building clients with a UI that makes it easy. Not something that should take up space in the format/protocol.
Emojis: I'm not sure what this is about. Do you want to use emojis as avatars in CLI clients, or is it just about rendering emojis?
@bender@twtxt.net @prologic@twtxt.net I'm not exactly asking yarnd to change. If you are okay with the way it displayed my twts, then by all means, leave it as is. I hope you won't mind if I continue to write things like `1/4` to mean "first out of four".
What has `text/markdown` got to do with this? I don't think Markdown says anything about replacing `1/4` with ¼, or other similar transformations. It's not needed, because ¼ is already a Unicode character that can simply be directly inserted into the text file.
What's wrong with my original suggestion of doing the transformation before the text hits the twtxt.txt file? @prologic@twtxt.net, I think it would achieve what you are trying to achieve with this content-type thing: if someone writes `1/4` on a yarnd instance or any other client that wants to do this, it would get transformed, and other clients simply wouldn't do the transformation. Every client that supports displaying Unicode characters, including jenny, would then display ¼ as ¼.
Alternatively, if you prefer yarnd to pretty-print all twts nicely, even ones from simpler clients, that's fine too and you don't need to change anything. My `1/4` -> ¼ thing is nothing more than a minor irritation which probably isn't worth overthinking.
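A sketch of that pre-posting transformation, if a posting client wanted it (the fraction list is deliberately tiny):

```go
package main

import (
	"fmt"
	"regexp"
)

var fractions = map[string]string{"1/2": "½", "1/4": "¼", "3/4": "¾"}

// Match only standalone fractions, not digits inside things like 11/42.
var fracRe = regexp.MustCompile(`\b(1/2|1/4|3/4)\b`)

func prettify(text string) string {
	return fracRe.ReplaceAllStringFunc(text, func(m string) string {
		return fractions[m]
	})
}

func main() {
	fmt.Println(prettify("part 1/4 of the thread")) // part ¼ of the thread
}
```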
@sorenpeter@darch.dk I run WeeChat headless on a VM and mostly connect via mobile or desktop. I use the Android client or Glowing Bear. Work blocks all comms on their always-on MitM VPN, so I can't connect from the office anymore. So I just use mobile.
@Codebuzz@www.codebuzz.nl Speed is an issue for the client software, not the format itself, but yes, I agree that it makes the most sense to append posts to the end of the file. I'm referring to the definition that it's the first `url =` in the file that has to be used for the twthash computation, which is a too arbitrary way of defining something and breaks threading time and time again. And this is the argument for not using url+date+message = twthash.
I know no client supports it (yet), but it could be the future.
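For reference, a hedged sketch of the current content-based hash as I read the Twt Hash extension: blake2b-256 over the feed URL, timestamp and content joined by newlines; base32, lowercased, keeping the last 7 characters. Details like timestamp normalization may be off, and that ambiguity plus the "first url =" rule is exactly where breakage creeps in:

```go
package main

import (
	"encoding/base32"
	"fmt"
	"strings"

	"golang.org/x/crypto/blake2b"
)

func twtHash(feedURL, timestamp, content string) string {
	payload := feedURL + "\n" + timestamp + "\n" + content
	sum := blake2b.Sum256([]byte(payload))
	enc := base32.StdEncoding.WithPadding(base32.NoPadding)
	h := strings.ToLower(enc.EncodeToString(sum[:]))
	return h[len(h)-7:] // last 7 characters form the twt hash
}

func main() {
	fmt.Println(twtHash(
		"https://example.com/twtxt.txt",
		"2024-09-29T13:30:00Z",
		"Hello twtxt!"))
}
```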
@2024-10-08T19:36:38-07:00@a.9srv.net Thanks for the follow-up. I agree with most of it, especially:
Please nobody suggest sticking the content type in more metadata.
Yes, URLs can be considered ugly, but they work and are understandable by both humans and machines. And it's trivial for any client to hide the URLs used as references in replies/threading.
WebFinger can be an add-on to help look up people, and it can be made independent of the nick by just serving the same JSON regardless of the nick, as people do with static sites and as I implemented it on darch.dk (wf endpoint). Try RANDOMSTRING@darch.dk on http://darch.dk/wf-lookup.php (wf lookup) or RANDOMSTRING@garrido.io on https://webfinger.net
@doesnm@doesnm.p.psf.lt Agree. salty.im should allow the user to post multiple brokers on their webfinger so the client can find a working path.
@movq@www.uninformativ.de I'm sorry if I sound too contrarian. I'm not a fan of using an obscure hash as well. The problem is that of future and backward compatibility. If we change to sha256 or another, we don't just need to support sha256, but now need to support both sha256 AND blake2b. Or we divide the community: users of some clients will still use the old algorithm and get left behind.
Really, we should all think hard about how changes will break things and whether those breakages are acceptable.
These collisions aren't important unless someone tries to fork. So, for the vast majority it's not a big deal. Using a growing-hash algorithm could inform the client to add another char when they fork.
Only with Dovecot xD. For mail I use the Android native mail client and not mutt. And jenny displays some errors about finding some files and the /tmp dir (Android doesn't have /tmp).
@prologic@twtxt.net YES James, it should be up to the client to deal with changes like edits and deletions. And if we're putting this load on the clients, location-addressing will make this a lot easier, since what it says is: look in this file at this timestamp; did anything change or go missing? (And then threading will not break ;)
Diving into mblaze, I think I've nearly* reached peak email geek.
Just a bunch of shell commands I can pipe together to search, list, view and reply to email (after syncing it to a local Maildir).
EXAMPLES at https://git.vuxu.org/mblaze/tree/README
So far Iām using most of the tools directly from the command line, but I might take inspiration from https://sr.ht/~rakoo/omail/ to make my workflow a bit more efficient.
*To get any closer, I think I'd have to hand-craft my own SMTP client or something.
Yes, that is exactly what I meant. I like that collection, and "twtxt v2" feels like a departure.
Maybe there's an advantage to grouping it into one spec, but IMO that shouldn't be done at the same time as introducing new untested ideas.
See https://yarn.social (especially this section: https://yarn.social/#self-host). It really doesn't get much simpler than this 🤣
Again, I like this existing simplicity. (I would even argue you don't need the metadata.)
That page says "For the best experience your client should also support some of the Twtxt Extensions…", but it is clear you don't need to. I would like it to stay that way, and publishing a big long spec and calling it "twtxt v2" feels like a departure from that. (I think the content of the document is valuable; I'm just carping about how it's being presented.)
Why can't we have both: a format that you can write by hand, and better clients?
Some more arguments for a location-based threading model over a content-based one:
The format: `(#<DATE URL>)` or `(@<DATE URL>)` both make sense: # as prefix is for a hashtag like we already got with the `(#twthash)`, and @ as prefix denotes that this is a mention of a specific post in a feed, and not just the feed in general. Using either can make implementation easier, since most clients already got this kind of filtering.
Having something like `(#<DATE URL>)` will also make mentions via webmentions for twtxt easier to implement, since there is no need for looking up the `#twthash`. This will also make it possible to build third-party twt-mention services.
Supporting twt/webmentions will also increase discoverability, as a way to know about both replies and feed mentions from feeds that you don't follow.
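A sketch of how a client might parse both subject styles side by side; the exact delimiters follow the proposal above, not a finished spec:

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// (#<DATE URL>) or (@<DATE URL>): two space-separated tokens.
	locRe = regexp.MustCompile(`^\(([#@])(\S+) (\S+)\) `)
	// Classic content-based subject: (#twthash).
	hashRe = regexp.MustCompile(`^\(#([a-z0-9]{7,})\) `)
)

type subject struct {
	Timestamp, URL, Hash string
}

func parseSubject(twt string) (subject, bool) {
	if m := locRe.FindStringSubmatch(twt); m != nil {
		return subject{Timestamp: m[2], URL: m[3]}, true
	}
	if m := hashRe.FindStringSubmatch(twt); m != nil {
		return subject{Hash: m[1]}, true
	}
	return subject{}, false
}

func main() {
	s, ok := parseSubject("(#2024-09-24T12:44:35Z https://darch.dk/twtxt.txt) Looks good!")
	fmt.Printf("%+v %v\n", s, ok)
	s, ok = parseSubject("(#abcdefg) Classic style")
	fmt.Printf("%+v %v\n", s, ok)
}
```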
#fzf is the new emacs: a tool with a simple purpose that has evolved to include an #email client. https://sr.ht/~rakoo/omail/
I'm being a little silly, of course. fzf doesn't actually check your email, but it appears to be basically the whole user interface for that mail program, with #mblaze wrangling the emails.
I've been thinking about how I handle my email, and am tempted to make something similar. (When I originally saw this linked, the author was presenting it as an example tweaked to their own needs, encouraging people to make their own.)
This approach could surely also be combined with #jenny, taking the place of (neo)mutt. For example, mblaze's mthread tool presents a threaded discussion with indentation.
@prologic@twtxt.net Thanks for writing that up!
I hope it can remain a living document (or sequence of draft revisions) for a good long time while we figure out how this stuff works in practice.
I am not sure how I feel about all this being done at once, vs. letting conventions arise.
For example, even today I could reply to twt abc1234 with "(#abc1234) Edit: …" and I think all you humans would understand it as an edit to (#abc1234). Maybe eventually it would become a common enough convention that clients would start to support it explicitly.
Similarly, we could just start using 11-digit hashes. We should iron out whether it's sha256 or whatever, but there's no need to get all the other stuff right at the same time.
I have similar thoughts about how some users could try out location-based replies in a backward-compatible way (append the replyto: stuff after the legacy (#hash) style).
However, I recognize that I'm not the one implementing this stuff, and it's less work to just have everything determined up front.
Misc comments (I haven't read the whole thing):
Did you mean to make hashes hexadecimal? You lose 11 bits that way compared to base32. I'd suggest gaining 11 bits with base64 instead.
"Clients MUST preserve the original hash": do you mean they MUST preserve the original twt?
Thanks for phrasing the bit about deletions so neutrally.
I don't like the MUST in "Clients MUST follow the chain of reply-to references…". If someone writes a client as a 40-line shell script that requires the user to piece together the threading themselves, IMO we shouldn't declare the client non-conforming just because they didn't get to all the bells and whistles.
Similarly, I don't like the MUST for user agents. For one thing, you might want to fetch a feed without revealing your identity. Also, it raises the bar for a minimal implementation (I'm again thinking of the 40-line shell script).
For "who follows" lists: why must the long, random tokens be only valid for a limited time? Do you have a scenario in mind where they could leak?
Why can't feeds be served over HTTP/1.0? Again, thinking about simple software. I recently tried implementing HTTP/1.1 and it wasn't too bad, but 1.0 would have been slightly simpler.
Why get into the nitty-gritty about caching headers? This seems like generic advice for HTTP servers and clients.
I'm a little sad about other protocols not being recommended.
I don't know how I feel about including markdown. I don't mind too much that yarn users emit twts full of markdown, but I'm more of a plain text kind of person. Also, it adds to the length. I wonder if putting it in a separate document would make more sense; that would also help with the length.
@prologic@twtxt.net Do you have a link to some past discussion?
Would the GDPR apply to a one-person client like jenny? I seriously hope not. If someone asks me to delete an email they sent me, I don't think I have to honour that request, no matter how European they are.
I am really bothered by the idea that someone could force me to delete my private, personal record of my interactions with them. Would I have to delete my journal entries about them too if they asked?
Maybe a public-facing client like yarnd needs to consider this, but that also bothers me. I was actually thinking about making an Internet Archive style twtxt archiver, letting you explore past twts, including long-dead feeds, see edit histories, deleted twts, etc.
I wrote some code to try out non-hash reply subjects formatted as (replyto ), while keeping the ability to use the existing hash style.
I don't think we need to decide all at once. If clients add support for a new method, then people can use it if they like. The downside of course is that this costs developer time, so I decided to invest a few hours of my own time into a proof of concept.
With apologies to @movq@www.uninformativ.de for corrupting jenny's beautiful code. I don't write this expecting you to incorporate the patch, because it does complicate things and might not be a direction you want to go in. But if you like any part of this approach, feel free to use bits of it; I release the patch under jenny's current LICENCE.
Supporting both kinds of reply in jenny was complicated because each email can only have one Message-Id, and because it's possible the target twt will not be seen until after the twt referencing it. The following patch uses an SQLite database to keep track of known (url, timestamp) pairs, as well as a separate table of (url, timestamp) pairs that haven't been seen yet but are wanted. When one of those "wanted" twts is finally seen, the mail file gets rewritten to include the appropriate In-Reply-To header.
Patch based on jenny commit 73a5ea81.
https://www.falsifian.org/a/oDtr/patch0.txt
Not implemented:
- Composing twts using the (replyto …) format.
- Probably other important things I'm forgetting.
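Not jenny's actual schema, just a sketch of the seen/wanted bookkeeping described above (table and column names invented for illustration):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "threads.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// "seen": twts we already have; "wanted": twts referenced before
	// they were fetched, with the mail file to rewrite once they arrive.
	_, err = db.Exec(`
		CREATE TABLE IF NOT EXISTS seen (
			url        TEXT NOT NULL,
			timestamp  TEXT NOT NULL,
			message_id TEXT NOT NULL,
			PRIMARY KEY (url, timestamp)
		);
		CREATE TABLE IF NOT EXISTS wanted (
			url           TEXT NOT NULL,
			timestamp     TEXT NOT NULL,
			referenced_by TEXT NOT NULL,
			PRIMARY KEY (url, timestamp, referenced_by)
		);`)
	if err != nil {
		log.Fatal(err)
	}

	// When a twt arrives: record it, then fix up any waiting references.
	_, _ = db.Exec(`INSERT OR IGNORE INTO seen VALUES (?, ?, ?)`,
		"https://example.com/twtxt.txt", "2024-09-25T10:00:00Z", "<msgid@jenny>")
	rows, _ := db.Query(`SELECT referenced_by FROM wanted WHERE url = ? AND timestamp = ?`,
		"https://example.com/twtxt.txt", "2024-09-25T10:00:00Z")
	defer rows.Close()
	for rows.Next() {
		var mail string
		_ = rows.Scan(&mail)
		log.Println("rewrite In-Reply-To in", mail)
	}
}
```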
The stem matching is the same as how Git does its branch hashes. I think you can stem it down to 2 or 3 SHA bytes.
If a client sees someone in a yarn using a longer hash, it can lengthen its own to match, since it can assume that maybe the other client knows about a collision that it doesn't.
@sorenpeter@darch.dk Hmm, how does your client handle "a little editing"? I am sure threads would break just as well.
@prologic@twtxt.net I wouldn't want my client to honour delete requests. I like my computer's memory to be better than mine, not worse, so it would bug me if I remember seeing something and my computer can't find it.
Maybe I'm being a bit too purist/minimalistic here. As I said before (in one of the 1372739 posts on this topic, or maybe I didn't even send that twt, I don't remember), I never really liked hashes to begin with. They aren't super hard to implement, but they are kind of against the beauty of the original twtxt, because you need special client support for them. It's not something that you could write manually in your
twtxt.txt
file. With @sorenpeter@darch.dk's proposal, though, that would be possible.
Tangentially related, I was a bit disappointed to learn that the twt subject extension is now never used except with hashes. Manually-written subjects sounded so beautifully ad-hoc and organic as a way to disambiguate replies. Maybe I'll try it some time just for fun.
Not my fault your client can't handle a little editing ;)
HTTPS is supposed to do [verification] anyway.
TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn't, for example, verify that a file downloaded from server A is from the same entity as the one from server B.
I was confused by this response for a while, but now I think I understand what you're getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can't verify a feed unless I download it myself from the origin server. Is that right?
I.e. if the HTTPS origin server is online and I don't mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.
feed locations [being] URLs gives some flexibility
It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, `urn:uuid:*`, or a regular old URL if you wanted to. The spec seems to indicate that the `url` tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I'm not very familiar with IP{F,N}S, but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)
I'm also not very familiar with IPFS or IPNS.
I haven't been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible, so it still works if I'm too lazy to change how I publish my feed :-)
@sorenpeter@darch.dk There was a client that would generate a unique hash for each twt. It didn't get wide adoption.
Interesting… QUIC isn't very quick over fast internet.
QUIC is expected to be a game-changer in improving web application performance. In this paper, we conduct a systematic examination of QUIC's performance over high-speed networks. We find that over fast Internet, the UDP+QUIC+HTTP/3 stack suffers a data rate reduction of up to 45.2% compared to the TCP+TLS+HTTP/2 counterpart. Moreover, the performance gap between QUIC and HTTP/2 grows as the underlying bandwidth increases. We observe this issue on lightweight data transfer clients and major web browsers (Chrome, Edge, Firefox, Opera), on different hosts (desktop, mobile), and over diverse networks (wired broadband, cellular). It affects not only file transfers, but also various applications such as video streaming (up to 9.8% video bitrate reduction) and web browsing. Through rigorous packet trace analysis and kernel- and user-space profiling, we identify the root cause to be high receiver-side processing overhead, in particular, excessive data packets and QUIC's user-space ACKs. We make concrete recommendations for mitigating the observed performance issues.
So this is a great thread. I have been thinking about this too, and what if we are coming at it from the wrong direction? Identity being tied to a given URL has always been a pain point. If I get a new URL, it's almost as if I have a new identity, because not only am I serving at a new location, but all my previous communications are broken because the hashes are all wrong.
What if instead we used this idea of signatures to thread the URLs together into one identity? We keep the URL-to-hash scheme in place. Changing that now is basically a no-go. But we can create a signature chain that links identities together. So if I move to a new URL, I update the chain hosted by my primary identity to include the new URL. If I have an archived feed whose old URL is now dead, we can point to where it is now hosted and use the current convention of hashing based on the first URL.
The signature chain can also be used to rotate to new keys over time. Just sign in a new key or revoke an old one. The prior signatures remain valid within the scope of time the signatures were made and the keys were active.
The signature file can be hosted anywhere as long as it can be fetched by a reasonable protocol. So say we could use a WebFinger record that directs to the signature file? You have an identity like frank@beans.co that will discover a feed at some URL and a signature chain at another URL. Maybe even include the most recent signing key?
From there the client can auto-discover old feeds to link them together into one complete timeline. And the signatures can validate that it's all correct.
I like the idea of maybe putting the chain in the feed preamble and keeping the single self-contained file, but wonder if that would cause lots of clutter? The signature chain would be something like a log with what is changing (new key, revoke, add URL) and a signature of the change + the previous signature.
```
# chain: ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: ADDURL https://txt.sour.is/user/xuu
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: REVKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: ...
```
@prologic@twtxt.net Does that mean that for every new post (not replies) the client will have to generate a UUID or similar when posting, and add that to the twt?
@lyse@lyse.isobeef.org This looks like a nice way to do it.
Another thought: if clients can't agree on the URL (for example, if we switch to this new way, but some old clients still do it the old way), that could be mitigated by computing many hashes for each twt: one for every URL in the feed. So, if a feed has three URLs, every twt is associated with three hashes when it comes time to put threads together.
A client still needs to choose one URL to use for the hash when composing a reply, but this might add some breathing room if there's a period when clients are doing different things.
(From what I understand of jenny, this would be difficult to implement there, since each pseudo-email can only have one msgid to match to the in-reply-to headers. I don't know about other clients.)
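For other clients, the bookkeeping could be as simple as indexing each twt under one hash per feed URL; a sketch, reusing the blake2b construction from the hash extension as I understand it:

```go
package main

import (
	"encoding/base32"
	"fmt"
	"strings"

	"golang.org/x/crypto/blake2b"
)

// twtHash: blake2b-256 over "url\ntimestamp\ncontent", base32, last 7 chars
// (same hedged reading of the extension as in the earlier sketch).
func twtHash(feedURL, timestamp, content string) string {
	sum := blake2b.Sum256([]byte(feedURL + "\n" + timestamp + "\n" + content))
	enc := base32.StdEncoding.WithPadding(base32.NoPadding)
	h := strings.ToLower(enc.EncodeToString(sum[:]))
	return h[len(h)-7:]
}

// hashesForTwt indexes a twt under one hash per known feed URL, so replies
// hashed against any of the feed's URLs still land in the same thread.
func hashesForTwt(urls []string, timestamp, content string) []string {
	hs := make([]string, 0, len(urls))
	for _, u := range urls {
		hs = append(hs, twtHash(u, timestamp, content))
	}
	return hs
}

func main() {
	urls := []string{
		"https://example.com/twtxt.txt",
		"https://mirror.example.org/twtxt.txt",
	}
	fmt.Println(hashesForTwt(urls, "2024-09-25T10:00:00Z", "Hello!"))
}
```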
@movq@www.uninformativ.de @prologic@twtxt.net Another option would be: when you edit a twt, prefix the new one with (#[old hash]) and some indication that it's an edited version of the original twt with that hash. E.g. if the hash used to be abcd123, the new version should start "(#abcd123) (redit)".
What I like about this is that clients that don't know this convention will still stick it in the same thread. And I feel it's in the spirit of the old pre-hash (subject) convention, though that's before my time.
I guess it may not work when the edited twt itself is a reply and there are replies to it. Maybe that could be solved by letting twts have more than one (subject) prefix.
But the great thing about the current system is that nobody can spoof message IDs.
I don't think twtxt hashes are long enough to prevent spoofing.
@lyse@lyse.isobeef.org ah, if only you were to finally clean up that code and make that client widely available…! One can only dream, right? :-)
@bender@twtxt.net Based on my experience so far, as a user, I would be upset if my client dropped someone from my follower list, i.e. stopped fetching their feed, without me asking for that to happen.
A smart TV is a good deal as long as we never connect it to the net. The prices are attractive because the manufacturers resort to collecting and selling customer data. Kept as a dumb TV, it pays off.
I haven't seen any emails about the outage at work. I know I have the Mac CrowdStrike client, though. My buddy who works at a hospital says they weren't affected.
It's quite nice. I have been half tempted to make a twtxt client with it.
@sorenpeter@darch.dk This makes sense as a quote twt that references a direct URL. If we go back to how it developed on Twitter originally, it was `RT @nick: original text`; because it contained the original text, the Twitter algorithm would boost that text into trending.
I like the format `(#hash) @<nick url> > "Quoted text"\nThen a comment`, as it preserves the human-readable form and has the hash for linking to the yarn. The comment part could be optional for just boosting the twt.
The only issue I think I would have is that the yarn could then become a mess of repeated quotes, unless the client knows to interpret them as multiple users having reposted/boosted the thread.
The format is also how the iPhone does reactions to SMS messages, with `+number liked: original SMS`.
I'm also more in favor of #reposts being human-readable and -writable. A client might implement a button that posts something simple like: `#repost Look at this cool stuff, because bla bla [alt](url)`.
This will then make it possible to also "repost" stuff from other platforms/protocols.
The reader part of a client can then render a preview of the link, which we talked about would be a nice (optional) feature to have in yarnd.
Anyone else working with Mac OS (work), Windows (client project) and Fedora (private) on the same day, almost every day?
I'm not super a fan of using JSON. I feel we could still use text as the medium. Maybe a modified version to fix any weaknesses.
What if instead of signing each twt individually, we generated a Merkle tree from the twt hashes, then a signature of the root hash? This would ensure the full stream of twts is intact with minimal overhead, with the added bonus of helping clients identify missing twts when syncing/gossiping.
Have two endpoints: one as the WebFinger to link profile details and avatar like you posted, plus the signature for the Merkle-root twt; and the other a pageable stream of twts, or individual twts/Merkle branches to incrementally access twt feeds.
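A sketch of that construction: hash each twt hash into a leaf, combine pairwise up to a root, and sign only the root (pairing and odd-node promotion here are arbitrary illustration choices, not a spec):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// merkleRoot pairwise-hashes leaves level by level; an unpaired node is
// promoted to the next level unchanged.
func merkleRoot(leaves [][]byte) []byte {
	if len(leaves) == 0 {
		return nil
	}
	level := leaves
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) {
				next = append(next, level[i])
				break
			}
			sum := sha256.Sum256(append(level[i], level[i+1]...))
			next = append(next, sum[:])
		}
		level = next
	}
	return level[0]
}

func main() {
	twtHashes := []string{"abcdefg", "hijklmn", "opqrstu"}
	var leaves [][]byte
	for _, h := range twtHashes {
		sum := sha256.Sum256([]byte(h))
		leaves = append(leaves, sum[:])
	}
	// A feed owner would sign this root; clients verify twts against it.
	fmt.Printf("merkle root: %x\n", merkleRoot(leaves))
}
```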
@mckinley@twtxt.net I use pass along with the Android and browser-pass clients. It is very good, and keeping in sync is pretty simple.
(cont.)
Just to give some context on some of the components around the code structure: I wrote this up around an earlier version of the aggregate code. This generic bit simplifies things by removing the need for the CRUD functions for each aggregate.
**Domain Objects**
A domain object can be used as an aggregate by adding the `event.AggregateRoot` struct and finishing the implementation of `event.Aggregate`. The AggregateRoot implements logic for adding events after they are either Raised by a command or Appended by the eventstore Load or service ApplyFn methods. It also tracks the uncommitted events that are saved using the eventstore Save method.
```go
type User struct {
	Identity  string `json:"identity"`
	CreatedAt time.Time

	event.AggregateRoot
}

// StreamID for the aggregate when stored or loaded from ES.
func (a *User) StreamID() string {
	return "user-" + a.Identity
}

// ApplyEvent to the aggregate state.
func (a *User) ApplyEvent(lis ...event.Event) {
	for _, e := range lis {
		switch e := e.(type) {
		case *UserCreated:
			a.Identity = e.Identity
			a.CreatedAt = e.EventMeta().CreatedDate
			/* ... */
		}
	}
}
```
**Events**
Events are applied to the aggregate. They are defined by adding the `event.Meta` field and implementing the getter/setters for `event.Event`:
```go
type UserCreated struct {
	eventMeta event.Meta

	Identity string
}

func (c *UserCreated) EventMeta() (m event.Meta) {
	if c != nil {
		m = c.eventMeta
	}
	return m
}

func (c *UserCreated) SetEventMeta(m event.Meta) {
	if c != nil {
		c.eventMeta = m
	}
}
```
**Reading Events from the EventStore**
With a domain object that implements `event.Aggregate`, the event store client can load events and apply them using the `Load(ctx, agg)` method.
```go
// GetUser populates a user from the event store.
func (rw *User) GetUser(ctx context.Context, userID string) (*domain.User, error) {
	user := &domain.User{Identity: userID}

	err := rw.es.Load(ctx, user)
	if err != nil {
		if errors.Is(err, eventstore.ErrStreamNotFound) {
			return user, ErrNotFound
		}
		return user, err
	}

	return user, nil
}
```
**OnX Commands**
An OnX command validates that the command can be performed against the domain object's current state. If it can be applied, it raises the event using `event.Raise()`; otherwise it returns an error.
```go
// OnCreate raises a UserCreated event to create the user.
// Note: The handler will check that the user does not already exist.
func (a *User) OnCreate(identity string) error {
	event.Raise(a, &UserCreated{Identity: identity})
	return nil
}

// OnScored will attempt to score a task.
// If the task is not in a Created state it will fail.
func (a *Task) OnScored(taskID string, score int64, attributes Attributes) error {
	if a.State != TaskStateCreated {
		return fmt.Errorf("task expected created, got %s", a.State)
	}
	event.Raise(a, &TaskScored{TaskID: taskID, Attributes: attributes, Score: score})
	return nil
}
```
**CRUD Operations for OnX Commands**
The following functions in the aggregate service can be used to perform creation and updating of aggregates. The Update function will ensure the aggregate exists, whereas Create is intended for non-existent aggregates. These could probably be combined into one function.
```go
// Create is used when the stream does not yet exist.
func (rw *User) Create(
	ctx context.Context,
	identity string,
	fn func(*domain.User) error,
) (*domain.User, error) {
	session, err := rw.GetUser(ctx, identity)
	if err != nil && !errors.Is(err, ErrNotFound) {
		return nil, err
	}

	if err = fn(session); err != nil {
		return nil, err
	}

	_, err = rw.es.Save(ctx, session)

	return session, err
}

// Update is used when the stream already exists.
func (rw *User) Update(
	ctx context.Context,
	identity string,
	fn func(*domain.User) error,
) (*domain.User, error) {
	session, err := rw.GetUser(ctx, identity)
	if err != nil {
		return nil, err
	}

	if err = fn(session); err != nil {
		return nil, err
	}

	_, err = rw.es.Save(ctx, session)

	return session, err
}
```
@stackeffect@twtxt.stackeffect.de
I am seeing these characters in your twts: )?Ć¢\200ĀØĆ¢\200ĀØ. Which client are you using?
@stigatle@twtxt.net
A twtxt client would be nice! Or a very simple CGI script to print twts to the web nicely: not a second Yarn, just something to show twts in a pretty form on the web.
@eldersnake@yarn.andrewjvpowell.com
RSS links are archaic. Clients discover them if properly linked; they do not need to be human-visible.
I am noticing that Yarn doesn't treat "outside" twt hashes (that is, twts coming from a client other than Yarn) right. Two examples:
There are many more, but those two will give you the gist. Yarn links the hash to the poster's twtxt.txt, so conversation matching will not work.
@lyse@lyse.isobeef.org Unless you are stripping stuff from your twts, there is not much to implement. Things will be bold, italics, underlined, and so on, on a client that can render them. Since jenny uses Mutt, I can use my own regex in it to color them as I like. That's pretty much it.
@prologic@twtxt.net deedum for android.
Kristall for OS X
Elaho for iOS
though I can only vouch for the first two.
@ "that's it. I'm sticking to this txtnish client." Solid choice.
that's it. I'm sticking to this txtnish client.
that's it. I'm finding another twtxt client.
Can we not have clients sign their own public keys before listing them on their Podās account?
Yeah, we probably could. When they set up an account, they create a master key that signs any subsequent keys, or a chain of signatures like Keybase does.
@prologic@twtxt.net (#gqg3gea) ha yeah. COVID makes for a timey-wimey mish-mash. Worked on some WKD and fought with my XMPP client a bit.
@prologic@twtxt.net Huh, true: the email is MD5/SHA256-hashed before storing. If twtxt acted as a provider, you would store that hash and point the SRV record to the pod. To act as a client, it would need to store the hash and the server that hosts the image.
A fork of twtxtc, a #twtxt client in C: [[https://github.com/neauoire/twtxtc]] #links
a microblogging creative coding platform like dwitter, but for sound. Users would be encouraged to remix; the output of one person's code would become the input of the new code. Only text would be stored on the server, with audio rendered client-side. To save on time, there could be caches of frozen audio for remixes. #halfbakedideas
@hjertnes@hjertnes.social are you using Emacs as a twtxt client or something? Does it render the org markup into links for you?
@kas@enotty.dk [re: gopher client] If you happen to be on Windows, then Gopher Browser for Windows by Matt Owen is pretty nice, otherwise I use Lynx indeed for gopher.
@von@tilde.town having topic-specific twtxt feeds is not a silly idea. Not sure if the clients allow easy switching though.
Made my own super basic twtxt client in 3 lines of code as a bashrc function. #l33t
Added clients and articles sections and added domgoergen's twtxt.txt to https://indieweb.org/twtxt
// todo Create a Kaios client for twtxt
I just recently found an issue with my custom client. It was ignoring microseconds on timestamps, which meant I was missing some twts from people. I got that fixed, and now I see all of them.
@mdom@domgoergen.com With my own custom client I wrote, I use cron to update my timeline every 20 mins. My update process also runs 10 curl calls at a time. I did that to save time when I poll everyone.
@sdk@codevoid.de Well, I've added the special datetime to my kitbashed client. I store the URL it gets, but I'm not doing anything with it right now.
@sdk@codevoid.de As for the 140-character limit: I swear I read somewhere that the limit was really more of a suggestion than anything else. I don't think any of the clients I've looked at enforce it. As long as it's on a single line, no one seems to care too much.
@sdk@codevoid.de A comment might not be in the spec, but I know several of the twtxt files I've looked at have them. I know my kitbashed twtxt client ignores those lines, and I'm sure other clients do too.
Well after 4 hours of work, I finally think I solved all my client side calculation issues. I have no desire to screw with it again