Alright. My first mentions, picked not so randomly (LOL), are @prologic@twtxt.net, @lyse@lyse.isobeef.org, and @movq@www.uninformativ.de. I am also posting my first image, which you see below. That's my neighbourhood on a "winter" day. Hopefully @prologic@twtxt.net will add my domain to his allowed list, so that the image (and any further ones) renders.
Alright, `announce_me` set to true. Now, who do I pick to be my first mention? Decisions, decisions. Next twtxt will have my first mention(s). :-)
I have configured my `twtxt.txt` as simply as possible. I have set up a `publish_command` on jenny. Hopefully all works fine, and I am good to go. Next will be setting `announce_me` to `true`. Here we go!
@sorenpeter@darch.dk hmm, how does your client handle "a little editing"? I am sure threads would break just as well.
@prologic@twtxt.net, there is a parser bug on the parent. Specifically on this portion:
"*If twtxt/Yarn was to grow bigger, then this would become a concern again. *But even Mastodon allows editing*, so how much of a problem can it really be? *"
@movq@www.uninformativ.de going a little sideways on this, "*If twtxt/Yarn was to grow bigger, then this would become a concern again. But even Mastodon allows editing, so how much of a problem can it really be? *", wouldn't preparing for a potential (even if very, very, veeeeery remote) growth be a good thing? Mastodon signs all messages, keeps a history of edits, and it doesn't break threads. It isn't a problem there. It is here.
I think keeping hashes is a must. If anything, for that "feels good" feeling.
@movq@www.uninformativ.de Agreed that hashes have a benefit. I came up with a similar example when I twted about an 11-character hash collision. Perhaps hashes could be made optional somehow. Like, you could use the "replyto" idea and then additionally put a hash somewhere if you want to lock in which version of the twt you are replying to.
Hey, @movq@www.uninformativ.de, a tiny thing to add to `jenny`: a `-v` switch. That way when you twtxt "That's an older format that was used before jenny version v23.04", I can go and run `jenny -v`, and "duh!" myself on the way to a `git pull`. :-D
@movq@www.uninformativ.de ooooh, nice! commit 62a2b7735749f2ff3c9306dd984ad28f853595c5:
Crawl archived feeds in --fetch-context
Like, very much! :-)
@movq@www.uninformativ.de to paraphrase US Presidents' State of the Union speeches, "the State of the Jenny is strong!" :-D As for the potential upcoming changes, there has to be a knowledgeable head honcho who will agglomerate and coalesce, and guide the direction that will be taken. All that with strong input from the developers who will be implementing the changes, and lesser (but not less valuable) input from users.
@lyse@lyse.isobeef.org I call upon the services of the @yarn_police@twtxt.net to further investigate this oddness!
@quark@ferengi.one Oh, sure, it would be nice if edits didn't break threads. I was just pondering the circumstances under which I get annoyed about data being irrecoverably deleted or otherwise lost.
@falsifian@www.falsifian.org "I don't really mind if the twt gets edited before I even fetch it.", right, that's never the problem. Editing a twtxt before anyone fetches it isn't even editing, right? :-P The problem we are trying to fix is the havoc it causes when editing twtxts that have already been replied to, often ad nauseam. That's the real problem.
@quark@ferengi.one I don't really mind if the twt gets edited before I even fetch it. I think it's the idea of my computer discarding old versions it's fetched, especially if it's shown them to me, that bugs me.
But I do like @movq@www.uninformativ.de's suggestion on this thread that feeds could contain both the original and the edited twt. I guess it would be up to the author.
@lyse@lyse.isobeef.org now, how am I not surprised at that reply?! Hahahahaha!
@falsifian@www.falsifian.org that would be problematic to do on a fully decentralised system. I am not disagreeing, though. That's the reason I have stopped editing twtxts. I strive to own my mistakes, as minor as they might be. Now, if an edit trail can be accomplished, I am all for it!
@quark@ferengi.one None. I like being able to see edit history for the same reason.
@falsifian@www.falsifian.org what would the difference be between an edit that changes everything on the original twtxt, and a delete?
@prologic@twtxt.net Why sha1 in particular? There are known attacks on it. sha256 seems pretty widely supported if you're worried about support.
@prologic@twtxt.net I wouldn't want my client to honour delete requests. I like my computer's memory to be better than mine, not worse, so it would bug me if I remember seeing something and my computer can't find it.
There's a simple reason all the current hashes end in a or q: the hash is 256 bits, the base32 encoding chops that into groups of 5 bits, and 256 isn't divisible by 5. The last character of the base32 encoding just has that left-over single bit (256 mod 5 = 1).
So I agree with #3 below, but do you have a source for #1, #2 or #4? I would expect any lack of variability in any part of a hash function's output would make it more vulnerable to attacks, so designers of hash functions would want to make the whole output vary as much as possible.
Other than the divisible-by-5 thing, my current intuition is it doesn't matter what part you take.
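Here is a quick way to see that trailing-character effect from a shell (illustrative only: sha256 stands in for the real twt hash, since any 256-bit digest behaves the same under base32):
# Any 256-bit digest, base32-encoded, ends in a data character of either "a" or "q",
# because 256 mod 5 = 1 leaves a single data bit in the final symbol.
for s in foo bar baz quux; do
  printf '%s' "$s" | openssl dgst -sha256 -binary \
    | base32 | tr -d '=\n' | tr 'A-Z' 'a-z' | tail -c 1
  printf ' <- last base32 character for "%s"\n' "$s"
done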
1. Hash Structure: Hashes are typically designed so that their outputs have specific statistical properties. The first few characters often have more entropy or variability, meaning they are less likely to have patterns. The last characters may not maintain this randomness, especially if the encoding method has a tendency to produce less varied endings.
2. Collision Resistance: When using hashes, the goal is to minimize the risk of collisions (different inputs producing the same output). By using the first few characters, you leverage the full distribution of the hash. The last characters may not distribute in the same way, potentially increasing the likelihood of collisions.
3. Encoding Characteristics: Base32 encoding has a specific structure and padding that might influence the last characters more than the first. If the data being hashed is similar, the last characters may be more similar across different hashes.
4. Use Cases: In many applications (like generating unique identifiers), the beginning of the hash is often the most informative and varied. Relying on the end might reduce the uniqueness of generated identifiers, especially if a prefix has a specific context or meaning.
@aelaraji@aelaraji.com odd, I ran it under Ubuntu 24.04, and got the same result as @prologic@twtxt.net (who is on macOS), `zq4fgq`.
@prologic@twtxt.net I just realised that `jenny` also does what I want, as of the latest commit. Simply use `jenny --debug-feed <feed url>`, and it will do what I wanted too!
@movq@www.uninformativ.de alright, fair, and interesting. I was expecting them to be all the same (format wise), but it doesn't matter, for sure, as it works just fine. Thanks!
I have noticed that twtxt timestamps differ. For example:
- @prologic@twtxt.net (and I assume any Yarn user)
2024-09-18T13:16:17Z
- @lyse@lyse.isobeef.org
2024-09-17T21:15:00+02:00
- @aelaraji@aelaraji.com (and @movq@www.uninformativ.de, and me)
2024-09-18T05:43:13+00:00
So, which is right, or best?
I came across this Gallery Theme for Hugo, and @lyse@lyse.isobeef.org immediately came to mind. I think it would be a very fitting theme to use for all your photos, Lyse!
@prologic@twtxt.net the real conclusion is, is it going to change, to what, and when? :-P
@prologic@twtxt.net yes, that would work, except there is no `debug` command on my local `yarnc`. Are you talking about a potential future implementation here?
@prologic@twtxt.net I saw those, yes. I tried using `yarnc`, and it would work for a simple twtxt. Now, for a more convoluted one it truly becomes a nightmare using that tool for the job. I know there are talks about changing this hash, so this might be a moot point right now, but it would be nice to have a tool that:
- Would calculate the hash of a twtxt in a file.
- Would calculate all hashes on a `twtxt.txt` (local and remote).
Again, something lovely to have after any looming changes occur.
@aelaraji@aelaraji.com Woah! Overkill, but nicely laid out. Hey, the ultimate goal is for it to work, so, mission accomplished! :-)
Could someone knowledgeable reply with the steps a grandpa would take to calculate the hash of a twtxt from the CLI, using out-of-the-box tools? I swear I read about it somewhere, but can't find it.
@prologic@twtxt.net woot! Fast! I think you need to change your nick to âfastlogicâ instead. :-D
Thank you for adding the feature so fast, @prologic@twtxt.net! Look at how beautiful this one renders now. Oh my!
@prologic@twtxt.net sorry, but nope. Neither `jenny` nor `yarnd` supports it at all. This was treated as a thread because I picked one of @falsifian@www.falsifian.org's twtxts (with the "old subject"), and replied to it (hence starting the thread).
Oh, and you can't imagine the level of self-control I am exercising by restraining myself from editing that previous "missing-one-backtick" twtxt. LOL!
@aelaraji@aelaraji.com this is the little script I am using on my `publish_command`:
#!/usr/bin/env bash
# Render my twtxt feed as an HTML page for the website.
twtxt2html -t "Quark's twtxt feed" /var/www/sites/ferengi.one/twtxt.txt > /var/www/sites/ferengi.one/index.html
I named it `twtxtit`. :-)
@prologic@twtxt.net based on @falsifian@www.falsifian.org's findings, I don't believe this is quite accurate.
"`yarnd` (_at least_) doesn't support creating such a custom TwtSubject, but it will reply and respect and thread one if one was constructed."
@falsifian@www.falsifian.org yes, that happened around 2 years ago, on commit 5923078ea5.
@quark@ferengi.one It looks like the part about traditional topics has been removed from that page. Here is an old version that mentions it: https://web.archive.org/web/20221211165458/https://dev.twtxt.net/doc/twtsubjectextension.html . Still, I don't see any description of what is actually allowed between the parentheses. May be worth noting that twtxt.net is displaying the twts with the subject stripped, so some piece of code is recognizing it as a subject (or, at least, something to be removed).
Opened a couple of issues on twtxt2html. Maybe @prologic@twtxt.net will get to them after he has completed his luxurious recharging cycle. LOL.
@lyse@lyse.isobeef.org fully agree. I have never been a fan of relative times to begin with, so that one will go away, foh sho! :-D
@falsifian@www.falsifian.org based on the Twt Subject Extension, your subject is invalid. You can have custom subjects, that is, not a valid hash, but you simply can't put just anything and expect it to be treated as a `TwtSubject`, me thinks.
Hmm, but yarnd also isn't showing these twts as being part of a thread. @prologic@twtxt.net you said yarnd respects custom subjects. Shouldn't these twts count as having a custom subject, and get threaded together?
yarnd just doesn't render the subject. Fair enough. It's `(replyto http://darch.dk/twtxt.txt 2024-09-15T12:50:17Z)`, and if you don't want to go on a hunt, the twt hash is `weadxga`: https://twtxt.net/twt/weadxga
@sorenpeter@darch.dk I like this idea. Just for fun, I'm using a variant in this twt. (Also because I'm curious how non-hash subjects appear in jenny and yarn.)
URLs can contain commas, so I suggest a different character to separate the url from the date. In this twt I've used a space (also after "replyto", for symmetry).
I think this solves:
- Changing feed identities: although @mckinley@twtxt.net points out URLs can change, I think this syntax should be okay as long as the feed at that URL can be fetched, and as long as the current canonical URL for the feed lists this one as an alternate.
- editing, if you don't care about message integrity
- finding the root of a thread, if you're not following the author
An optional hash could be added if message integrity is desired. (E.g. if you don't trust the feed author not to make a misleading edit.) Other recent suggestions about how to deal with edits and hashes might be applicable then.
People publishing multiple twts per second should include sub-second precision in their timestamps. As you suggested, the timestamp could just be copied verbatim.
@prologic@twtxt.net I have some ideas:
- Add smartypants rendering, just like Yarn has.
- Add the ability to create individual twtxts, each named after their hash.
- Fix the formatting of the help. :-P
Woot, yes! It works perfectly. By the time you see my twtxt, it is already at the main Ferengi.one website.
@bender@twtxt.net that's not your change, silly robot, it is mine! LOL. I am finding @prologic@twtxt.net's tool handy to refer to previous posts (as reference, for example).
@lyse@lyse.isobeef.org Sorry, I don't think I ever had charset=utf8. I just noticed that a few days ago. OpenBSD's httpd might not support including a parameter with the mime type, unfortunately. I'm going to look into it.
@movq@www.uninformativ.de I didn't run the command as you recommended, but I wiped things once more, ran `jenny -f`, and this time got:
david@arrakis:~$ jenny -f
Fetching archived feed https://anthony.buc.ci/user/abucci/twtxt.txt/1 (configured as abucci, https://anthony.buc.ci/user/abucci/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2024-04.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://darch.dk/twtxt-archive.txt (configured as soren, https://darch.dk/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2024-04-21_6v47cua.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://twtxt.net/user/prologic/twtxt.txt/1 (configured as prologic, https://twtxt.net/user/prologic/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2024-03.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2022-12-21_2us6qbq.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://twtxt.net/user/prologic/twtxt.txt/2 (configured as prologic, https://twtxt.net/user/prologic/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2024-02.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2022-01-14_ew5gzca.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://twtxt.net/user/prologic/twtxt.txt/3 (configured as prologic, https://twtxt.net/user/prologic/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2024-01.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-12-23_f6y65bq.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://twtxt.net/user/prologic/twtxt.txt/4 (configured as prologic, https://twtxt.net/user/prologic/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-12.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-12-04_e4x7yba.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://twtxt.net/user/prologic/twtxt.txt/5 (configured as prologic, https://twtxt.net/user/prologic/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-11.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-11-18_42tjxba.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://twtxt.net/user/prologic/twtxt.txt/6 (configured as prologic, https://twtxt.net/user/prologic/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-10.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-11-08_i2wnvaa.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-09.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-10-23_kvwn5oa.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-08.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-10-11_mljudaa.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-07.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-09-22_5mkqwua.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-06.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-07-27_xcnzmlq.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-05.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-06-16_mtedqya.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-04.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-04-29_z7lvzja.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-03.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-03-19_xjabvhq.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-02.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-02-24_te4a6oa.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2023-01.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2021-01-26_qxgigma.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-12.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://www.uninformativ.de/twtxt-old_2020-12-13_igfnala.txt (configured as movq, https://www.uninformativ.de/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-11.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-10.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-09.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-08.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-07.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-06.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-05.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-04.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-03.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-02.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2022-01.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-12.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-11.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-10.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-09.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-08.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-07.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-06.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-05.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-04.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-03.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-02.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2021-01.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Fetching archived feed https://lyse.isobeef.org/twtxt-2020-12.txt (configured as lyse, https://lyse.isobeef.org/twtxt.txt)
Notice that @prologic@twtxt.net's `/6` is there. I found the twtxt then. Kind of odd it didn't show before.
Maybe I'm being a bit too purist/minimalistic here. As I said before (in one of the 1372739 posts on this topic, or maybe I didn't even send that twt, I don't remember), I never really liked hashes to begin with. They aren't super hard to implement, but they are kind of against the beauty of the original twtxt, because you need special client support for them. It's not something that you could write manually in your `twtxt.txt` file. With @sorenpeter@darch.dk's proposal, though, that would be possible.
Tangentially related, I was a bit disappointed to learn that the twt subject extension is now never used except with hashes. Manually-written subjects sounded so beautifully ad-hoc and organic as a way to disambiguate replies. Maybe I'll try it some time just for fun.
(#w4chkna) @falsifian@www.falsifian.org You mean the idea of being able to inline `# url =` changes in your feed?
Yes, that one. But @lyse@lyse.isobeef.org pointed out it suffers a compatibility issue, since currently the first listed url is used for hashing, not the last. Unless your feed is in reverse chronological order. Heh, I guess another metadata field could indicate which version to use.
Or maybe url changes could somehow be combined with the archive feeds extension? Could the url metadata field be local to each archive file, so that to switch to a new url all you need to do is archive everything you've got and start a new file at the new url?
I don't think it's that likely my feed url will change.
@movq@www.uninformativ.de I did the same. `jenny` fetches archives, yes, but the twtxt I am referring to is no longer there. If you fetch it, but I don't, there is certainly something going on…
@movq@www.uninformativ.de I did start from scratch today. I am using commit `6e8ce5afdabd5eac22eae4275407b3bd2a167daf (HEAD -> main, origin/main, origin/HEAD)`; I keep myself up-to-date, LOL. Still, that specific twtxt (`o6dsrga`) is no longer there.
After re-fetching feeds, the earliest twtxt I have from you is `n7gavoa`.
@prologic@twtxt.net, your first twtxt ever was `o6dsrga`, but I can't find the source for it (the raw file). I reset everything, and re-fetched fresh feeds (allegedly including archives). Where is it?
@movq@www.uninformativ.de I figured it would be something like this, yet you were able to reply just fine, and I wasn't. Looking at your `twtxt.txt` I see this line:
2024-09-16T17:37:14+00:00 (#o6dsrga) @<prologic https://twtxt.net/user/prologic/twtxt.txt> @<quark https://ferengi.one/twtxt.txt> This is what I get.
Which is using the right hash. Mine, on the other hand, when I replied to the original, old-style message (`Message-Id: <o6dsrga>`), looks like this:
2024-09-16T16:42:27+00:00 (#o) @<prologic https://twtxt.net/user/prologic/twtxt.txt> this was your first twtxt. Cool! :-P
What did you do to make yours work? I simply went to the oldest of @prologic@twtxt.net's entries in my Maildir and replied to it (`jenny` set the reply-to hash to `#o`, even though the `Message-Id` is `o6dsrga`). Since `jenny` can't fetch archived twtxts, how could I go about re-fetching everything? And, most importantly, would re-fetching fix the `Message-Id:`?
@prologic@twtxt.net you will always be replying to the OP - that is what the twthash is a shorthand for, is it not?!
@movq@www.uninformativ.de I'm glad you like it. A mention (`@<movq https://www.uninformativ.de/twtxt.txt>`) is also long, but we live with it anyway. In a way a `replyto:` is just a mention of a twt instead of a feed/person. Maybe we could even model the syntax for replies on mentions: `(#<2024-09-17T08:39:18Z https://www.eksempel.dk/twtxt.txt>)`?!
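For example, a reply line in a feed could then look like this (a made-up line, just to show the shape; the reply timestamp and text are hypothetical):
2024-09-17T09:12:00+00:00	(#<2024-09-17T08:39:18Z https://www.eksempel.dk/twtxt.txt>) Sounds good to me!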
@mckinley@twtxt.net Yes, changing domains is a problem if you tie your identity to an https url. But I also worry about being stuck with a key I can't rotate. Whatever gets used, it would be nice to be able to rotate identities. I like @lyse@lyse.isobeef.org's idea for that.
This is how my original message shows up on `jenny`:
From: quark <quark>
Subject: (#o) @prologic this was your first twtxt. Cool! :-P
Date: Mon, 16 Sep 2024 12:42:27 -0400
Message-Id: <k7imvia@twtxt>
X-twtxt-feed-url: https://ferengi.one/twtxt.txt
(#o) @<prologic https://twtxt.net/user/prologic/twtxt.txt> this was your first twtxt. Cool! :-P
Hmm… I replied to this message:
From: prologic <prologic>
Subject: Hello World!
Date: Sat, 18 Jul 2020 08:39:52 -0400
Message-Id: <o6dsrga>
X-twtxt-feed-url: https://twtxt.net/user/prologic/twtxt.txt
Hello World!
And see how the hash shows… Is it because that hash is no longer used?
@prologic@twtxt.net this was your first twtxt. Cool! :-P
@movq@www.uninformativ.de we can shorten it by six characters, with `(r:https://...)`.
@prologic@twtxt.net I am going to light some candles this weekend to "La Virgen de Macarena" to make it happen! :-D
@prologic@twtxt.net you need to catch up with my twtxts, mate. :-P
@aelaraji@aelaraji.com grats! See how much trouble an edited twtxt can cause? Wish there was a simpler solution. Alas, I don't have much hope.
@movq@www.uninformativ.de I can have more than one Yarn, correct? Like:
"yarn_pods_for_discovery": ["https://twtxt.net", "https://txt.sour.is"],
@aelaraji@aelaraji.com makes sense, probably. The twtxt was already in my Maildir, that's why I can fetch it. I fetch every 3 minutes (sssh, don't tell anyone!). LOL!
@aelaraji@aelaraji.com check "Replies". :-D
More:
Subject: The [tag URI scheme](https://en.wikipedia.org/wiki/Tag_URI_scheme) looks interesting. I like that it human read- and writable. And since we already got the timestamp in the twtxt.txt it would be
somewhat trivial to parse. But there are still the issue with what the name/id should be... Maybe it doesn't have to bee that stick? Instead of using `tag:` as the prefix/protocol, it would more it clear
what we are talking about by using `in-reply-to:` (https://indieweb.org/in-reply-to) or `replyto:` similar to `mailto:` 1. `(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)' 2.
`(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)' 2. `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)' I know it's longer that 7-11 characters, but it's self-explaining when looking at the
twtxt.txt in the raw, and the cases above can all be caught with this regex: `\([\w-]*reply[\w-]*\:` Is this something that would work?
Subject: The [tag URI scheme](https://en.wikipedia.org/wiki/Tag_URI_scheme) looks interesting. I like that it human read- and writable. And since we already got the timestamp in the twtxt.txt it would be
somewhat trivial to parse. But there are still the issue with what the name/id should be... Maybe it doesn't have to bee that stick? Instead of using `tag:` as the prefix/protocol, it would more it clear
what we are talking about by using `in-reply-to:` (https://indieweb.org/in-reply-to) or `replyto:` similar to `mailto:` 1. `(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)` 2.
`(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` 3. `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` I know it's longer that 7-11 characters, but it's self-explaining when looking at the
twtxt.txt in the raw, and the cases above can all be caught with this regex: `\([\w-]*reply[\w-]*\:` Is this something that would work?
Notice the difference? Soren edited, and broke everything.
See:
Message-Id: <hns535a@twtxt>
X-twtxt-feed-url: https://darch.dk/twtxt.txt
In-Reply-To: <pvju5cq@twtxt>
And
Message-Id: <weadxga@twtxt>
X-twtxt-feed-url: http://darch.dk/twtxt.txt
In-Reply-To: <pvju5cq@twtxt>
Two feed URLs, one HTTPS, the other HTTP.
@aelaraji@aelaraji.com no, it is not just you. Do fetch the parent with jenny, and you will see there are two messages with different hashes. Soren did something funky, for sure.
@aelaraji@aelaraji.com hmm, I see all of your twtxts just fine. Now, that's a puzzle!
@mckinley@twtxt.net Thanks for the feedback.
- Yeah, I agree that the nick should not be part of the syntax. Any valid URL to a twtxt.txt file should be enough and is clearer, so it is not confused with an email address (one of the issues with webfinger and fediverse handles).
- I think any valid URL would work, since we are not bound to look for exact matches. Accepting http and https, as well as gemini and gopher, could all work as long as the path to the twtxt.txt is the same.
- My idea is that you quote the timestamp as it is in the original twtxt.txt that you are referring to, so you can do it by simply copy/pasting. Also, what are the chances that the same human will make two different posts within the same second?!
Regarding the whole cryptographic keys for identity, to me it seems like an unnecessary layer of complexity. If you move to a new house or city you tell people that you moved; you can do the same in a twtxt.txt. Just post something like "I moved to this new URL, please follow me there!" I did that with my feeds at least twice, and you guys still seem to read my posts. :)
The tag URI scheme looks interesting. I like that it is human read- and writable. And since we already got the timestamp in the twtxt.txt it would be somewhat trivial to parse. But there is still the issue of what the name/id should be… Maybe it doesn't have to be that strict?
Instead of using `tag:` as the prefix/protocol, it would make it more clear what we are talking about by using `in-reply-to:` (https://indieweb.org/in-reply-to) or `replyto:`, similar to `mailto:`:
(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)
(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)
(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)
I know it's longer than 7-11 characters, but it's self-explaining when looking at the twtxt.txt in the raw, and the cases above can all be caught with this regex: `\([\w-]*reply[\w-]*\:`
Is this something that would work?
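A quick check of that regex against the three examples above, plus a hash-style subject (a sketch; it assumes GNU grep with -P for the \w shorthand, and [[:alnum:]_-] would work where -P is unavailable):
# The first three lines match; the hash-style subject does not.
printf '%s\n' \
  '(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z) some twt text' \
  '(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z) some twt text' \
  '(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z) some twt text' \
  '(#abcdefg) a hash-style subject, which should not match' \
  | grep -P '\([\w-]*reply[\w-]*\:'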
Thank you @aelaraji@aelaraji.com, I'm glad you like it. I use PHP because it's everywhere on cheap hosting and there is no need for the user to log into a terminal to set it up. Timeline is not meant to be used locally. For that I think something like twtxt2html is a better fit. (And happy to see you using simple.css on your new log page. ;)
@prologic@twtxt.net Brute force. I just hashed a bunch of versions of both tweets until I found a collision.
I mostly just wanted an excuse to write the program. I don't know how I feel about actually using super-long hashes; could make the twts annoying to read if you prefer to view them untransformed.
@prologic@twtxt.net earlier you suggested extending hashes to 11 characters, but here's an argument that they should be even longer than that.
Imagine I found this twt one day at https://example.com/twtxt.txt :
2024-09-14T22:00Z Useful backup command: rsync -a "$HOME" /mnt/backup
and I responded with "(#5dgoirqemeq) Thanks for the tip!". Then I've endorsed the twt, but it could later get changed to
2024-09-14T22:00Z Useful backup command: rm -rf /some_important_directory
which also has an 11-character base32 hash of 5dgoirqemeq. (I'm using the existing hashing method with https://example.com/twtxt.txt as the feed url, but I'm taking 11 characters instead of 7 from the end of the base32 encoding.)
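For concreteness, roughly that computation as a shell sketch. I'm assuming the current scheme is a Blake2b 256-bit digest over the feed url, timestamp, and twt text joined by newlines, base32-encoded in lowercase without padding; b2sum and xxd are assumed to be installed, and the exact normalization rules may differ, so treat this as illustrative only:
# Sketch: hash "url\ntimestamp\ntext" and keep the last N base32 characters.
url='https://example.com/twtxt.txt'
ts='2024-09-14T22:00Z'
text='Useful backup command: rsync -a "$HOME" /mnt/backup'
hash=$(printf '%s\n%s\n%s' "$url" "$ts" "$text" \
  | b2sum -l 256 | cut -d ' ' -f 1 | xxd -r -p \
  | base32 | tr -d '=\n' | tr 'A-Z' 'a-z')
echo "${hash: -11}"   # last 11 characters; the current spec length would be 7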
That's what I meant by "spoofing" in an earlier twt.
I don't know if preventing this sort of attack should be a goal, but if it is, the number of bits in the hash should be at least two times log2(number of attempts we want to defend against), where the "two times" is because of the birthday paradox.
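As a rough worked number (the 2^40 attack budget is just an arbitrary example of mine):
# bits ~ 2 * log2(attempts); base32 carries 5 bits per character.
awk 'BEGIN { attempts = 2^40; bits = 2 * log(attempts) / log(2);
             printf "~%.0f bits, i.e. about %d base32 characters\n", bits, (bits + 4) / 5 }'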
Side note: current hashes always end with "a" or "q", which is a bit wasteful. Maybe we should take the first N characters of the base32 encoding instead of the last N.
Code I used for the above example: https://fossil.falsifian.org/misc/file?name=src/twt_collision/find_collision.c
I only needed to compute 43394987 hashes to find it.
@prx@si3t.ch I haven't messed with rdomains, but it might still help if you included the command that produced that error (and whether you ran it as root).
They're in Section 6:
Receivers should adopt UDP GRO. (Something about saving CPU when processing UDP packets; I'm a bit fuzzy about it.) And they have suggestions for making GRO more useful for QUIC.
Some other receiver-side suggestions: "sending delayed QUIC ACKs"; "using recvmsg to read multiple UDP packets in a single system call".
Use multiple threads when receiving large files.
HTTPS is supposed to do [verification] anyway.
TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn't, for example, verify that a file downloaded from server A is from the same entity as the one from server B.
I was confused by this response for a while, but now I think I understand what you're getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can't verify a feed unless I download it myself from the origin server. Is that right?
I.e. if the HTTPS origin server is online and I don't mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.
feed locations [being] URLs gives some flexibility
It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, `urn:uuid:*`, or a regular old URL if you wanted to. The spec seems to indicate that the `url` tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I'm not very familiar with IP{F,N}S but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)
I'm also not very familiar with IPFS or IPNS.
I haven't been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible so it still works if I'm too lazy to change how I publish my feed :-)
@prologic@twtxt.net a signature IS encryption in reverse. If my private key becomes compromised then they can impersonate me. Being able to manage promotion and revocation of keys is needed even in a system where it's used just for signatures.
I was not suggesting that everyone needs to set up a working webfinger endpoint, but that we take the format of nick+(sub)domain as the base for generating the hash, together with the message date and content.
If we omit the protocol prefix from the way we do things now, will that not solve most of the problems? In the case of gemini://gemini.ctrl-c.club/~nristen/twtxt.txt they also have a working twtxt.txt at https://ctrl-c.club/~nristen/twtxt.txt … damn, I just noticed the gemini. subdomain.
Okay, what about defining a preferred protocol as part of the hash schema? So 1: https, 2: http, 3: gemini, 4: gopher?
@sorenpeter@darch.dk There was a client that would generate a unique hash for each twt. It didn't get wide adoption.
@prologic@twtxt.net identity and content integrity are two different problems.
@xuu@txt.sour.is Thanks for the link. I found a pdf on one of the authors' home pages: https://ahmadhassandebugs.github.io/assets/pdf/quic_www24.pdf . I wonder how the protocol was evaluated closer to the time it became a standard, and whether anything has changed. I wonder if network speeds have grown faster than CPU speeds since then. The paper says the performance is around the same below around 600 Mbps.
To be fair, I don't think QUIC was ever expected to be faster for transferring a single stream of data. I think QUIC is supposed to reduce the impact of a dropped packet by making sure it only affects the stream it's part of. I imagine QUIC still has that advantage, and this paper is showing the other side of a tradeoff.
@prologic@twtxt.net does that mean that for every new post (not replies) the client will have to generate a UUID or similar when posting and add that to the twt?
@prologic@twtxt.net excellent, thanks!
@jmjl@tilde.green howdy! Sorry for mistaking you for https://blog.nfld.uk/ (jlj), but glad to connect. Cheers!
How little data is needed for generating the hashes? Instead of the full URL, can we make do with just the domain (example.net), so we avoid the conflicts between `gemini://`, `https://` and plain `http://` (like in my own twtxt.txt)? Or construct something like a webfinger id, `nick@domain` (also used by Mastodon etc.), from the domain and nick if there is one, else use the domain as the nick as well.
@lyse@lyse.isobeef.org This looks like a nice way to do it.
Another thought: if clients can't agree on the url (for example, if we switch to this new way, but some old clients still do it the old way), that could be mitigated by computing many hashes for each twt: one for every url in the feed. So, if a feed has three URLs, every twt is associated with three hashes when it comes time to put threads together.
A client still needs to choose one url to use for the hash when composing a reply, but this might add some breathing room if there's a period when clients are doing different things.
(From what I understand of jenny, this would be difficult to implement there since each pseudo-email can only have one msgid to match to the in-reply-to headers. I don't know about other clients.)
@movq@www.uninformativ.de Another idea: just hash the feed url and time, without the message content. And don't twt more than once per second.
Maybe you could even just use the time, and rely on @-mentions to disambiguate. Not sure how that would work out.
Though I kind of like the idea of twts being immutable. At least, it's clear which version of a twt you're replying to (assuming nobody is engineering hash collisions).
@prologic@twtxt.net Some criticisms and a possible alternative direction:
Key rotation. I'm not a security person, but my understanding is that it's good to be able to give keys an expiry date and replace them with new ones periodically.
It makes maintaining a feed more complicated. Now instead of just needing to put a file on a web server (and scan the logs for user agents) I also need to do this. What brought me to twtxt was its radical simplicity.
Instead, maybe we should think about a way to allow old urls to be rotated out? Like, my metadata could somehow say that X used to be my primary URL, but going forward from date D onward my primary url is Y. (Or, if you really want to use public key cryptography, maybe something similar could be used for key rotation there.)
It's nice that your scheme would add a way to verify the twts you download, but https is supposed to do that anyway. If you don't trust https to do that (maybe you don't like relying on root CAs?) then maybe your preferred solution should be reflected by your primary feed url. E.g. if you prefer the security offered by IPFS, then maybe an IPNS url would do the trick. The fact that feed locations are URLs gives some flexibility. (But then rotation is still an issue, if I understand ipns right.)
@movq@www.uninformativ.de @prologic@twtxt.net Another option would be: when you edit a twt, prefix the new one with (#[old hash]) and some indication that it's an edited version of the original tweet with that hash. E.g. if the hash used to be abcd123, the new version should start "(#abcd123) (redit)".
What I like about this is that clients that don't know this convention will still stick it in the same thread. And I feel it's in the spirit of the old pre-hash (subject) convention, though that's before my time.
I guess it may not work when the edited twt itself is a reply, and there are replies to it. Maybe that could be solved by letting twts have more than one (subject) prefix.
But the great thing about the current system is that nobody can spoof message IDs.
I don't think twtxt hashes are long enough to prevent spoofing.
@lyse@lyse.isobeef.org Thanks
@prologic@twtxt.net Perfect, thanks. For my own future reference: `curl -H "Accept: application/json" https://twtxt.net/twt/st3wsda`
@bender@twtxt.net So far I've been following feeds fairly liberally. I'll check to see if we have anything in common and lean toward following, just because this is new to me and it feels like a small community. But I'm still figuring out what I want. Later I'll probably either trim my follower list or come up with some way to prioritize the feeds I'm more interested in.
@prologic@twtxt.net Specifically, I could view yarnd's copy here, but only as rendered for a human to view: https://twtxt.net/twt/st3wsda