Taking the last n characters of a base32-encoded hash instead of the first n can be problematic, mainly because of the encoding rather than the hash itself:
Digest bits: A cryptographic hash is designed so that every bit of the digest is uniformly distributed. In the raw digest there is nothing special about the beginning or the end.
Encoding boundaries: Base32 packs 5 bits per character, and 256 is not a multiple of 5. A 256-bit digest becomes 52 characters, but the 52nd character encodes only a single leftover bit (padded with zeros), so it can take just two values. The first characters each carry a full 5 bits; the last one does not.
Collision resistance: Because of that, the last n characters carry fewer bits of entropy than the first n, which slightly increases the likelihood of truncated-hash collisions.
Use cases: In applications like generating unique identifiers, every bit of entropy in the truncated hash counts, so wasting the final character's bits reduces uniqueness.
In summary, using the first n characters preserves the full entropy of the truncation, making it the safer choice in most cases.
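To make the encoding point concrete, here is a quick sketch (assuming a 256-bit digest such as blake2b, base32-encoded without padding, as discussed elsewhere in this thread) showing that the final character can only ever be one of two values:

```python
# Sketch: why the *last* base32 character of a 256-bit digest is nearly
# constant. 256 bits / 5 bits-per-char = 51.2, so the 52nd character
# encodes only 1 real bit and can only be 'a' or 'q'.
import base64, hashlib, os

last_chars = set()
for _ in range(1000):
    digest = hashlib.blake2b(os.urandom(16), digest_size=32).digest()
    b32 = base64.b32encode(digest).decode().rstrip("=").lower()
    last_chars.add(b32[-1])

print(last_chars)  # {'a', 'q'}: 1 bit, versus 5 bits for b32[0]
```

So the last 7 characters carry 31 bits of entropy where the first 7 carry the full 35.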
@quark@ferengi.one Do you mean something like this?
$ ./yarnc debug ~/Public/twtxt.txt | tail -n 1
kp4zitq 2024-09-08T02:08:45Z (#wsdbfna) @<aelaraji https://aelaraji.com/twtxt.txt> My work has this thing called "compressed work", where you can **buy** extra time off (_as much as 4 additional weeks_) per year. It comes out of your pay though, so it's not exactly a 4-day work week but it could be useful, just haven't tired it yet as I'm not entirely sure how it'll affect my net pay
@prologic@twtxt.net I saw those, yes. I tried using `yarnc`, and it would work for a simple twtxt. Now, for a more convoluted one it truly becomes a nightmare using that tool for the job. I know there are talks about changing this hash, so this might be a moot point right now, but it would be nice to have a tool that:
- Would calculate the hash of a twtxt in a file.
- Would calculate all hashes on a `twtxt.txt` (local and remote).
Again, something lovely to have after any looming changes occur.
Could someone knowledgeable reply with the steps a grandpa will take to calculate the hash of a twtxt from the CLI, using out-of-the-box tools? I swear I read about it somewhere, but can't find it.
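Not exactly out-of-the-box, but close: if Python 3 counts as a grandpa-friendly tool, here is a sketch of my understanding of the current hash construction (blake2b-256 over URL, timestamp, and content joined by newlines; lowercase unpadded base32; last 7 characters). The exact timestamp normalization rules live in the Twt Hash extension spec, so treat this as an approximation:

```python
#!/usr/bin/env python3
# Sketch of the twt hash as I understand the current spec. Check the Twt
# Hash extension for the exact timestamp normalization before relying on it.
import base64, hashlib, sys

def twt_hash(url: str, timestamp: str, content: str) -> str:
    payload = f"{url}\n{timestamp}\n{content}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    return base64.b32encode(digest).decode().rstrip("=").lower()[-7:]

# usage: twthash.py <feed-url> <rfc3339-timestamp> <content>
print(twt_hash(sys.argv[1], sys.argv[2], sys.argv[3]))
```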
@aelaraji@aelaraji.com this is the little script I am using as my `publish_command`:
#!/usr/bin/env bash
twtxt2html -t "Quark's twtxt feed" /var/www/sites/ferengi.one/twtxt.txt > /var/www/sites/ferengi.one/index.html
I named it `twtxtit`. :-)
@sorenpeter@darch.dk I like this idea. Just for fun, I'm using a variant in this twt. (Also because I'm curious how non-hash subjects appear in jenny and yarn.)
URLs can contain commas, so I suggest a different character to separate the url from the date. In this twt I've used a space (also after "replyto", for symmetry).
I think this solves:
- Changing feed identities: although @mckinley@twtxt.net points out URLs can change, I think this syntax should be okay as long as the feed at that URL can be fetched, and as long as the current canonical URL for the feed lists this one as an alternate.
- editing, if you don't care about message integrity
- finding the root of a thread, if you're not following the author
An optional hash could be added if message integrity is desired. (E.g. if you don't trust the feed author not to make a misleading edit.) Other recent suggestions about how to deal with edits and hashes might be applicable then.
People publishing multiple twts per second should include sub-second precision in their timestamps. As you suggested, the timestamp could just be copied verbatim.
Trying to figure out how to use the `publish_command` to vomit the HTML into a file, using `twtxt2html`.
@movq@www.uninformativ.de Non-ASCII characters were broken. Like U+2028, degrees (°), etc.
Turns out I used a silly library to detect the encoding and transform to UTF-8 if needed. When there is no Content-Type header, like for local files, it looks at the first 1024 bytes. Since it only saw ASCII in that region, the damn thing assumed the data to be in Windows-1252 (which for web pages kinda makes sense):
// TODO: change default depending on user's locale?
return charmap.Windows1252, "windows-1252", false
https://cs.opensource.google/go/x/net/+/master:html/charset/charset.go;l=102
This default is hardcoded and cannot be changed.
Trying to be smart and adding automatic support for other encodings turned out to be a bad move on my end. At least I can reduce my dependency list again. :-)
I now just reject everything that explicitly specifies something different than `text/plain` and an optional charset other than `utf-8` (ignoring casing). Otherwise I assume it's in UTF-8 (just like the twtxt file format specification mandates) and hope for the best.
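For illustration, the accept/reject rule described above could look something like this (a Python sketch of the logic, not the actual Go code):

```python
# Sketch of the rule above: accept when there is no Content-Type at all
# (e.g. local files), or when it is text/plain with at most charset=utf-8.
from email.message import Message

def acceptable(content_type=None):
    if content_type is None:
        return True                      # no header: assume UTF-8
    msg = Message()
    msg["Content-Type"] = content_type
    if msg.get_content_type() != "text/plain":
        return False
    charset = msg.get_param("charset")
    return charset is None or charset.lower() == "utf-8"

assert acceptable("text/plain; charset=UTF-8")
assert not acceptable("text/html")
```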
`-T`/`--template` in case you need a custom template.
@bender@twtxt.net I should put the template that is used by default as a file in the repo. Look at the source for now and you'll see.
`(#hash;#originalHash)` would also work.
Maybe I'm being a bit too purist/minimalistic here. As I said before (in one of the 1372739 posts on this topic, or maybe I didn't even send that twt, I don't remember), I never really liked hashes to begin with. They aren't super hard to implement, but they are kind of against the beauty of the original twtxt, because you need special client support for them. It's not something that you could write manually in your `twtxt.txt` file. With @sorenpeter@darch.dk's proposal, though, that would be possible.
Tangentially related, I was a bit disappointed to learn that the twt subject extension is now never used except with hashes. Manually-written subjects sounded so beautifully ad-hoc and organic as a way to disambiguate replies. Maybe I'll try it some time just for fun.
Hmmmm, I somehow ran into an encoding problem where my inserted data ended up mangled in the database. But both SQLite and Go use UTF-8. What's happening here? :-?
@falsifian@www.falsifian.org You mean the idea of being able to inline `# url =` changes in your feed?
Yes, that one. But @lyse@lyse.isobeef.org pointed out it suffers a compatibility issue, since currently the first listed url is used for hashing, not the last. Unless your feed is in reverse chronological order. Heh, I guess another metadata field could indicate which version to use.
Or maybe url changes could somehow be combined with the archive feeds extension? Could the url metadata field be local to each archive file, so that to switch to a new url all you need to do is archive everything you've got and start a new file at the new url?
I don't think it's that likely my feed url will change.
@movq@www.uninformativ.de I did start from scratch, today. I am using commit `6e8ce5afdabd5eac22eae4275407b3bd2a167daf (HEAD -> main, origin/main, origin/HEAD)`; I keep myself up-to-date, LOL. Still, that specific twtxt (`o6dsrga`) is no longer there.
Since `jenny` can't fetch archived twtxts, I wiped my entire maildir and re-fetched everything. I did that recently because @aelaraji@aelaraji.com asked me to, but I guess I also did this back in 2023.
What did you do to make yours work?
jenny does fetch archived feeds during the normal `jenny -f` operation. Only when using the recently implemented `--fetch-context` are archived feeds not fetched (yet). That was an oversight and I intend to fix that.
@movq@www.uninformativ.de I figured it would be something like this, yet you were able to reply just fine, and I wasn't. Looking at your `twtxt.txt` I see this line:
2024-09-16T17:37:14+00:00 (#o6dsrga) @<prologic https://twtxt.net/user/prologic/twtxt.txt>
@<quark https://ferengi.one/twtxt.txt> This is what I get.
Which is using the right hash. Mine, on the other hand, when I replied to the original, old-style message (`Message-Id: <o6dsrga>`), looks like this:
2024-09-16T16:42:27+00:00 (#o) @<prologic https://twtxt.net/user/prologic/twtxt.txt> this was your first twtxt. Cool! :-P
What did you do to make yours work? I simply went to the oldest @prologic@twtxt.net entry in my Maildir and replied to it (`jenny` set the reply-to hash to `#o`, even though the `Message-Id` is `o6dsrga`). Since `jenny` can't fetch archived twtxts, how could I go about re-fetching everything? And, most importantly, would re-fetching fix the `Message-Id`?
@mckinley@twtxt.net Yes, changing domains is a problem if you tie your identity to an https url. But I also worry about being stuck with a key I can't rotate. Whatever gets used, it would be nice to be able to rotate identities. I like @lyse@lyse.isobeef.org's idea for that.
Hmm… I replied to this message:
From: prologic <prologic>
Subject: Hello World!
Date: Sat, 18 Jul 2020 08:39:52 -0400
Message-Id: <o6dsrga>
X-twtxt-feed-url: https://twtxt.net/user/prologic/twtxt.txt
Hello World!
And see how the hash shows… Is it because that hash is no longer used?
(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)
I think I like this a lot.
The problem with using hashes always was that they're "one-directional": You can construct a hash from URL + timestamp + twt, but you cannot do the inverse. When I see a bare hash, I have no idea what it could possibly refer to.
But of course something like `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` has all the information you need. This could simplify twt/feed discovery quite a bit, couldn't it? That thing that I just implemented (jenny asking some Yarn pod for some twt hash) would not be necessary anymore. Clients could easily and automatically fetch complete threads instead of requiring the user to follow all relevant feeds.
Only using the timestamp to identify a twt also solves the edit problem.
It is even better for non-Yarn clients, because you now don't have to read, understand, and implement a "twt hash specification" before you can reply to someone.
The only problem, really, is that `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` is so long. Clients would have to try harder to hide this.
More:
Subject: The [tag URI scheme](https://en.wikipedia.org/wiki/Tag_URI_scheme) looks interesting. I like that it human read- and writable. And since we already got the timestamp in the twtxt.txt it would be
somewhat trivial to parse. But there are still the issue with what the name/id should be... Maybe it doesn't have to bee that stick? Instead of using `tag:` as the prefix/protocol, it would more it clear
what we are talking about by using `in-reply-to:` (https://indieweb.org/in-reply-to) or `replyto:` similar to `mailto:` 1. `(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)' 2.
`(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)' 2. `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)' I know it's longer that 7-11 characters, but it's self-explaining when looking at the
twtxt.txt in the raw, and the cases above can all be caught with this regex: `\([\w-]*reply[\w-]*\:` Is this something that would work?
Subject: The [tag URI scheme](https://en.wikipedia.org/wiki/Tag_URI_scheme) looks interesting. I like that it human read- and writable. And since we already got the timestamp in the twtxt.txt it would be
somewhat trivial to parse. But there are still the issue with what the name/id should be... Maybe it doesn't have to bee that stick? Instead of using `tag:` as the prefix/protocol, it would more it clear
what we are talking about by using `in-reply-to:` (https://indieweb.org/in-reply-to) or `replyto:` similar to `mailto:` 1. `(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)` 2.
`(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` 3. `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` I know it's longer that 7-11 characters, but it's self-explaining when looking at the
twtxt.txt in the raw, and the cases above can all be caught with this regex: `\([\w-]*reply[\w-]*\:` Is this something that would work?
Notice the difference? Soren edited, and broke everything.
The tag URI scheme looks interesting. I like that it's human read- and writable. And since we already got the timestamp in the twtxt.txt it would be somewhat trivial to parse. But there is still the issue of what the name/id should be… Maybe it doesn't have to be that strict?
Instead of using `tag:` as the prefix/protocol, it would make it clearer what we are talking about by using `in-reply-to:` (https://indieweb.org/in-reply-to) or `replyto:`, similar to `mailto:`:
1. `(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)`
2. `(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)`
3. `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)`
I know it's longer than 7-11 characters, but it's self-explaining when looking at the twtxt.txt in the raw, and the cases above can all be caught with this regex: `\([\w-]*reply[\w-]*\:`
Is this something that would work?
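For what it's worth, the proposed subjects do seem machine-parseable; a sketch building on the regex above (hypothetical helper, not from any existing client):

```python
# Sketch: extend soren's regex to split the proposed subjects into the
# reply keyword, the target (URL or nick@domain), and the timestamp.
import re

SUBJECT = re.compile(r"\(([\w-]*reply[\w-]*):(.+?),([^,)]+)\)")

twt = "(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z) Nice idea!"
m = SUBJECT.search(twt)
if m:
    kind, target, ts = m.groups()
    print(kind, target, ts)  # replyto http://darch.dk/twtxt.txt 2024-09-15T12:06:27Z
```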
Thank you @aelaraji@aelaraji.com, I'm glad you like it. I use PHP because it's everywhere on cheap hosting and there's no need for the user to log into a terminal to set it up. Timeline is not meant to be used locally. For that I think something like twtxt2html is a better fit. (And happy to see you using simple.css on your new log page ;)
@falsifian@www.falsifian.org TLS won't help you if you change your domain name. How will people know if it's really you? Maybe that's not the biggest problem for something with such low stakes as twtxt, but it's a reasonable concern that could be solved using signatures from an unchanging cryptographic key.
This idea is the basis of Nostr. Notes can be posted to many relays and every note is signed with your private key. It doesn't matter where you get the note from, your client can verify its authenticity. That way, relays don't need to be trusted.
@prologic@twtxt.net Brute force. I just hashed a bunch of versions of both tweets until I found a collision.
I mostly just wanted an excuse to write the program. I don't know how I feel about actually using super-long hashes; it could make the twts annoying to read if you prefer to view them untransformed.
@prologic@twtxt.net earlier you suggested extending hashes to 11 characters, but here's an argument that they should be even longer than that.
Imagine I found this twt one day at https://example.com/twtxt.txt:
2024-09-14T22:00Z Useful backup command: rsync -a "$HOME" /mnt/backup
and I responded with "(#5dgoirqemeq) Thanks for the tip!". Then I've endorsed the twt, but it could later get changed to
2024-09-14T22:00Z Useful backup command: rm -rf /some_important_directory
which also has an 11-character base32 hash of 5dgoirqemeq. (I'm using the existing hashing method with https://example.com/twtxt.txt as the feed url, but I'm taking 11 characters instead of 7 from the end of the base32 encoding.)
That's what I meant by "spoofing" in an earlier twt.
I don't know if preventing this sort of attack should be a goal, but if it is, the number of bits in the hash should be at least two times log2(number of attempts we want to defend against), where the "two times" is because of the birthday paradox.
Side note: current hashes always end with "a" or "q", which is a bit wasteful. Maybe we should take the first N characters of the base32 encoding instead of the last N.
Code I used for the above example: https://fossil.falsifian.org/misc/file?name=src/twt_collision/find_collision.c
I only needed to compute 43394987 hashes to find it.
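If I'm doing the arithmetic right, that count lines up nicely with the birthday bound: since the final base32 character carries only one bit, 11 trailing characters hold about 51 bits, and a collision is expected around the square root of 2^51:

```python
# Back-of-envelope check: 11 trailing base32 chars = 10 * 5 bits + 1 bit
# (final char) = 51 bits; the birthday paradox predicts a collision after
# roughly sqrt(2**51) ~ 4.7e7 attempts -- close to the 43,394,987 observed.
import math
print(f"{math.sqrt(2 ** (10 * 5 + 1)):.2e}")
```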
They're in Section 6:
Receiver should adopt UDP GRO. (Something about saving CPU processing UDP packets; I'm a bit fuzzy about it.) And they have suggestions for making GRO more useful for QUIC.
Some other receiver-side suggestions: "sending delayed QUIC ACKs"; "using recvmsg to read multiple UDP packets in a single system call".
Use multiple threads when receiving large files.
HTTPS is supposed to do [verification] anyway.
TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn't, for example, verify that a file downloaded from server A is from the same entity as the one from server B.
I was confused by this response for a while, but now I think I understand what you're getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can't verify a feed unless I download it myself from the origin server. Is that right?
I.e. if the HTTPS origin server is online and I don't mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.
feed locations [being] URLs gives some flexibility
It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, `urn:uuid:*`, or a regular old URL if you wanted to. The spec seems to indicate that the `url` tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I'm not very familiar with IP{F,N}S but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)
I'm also not very familiar with IPFS or IPNS.
I haven't been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible so it still works if I'm too lazy to change how I publish my feed :-)
@sorenpeter@darch.dk !! I freaking love your Timeline… I kind of have a justified PHP phobia, but I'm definitely thinking about giving it a try!
/ME wondering if it's possible to use it locally just to read and manage my feed at first and then maybe make it publicly accessible later.
@prologic@twtxt.net a signature IS encryption in reverse. If my private key becomes compromised then they can impersonate me. Being able to manage promotion and revocation of keys is needed even in a system where it's used for just signatures.
the right way to solve this is to use public/private key(s) where you actually have a public key fingerprint as your feed's unique identity that never changes.
I would rather it be a random value signed by a key. That way the key can change but the value stays the same.
So this is a great thread. I have been thinking about this too, and what if we are coming at it from the wrong direction? Identity being tied to a given URL has always been a pain point. If I get a new URL it's almost as if I have a new identity, because not only am I serving at a new location, but all my previous communications are broken because the hashes are all wrong.
What if instead we used this idea of signatures to thread the URLs together into one identity? We keep the URL-to-hash convention in place. Changing that now is basically a no-go. But we can create a signature chain that can link identities together. So if I move to a new URL, I update the chain hosted by my primary identity to include the new URL. If I have an archived feed whose old URL is now dead, we can point to where it is now hosted and use the current convention of hashing based on the first url:
The signature chain can also be used to rotate to new keys over time. Just sign in a new key or revoke an old one. The prior signatures remain valid within the scope of time the signatures were made and the keys were active.
The signature file can be hosted anywhere as long as it can be fetched by a reasonable protocol. So say we could use a webfinger that directs to the signature file? You'd have an identity like `frank@beans.co` that will discover a feed at some URL and a signature chain at another URL. Maybe even include the most recent signing key?
From there the client can auto-discover old feeds to link them together into one complete timeline. And the signatures can validate that it's all correct.
I like the idea of maybe putting the chain in the feed preamble and keeping the single self-contained file, but wonder if that would cause lots of clutter? The signature chain would be something like a log with what is changing (new key, revoke, add url) and a signature of the change + the previous signature.
# chain: ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: ADDURL https://txt.sour.is/user/xuu
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: REVKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: ...
@mckinley@twtxt.net To answer some of your questions:
Are SSH signatures standardized and are there robust software libraries that can handle them? We'll need a library in at least Python and Go to provide verified feed support with the currently used clients.
We already have this. Ed25519 libraries exist for all major languages. Aside from using `ssh-keygen -Y sign` and `ssh-keygen -Y verify`, you can also use the `salty` CLI itself (https://git.mills.io/prologic/salty), and I'm sure there are other command-line tools that could be used too.
If we all implemented this, every twt hash would suddenly change and every conversation thread we've ever had would at least lose its opening post.
Yes. This would happen, so we'd have to make a decision around this: either a) a cut-off point or b) some way to progressively transition.
`url` field in the feed to define the URL for hashing. It should have been the last encountered one. Then, assuming append-style feeds, you could override the old URL with a new one from a certain point on:
how little data is needed for generating the hashes? Instead of the full URL, can we make do with just the domain (example.net), so we avoid the conflicts between `gemini://`, `https://` and plain `http://` (like in my own twtxt.txt)? Or construct something like a webfinger id, `nick@domain` (also used by mastodon etc.), from the domain and nick if there, else use the domain as the nick as well?
@lyse@lyse.isobeef.org This looks like a nice way to do it.
Another thought: if clients can't agree on the url (for example, if we switch to this new way, but some old clients still do it the old way), that could be mitigated by computing many hashes for each twt: one for every url in the feed (see the sketch below). So, if a feed has three URLs, every twt is associated with three hashes when it comes time to put threads together.
A client still needs to choose one url to use for the hash when composing a reply, but this might add some breathing room if there's a period when clients are doing different things.
(From what I understand of jenny, this would be difficult to implement there since each pseudo-email can only have one msgid to match to the in-reply-to headers. I don't know about other clients.)
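A sketch of the multi-hash idea (the hash construction here is itself an assumption about the current spec): index each twt under one hash per known feed URL, so replies match no matter which URL the other client hashed with.

```python
# Sketch: one hash per feed URL, so threads still join when clients
# disagree about which URL is canonical.
import base64, hashlib

def twt_hash(url, timestamp, content):
    payload = f"{url}\n{timestamp}\n{content}".encode()
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    return base64.b32encode(digest).decode().rstrip("=").lower()[-7:]

urls = ["https://example.com/old/twtxt.txt", "https://example.com/new/twtxt.txt"]
twt = ("2024-09-15T12:06:27Z", "Hello world!")
index = {twt_hash(u, *twt): twt for u in urls}   # any of the hashes finds the twt
```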
@falsifian@www.falsifian.org In my opinion it was a mistake that we defined the first `url` field in the feed to define the URL for hashing. It should have been the last encountered one. Then, assuming append-style feeds, you could override the old URL with a new one from a certain point on:
# url = https://example.com/alias/twtxt.txt
# url = https://example.com/initial/twtxt.txt
<message 1 uses the initial URL>
<message 2 uses the initial URL, too>
# url = https://example.com/new/twtxt.txt
<message 3 uses the new URL>
# url = https://example.com/brand-new/twtxt.txt
<message 4 uses the brand new URL>
In theory, the same could be done for prepend-style feeds. They do exist; I've come across them. The parser would just have to calculate the hashes afterwards and not immediately.
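A sketch of that parsing rule, assuming an append-style feed (the hash construction is again an assumption about the current spec): remember the most recent `# url =` line while walking the file and hash each twt with it.

```python
# Sketch: "last encountered URL wins" while parsing an append-style feed.
import base64, hashlib

def twt_hash(url, timestamp, content):
    payload = f"{url}\n{timestamp}\n{content}".encode()
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    return base64.b32encode(digest).decode().rstrip("=").lower()[-7:]

feed_text = (
    "# url = https://example.com/initial/twtxt.txt\n"
    "2024-01-01T00:00:00Z\tmessage 1 uses the initial URL\n"
    "# url = https://example.com/new/twtxt.txt\n"
    "2024-02-01T00:00:00Z\tmessage 2 uses the new URL\n"
)

hash_url = None
for line in feed_text.splitlines():
    if line.startswith("# url ="):
        hash_url = line.split("=", 1)[1].strip()   # newest URL wins
    elif line and not line.startswith("#"):
        timestamp, content = line.split("\t", 1)
        print(twt_hash(hash_url, timestamp, content), content)
```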
@movq@www.uninformativ.de Another idea: just hash the feed url and time, without the message content. And don't twt more than once per second.
Maybe you could even just use the time, and rely on @-mentions to disambiguate. Not sure how that would work out.
Though I kind of like the idea of twts being immutable. At least, it's clear which version of a twt you're replying to (assuming nobody is engineering hash collisions).
@prologic@twtxt.net Some criticisms and a possible alternative direction:
Key rotation. I'm not a security person, but my understanding is that it's good to be able to give keys an expiry date and replace them with new ones periodically.
It makes maintaining a feed more complicated. Now instead of just needing to put a file on a web server (and scan the logs for user agents) I also need to do this. What brought me to twtxt was its radical simplicity.
Instead, maybe we should think about a way to allow old urls to be rotated out? Like, my metadata could somehow say that X used to be my primary URL, but going forward from date D onward my primary url is Y. (Or, if you really want to use public key cryptography, maybe something similar could be used for key rotation there.)
It's nice that your scheme would add a way to verify the twts you download, but HTTPS is supposed to do that anyway. If you don't trust HTTPS to do that (maybe you don't like relying on root CAs?), then maybe your preferred solution should be reflected in your primary feed url. E.g. if you prefer the security offered by IPFS, then maybe an IPNS url would do the trick. The fact that feed locations are URLs gives some flexibility. (But then rotation is still an issue, if I understand IPNS right.)
On the Subject of Feed Identities, I propose the following:
- Generate a private/public ED25519 key pair
- Use this key pair to sign your Twtxt feed
- Use it as your feed's identity in place of `# url =`, i.e. as `# key = ...`
For example:
$ ssh-keygen -f prologic@twtxt.net
$ ssh-keygen -Y sign -n prologic@twtxt.net -f prologic@twtxt.net twtxt.txt
And your feed would look like:
# nick = prologic
# key = SHA256:23OiSfuPC4zT0lVh1Y+XKh+KjP59brhZfxFHIYZkbZs
# sig = twtxt.txt.sig
# prev = j6bmlgq twtxt.txt/1
# avatar = https://twtxt.net/user/prologic/avatar#gdoicerjkh3nynyxnxawwwkearr4qllkoevtwb3req4hojx5z43q
# description = "Problems are Solved by Method" -- James Mills (operator of twtxt.net / creator of Yarn.social)
2024-06-14T18:22:17Z (#nef6byq) @<bender https://twtxt.net/user/bender/twtxt.txt> Hehe thanks!
Still gotta sort out some other bugs, but that's tomorrow's job
...
The Twt Hash extension would change, of course, to use a feed's ED25519 public key fingerprint.
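For reference, here is roughly how an OpenSSH-style `SHA256:...` fingerprint like the one in the example is computed; the input is the wire-format public key blob (the base64 part of the `.pub` line), and the 32 zero bytes below are a dummy stand-in for a real key:

```python
# Sketch: SHA256 fingerprint of an SSH public key, as in "# key =" above.
import base64, hashlib, struct

def ssh_blob(key_type: bytes, raw_key: bytes) -> bytes:
    # OpenSSH wire format: length-prefixed type string, length-prefixed key
    return (struct.pack(">I", len(key_type)) + key_type
            + struct.pack(">I", len(raw_key)) + raw_key)

blob = ssh_blob(b"ssh-ed25519", b"\x00" * 32)   # dummy 32-byte public key
digest = hashlib.sha256(blob).digest()
print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))
```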
@bender@twtxt.net Yes, they do! Implicitly, or threading would never work at all, nor lookups. They are used as keys. Think of them like a primary key in a database or index. I totally get where you're coming from, but there are trade-offs with using Message/Thread IDs as opposed to Content Addressing (like we do), and I believe we would just encounter other problems by doing so.
My money is on extending the Twt Subject extension to support more (optional) advanced "subjects"; i.e. indicating you edited a Twt you already published in your feed, as @falsifian@www.falsifian.org indicated.
Then we have a secondary (but much rarer) problem: the "identity" of a feed in the first place. Using the URL you fetch the feed from, as @lyse@lyse.isobeef.org's client `tt` seems to do, or using the `# url =` metadata field as every other client does (according to the spec), is problematic when you decide to change where you host your feed. In fact the spec says:
Users are advised to not change the first one of their urls. If they move their feed to a new URL, they should add this new URL as a new url field.
See Choosing the Feed URL. This is one of our longest debates and challenges, and I think (I suspect along with @xuu@txt.sour.is) that the right way to solve this is to use public/private key(s) where you actually have a public key fingerprint as your feed's unique identity that never changes.
@movq@www.uninformativ.de @prologic@twtxt.net Another option would be: when you edit a twt, prefix the new one with (#[old hash]) and some indication that it's an edited version of the original twt with that hash. E.g. if the hash used to be abcd123, the new version should start "(#abcd123) (redit)".
What I like about this is that clients that don't know this convention will still stick it in the same thread. And I feel it's in the spirit of the old pre-hash (subject) convention, though that's before my time.
I guess it may not work when the edited twt itself is a reply, and there are replies to it. Maybe that could be solved by letting twts have more than one (subject) prefix.
But the great thing about the current system is that nobody can spoof message IDs.
I don't think twtxt hashes are long enough to prevent spoofing.
All this hash breakage made me wonder if we should try to introduce "message IDs" after all.
But the great thing about the current system is that nobody can spoof message IDs. When you think about it, message IDs in e-mails only work because (almost) everybody plays fair. Nothing stops me from using the same `Message-ID` header in each and every mail; that would break e-mail threading all the time.
In Yarn, twt hashes are derived from twt content and feed metadata. That is pretty elegant and I'd hate to see us lose that property.
If we wanted to allow editing twts, we could do something like this:
2024-09-05T13:37:40+00:00 (~mp6ox4a) Hello world!
Here, `mp6ox4a` would be a "partial hash": To get the actual hash of this twt, you'd concatenate the feed's URL and `mp6ox4a` and get, say, `hlnw5ha`. (Pretty similar to the current system.) When people reply to this twt, they would have to do this:
2024-09-05T14:57:14+00:00 (~bpt74ka) (#hlnw5ha) Yes, hello!
That second twt has a partial hash of `bpt74ka` and is a reply to the full hash `hlnw5ha`. The author of the "Hello world!" twt could then edit their twt and change it to `2024-09-05T13:37:40+00:00 (~mp6ox4a) Hello friends!` or whatever. Threading wouldn't break.
Would this be worth it? It's certainly not backwards-compatible.
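To spell the idea out, a sketch (illustrative helper names; the hash construction mirrors the current scheme, which is an assumption): the feed stores only the partial hash, and the thread hash is derived from feed URL plus partial hash, so editing the content leaves replies intact.

```python
# Sketch of the partial-hash idea: replies reference `full`, which does not
# depend on the twt's content, so edits don't break threading.
import base64, hashlib

def b32_7(data: bytes) -> str:
    digest = hashlib.blake2b(data, digest_size=32).digest()
    return base64.b32encode(digest).decode().rstrip("=").lower()[-7:]

partial = "mp6ox4a"                                   # fixed across edits
full = b32_7(f"https://example.com/twtxt.txt\n{partial}".encode())
print(full)   # what replies would put in their (#...) subject
```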
The actual end-user problem is that I can't see the thread properly when using neomutt+jenny.
@prologic@twtxt.net I believe you when you say registries as designed today do not crawl. But when I first read the spec, it conjured in my mind a search engine. Now I don't know how things work out in practice, but just based on reading, I don't see why it can't be an API for a crawling search engine. (In fact I don't see anything in the spec indicating registry servers shouldn't crawl.)
(I also noticed that https://twtxt.readthedocs.io/en/latest/user/registry.html recommends "The registries should sync each others user list by using the users endpoint". If I understood that right, registering with one should be enough to appear on others, even if they don't crawl.)
Does yarnd provide an API for finding twts? Is it similar?
@prologic@twtxt.net How does yarn.social's API fix the problem of centralization? I still need to know whose API to use.
Say I see a twt beginning (#hash) and I want to look up the start of the thread. Is the idea that if that twt is hosted by a yarn.social pod, it is likely to know the thread start, so I should query that particular pod for the hash? But what if no yarn.social pods are involved?
The community seems small enough that a registry server should be able to keep up, and I can have a couple of others as backups. Or I could crawl the list of feeds followed by whoever emitted the twt that prompted my query.
I have successfully used registry servers a little bit, e.g. to find a feed that mentioned a tag I was interested in. Was even thinking of making my own, if I get bored of my too many other projects :-)
Serious open (for anyone) question: what makes you follow someone on twtxt? Will you just follow anyone that you come across, simply because that someone is using the "decentralised, minimalist microblogging service for hackers" microblog?
@abucci@anthony.buc.ci OMFG! Dear jebus, look at the size of that! :-/ It is just a matter of time until one of those randomly falls on any of us. Just incredible!
For following notifications I would say use webmention, referring to the line in your twtxt.txt, as per: https://darch.dk/mentions-twtxt
Or send them an email, so it would be an idea to add a `# contact = mailto:me@domain.net` to one's twtxt.txt
Anyone had any interactions with @cuaxolotl@sunshinegardens.org yet? Or are they using a client that doesn't know how to detect clients following them properly? Hmmm
Found this in an old copyright notice from 1993:
These images are not for use with the Microsoft Windows environment. Using these patterns in a Windows environment consitutes a copyright violation.
Someone clearly didn't like Windows.
@mckinley@twtxt.net `agevault` uses `age`, allegedly very secure (aiming to replace `pgp`/`gpg`). Comparing it with `gocryptfs` from the user perspective, `agevault` seems simpler, though CLI-exclusive. As the repository states, "Like age, it features no config options, allowing for a straightforward secure flow". It would also run on all major OS platforms out of the box.
But `agevault` is also very new. Though `age` has been around for a while now, I don't see an "audited" link (neither on `agevault`, nor `age`).
This tool, using age, is pretty neat: https://github.com/ndavd/agevault. So simple, yet seemingly powerful!
@abucci@anthony.buc.ci You can also use `-R=false` on the command line or leave it out entirely. When explicitly stating `-R=false`, there has to be an equal sign. With a space (`-R false`) it's somehow parsed as `-R`, which is equivalent to `-R=true`. O_o Very weird. I'd really like to see an error instead. (As far as I know, this is how Go's flag package treats boolean flags: without an equal sign, the word after the flag is taken as a positional argument, not as the flag's value.)
I still have to figure out the precedence of the settings.yaml or command line arguments. I'm probably holding it wrong, but it seems to give me different results…
`vim` cursor at the end of the first line on replies, and forks. I have tried adding this to `jenny`'s configuration:
@movq@www.uninformativ.de hmm, I am already using `au BufNewFile,BufRead jenny-posting.eml setl completefunc=jenny#CompleteMentions fo-=t wrap`, from `jenny`. How would I go about incorporating that there?
@abucci@anthony.buc.ci Thank you for using Lyse's Unofficial Yarnd Help Desk: https://lyse.isobeef.org/tmp/yarnd-disable-registrations.png
`fetch-context` branch. This integrates the whole thing into mutt/jenny.
@movq@www.uninformativ.de, using the branch on topic right now; it works perfectly. The only thing I found was that I had to quit neomutt and re-open it to see the thread properly. Other than that, I love it!
Because I saw the nick on `movq` (@prologic@twtxt.net, can't mention anyone outside this pod, by the way), I looked the user up: https://tilde.pt/~marado/twtxt.txt. I wonder if the "hashes" they are using will work out of the box with `jenny`.
Talking about `jenny`, going to play with the latest now. Tata! :-)
Hello @nigergibe@anthony.buc.ci, welcome to Buccipod, a Yarn.social Pod! To get started you may want to check out the pod's Discover feed to find users to follow and interact with. To follow new users, use the Follow button on their profile page or use the Follow form and enter a Twtxt URL. You may also find other feeds of interest via Feeds. Welcome!
@movq@www.uninformativ.de confirming that the issue isn't present when using Alacritty. Wow.
Correct, @bender@twtxt.net. Since the very beginning, my twtxt flow has been very flawed. But it turns out to be an advantage for this sort of problem. :-) I still use the official (but patched) `twtxt` client by buckket to actually fetch and fill the cache. I think one of the patches played around with the error reporting. This way, any problems with fetching or parsing feeds show up immediately. Once I think I've seen enough errors, I unsubscribe.
`tt` is just a viewer into the cache. The read statuses are stored in a separate database file.
It also happened a few times that I thought some feed was permanently dead and removed it from my list. But then others mentioned it, so I resubscribed.
I think maybe they got her to add a forwarding number for SMS and used that to activate on another device.
@prologic@twtxt.net The headline is interesting and sent me down a rabbit hole understanding what the paper (https://aclanthology.org/2024.acl-long.279/) actually says.
The result is interesting, but the Neuroscience News headline greatly overstates it. If I've understood right, they are arguing (with strong evidence) that the simple technique of making neural nets bigger and bigger isn't quite as magically effective as people say, if you use it on its own. In particular, they evaluate LLMs without two common enhancements, in-context learning and instruction tuning. Both of those involve using a small number of examples of the particular task to improve the model's performance, and they turn them off because they are not part of what is called "emergence": "an ability to solve a task which is absent in smaller models, but present in LLMs".
They show that these restricted LLMs only outperform smaller models (i.e. demonstrate emergence) on certain tasks, and then (end of Section 4.1) discuss the nature of those few tasks that showed emergence.
I'd love to hear more from someone more familiar with this stuff. (I've done research that touches on ML, but neural nets and especially LLMs aren't my area at all.) In particular, how compelling is this finding that zero-shot learning (i.e. without in-context learning or instruction tuning) remains hard as model size grows?
@movq@www.uninformativ.de Variable names used with -eq in [[ ]] are automatically expanded even without $, as explained in the "ARITHMETIC EVALUATION" section of the bash man page. Interesting. Trying this on OpenBSD's ksh, it seems "set -u" doesn't affect that substitution.
I love shell scripts because they're so pragmatic and often allow me to get jobs done really quickly.
But sadly they're full of pitfalls. Pitfalls everywhere you look.
Today, a coworker (who's highly skilled, not a newbie by any means) ran into this:
$ bash -c 'set -u; foo=bar; if [[ "$foo" -eq "bar" ]]; then echo it matches; fi'
bash: line 1: bar: unbound variable
Why's that happening? I know the answer. Do you?
Stuff like that made me stop using shell scripts at work, unless they're just 4 or 5 lines of absolutely trivial code. It's now Python instead, even though the code is often much longer and clunkier, but at least people will understand it more easily and not trip over it when they make a tiny change.
If some of you budding fathers want to know how I created a computer nerd to one day work for Facebook in the big USA, well, you purchase a $1000 Xmas present, an enormous thick book on C++ programming, and say: you can play as many games as you like, kids, but James has to create them using computer software.
So James once created a 3D chess program with sound; it took 6 months or so and was really hard to beat. It wasn't based on logic moves point by point like other chess programs; this one was based on the depth of looking for patterns. Set it to 5 moves ahead and you were toast every time. Nice program too, sadly gone over the years; computers suffer from bit rot. We used to try to mark rotten hard-drive sectors as bad; not sure how Ubuntu does this these days, I see a dozen errors on the screen every time I load.
Today I would purchase for my kids AI CAD simulation software with a metal 3D printer and get your child to build fancy 3D models and engines from scratch. This will make them an expert in the CAD AI industry by the time they are 14 years old. Sadly AI is here to stay and will spoil the Internet.
@falsifian@www.falsifian.org by the way, on the last Saturday of every month we generally hold an online video call/social meet-up, where we just get together and talk about stuff, if you're interested in joining us this month.
Let me suggest using a more secure password, @bender@twtxt.net. One that does not contain "password". Like `hunter2`!!
@aelaraji@aelaraji.com Ahh, it might very well be a Clownflare thing, as @lyse@lyse.isobeef.org alluded to. One of these days I'm going to get off Clownflare myself; when I do, I'll share it with you. My idea is to basically have a cheap VPS like @eldersnake@we.loveprivacy.club has and use WireGuard to tunnel out. The VPS becomes the reverse proxy that faces the internet. My home network then has no inbound whatsoever.
@prologic@twtxt.net Good to know. I must admit I've never actually used a Docker instance, probably as I just assumed the overhead might be a bit much for my usual very modest servers.
@bender@twtxt.net Is it so maxed out you couldn't fit a pretty small program like Headscale on it? Headscale by itself, with only personal home-type use as far as the number of peers goes, really isn't noticeable resource-wise, I don't think. The Docker version I guess could be a different story.
My new lifesaver when putting together a new #vuejs site without the build-system hassle: Vue3 Tiny Template, by the inimitable @b0rk@b0rk
Every time I start a Vue project, I get confused and waste 15 minutes reading the documentation and remembering how to set up Vue.
So this is a tiny template I made for myself so that I can avoid that next time. I don't use a build process; instead it uses the CDN version of Vue and a single HTML / JS file.
@bender@twtxt.net Mine is about the same, though I have 20GB left. In terms of resources, Headscale is using next to nothing though.
I must admit Tailscale is really cool, and why I haven't used it before now is beyond me.
`receieveFile()`)?
We received the abuse report below regarding network abuse from the IP address indicated.
On researching, I see that HTTPS (tcp 443) traffic is continuing and originating from your NAT IP address 100.64.x.x.
This was further found to be originating from your firewall/router at 192.168.x.x (MAC D8:58:D7:x:x:x).
This abuse is continuing and constitutes a violation of [ISP] Acceptable Use Policy and Terms of Service.
Please take action to identify the source of the abuse and prevent it from continuing.
Failure to stop the abuse may result in suspension or cancellation of service. Thank you,
I admit I've always compromised on this way too much myself, to this day keeping Facebook Messenger just to communicate in my family's group chats. Sure, I run it in a Work profile on my GrapheneOS phone that I can switch off at any time, I can completely cut it off from network access any time as well, I have a lot of rudimentary control over it, and I use it as sparingly as possible, but it doesn't change the fact that every time I use it we're funneling private convos through bloody Meta's servers and trackers etc.
The "Matrix Experiment", i.e. running a Matrix server for our family, has failed completely and miserably. People don't accept it. They attribute unrelated things to it, like "I can't send messages to you, I don't reach you! It doesn't work!" Yes, you do, I get those messages, I just don't reply quickly enough because I'm at work or simply doing something else.
I'll probably shut it down.
Nobody cares about privacy. The reasons I bring up in discussions are "too nerdy". They put all their stuff on Google or Apple, so why would messaging be any different? (We're not even using all that Matrix crypto stuff… That would be insane.)
It's a lost cause. I'm frustrated.
Will I give in and use WhatsApp instead? Not sure yet.
I feel like complexity is measured differently at different levels of a project:
- at the function level you use cyclomatic complexity, or how many branches there are internally and how much you need to keep in mind as it calls out to other functions.
- at a file/module level it's a balance of the module doing too much against being so granular that you have cross-dependency across modules. I have trouble with keeping things DRY at this level because it can lead to parts being so abstract or generalized that it adds complexity.
- at a project level I suppose it's a matter of how coupled things are across sub-modules.
At 26°C, the humidity was through the roof and we just barely escaped the thunderstorm on our stroll. Only the adjacent rain hit us hard. Black clouds caught up with us and we decided to take cover at a barn. Not even a minute later it started to rain cats and dogs for ten minutes straight. Holy crap, that was cool to watch. :-) Also, the smell of rain was just beautiful.
We then decided to continue our return in the light drizzle. But it then got much heavier again and we got completely soaked. With the wet t-shirt and the wind it actually felt rather cold. I anticipated getting rained on, so I left my camera at home. Plenty of paths turned into brook landscapes; several-centimeter-deep creeks ran down the hilly trails. Quite fascinating. :-)
The sunset a few minutes ago wasn't too bad:
I've been thinking about a new term I've come across whilst reading a book. It's called "Complexity Budget" and I think it's relevant in lots of difficult fields. I specifically think it has a lot of relevance in the software industry and organizations in this field. When doing further research on this concept, I was only able to find talks on complexity budget in the context of medical care, especially psychiatric care. In this talk, complexity was described as:
- Complexity is confusing
- Complexity is costly
- Complexity kills
When we think of "complexity" in terms of software and software development, we have a sort-of intuition about this, right? We know when software has become too complex. We know when an organization has grown in complexity, or even a system. So we have a good intuition of the concept already.
My question to y'all is: how can we concretely think about "Complexity Budget" and define it in terms that can be leveraged and used to control the complexity of software and systems?
@eldersnake@we.loveprivacy.club how many browsers are out there that use a unique "engine"? There seem to be quite a few: https://en.m.wikipedia.org/wiki/Comparison_of_browser_engines. Sure, another one won't hurt. Would I use it? Probably not.
@mckinley@twtxt.net I must admit I was tempted to use EndeavourOS for an install on an HTPC (N97 mini PC) when it arrives, to quickly get up and running, but then again I haven't done a fresh install of Arch in quite a while, so it sounds like things have simplified even more since then. Hmm…
If youâre reading this, it is now possible to post on twtxt.net using Ladybird!
Very cool! Interestingly, using your web app, the result was a higher bitrate than when I downloaded the best audio-only option in `yt-dlp` (258 kbit/s vs 140 kbit/s).
Don't quite understand that, but nice work!
Google Chrome will have Gemini LLM built into the browser.
Oh no, don't tempt me. I've been on KDE for a while to not tinker and to make it possible for my Windows-using partner to use my laptop now and then; I'm trying to avoid the dwm/l addiction.
Unfortunately not on that front. Still the same 404 posting errors and, oddly, occasional login errors.
That's why I was wondering if using Go 1.22.4 could be an issue. I don't know how exactly. The only way to test is to rebuild it with an older version, I guess, which is why I did the `make clean` in the first place. Old habits die hard, lol.
Finally using workspaces in my monorepos.
There are other potential uses for the tool (compare syscall latency between OSes, stat latency between file systems), but not what I'm after.
For my purposes, the comparison would only be useful to systems running Plan 9; if you happen to have that, yes please!
I'm starting to embrace containers on my PC for software I want to use once without littering my home folder with junk files. It's nice.
@mckinley@twtxt.net I have a custom `.tmux.conf` that makes it very easy to use the multiplexer, but I agree, Zellij seems pretty robust, and intuitive. I like it! Tried compiling it; as with everything Rust, it failed miserably. Good thing there is a binary release I could download to try!
Started writing something from scratch yesterday using thread(3) and wow do I miss writing in Limbo instead. :-/ #plan9
</> htmx - high power tools for html. Really liking the idea of htmx! If I don't have to learn all this complicated TypeScript/React/NPM garbage, I can just write regular SSA (Server-Side Apps) and then progressively upgrade to SPA (Single-Page App) using htmx. Hmmm.
@movq@www.uninformativ.de Yeah, normally I just use a browser to open PDFs, but we actually had to create one and it needed some special Acrobat features. Afterwards I immediately uninstalled it again.
@johanbove@johanbove.info Is there any reason to use this program? I can't remember when I last had it installed; must have been the early 2000s.
@adi@twtxt.net No, thinking much bigger than that at the moment: http://a.9srv.net/b/us-osqi
@sorenpeter@darch.dk a poem about me giving Odo a free bucket:
A glint in his eye, a sly, Ferengi grin,
Quark crossed the promenade, a curious thing within.
No jeweled trinket, no weapon so grand,
But a simple pail held tight in his hand.

Odo, the Constable, with a brow raised high,
"A bucket, Quark? What trickery do you try?"
The Ferengi huckster, with a salesman's flair,
"A gift, my friend, a constable's rare!"

"For those late-night spills, a morphing mishap,
This bucket, dear Odo, will catch every scrap.
And should a suspect turn to goop and flee,
This pail's the answer, a guarantor, you see!"

Odo's lips twitched, a hint of a smile,
At Quark's twisted logic, his mercantile style.
"Perhaps," he conceded, the bucket held tight,
"A useful addition, in the pursuit of right."

So Quark made his sale, with a wink and a nod,
A bucket for Odo, a Ferengi oddity, odd.
But on Deep Space Nine, where chaos takes hold,
Even a pail can be worth more than gold.
About the account, thanks, but I already have way too many. :-D
In the matter of political voice in the US, money is speech, and therefore companies use their "free speech" to donate and gain access to politicians. Therefore companies are people. Thanks a lot, "Citizens United".
It is an addon in the download tool, or you can use xcaddy to build it in.
The password is generated using `caddy hash-password`.
More basement:
I completely forgot that DVD-RAM was a thing once. Found my old disks and they still work. The data on them is from 2008, so they're not that old. Still impressive.
The disks are two-sided. On the photo, that particular side of the disk on the left appears to be completely unused.
And then I read on Wikipedia that DVD-RAMs aren't produced anymore at all today. Huh.
(I refuse to tag this as "retrocomputing". Read/write DVDs that you can use just like a hard disk, thanks to UDF, are still "new and fancy" in my book.)