(#2024-09-24T12:39:32Z) @prologic@twtxt.net It might be simple for you to run echo -e "\t\t" | sha256sum | base64, but for people who are not comfortable in a terminal and haven't got their dev env set up, that is magic, compared to the simplicity of just copy/pasting what you see in a textfile into another textfile -- basically what @movq@www.uninformativ.de also said. I'm also on team extreme minimalism, otherwise we could just use Mastodon etc. Replacing line-breaks with a tab would also make it easier to handwrite your twtxt. You don't have to handwrite it, but at least you should have the option to. Just as I do with all my HTML and CSS.
yarnd supports the use of WebMentions, it's very rarely used in practise (if ever) -- In fact I should just drop the feature entirely.
(#2024-09-24T12:34:31Z) WebMentions would work if we agreed to implement it correctly. I never figured out how yarnd's WebMentions work, so I decided to make my own, which I'm the only one using...
I had a look at WebSub, which looks way more complex than WebMentions, and seems to need a lot more overhead. We don't need near-realtime. We just need a way to notify someone that someone they don't know about mentioned or replied to their post.
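For reference, the notification part of Webmention itself is tiny; it's the endpoint discovery and verification that add overhead. A minimal sketch of the sender side per the W3C spec (the endpoint URL here is hypothetical; a real sender first discovers it from a link rel="webmention" header or tag on the target page):

curl -s -d source="https://example.org/twtxt.txt" \
     -d target="https://darch.dk/twtxt.txt" \
     "https://darch.dk/webmention"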
A weekend with my family
This past weekend, I visited my family in the south of Germany. I wasn't there for quite some time. On one day, we went to Biel in Switzerland, walking through the Taubenloch ("pigeonhole", a canyon right next to the city) and sitting on a boat that took us across Lake Biel. It was quite picturesque. ⌘ Read more
Starting a couple of new projects (geez where do I find the time?!):
HomeTunnel:
HomeTunnel is a self-hosted solution that combines secure tunneling, proxying, and automation to create your own private cloud. Utilizing Wireguard for VPN, Caddy for reverse proxying, and Traefik for service routing, HomeTunnel allows you to securely expose your home network services (such as Gitea, Poste.io, etc.) to the Internet. With seamless automation and on-demand TLS, HomeTunnel gives you the power to manage your own cloud-like environment with the control and privacy of self-hosting.
CraneOps:
craneops is an open-source operator framework, written in Go, that allows self-hosters to automate the deployment and management of infrastructure and applications. Inspired by Kubernetes operators, CraneOps uses declarative YAML Custom Resource Definitions (CRDs) to manage Docker Swarm deployments on Proxmox VE clusters.
rsync(1) but, whenever I Tab for completion and get this:
@aelaraji@aelaraji.com @mckinley@twtxt.net rsync -avzr with an optional --progress is what I always use. Ah, I could use the shorter -P, thanks @movq@www.uninformativ.de.
rsync(1) but, whenever I Tab for completion and get this:
@aelaraji@aelaraji.com rsync -zaXAP is what I use all the time. But that's all -- for the rest, I have to consult the manual.
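For anyone following along, here is what those flag bundles expand to, per rsync(1) (source and destination paths are placeholders):

# -z compress, -a archive mode (same as -rlptgoD), -X preserve xattrs,
# -A preserve ACLs (implies -p), -P same as --partial --progress
rsync -zaXAP src/ host:dst/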
There is also a ~5x increase in memory utilization cost for any implementations or implementors that use or wish to use in-memory storage (yarnd does, for example) and equally a ~5x increase in on-disk storage as well. This is based on the Twt Hash going from 13 bytes (content-addressing) to 63 bytes (on average for location-based addressing). There is roughly a ~20-150% increase in the size of individual feeds as well that needs to be taken into consideration (in the average case).
@sorenpeter@darch.dk Points 2 & 3 aren't really applicable here in the discussion of the threading model, I'm afraid. WebMentions is completely orthogonal to the discussion. Further, no-one that uses Twtxt really uses WebMentions; whilst yarnd supports the use of WebMentions, it's very rarely used in practise (if ever) -- in fact I should just drop the feature entirely.
The use of WebSub OTOH is far more useful and is used by every single yarnd pod everywhere (not that there are that many around these days) to subscribe to feed updates in ~near real-time without having to poll constantly.
Some more arguments for a location-based threading model over a content-based one:
The format:
(#<DATE URL>) or (@<DATE URL>) both make sense: # as prefix is for a hashtag like we already got with the (#twthash), and @ as prefix denotes that this is a mention of a specific post in a feed, and not just the feed in general. Using either can make implementation easier, since most clients already got this kind of filtering.
Having something like (#<DATE URL>) will also make mentions via webmentions for twtxt easier to implement, since there is no need for looking up the #twthash. This will also make it possible to build third-party twt-mention services. Supporting twt/webmentions will also increase discoverability as a way to know about both replies and feed mentions from feeds that you don't follow.
GitHub Enterprise Cloud with data residency: How we built the next evolution of GitHub Enterprise using GitHub
How we used GitHub to build GitHub Enterprise Cloud with data residency.
The post GitHub Enterprise Cloud with data residency: How we built the next evolution of GitHub Enterprise using GitHub appeared first on The GitHub Blog. ⌘ Read more
Using an AI Assistant to Read Tool Documentation
Explore how to use Docker and LLMs to streamline workflows for command-line tools to enhance the process of reading docs, troubleshooting errors, and running commands. ⌘ Read more
Sorry, you're right, I should have used numbers!
I don't understand what "preserve the original hash" could mean other than "make sure there's still a twt in the feed with that hash". Maybe the text could be clarified somehow.
I'm also not sure what you mean by markdown already being part of it. Of course people can already use Markdown, just like presumably nothing stopped people from using (twt subjects) before they were formally described. But it's not universal; e.g. as a jenny user I just see the plain text.
@lyse@lyse.isobeef.org I'd suggest making the whole content-type thing a SHOULD, to accommodate people just using some hosting service they don't have much control over. (The same situation could make detecting followers hard, but IMO "please email me if you follow me" is still legit twtxt, even if inconvenient.)
@prologic@twtxt.net Thanks for writing that up!
I hope it can remain a living document (or sequence of draft revisions) for a good long time while we figure out how this stuff works in practice.
I am not sure how I feel about all this being done at once, vs. letting conventions arise.
For example, even today I could reply to twt abc1234 with "(#abc1234) Edit: ..." and I think all you humans would understand it as an edit to (#abc1234). Maybe eventually it would become a common enough convention that clients would start to support it explicitly.
Similarly we could just start using 11-digit hashes. We should iron out whether it's sha256 or whatever, but there's no need to get all the other stuff right at the same time.
I have similar thoughts about how some users could try out location-based replies in a backward-compatible way (append the replyto: stuff after the legacy (#hash) style).
However I recognize that I'm not the one implementing this stuff, and it's less work to just have everything determined up front.
Misc comments (I haven't read the whole thing):
Did you mean to make hashes hexadecimal? You lose 11 bits that way compared to base32. I'd suggest gaining 11 bits with base64 instead.
"Clients MUST preserve the original hash" -- do you mean they MUST preserve the original twt?
Thanks for phrasing the bit about deletions so neutrally.
I don't like the MUST in "Clients MUST follow the chain of reply-to references...". If someone writes a client as a 40-line shell script that requires the user to piece together the threading themselves, IMO we shouldn't declare the client non-conforming just because they didn't get to all the bells and whistles.
Similarly I don't like the MUST for user agents. For one thing, you might want to fetch a feed without revealing your identity. Also, it raises the bar for a minimal implementation (I'm thinking again of the 40-line shell script).
For "who follows" lists: why must the long, random tokens be only valid for a limited time? Do you have a scenario in mind where they could leak?
Why can't feeds be served over HTTP/1.0? Again, thinking about simple software. I recently tried implementing HTTP/1.1 and it wasn't too bad, but 1.0 would have been slightly simpler.
Why get into the nitty-gritty about caching headers? This seems like generic advice for HTTP servers and clients.
I'm a little sad about other protocols being not recommended.
I don't know how I feel about including markdown. I don't mind too much that yarn users emit twts full of markdown, but I'm more of a plain text kind of person. Also it adds to the length. I wonder if putting it in a separate document would make more sense; that would also help with the length.
Reminder folks of the upcoming Yarn.social monthly online meetup:
I hope to see @david@collantes.us @movq@www.uninformativ.de @lyse@lyse.isobeef.org @xuu@txt.sour.is @sorenpeter@darch.dk and hopefully others too @aelaraji@aelaraji.com @falsifian@www.falsifian.org and anyone else that sees this! We're hopefully going to primarily discuss the future of Twtxt and the last few weeks of discussions.
- Event: Yarn.social Online Meetup
- When: 28th September 2024 at 12:00pm UTC (midday)
- Where: Mills Meet : Yarn.social
- Cadence: 4th Saturday of every Month
Agenda:
- Let's talk about the upcoming changes to the Twtxt spec(s)
- See #xgghhnq
@aelaraji@aelaraji.com This is one of the reasons why yarnd has a couple of settings with some sensible/sane defaults:
I could already imagine a couple of extreme cases where, somewhere in this peaceful world, one's exercise of freedom of speech could get them in Real trouble (if not danger) if found out; it wouldn't necessarily have to involve something to do with Law or legal authorities. So, if someone asks, maybe fearing for... let's just say "Their well being", would it hurt if a pod just purged their content if it's serving it publicly (maybe relay the info to other pods) and call it a day? It doesn't have to be about some law/convention somewhere... I know! Too extreme, but I've seen news of people who'd gone to jail or got their lives ruined for as little as a silly joke. And it doesn't even have to be about any of this.
There are two settings:
$ ./yarnd --help 2>&1 | grep max-cache
--max-cache-fetchers int set maximum number of fetchers to use for feed cache updates (default 10)
-I, --max-cache-items int maximum cache items (per feed source) of cached twts in memory (default 150)
-C, --max-cache-ttl duration maximum cache ttl (time-to-live) of cached twts in memory (default 336h0m0s)
So yarnd pods by default are designed to only keep Twts around publicly visible on either the anonymous Frontpage or Discover View or your Timeline or the feed's Timeline for up to 2 weeks, with a maximum of 150 items, whichever gets exceeded first. Any Twts over this are considered "old" and drop off the active cache.
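For example, using the flags from the help output above, a pod could keep twts around for four weeks and allow 300 items per feed instead of the defaults (this invocation is just an illustration):

./yarnd --max-cache-items 300 --max-cache-ttl 672h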
It's a feature that my old man @off_grid_living@twtxt.net was very strongly in support of, as was I back in the day of yarnd's design (nothing particularly to do with Twtxt per se), and that I've to this day stuck by -- even though there are some that have different views on this.
@movq@www.uninformativ.de @falsifian@www.falsifian.org @prologic@twtxt.net Maybe I don't know what I'm talking about and you've probably already read this: Everything you need to know about the "Right to be forgotten", coming straight out of the EU's GDPR website itself. It outlines the specific circumstances under which the right to be forgotten applies, as well as reasons that trump one's right to erasure ...etc.
I'm no lawyer, but my uneducated guess would be that:
A) twts are already publicly available/public knowledge and such... just don't process children's personal data and MAYBE you're good? Since there's this:
... an organization's right to process someone's data might override their right to be forgotten. Here are the reasons cited in the GDPR that trump the right to erasure:
- The data is being used to exercise the right of freedom of expression and information.
- The data is being used to perform a task that is being carried out in the public interest or when exercising an organization's official authority.
- The data represents important information that serves the public interest, scientific research, historical research, or statistical purposes, and where erasure of the data would be likely to impair or halt progress towards the achievement of the goal of the processing.
B) What I love about the TWTXT sphere is its Human/Humane element! No deceptive algorithms, no Corpo B.S. ...etc. Just Humans. So maybe... if we thought about it in this way, it wouldn't hurt to be even nicer to others/offering strangers an even safer space.
I could already imagine a couple of extreme cases where, somewhere in this peaceful world, one's exercise of freedom of speech could get them in Real trouble (if not danger) if found out; it wouldn't necessarily have to involve something to do with Law or legal authorities. So, if someone asks, maybe fearing for... let's just say "Their well being", would it hurt if a pod just purged their content if it's serving it publicly (maybe relay the info to other pods) and call it a day? It doesn't have to be about some law/convention somewhere... I know! Too extreme, but I've seen news of people who'd gone to jail or got their lives ruined for as little as a silly joke. And it doesn't even have to be about any of this.
P.S: Maybe make X tool check out robots.txt? Or maybe make long-term archives Opt-in? Opt-out?
P.P.S: Already way too many MAYBEs in a single twt! So I'll just shut up.
"Land of dreams": The Australian high-flyers on edge as Trump and Harris duke it out
Australia's tech insiders are enjoying a surge in optimism from the Fed's bumper rate cut, but all eyes are now on the US election. ⌘ Read more
And they have arrived (well, they did around 3 hours ago, LOL). Buttery smooth, my 16 Pro (one with dark cover). It took a bit over an hour to transfer all my data.
@lyse@lyse.isobeef.org yeah, tell us, @prologic@twtxt.net, what isn't true? You can't just go around, "that's not true, and that's not true; and that, and that!" without spelling out exactly what isn't, and why? For the love of god, why?!
@david@collantes.us Thanks, that's good feedback to have. I wonder to what extent this already exists in registry servers and yarn pods. I haven't really tried digging into the past in either one.
How interested would you be in changes in metadata and other comments in the feeds? I'm thinking of just permanently saving every version of each twtxt file that gets pulled, not just the twts. It wouldn't be hard to do (though presenting the information in a sensible way is another matter). Compression should make storage a non-issue unless someone does something weird with their feed like shuffle the comments around every time I fetch it.
@falsifian@www.falsifian.org "I was actually thinking about making an Internet Archive style twtxt archiver, letting you explore past twts" -- that's an awesome idea for a project. Something I would certainly use!
6 Features in macOS Sequoia You Will Actually Use
Now that MacOS Sequoia is available for all Mac users to update and install, you might be wondering which of the many new features and changes are particularly enticing, and that you might actually use. Rather than overwhelm you with a list of twenty seven trillion new things that you will quickly forget about, here ... Read More ⌘ Read more
@david@collantes.us Well, I wouldn't recommend using my code for your main jenny use anyway. If you want to try it out, set XDG_CONFIG_HOME and XDG_CACHE_HOME to some sandbox directories and only run my code there. If @movq@www.uninformativ.de is interested in any of this getting upstreamed, I'd be happy to try rebasing the changes, but otherwise it's a proof of concept and fun exercise.
@david@collantes.us Hello!
BTW this code doesn't incorporate existing twts into jenny's database. It's best used starting from scratch. I've been testing it using a custom XDG_CACHE_HOME and XDG_CONFIG_HOME to avoid messing with my "real" jenny data.
I wrote some code to try out non-hash reply subjects formatted as (replyto ), while keeping the ability to use the existing hash style.
I don't think we need to decide all at once. If clients add support for a new method then people can use it if they like. The downside of course is that this costs developer time, so I decided to invest a few hours of my own time into a proof of concept.
With apologies to @movq@www.uninformativ.de for corrupting jenny's beautiful code. I don't write this expecting you to incorporate the patch, because it does complicate things and might not be a direction you want to go in. But if you like any part of this approach feel free to use bits of it; I release the patch under jenny's current LICENCE.
Supporting both kinds of reply in jenny was complicated because each email can only have one Message-Id, and because it's possible the target twt will not be seen until after the twt referencing it. The following patch uses an sqlite database to keep track of known (url, timestamp) pairs, as well as a separate table of (url, timestamp) pairs that haven't been seen yet but are wanted. When one of those "wanted" twts is finally seen, the mail file gets rewritten to include the appropriate In-Reply-To header.
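The two tables could look roughly like this (a sketch via the sqlite3 CLI; the table and column names are my guesses, not necessarily what the patch uses):

sqlite3 jenny.db <<'EOF'
-- twts we have already seen and written out as mail files
CREATE TABLE IF NOT EXISTS known (url TEXT, timestamp TEXT, msgid TEXT,
                                  PRIMARY KEY (url, timestamp));
-- twts referenced by a (replyto ...) subject but not fetched yet
CREATE TABLE IF NOT EXISTS wanted (url TEXT, timestamp TEXT, mailfile TEXT,
                                   PRIMARY KEY (url, timestamp));
EOF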
Patch based on jenny commit 73a5ea81.
https://www.falsifian.org/a/oDtr/patch0.txt
Not implemented:
- Composing twts using the (replyto ...) format.
- Probably other important things I'm forgetting.
Could compressed_subject(msg_singlelined) be made configurable, so only a certain number of characters get displayed, ending on ellipses? Right now the entire twtxt is crammed into the Subject:. This request aims to make twtxts display on mutt/neomutt, etc. more like emails do.
I mean, really, it couldn't get any better. I love it!
@eldersnake@we.loveprivacy.club I wanted to ask you, are you running Headscale and WireGuard on the same VPS? I want to test Headscale, but currently run a small container with WireGuard, and I wonder if I need to stop (and eventually get rid of) the container to get Headscale going. Did you use the provided .deb to install Headscale, or some other method?
10 Docker Myths Debunked
We debunk common Docker myths and explain the capabilities and benefits of this widely used container technology. ⌘ Read more
Speaking of AI tech (sorry!); just came across this really cool tool built by some engineers at Google™ (currently completely free to use without any signup) called NotebookLM. Looks really good for summarizing and talking to documents.
@eldersnake@we.loveprivacy.club there has to be less reliance on a single point of failure. It is not so much about creating jobs in the US (which come with it, anyway), but about the ability to produce what's needed at home too. What's the trade off? Is it going to be a little bit more expensive to manufacture, perhaps?
@quark@ferengi.one It does not. That is why I'm advocating for not using hashes for threads, but a simpler link-back scheme.
the stem matching is the same as how Git does its branch hashes. I think you can stem it down to 2 or 3 sha bytes.
If a client sees someone in a yarn using a byte-longer hash, it can lengthen to match, since it can assume that maybe the other client has a collision that it doesn't know about.
@prologic@twtxt.net Wikipedia claims sha1 is vulnerable to a "chosen-prefix attack", which I gather means I can write any two twts I like, and then cause them to have the exact same sha1 hash by appending something. I guess a twt ending in random junk might look suspicious, but perhaps the junk could be worked into an image URL. If that's not possible now, maybe it will be later.
git only uses sha1 because they're stuck with it: migrating is very hard. There was an effort to move git to sha256 but I don't know its status. I think there is progress being made with Game Of Trees, a git clone that uses the same on-disk format.
I can't imagine any benefit to using sha1, except that maybe some very old software might support sha1 but not sha256.
Kuo: iPhone 17 to Use 3nm Chip Tech, Some iPhone 18 Models to Use 2nm
Next year's iPhone 17 series will feature processors made using TSMC's 3-nanometer chip technology, but only some iPhone 18 models in 2026 are anticipated to use the Taiwanese chipmaker's next-generation 2nm processor technology because of cost concerns, according to Apple analyst Ming-Chi Kuo. ... ⌘ Read more
Alright. My first mentions -- which were picked not so randomly, LOL -- are @prologic@twtxt.net, @lyse@lyse.isobeef.org, and @movq@www.uninformativ.de. I am also posting my first image too, which you see below. That's my neighbourhood, on a "winter" day. Hopefully @prologic@twtxt.net will add my domain to his allowed list, so that the image (and any further ones) renders.
@movq@www.uninformativ.de Agreed that hashes have a benefit. I came up with a similar example when I twted about an 11-character hash collision. Perhaps hashes could be made optional somehow. Like, you could use the "replyto" idea and then additionally put a hash somewhere if you want to lock in which version of the twt you are replying to.
There is nothing wrong with how we currently run a diff to see what has been removed. If I build a merkle tree off all the twt hashes in a feed, I can use that to verify whether a twt should be in a feed or not, and gossip that to my peers.
isn't the benefit of blake2b that it is a more efficient algo than sha1 and has the same or similar entropy to sha3? i thought we had partially solved this with some type of expanding hash size? additionally we could increase bit density by using base36 or base64/url-safe...
I'm not advocating in either direction, btw. I haven't made up my mind yet. Just braindumping here.
The (replyto:...) proposal is definitely more in the spirit of twtxt, I'd say. It's much simpler, anyone can use it even with the simplest tools, no need for any client code. That is certainly a great property, if you ask me, and it's things like that that brought me to twtxt in the first place.
I'd also say that in our tiny little community, message integrity simply doesn't matter. Signed feeds don't matter. I signed my feed for a while using GPG, someone else did the same, but in the end, nobody cares. The community is so tiny, there's enough "implicit trust" or whatever you want to call it.
If twtxt/Yarn was to grow bigger, then this would become a concern again. But even Mastodon allows editing, so how much of a problem can it really be?
I do have to "admit", though, that hashes feel better. It feels good to know that we can clearly identify a certain twt. It feels more correct and stable.
Hm.
I suspect that the (replyto:...) proposal would work just as well in practice.
Hey, @movq@www.uninformativ.de, a tiny thing to add to jenny, a -v switch. That way when you twtxt "That's an older format that was used before jenny version v23.04", I can go and run jenny -v, and "duh!" myself on the way to a git pull. :-D
@movq@www.uninformativ.de to paraphrase US Presidents' State of the Union speeches, "the State of the Jenny is strong!" :-D As for the potential upcoming changes, there has to be a knowledgeable head honcho that will agglomerate and coalesce, and guide onto the direction that will be taken. All that with strong input from the developers that will be implementing the changes, and a lesser (but not less valuable) input from users.
There's a simple reason all the current hashes end in a or q: the hash is 256 bits, the base32 encoding chops that into groups of 5 bits, and 256 isn't divisible by 5. The last character of the base32 encoding just has that left-over single bit (256 mod 5 = 1).
So I agree with #3 below, but do you have a source for #1, #2 or #4? I would expect any lack of variability in any part of a hash function's output would make it more vulnerable to attacks, so designers of hash functions would want to make the whole output vary as much as possible.
Other than the divisible-by-5 thing, my current intuition is it doesn't matter what part you take.
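The left-over bit is easy to demonstrate: 256 = 5 * 51 + 1, so the 52nd base32 character encodes a single data bit padded with four zero bits, i.e. 0b00000 ('a') or 0b10000 ('q'). A quick check, assuming GNU coreutils' b2sum and base32 plus xxd:

# prints 'a' or 'q' for any input whatsoever
printf '%s' "any input at all" | b2sum -l 256 | awk '{print $1}' \
  | xxd -r -p | base32 | tr -d '=\n' | tr 'A-Z' 'a-z' | tail -c 1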
Hash Structure: Hashes are typically designed so that their outputs have specific statistical properties. The first few characters often have more entropy or variability, meaning they are less likely to have patterns. The last characters may not maintain this randomness, especially if the encoding method has a tendency to produce less varied endings.
Collision Resistance: When using hashes, the goal is to minimize the risk of collisions (different inputs producing the same output). By using the first few characters, you leverage the full distribution of the hash. The last characters may not distribute in the same way, potentially increasing the likelihood of collisions.
Encoding Characteristics: Base32 encoding has a specific structure and padding that might influence the last characters more than the first. If the data being hashed is similar, the last characters may be more similar across different hashes.
Use Cases: In many applications (like generating unique identifiers), the beginning of the hash is often the most informative and varied. Relying on the end might reduce the uniqueness of generated identifiers, especially if a prefix has a specific context or meaning.
@prologic@twtxt.net I just realised that jenny also does what I want, as of the latest commit. Simply use jenny --debug-feed <feed url>, and it will do what I wanted too!
I came across this Gallery Theme for Hugo, and @lyse@lyse.isobeef.org immediately came to mind. I think it would be a very fitting theme to use for all your photos, Lyse!
KubeCon + CloudNativeCon North America 2024 co-located event deep dive: OpenFeature Summit
Co-chairs: David Hirsch, Michael Beemer. November 12, 2024, Salt Lake City, Utah. The OpenFeature Summit focuses on the use of feature flags and experimentation in cloud-native environments. It's an event designed to help developers, architects, and decision-makers leverage feature... ⌘ Read more
Taking the last n characters of a base32 encoded hash instead of the first n can be problematic for several reasons:
Hash Structure: Hashes are typically designed so that their outputs have specific statistical properties. The first few characters often have more entropy or variability, meaning they are less likely to have patterns. The last characters may not maintain this randomness, especially if the encoding method has a tendency to produce less varied endings.
Collision Resistance: When using hashes, the goal is to minimize the risk of collisions (different inputs producing the same output). By using the first few characters, you leverage the full distribution of the hash. The last characters may not distribute in the same way, potentially increasing the likelihood of collisions.
Encoding Characteristics: Base32 encoding has a specific structure and padding that might influence the last characters more than the first. If the data being hashed is similar, the last characters may be more similar across different hashes.
Use Cases: In many applications (like generating unique identifiers), the beginning of the hash is often the most informative and varied. Relying on the end might reduce the uniqueness of generated identifiers, especially if a prefix has a specific context or meaning.
In summary, using the first n characters generally preserves the intended randomness and collision resistance of the hash, making it a safer choice in most cases.
@quark@ferengi.one Do you mean something like this?
$ ./yarnc debug ~/Public/twtxt.txt | tail -n 1
kp4zitq 2024-09-08T02:08:45Z (#wsdbfna) @<aelaraji https://aelaraji.com/twtxt.txt> My work has this thing called "compressed work", where you can **buy** extra time off (_as much as 4 additional weeks_) per year. It comes out of your pay though, so it's not exactly a 4-day work week but it could be useful, just haven't tired it yet as I'm not entirely sure how it'll affect my net pay
@prologic@twtxt.net I saw those, yes. I tried using yarnc, and it would work for a simple twtxt. Now, for a more convoluted one it truly becomes a nightmare using that tool for the job. I know there are talks about changing this hash, so this might be a moot point right now, but it would be nice to have a tool that:
- Would calculate the hash of a twtxt in a file.
- Would calculate all hashes on a twtxt.txt (local and remote).
Again, something lovely to have after any looming changes occur.
Could someone knowledgeable reply with the steps a grandpa will take to calculate the hash of a twtxt from the CLI, using out-of-the-box tools? I swear I read about it somewhere, but can't find it.
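As I understand the current Twt Hash extension (blake2b-256 over "<url>\n<timestamp>\n<content>", base32-encoded, lowercased, keep the last 7 characters), something like the following works with out-of-the-box tools. Treat it as a sketch, not gospel; the url, timestamp, and content values are placeholders, and the timestamp is used exactly as it appears in the feed:

#!/usr/bin/env bash
url="https://example.com/twtxt.txt"
ts="2024-09-14T22:00:00Z"
content="Hello world"
hash=$(printf '%s\n%s\n%s' "$url" "$ts" "$content" \
  | b2sum -l 256 | awk '{print $1}' | xxd -r -p \
  | base32 | tr -d '=\n' | tr 'A-Z' 'a-z')
echo "${hash: -7}"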
Instagram locks down teens: How the new feature will work
All teens using Instagram will automatically have strong restrictions applied on their accounts, rather than putting the onus on parents. ⌘ Read more
@aelaraji@aelaraji.com this is the little script I am using on my publish_command:
#!/usr/bin/env bash
twtxt2html -t "Quark's twtxt feed" /var/www/sites/ferengi.one/twtxt.txt > /var/www/sites/ferengi.one/index.html
I named it twtxtit. :-)
@sorenpeter@darch.dk I like this idea. Just for fun, I'm using a variant in this twt. (Also because I'm curious how non-hash subjects appear in jenny and yarn.)
URLs can contain commas, so I suggest a different character to separate the url from the date. In this twt I've used a space (also after "replyto", for symmetry).
I think this solves:
- Changing feed identities: although @mckinley@twtxt.net points out URLs can change, I think this syntax should be okay as long as the feed at that URL can be fetched, and as long as the current canonical URL for the feed lists this one as an alternate.
- editing, if you don't care about message integrity
- finding the root of a thread, if you're not following the author
An optional hash could be added if message integrity is desired. (E.g. if you don't trust the feed author not to make a misleading edit.) Other recent suggestions about how to deal with edits and hashes might be applicable then.
People publishing multiple twts per second should include sub-second precision in their timestamps. As you suggested, the timestamp could just be copied verbatim.
Trying to figure out how to use the publish_command to vomit the HTML into a file, using twtxt2html.
@movq@www.uninformativ.de Non-ASCII characters were broken. Like U+2028, degrees (°), etc.
Turns out I used a silly library to detect the encoding and transform to UTF-8 if needed. When there is no Content-Type header, like for local files, it looks at the first 1024 bytes. Since it only saw ASCII in that region, the damn thing assumed the data to be in Windows-1252 (which for web pages kinda makes sense):
// TODO: change default depending on user's locale?
return charmap.Windows1252, "windows-1252", false
https://cs.opensource.google/go/x/net/+/master:html/charset/charset.go;l=102
This default is hardcoded and cannot be changed.
Trying to be smart and adding automatic support for other encodings turned out to be a bad move on my end. At least I can reduce my dependency list again. :-)
I now just reject everything that explicitly specifies something different than text/plain and an optional charset other than utf-8 (ignoring casing). Otherwise I assume it's in UTF-8 (just like the twtxt file format specification mandates) and hope for the best.
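In shell terms, the policy described above amounts to something like this sketch (a real implementation would need to be more lenient about whitespace and parameter order in the header value):

ct=$(curl -sI "$url" | tr -d '\r' \
  | awk -F': *' 'tolower($1)=="content-type" {print tolower($2)}')
case "$ct" in
  ""|text/plain|"text/plain; charset=utf-8"|"text/plain;charset=utf-8")
    echo "ok" ;;
  *)
    echo "rejected: $ct" ;;
esac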
-T/--template in case you need a custom template
@bender@twtxt.net I should put the template that is used by default as a file in the repo. Look at the source for now and you'll see.
(#hash;#originalHash) would also work.
Maybe I'm being a bit too purist/minimalistic here. As I said before (in one of the 1372739 posts on this topic -- or maybe I didn't even send that twt, I don't remember), I never really liked hashes to begin with. They aren't super hard to implement but they are kind of against the beauty of the original twtxt -- because you need special client support for them. It's not something that you could write manually in your twtxt.txt file. With @sorenpeter@darch.dk's proposal, though, that would be possible.
Tangentially related, I was a bit disappointed to learn that the twt subject extension is now never used except with hashes. Manually-written subjects sounded so beautifully ad-hoc and organic as a way to disambiguate replies. Maybe I'll try it some time just for fun.
Hmmmm, I somehow run into an encoding problem where my inserted data end up mangled in the database. But, both SQLite and Go use UTF-8. What's happening here? :-?
Artifact Hub becomes a CNCF incubating project
The CNCF Technical Oversight Committee (TOC) has voted to accept Artifact Hub as a CNCF incubating project. Artifact Hub is a web-based application that enables finding, installing, and publishing cloud native packages and configurations. Discovering useful cloud native... ⌘ Read more
() @falsifian@www.falsifian.org You mean the idea of being able to inline # url = changes in your feed?
Yes, that one. But @lyse@lyse.isobeef.org pointed out it suffers from a compatibility issue, since currently the first listed url is used for hashing, not the last. Unless your feed is in reverse chronological order. Heh, I guess another metadata field could indicate which version to use.
Or maybe url changes could somehow be combined with the archive feeds extension? Could the url metadata field be local to each archive file, so that to switch to a new url all you need to do is archive everything you've got and start a new file at the new url?
I don't think it's that likely my feed url will change.
@movq@www.uninformativ.de I did start from scratch, today. I am using commit 6e8ce5afdabd5eac22eae4275407b3bd2a167daf (HEAD -> main, origin/main, origin/HEAD), I keep myself up-to-date, LOL. Still, that specific twtxt (o6dsrga) is no longer.
Since jenny can't fetch archived twtxts
I wiped my entire maildir and re-fetched everything. I did that recently because @aelaraji@aelaraji.com asked me to, but I guess I also did this back in 2023.
What did you do to make yours work?
jenny does fetch archived feeds during the normal jenny -f operation. Only when using the recently implemented --fetch-context, archived feeds are not fetched (yet). That was an oversight and I intend to fix that.
@movq@www.uninformativ.de I figured it would be something like this, yet, you were able to reply just fine, and I wasn't. Looking at your twtxt.txt I see this line:
2024-09-16T17:37:14+00:00 (#o6dsrga) @<prologic https://twtxt.net/user/prologic/twtxt.txt>
@<quark https://ferengi.one/twtxt.txt> This is what I get.
Which is using the right hash. Mine, on the other hand, when I replied to the original, old style message (Message-Id: <o6dsrga>), looks like this:
2024-09-16T16:42:27+00:00 (#o) @<prologic https://twtxt.net/user/prologic/twtxt.txt> this was your first twtxt. Cool! :-P
What did you do to make yours work? I simply went to the oldest @prologic@twtxt.net entry on my Maildir, and replied to it (jenny set the reply-to hash to #o, even though the Message-Id is o6dsrga). Since jenny can't fetch archived twtxts, how could I go to re-fetch everything? And, most importantly, would re-fetching fix the Message-Id:?
@mckinley@twtxt.net Yes, changing domains is a problem if you tie your identity to an https url. But I also worry about being stuck with a key I can't rotate. Whatever gets used, it would be nice to be able to rotate identities. I like @lyse@lyse.isobeef.org's idea for that.
Hmm... I replied to this message:
From: prologic <prologic>
Subject: Hello World!
Date: Sat, 18 Jul 2020 08:39:52 -0400
Message-Id: <o6dsrga>
X-twtxt-feed-url: https://twtxt.net/user/prologic/twtxt.txt
Hello World!
And see how the hash shows... Is it because that hash is no longer used?
(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)
I think I like this a lot.
The problem with using hashes always was that they're "one-directional": You can construct a hash from URL + timestamp + twt, but you cannot do the inverse. When I see a hash, I have no idea what that could possibly refer to.
But of course something like (replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z) has all the information you need. This could simplify twt/feed discovery quite a bit, couldn't it? That thing that I just implemented -- jenny asking some Yarn pod for some twt hash -- would not be necessary anymore. Clients could easily and automatically fetch complete threads instead of requiring the user to follow all relevant feeds.
Only using the timestamp to identify a twt also solves the edit problem.
It even is better for non-Yarn clients, because you now don't have to read, understand, and implement a "twt hash specification" before you can reply to someone.
The only problem, really, is that (replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z) is so long. Clients would have to try harder to hide this.
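Resolving such a subject really is just a fetch and a lookup. A sketch, using the url and timestamp from the example above and assuming the usual timestamp-TAB-content line format:

url="http://darch.dk/twtxt.txt"
ts="2024-09-15T12:06:27Z"
curl -s "$url" | awk -F'\t' -v ts="$ts" '$1 == ts {print $2}'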
Apple Releases tvOS 18 With InSight, New Screen Savers and More
Apple today released tvOS 18, the newest version of the tvOS operating system that runs on the Apple TV 4K and Apple TV HD models.
tvOS 18 can be downloaded using the Settings app on the Apple TV. Open up Settings and go to System > Software Update to get the new software. Apple TV owners who have automatic softwa ... ⌘ Read more
More:
Subject: The [tag URI scheme](https://en.wikipedia.org/wiki/Tag_URI_scheme) looks interesting. I like that it human read- and writable. And since we already got the timestamp in the twtxt.txt it would be
somewhat trivial to parse. But there are still the issue with what the name/id should be... Maybe it doesn't have to bee that stick? Instead of using `tag:` as the prefix/protocol, it would more it clear
what we are talking about by using `in-reply-to:` (https://indieweb.org/in-reply-to) or `replyto:` similar to `mailto:` 1. `(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)' 2.
`(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)' 2. `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)' I know it's longer that 7-11 characters, but it's self-explaining when looking at the
twtxt.txt in the raw, and the cases above can all be caught with this regex: `\([\w-]*reply[\w-]*\:` Is this something that would work?
Subject: The [tag URI scheme](https://en.wikipedia.org/wiki/Tag_URI_scheme) looks interesting. I like that it human read- and writable. And since we already got the timestamp in the twtxt.txt it would be
somewhat trivial to parse. But there are still the issue with what the name/id should be... Maybe it doesn't have to bee that stick? Instead of using `tag:` as the prefix/protocol, it would more it clear
what we are talking about by using `in-reply-to:` (https://indieweb.org/in-reply-to) or `replyto:` similar to `mailto:` 1. `(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)` 2.
`(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` 3. `(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)` I know it's longer that 7-11 characters, but it's self-explaining when looking at the
twtxt.txt in the raw, and the cases above can all be caught with this regex: `\([\w-]*reply[\w-]*\:` Is this something that would work?
Notice the difference? Soren edited, and broke everything.
The tag URI scheme looks interesting. I like that it is human read- and writable. And since we already got the timestamp in the twtxt.txt it would be somewhat trivial to parse. But there is still the issue with what the name/id should be... Maybe it doesn't have to be that strict?
Instead of using tag: as the prefix/protocol, it would make it more clear what we are talking about by using in-reply-to: (https://indieweb.org/in-reply-to) or replyto: similar to mailto:
(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)
(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)
(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)
I know it's longer than 7-11 characters, but it's self-explaining when looking at the twtxt.txt in the raw, and the cases above can all be caught with this regex: \([\w-]*reply[\w-]*\:
Is this something that would work?
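The proposed regex can be tried directly against a feed; \w needs PCRE, hence grep -P here:

grep -P '\([\w-]*reply[\w-]*\:' twtxt.txt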
Thank you @aelaraji@aelaraji.com, I'm glad you like it. I use PHP because it's everywhere on cheap hosting and there's no need for the user to log into a terminal to set it up. Timeline is not meant to be used locally. For that I think something like twtxt2html is a better fit. (And happy to see you using simple.css on your new log page;)
Milk-V DuoModule Eval Board with RISC-V Core, 8051 Core, and Linux Support
The Milk-V DuoModule 01 Evaluation Board offers a versatile platform for evaluating the Duo Module 01, featuring Wi-Fi 6, Bluetooth 5.4, and eMMC storage. It enables developers and makers to prototype solutions using the SG2000 SoC, with open-source documentation to streamline development. Like the Milk-V Duo S and Oz64, this board features the SG2000 SoC, [...] ⌘ Read more
@falsifian@www.falsifian.org TLS won't help you if you change your domain name. How will people know if it's really you? Maybe that's not the biggest problem for something with such low stakes as twtxt, but it's a reasonable concern that could be solved using signatures from an unchanging cryptographic key.
This idea is the basis of Nostr. Notes can be posted to many relays and every note is signed with your private key. It doesn't matter where you get the note from, your client can verify its authenticity. That way, relays don't need to be trusted.
@prologic@twtxt.net Brute force. I just hashed a bunch of versions of both tweets until I found a collision.
I mostly just wanted an excuse to write the program. I don't know how I feel about actually using super-long hashes; could make the twts annoying to read if you prefer to view them untransformed.
@prologic@twtxt.net earlier you suggested extending hashes to 11 characters, but here's an argument that they should be even longer than that.
Imagine I found this twt one day at https://example.com/twtxt.txt :
2024-09-14T22:00Z Useful backup command: rsync -a "$HOME" /mnt/backup
and I responded with "(#5dgoirqemeq) Thanks for the tip!". Then I've endorsed the twt, but it could later get changed to
2024-09-14T22:00Z Useful backup command: rm -rf /some_important_directory
which also has an 11-character base32 hash of 5dgoirqemeq. (I'm using the existing hashing method with https://example.com/twtxt.txt as the feed url, but I'm taking 11 characters instead of 7 from the end of the base32 encoding.)
That's what I meant by "spoofing" in an earlier twt.
I don't know if preventing this sort of attack should be a goal, but if it is, the number of bits in the hash should be at least two times log2(number of attempts we want to defend against), where the "two times" is because of the birthday paradox.
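To put rough numbers on that (a sketch; A is the number of hash attempts an attacker can afford, and each base32 character carries 5 bits):

n \ge 2\log_2 A, \qquad A = 2^{32} \;\Rightarrow\; n \ge 64 \text{ bits} \approx 13 \text{ base32 characters}

By the same arithmetic, an 11-character (55-bit) hash should collide after roughly 2^27.5, i.e. on the order of 2 * 10^8 attempts, which is the same ballpark as the 43394987 attempts reported below (a somewhat lucky find).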
Side note: current hashes always end with "a" or "q", which is a bit wasteful. Maybe we should take the first N characters of the base32 encoding instead of the last N.
Code I used for the above example: https://fossil.falsifian.org/misc/file?name=src/twt_collision/find_collision.c
I only needed to compute 43394987 hashes to find it.
They're in Section 6:
Receiver should adopt UDP GRO. (Something about saving CPU processing UDP packets; I'm a bit fuzzy about it.) And they have suggestions for making GRO more useful for QUIC.
Some other receiver-side suggestions: "sending delayed QUIC ACKs"; "using recvmsg to read multiple UDP packets in a single system call".
Use multiple threads when receiving large files.
HTTPS is supposed to do [verification] anyway.
TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn't, for example, verify that a file downloaded from server A is from the same entity as the one from server B.
I was confused by this response for a while, but now I think I understand what you're getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can't verify a feed unless I download it myself from the origin server. Is that right?
I.e. if the HTTPS origin server is online and I don't mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.
feed locations [being] URLs gives some flexibility
It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, urn:uuid:*, or a regular old URL if you wanted to. The spec seems to indicate that the url tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I'm not very familiar with IP{F,N}S but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)
I'm also not very familiar with IPFS or IPNS.
I haven't been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible so it still works if I'm too lazy to change how I publish my feed :-)
Fun: Don't Forget to Accept New iCloud Terms & Conditions
Apple has bestowed upon us some wonderful weekend reading, in the form of all new iCloud Terms and Conditions, which are required to accept if you wish to continue to use iCloud on your Apple devices. It's iCloud, it's Terms, and it's Conditions... iCloud. Terms. Conditions... are you getting it yet? This is not three ... [Read More](https://osxdaily.com/2024/09/13/fun-dont-forget-to-accept-new-icloud-terms-conditions ... ⌘ Read more
@sorenpeter@darch.dk !! I freaking love your Timeline... I kind of have a justified PHP phobia, but I'm definitely thinking about giving it a try!
/ME wondering if it's possible to use it locally just to read and manage my feed at first and then maybe make it publicly accessible later.
20°C temperature drop in just a handful of days. Ooof. We went on a stroll at 10°C today. I could have used a beanie, my ears were very cold. The sun was out, but hardly any people. Very nice. Also, no wind.
It was nice to finally hear a few birds singing again, although it was still fairly silent. The sun gave us a nice show. In hindsight, we should have stayed at the summit a bit longer. In the forest, we missed the very best, crazy red sky. We could only see parts shimmering through the tree lines.
Amazon Takes Up to $119 Off iPad Mini and 10th Gen iPad With All-Time Low Prices
Amazon today has a few all-time low prices on the 10th generation iPad and 6th generation iPad mini. Both of these discounts represent all-time low prices on each tablet, and prices start at $299.00 for the 64GB Wi-Fi iPad, down from $349.00.
the right way to solve this is to use public/private key(s) where you actually have a public key fingerprint as your feed's unique identity that never changes.
I would rather it be a random value signed by a key. That way the key can change but the value stays the same.
Meta admits Australians cannot opt out of "predatory" AI data scrape
Senators are calling for stronger privacy laws to give Facebook users the ability to block the company from using their posts to train its AI models, as users can in the EU. ⌘ Read more
Kubestronaut in Orbit: Daiki Takasao
Get to know Daiki. This week's Kubestronaut in Orbit, Daiki Takasao, is a Japanese IT infrastructure engineer at NRI. He works with CNCF technologies to build financial IT systems and has been using Kubernetes, Linkerd, and Prometheus since... ⌘ Read more
So this is a great thread. I have been thinking about this too.. and what if we are coming at it from the wrong direction? Identity being tied to a given URL has always been a pain point. If I get a new URL it's almost as if I have a new identity, because not only am I serving at a new location but all my previous communications are broken because the hashes are all wrong.
What if instead we used this idea of signatures to thread the URLs together into one identity? We keep the URL-to-Hash in place. Changing that now is basically a no go. But we can create a signature chain that can link identities together. So if I move to a new URL, I update the chain hosted by my primary identity to include the new URL. If I have an archived feed whose old URL is now dead, we can point to where it is now hosted and use the current convention of hashing based on the first url.
The signature chain can also be used to rotate to new keys over time. Just sign in a new key or revoke an old one. The prior signatures remain valid within the scope of time the signatures were made and the keys were active.
The signature file can be hosted anywhere as long as it can be fetched by a reasonable protocol. So say we could use a webfinger that directs to the signature file? You have an identity like frank@beans.co that will discover a feed at some URL and a signature chain at another URL. Maybe even include the most recent signing key?
From there the client can auto discover old feeds to link them together into one complete timeline. And the signatures can validate that its all correct.
I like the idea of maybe putting the chain in the feed preamble and keeping the single self contained file.. but wonder if that would cause lots of clutter? The signature chain would be something like a log with what is changing (new key, revoke, add url) and a signature of the change + the previous signature.
# chain: ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: ADDURL https://txt.sour.is/user/xuu
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: REVKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: ...
As a Gen Z wanting to get off social media, I lived for a week using a "dumb phone"
As a 19-year-old, I'm sceptical about the government's proposed social media ban. But a more effective alternative is gaining traction among Gen Zers. ⌘ Read more
Deploy your first app on Kubernetes with GitOps
Member post originally published on the Taikun blog. Introduction: In the ever-evolving landscape of cloud-native technologies, managing deployments in Kubernetes clusters has become increasingly complex. Enter ArgoCD, a powerful tool that simplifies and automates the deployment process using... ⌘ Read more
@mckinley@twtxt.net To answer some of your questions:
Are SSH signatures standardized and are there robust software libraries that can handle them? We'll need a library in at least Python and Go to provide verified feed support with the currently used clients.
We already have this. Ed25519 libraries exist for all major languages. Aside from using ssh-keygen -Y sign and ssh-keygen -Y verify, you can also use the salty CLI itself (https://git.mills.io/prologic/salty), and I'm sure there are other command-line tools that could be used too.
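A sketch of what that looks like with stock OpenSSH (8.1+); the key path, namespace, and principal are placeholders:

# sign the feed; writes twtxt.txt.sig
ssh-keygen -Y sign -f ~/.ssh/id_ed25519 -n twtxt twtxt.txt

# verify, given an allowed_signers file with a line like:
#   nick@example.com ssh-ed25519 AAAA...
ssh-keygen -Y verify -f allowed_signers -I nick@example.com \
  -n twtxt -s twtxt.txt.sig < twtxt.txt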
If we all implemented this, every twt hash would suddenly change and every conversation thread we've ever had would at least lose its opening post.
Yes. This would happen, so we'd have to make a decision around this, either a) a cut-off point or b) some way to progressively transition.
url field in the feed to define the URL for hashing. It should have been the last encountered one. Then, assuming append-style feeds, you could override the old URL with a new one from a certain point on:
how little data is needed for generating the hashes? Instead of the full URL, can we make do with just the domain (example.net), so we avoid the conflicts with gemini://, https:// and only http:// (like in my own twtxt.txt)? Or construct something like a webfinger id nick@domain (also used by mastodon etc.) from the domain and nick if present, else use the domain as the nick as well.
@lyse@lyse.isobeef.org This looks like a nice way to do it.
Another thought: if clients can't agree on the url (for example, if we switch to this new way, but some old clients still do it the old way), that could be mitigated by computing many hashes for each twt: one for every url in the feed. So, if a feed has three URLs, every twt is associated with three hashes when it comes time to put threads together.
A client still needs to choose one url to use for the hash when composing a reply, but this might add some breathing room if there's a period when clients are doing different things.
(From what I understand of jenny, this would be difficult to implement there since each pseudo-email can only have one msgid to match to the in-reply-to headers. I don't know about other clients.)
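A sketch of the per-url hashing described above, reusing the recipe from earlier in this section (the urls, timestamp, and content are placeholders):

twt_hash() {
  printf '%s\n%s\n%s' "$1" "$2" "$3" | b2sum -l 256 | awk '{print $1}' \
    | xxd -r -p | base32 | tr -d '=\n' | tr 'A-Z' 'a-z' | tail -c 7
  echo
}
ts="2024-09-14T22:00:00Z"; content="Hello world"
for url in "https://old.example/twtxt.txt" "https://new.example/twtxt.txt"; do
  twt_hash "$url" "$ts" "$content"   # one candidate hash per advertised url
done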