Context for those who don’t know: Epic Games is the company behind the hugely popular video game Fortnite. As far as I know, the core game is still free-to-play and supported by microtransactions. It’s available on Windows, consoles, and mobile platforms. They sued Apple a few years ago because they felt the 30% cut Apple takes for in-app purchases was unreasonable and that they should be allowed to distribute their software independently of the App Store. It didn’t turn out so well for them. https://en.wikipedia.org/wiki/Epic_Games_v._Apple
@slashdot@feeds.twtxt.net They must have spent such an ungodly amount in legal fees by now that I wonder if they’ll come out of this in the green if they get to keep all the money from in-app purchases. Don’t get me wrong, I’m glad they’re doing it, but I think there’s a reason why Epic Games is the only one fighting for app store neutrality.
-P is a life saver when running rsync over spotty connections. In my very illiterate opinion, it should always be a default.
@lyse@lyse.isobeef.org If rsync is interrupted, it doesn’t delete any files that were transferred completely, so it will “resume” from that last complete transfer. However, it does delete any partially transferred file. --partial keeps that partial file around on the destination machine so it can continue right where it left off.
rsync(1) but, whenever I Tab for completion and get this:
I usually end up using -rtz because I’m usually not 100% sure all the permissions and ownership information are right and I hate littering directories with inconsistent permissions. For a big transfer, I’ll start with -rtvz --stats --dry-run and make sure it’s only transferring the files it should, then I’ll do -rtz --stats --info=progress2 --no-i-r to get one progress bar to watch for the whole transfer.
@slashdot@feeds.twtxt.net This is exciting news! Two of the most important privacy tools joining forces. Now, if we could get a Monero wallet included in Tails alongside Electrum, we’d really have something. :)
rsync(1) but, whenever I Tab for completion and get this:
@aelaraji@aelaraji.com Rsync has a ton of options and I probably still haven’t scratched the surface, but I was able to memorize the options I actually need for day-to-day work in a relatively short time. I guess I’m the opposite of you, because I don’t know any scp(1) options.
@prologic@twtxt.net You’ve done extremely well for ~$125/month, but that’s not figuring in labor. I’m sure you’ve put a lot of hours into maintenance in the last 10 years.
Can anyone recommend a decent Android ROM that strips out as much of the spyware as possible? Is GrapheneOS a good option? I need to get a new phone anyway so I don’t mind buying within a supported device list as long as I can get one on the used market for $300-$400 or less.
If anyone could recommend some learning resources for this stuff I’d really appreciate it.
@sorenpeter@darch.dk All valid points. Maybe the correct way to do it should be to start a new feed at the new URL rather than move the feed and break all the hashes.
switch a couple of twt timestamps
The hashes would change and your posts would become detached from their replies. Clients might still have the old one cached, so you might just create a duplicate without replies depending on an observer’s client.
add in 3 different twts manually with the same time stamp
The existing hash system should be able to keep them separate as long as the content is different. I’m not sure if there are additional implementation-related caveats there.
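A toy illustration of why identical timestamps alone don’t collide. This is a simplified stand-in, not the real twt hash algorithm (which has its own digest, encoding, and truncation rules), but it covers the same three inputs:

```python
import hashlib

def toy_twt_hash(feed_url: str, timestamp: str, content: str) -> str:
    # Simplified stand-in: hash the same three fields a twt hash
    # covers (feed URL, timestamp, content), keep a short prefix.
    data = f"{feed_url}\n{timestamp}\n{content}".encode("utf-8")
    return hashlib.sha256(data).hexdigest()[:12]

url = "https://example.com/twtxt.txt"
ts = "2024-09-08T12:00:00Z"

# Three twts with the exact same timestamp but different content
# still produce three distinct hashes.
a = toy_twt_hash(url, ts, "first twt")
b = toy_twt_hash(url, ts, "second twt")
c = toy_twt_hash(url, ts, "third twt")
print(len({a, b, c}))  # 3
```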
@prologic@twtxt.net @bender@twtxt.net As someone who likes cryptocurrencies for their utility as money instead of an investment, I’m glad to see the hype train start to move on to the next thing.
@falsifian@www.falsifian.org @prologic@twtxt.net @sorenpeter@darch.dk @lyse@lyse.isobeef.org I think, maybe, the way forward here is to combine an unchanging feed identifier (e.g. a public key fingerprint) with a longer hash to create a “twt hash v2” spec. v1 hashes can continue to be used for old conversations depending on client support.
@sorenpeter@darch.dk That could work. There are a few things that jump out at me.
- Nicknames on twtxt have historically been set on the client end. The nick metadata field is an optional add-on to the spec. I’m not sure it should be in the reply tag because it could differ between clients.
- URLs are safer to use, and we use them in the hash currently, but they can still change and we’re back to square 1. Feeds ought to have some kind of persistent identifier for this reason, which is why we’ve been discussing cryptographic keys and tag URIs in the first place.
- The current twt hash spec mandates collapsing the timestamp to seconds precision. If those rules are kept, two posts made within the same second cannot be told apart when someone replies.
@falsifian@www.falsifian.org TLS won’t help you if you change your domain name. How will people know if it’s really you? Maybe that’s not the biggest problem for something with such low stakes as twtxt, but it’s a reasonable concern that could be solved using signatures from an unchanging cryptographic key.
This idea is the basis of Nostr. Notes can be posted to many relays and every note is signed with your private key. It doesn’t matter where you get the note from, your client can verify its authenticity. That way, relays don’t need to be trusted.
@falsifian@www.falsifian.org I agree completely about backwards compatibility.
@falsifian@www.falsifian.org tag:twtxt.net,2024-09-08:SHA256:23OiSfuPC4zT0lVh1Y+XKh+KjP59brhZfxFHIYZkbZs ? :)
Key rotation
Key rotation is useful for security reasons, but I don’t think it’s necessary here because it’s only used for verifying one’s identity. It’s no different (to me) than Nostr or a cryptocurrency. You change your key, you change your identity.
It makes maintaining a feed more complicated.
This is an additional step that you’d have to perform, but I definitely wouldn’t want to require it for compatibility reasons. I don’t see it as any more complicated than computing twt hashes for each post, which already requires you to have a non-trivial client application.
Instead, maybe… allow old urls to be rotated out?
That could absolutely work and might be a better solution than signatures.
HTTPS is supposed to do [verification] anyway.
TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn’t, for example, verify that a file downloaded from server A is from the same entity as the one from server B.
feed locations [being] URLs gives some flexibility
It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, urn:uuid:*, or a regular old URL if you wanted to. The spec seems to indicate that the url tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I’m not very familiar with IP{F,N}S but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)
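For example, a urn:uuid identifier is trivial to mint and never needs to resolve anywhere (Python sketch; the tag URI shown is a hypothetical example, not from any real feed):

```python
import uuid

# A urn:uuid identifier: globally unique, location-independent,
# and never changes even if the feed moves hosts.
feed_id = uuid.uuid4().urn
print(feed_id)  # e.g. urn:uuid:1b4e28ba-2fa1-4e3b-883f-0016d3cca427

# A tag URI ties an identifier to a domain you controlled on a given
# date, without promising that any URL still works (hypothetical).
tag_uri = "tag:example.com,2024-09-08:my-feed"
```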
My first thought when reading this was to go to my typical response and suggest we use Nostr instead of introducing cryptography to Twtxt. The more I thought about it, however, the more it made sense.
- It solves the problem elegantly, because the feed can move anywhere and the twt hashes will remain the same.
- It provides proof that a post is made by the same entity as another post.
- It doesn’t break existing clients.
- Everyone already has SSH on their machine, so anyone creating feeds manually could adopt this easily.
There are a couple of elephants in the room that we ought to talk about.
- Are SSH signatures standardized and are there robust software libraries that can handle them? We’ll need a library in at least Python and Go to provide verified feed support with the currently used clients.
- If we all implemented this, every twt hash would suddenly change and every conversation thread we’ve ever had would at least lose its opening post.
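On the "everyone already has SSH" point: ssh-keygen can do detached signing and verification out of the box on reasonably recent OpenSSH (the -Y sign/verify operations). A sketch of what signing a feed by hand could look like (the key, namespace, and identity names are made up):

```shell
# Generate a signing key (or reuse an existing ed25519 key).
ssh-keygen -t ed25519 -f feedkey -N "" -q

# Sign the feed file; this produces twtxt.txt.sig alongside it.
ssh-keygen -Y sign -f feedkey -n twtxt twtxt.txt

# A follower verifies against a list of allowed signer keys
# (format: "principal key-type base64-key" per line).
printf 'alice@example.com %s\n' "$(cut -d' ' -f1,2 feedkey.pub)" > allowed_signers
ssh-keygen -Y verify -f allowed_signers -I alice@example.com \
    -n twtxt -s twtxt.txt.sig < twtxt.txt
```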
@prologic@twtxt.net It’s pretty hard, actually. There will either be more friction than people will accept (BitTorrent) or it won’t be decentralized in practice (LBRY/Odysee).
@bender@twtxt.net, do you depend on first-party Bluesky servers for the client application?
@movq@www.uninformativ.de I was never aware of this. I see the utility but I’m glad they got rid of it.
@quark@ferengi.one Looks neat. How does this compare to gocryptfs? Same basic concept with a different backing file format?
@slashdot@feeds.twtxt.net Never connect a TV to the Internet and then it will work for even longer than 7 years.
@bender@twtxt.net The whole album, it’s pretty good. It’s available on YouTube but it’s missing from all the music streaming services (Spotify, Tidal, Qobuz, Deezer, etc). I especially like Tenth Avenue Breakdown.
@lyse@lyse.isobeef.org We have some native blackberry species but around here (Northern California) we have Himalayan blackberry bushes which are very invasive. They match your description but I don’t know much about the different species. If left unchecked in an area with plenty of sun, they’ll smother all the lower plants and expand until they can’t anymore.
@movq@www.uninformativ.de Right. I wonder if Usenet would have faded away earlier if it wasn’t for file sharing. It’s only still in use for that because the annoying parts have been papered over with easy-to-use software and the protocol offers unique characteristics that make it almost perfect for that sort of thing.
✨ Follow button on their profile page or use the Follow form and enter a Twtxt URL. You may also find other feeds of interest via Feeds. Welcome! 🤗
@abucci@anthony.buc.ci What did he do?
@movq@www.uninformativ.de There’s a lot going on on Usenet, but it’s all in alt.binaries and co.
@lyse@lyse.isobeef.org Nice. There’s a park here in town with giant blackberry bushes everywhere. They’re my favorite invasive species.
@slashdot@feeds.twtxt.net This is an arms race the Brazilian government (or any government, for that matter) can’t win unless they effectively disconnect their entire country from the Internet.
@prologic@twtxt.net Off the top of my head, I don’t know the differences between 1.1 and 2 but I know HTTP/3 is the one that uses QUIC.
@off_grid_living@twtxt.net I use absolute paths for my links so I use a local Web server. I use darkhttpd, which is much simpler than Apache and has just enough features for me. I don’t think I’ve ever run into encoding issues because I make sure everything is UTF-8 like @lyse@lyse.isobeef.org.
@prologic@twtxt.net Do you really need FUSE for that? I think that could be done with a process watching a directory on a regular filesystem and deleting the oldest files as the combined size reaches that cap. I’m sure someone’s done that already.
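A minimal sketch of that idea, assuming a periodic sweep is acceptable instead of real filesystem watching (the function name and cap are made up):

```python
import os

def enforce_cap(directory: str, max_bytes: int) -> None:
    """Delete the oldest files until the directory fits under max_bytes."""
    entries = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            st = os.stat(path)
            entries.append((st.st_mtime, st.st_size, path))
    entries.sort()  # oldest first by modification time
    total = sum(size for _, size, _ in entries)
    for _, size, path in entries:
        if total <= max_bytes:
            break
        os.remove(path)
        total -= size
```

In practice you would run this from cron, a sleep loop, or hook it up to inotify for immediate reaction, and there is surely existing software that does the same thing.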
shellcheck being used here? It would have picked this (contrived) example up?
@bender@twtxt.net They must be statically compiling all those Haskell libraries on Ubuntu. This seems to be how it is with every Haskell package on Arch. Pandoc has 180 of its own un-shared dependencies on my system.
shellcheck being used here? It would have picked this (contrived) example up?
@bender@twtxt.net Shellcheck is great but I hope you don’t care about a low package count for screenshots like some people.
This one got me. I try to stick to POSIX sh so I’m not super familiar with the behavior of [[]]. I definitely should have gotten -eq, though.
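For anyone else sticking to POSIX sh: -eq compares integers while = compares strings, which is exactly the kind of distinction [[ ]] can blur. A small illustration:

```shell
n="05"
# Numeric comparison: leading zeros don't matter, "05" equals 5.
[ "$n" -eq 5 ] && echo "numerically equal"
# String comparison: "05" and "5" are different strings.
[ "$n" = "5" ] || echo "not the same string"
```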
@bender@twtxt.net If anything was going to be an NFT, a domain name would probably make the most sense, but I don’t think that system would be any better than the current one and it would make domain squatting even worse.
@falsifian@www.falsifian.org I do on my other feed, @mckinley@mckinley.cc, but it’s too hard to keep it under 140 characters when you’re using mentions.
@movq@www.uninformativ.de We’ve had .home.arpa for a while but it just doesn’t feel natural to type. I’ve been using .internal.
Side note: I didn’t realize the .box TLD was finally live. Looks like domains are super expensive and also NFTs for some reason. Shame. https://my.box/
@slashdot@feeds.twtxt.net I’m surprised this took so long to become standardized.
@prologic@twtxt.net No cloud at all. Healthchecks, which does have a hosted offering, is definitely designed for more serious organizations than “McKinley Labs”. It has separate users, permissions, all kinds of crazy features I don’t need at all. I definitely wouldn’t be using it if there wasn’t a linuxserver.io image and I’d like to use something simpler, but I don’t know of anything else that’s completely self hosted.
@bender@twtxt.net The status of the disks and the backup jobs from Scrutiny and Healthchecks respectively. Green means everything is fine, red or orange means it needs my attention.
I recently installed Scrutiny for disk health monitoring and Healthchecks for cron job monitoring. They both have nice Web UIs and alert functionality, but I hacked together a little status report that runs whenever I log into my server using their APIs.
@bender@twtxt.net That’s great, actually, but it’s a shame you have to opt in to it.
@prologic@twtxt.net Ah yes, the other Go reverse proxy. Caddy seems simpler to me, more like Nginx with better defaults and a built-in ACME client. Traefik seems to have way more bells and whistles for all kinds of crazy setups when I only need to map domain names to containername:port pairs.
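For that simple use case, the whole Caddy config can be a few lines per service. A sketch of a Caddyfile mapping domain names to containername:port pairs (the domains and containers here are made up):

```caddyfile
app.example.com {
    reverse_proxy appcontainer:8080
}

git.example.com {
    reverse_proxy gitea:3000
}
```

With site blocks like these, Caddy obtains and renews certificates automatically; no separate ACME client configuration is needed.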
All the “magic” might be nice in the short term, but as it becomes the default it can paper over some really questionable decisions when it’s too late to change them. This can be applied to a number of things in computing, but the best example I can think of is networking. (Side note: That’s one of my favorite blog posts ever.)
Things start out simple and get more complicated until someone figures out how to cover up the mess. Then, since nobody wants to get in there and fix it properly and everyone else has already moved on, we just ignore what’s behind the curtain and hope it all keeps working.
Definitely something going on here. Cloudflare is my main suspect.
@prologic@twtxt.net I thought you were one of the people telling me how great it was. It is a Go project, after all. What do you usually use? I always find myself spending a lot of time making Nginx do what I want and I don’t think I’ve ever had automatic certificate renewal work the first time.
Caddy just works. I have some self-hosted Web services with easy-to-remember subdomains that only exist on my Wireguard network with a valid Let’s Encrypt (wildcard) certificate so browsers don’t complain. It should be automatically renewed without my input, but we’ll see what happens. It took shockingly little effort, even considering I need to customize the Docker image and create API keys so it can solve a DNS challenge using my provider.
I’m still not thrilled about using software that does magic for you (like Docker and Caddy) but it sure makes things easy.
@bender@twtxt.net What are you doing with it?
The end-to-end encryption means very little if you have your messages backed up in iCloud because the encryption keys are also stored with the messages in iCloud according to this FBI document. If that’s the case, Apple can definitely read your messages as well as (obviously) any government agency who can make a legal request to Apple.
@movq@www.uninformativ.de Group chat is still pretty rough around the edges, especially if you want encryption. I don’t use it with my friends. If you need group chat, it’s probably better to use something else.