Woot! I got wolfssl and the wolfssl command-line tool compiled successfully and installed on µLinux 💪 Now I can do all sorts of crypto stuff, generate TLS keys, etc., all from a tiny ~20MB Linux distro 🥳
Neat! 👌 I have been hacking on sshbox and refactored it a fair bit to have much more flexible auth methods. I also toyed around with the idea of having a shared (free) Unix environment like some of the ones around (whose names elude me right now 🤦♂️), with a couple of key differences and differentiators (rough sketch after the list):
- The environment you get is actually a Docker container
- Based on Alpine, but customized.
- Can persist/detach any background processes you want.
- And a reverse proxy to port 8000 in the container.
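Very roughly, and only as a sketch (the container, volume and user names here are all made up), each environment could boil down to a long-lived Alpine container with a persistent home volume and port 8000 published for the proxy to pick up:

$ docker run -d --name sandbox-alice --hostname sandbox \
    -v sandbox-alice-home:/home/alice \
    -p 127.0.0.1::8000 \
    alpine:latest tail -f /dev/null

Then something like Caddy or nginx sits in front, reverse proxying each user’s (sub)domain to whatever host port Docker assigned for the container’s port 8000, with the named volume giving persistence across restarts.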
There appears to be no standard for boot loaders on ARM 🤔 This makes things a bit hard™
Over the holiday break I was looking at one of my old projects, µLinux. Turns out I did a fine job really and have decided to revive the project 🥳 – Just getting the build/tests working on my Mac Studio (Apple Silicon). Check it out! 👌
Was just catching up on all the LinkedIn garbage that is, well, umm, garbage 🗑️ One was from a candidate I interviewed, so I had to reply to that 😅 – Anyway… saw this random post in my “notifications”:
How do land that job with a Unicorn
First off, you’ll have to define what da fuq a “Unicorn” is! 🤣 My understanding is that a Unicorn is a mythical creature with a horn on its head and wings 🪽 🤦♂️
I’m considering becoming a gold or platinum sponsor of the Ladybird project
The beach itself is very nicely maintained on a daily basis. However, unfortunately the sea is full of plastic and rubbish 😢
followers field is deprecated ==> https://git.mills.io/yarnsocial/twtxt.dev/pulls/6
Please sign this and share 🙏 https://www.change.org/p/oppose-australia-s-proposed-social-media-ban-for-under-16s
Test
This pod is now using the index for archived twts instead of the old (naive) disk-based index that results in millions of files over a long time 🤣
The Australian Labor government, Albanese, and the honourable Michelle Rowland, federal member of parliament and communications minister, are fucking clowns. It’s stupid shit like this that’s the real problem with “big tech” social media platforms. These morons just simply don’t understand basic economics and basic business.
Why would companies like Meta, X and TikTok give up a large multi-billion dollar segment of the market? That is, young children from the ages of ~3 to 16 (yes, kids these days can use a computer or device from a pretty young age!)
The whole masquerade of “online safety” and the new Australian legislation, the Online Safety Act 2021, is complete and utter bullshit.
You wanna fix this whole cybercrime and cyberbullying thing that goes on (which, btw, if you understood how these fucking platforms worked in the first place, you’d realise drives up engagement on the platforms by abusing human emotional and psychological weaknesses)? Then ban and make illegal, with multi-billion dollar fines, the following:
- Profiting off data collected from users on your platform(s)
- Categorizing users on your platform and performing A/B tests
- Targeting users (of any age) for advertising
In fact just ban targeted advertising period.
😜
@eapl.me@eapl.me Also welcome back 🤗
Neycer Robalino vs Hayden Green – Brisbane Flexi Season (Week 3) Div 1 Final - YouTube This is Neycer, one of our coaches at the table-tennis club 🏓 that I play at, vs. Hayden, a top-rated QLD player (well, not anymore 🤣). What a match! 😱 Go #Brisbane #Table-Tennis #BTTA
The call is on! Come join us!
The real crux of the matter is this whole moving of feeds around to different URIs. This makes things hard. I think it’s worth revisiting @anth@a.9srv.net’s UUID idea for its merits.
@Codebuzz@www.codebuzz.nl Welcome to Twtxt 🤗
Offen Fair Web Analytics This looks pretty good, might give it a try. Been using GoatCounter, but it’s pretty bland in that it doesn’t really tell me much 😅
👋 Reminder folks of the upcoming Yarn.social monthly online meetup:
- Event: Yarn.social Online Meetup
- When: 26th October 2024 at 12:00PM UTC (midday)
- Where: Mills Meet : Yarn.social
@asquare@asquare.srht.site By the way… It might be nice to set yourself up with an Avatar 👌
Let’s talk about #foo 🤣
Summary of Discussions (as best I can):
- @lyse@lyse.isobeef.org and @sorenpeter@darch.dk value simplicity; both Lyse and Sorenpeter support location-based addressing.
- @falsifian@www.falsifian.org believes we should continue to develop ideas and extensions progressively over time like we’ve always done.
- @david@collantes.us, @quark@ferengi.one and @bender@twtxt.net would like a better user experience, especially when threads break due to edits, deletions or feed location changes.
- @anth@a.9srv.net would like to see UTF-8 mandated, and the threading model remain largely the same as it is today, which is primarily based on the convention of a Twt Subject anyway; Twt Hash(es) just make the threading “more precise”. Anth also states that format, client and server specifications/recommendations should be kept separate.
- @movq@www.uninformativ.de @xuu – sorry, you two haven’t said too much really, so I’m not too sure?
Overall, the 22 votes we’ve had on the poll from the community (if you can call it a community?) have clearly shown that:
- We continue to support content-based addressing. (65/35)
- We think about formally supporting edits/deletes (60/40)
- We do not increase the use of cryptography (throwing things like authenticity and identity out the window) (70/30)
And overall the NPS (net promoter score) of “Would I recommend Twtxt to a friend” is a whopping 7/10 (which is crazy! 🤯)
Let’s have our monthly catch up soon™ (1hr) and discuss together. My own take on the direction we should take at this point is as follows:
- We continue to use hashing for the threading model.
- We think about changing this to SHA-256 for simplicity.
- We either adopt @anth@a.9srv.net’s UUID approach or @lyse@lyse.isobeef.org’s Dynamic URL approach.
- We continue to incrementally/progressively improve things over time as @falsifian@www.falsifian.org suggested.
- We think about mandating UTF-8, as @anth@a.9srv.net suggests, which makes things so much easier for everyone.
- We further discuss the merits/ideas of supporting formal Edit/Delete requests or other ways to better support this in some way.
@doesnm@doesnm.p.psf.lt The useragent tool now natively supports the Caddy (JSON) logfile format. 🥳
This Facebook/Meta story on storing passwords in plain text is just wow 😮 – Like how da fuq does a company, or anyone for that matter in the business of software/technology, even do this?! Like at least base64 encode the fuckers right?! (oh wait 🤦♂️)
Last chance to have your say before tomorrow’s meetup:
Last day to have your say before our monthly online meetup 👋
Hmm, this question has “Yes” in the lead so far, with 13 votes:
Should we formally support edit and deletion requests?
Thanks y’all for voting (it’s all anonymous so I have no idea who’s voted for what!)
If you haven’t already had your say, please do so here: http://polljunkie.com/poll/xdgjib/twtxt-v2 – This is my feeble attempt at trying to ascertain the voice of the greater community with ideas of a Twtxt v2 specification (which I’m hoping will just be an improved specification of what we largely have already built to date with some small but important improvements 🤞)
Don’t forget about the Yarn.social online meetup coming up this Saturday! 😅 See #jjbnvgq for details! – Hope to see y’all there 💪
👋 Don’t forget to take the Twtxt v2 poll 🙏 if you haven’t done so already (sorry about the confusing question at the end!)
A new thing LLM(s) can’t do well: write patches 🤣
Don’t forget about the Yarn.social meetup coming up this Saturday! See #jjbnvgq for details! Hope to see some/all of y’all there 💪
Just out of curiosity, I inspected the yarns database (the search engine/crawler) to find the average length of a Twtxt URI:
$ inspect-db yarns.db | jq -r '.Value.URL' | awk '{ total += length; count++ } END { if (count > 0) print total / count }'
40.3387
Given that an RFC3339 UTC timestamp with seconds precision has a length of 20 characters, we’re talking about a Twt Subject taking up ~63 characters/bytes on average.
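Rough back-of-the-envelope maths, assuming a location-based Subject of roughly the form (<url> <timestamp>) – i.e. the average URL, a space, the timestamp and the two parentheses:

$ echo $(( 40 + 1 + 20 + 2 ))
63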
Let’s try this poll for Twtxt v2 (no account required)
My Position on the last few weeks of Twtxt spec discussions:
- We increase the Hash length from 7 to 11.
- We formalise the Update Commands extension.
- We amend the Twt Hash and Metadata extensions to state: Feed authors that wish to change the location of their feed (once Twts have been published) must append a new # url = comment to their feed to indicate the new location, and thus change the “Hashing URI” used for Twts from that point onward (see the hypothetical example after this list).
This has implications for the “order” of a feed, and we should do one of two things, either:
- Mandate that feeds are append-only.
- Or amend the Metadata spec with a new field that denotes the order of the feed so clients can make sense of “inline” comments in the feed. – This would also imply that the default order is (of course) append-only. Suggestion:
# direction = [append|prepend]
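For example, a purely hypothetical (append-only) feed that moved hosts would end up looking something like this, with Twts before the second url comment hashed against the old URL and everything after it hashed against the new one:

# url = https://old.example.com/twtxt.txt
2024-09-01T12:00:00Z	First twt, hashed with the old feed URL
# url = https://new.example.com/twtxt.txt
2024-10-01T12:00:00Z	Later twt, hashed with the new feed URL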
With a SHA-1 encoding, the probability of a hash collision at various k (number of twts) becomes:
>>> import math
>>>
>>> def collision_probability(k, bits):
... n = 2 ** bits # Total unique hash values based on the number of bits
... probability = 1 - math.exp(- (k ** 2) / (2 * n))
... return probability * 100 # Return as percentage
...
>>> # Example usage:
>>> k_values = [100000, 1000000, 10000000]
>>> bits = 44 # Number of bits for the hash
>>>
>>> for k in k_values:
... print(f"Probability of collision for {k} hashes with {bits} bits: {collision_probability(k, bits):.4f}%")
...
Probability of collision for 100000 hashes with 44 bits: 0.0284%
Probability of collision for 1000000 hashes with 44 bits: 2.8022%
Probability of collision for 10000000 hashes with 44 bits: 94.1701%
>>> bits = 48
>>> for k in k_values:
... print(f"Probability of collision for {k} hashes with {bits} bits: {collision_probability(k, bits):.4f}%")
...
Probability of collision for 100000 hashes with 48 bits: 0.0018%
Probability of collision for 1000000 hashes with 48 bits: 0.1775%
Probability of collision for 10000000 hashes with 48 bits: 16.2753%
>>> bits = 52
>>> for k in k_values:
... print(f"Probability of collision for {k} hashes with {bits} bits: {collision_probability(k, bits):.4f}%")
...
Probability of collision for 100000 hashes with 52 bits: 0.0001%
Probability of collision for 1000000 hashes with 52 bits: 0.0111%
Probability of collision for 10000000 hashes with 52 bits: 1.1041%
>>>
If we adopted this scheme, we would have to increase the no. of characters (first N) from 11 to 12 and finally 13 as the total number of Twts across the space grows. I think the last full crawl/scrape found around ~500k (maybe)? https://search.twtxt.net/ says only ~99k
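For reference, assuming ~4 bits per hash character (which is what the bit counts above correspond to), those lengths map to:

$ for n in 11 12 13; do echo "$n chars ~ $(( n * 4 )) bits"; done
11 chars ~ 44 bits
12 chars ~ 48 bits
13 chars ~ 52 bits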
I’ve been using Codeium too the last week or so! It’s pretty good and, like @xuu said, is a pretty decent Junior assistant; it helps me write good docs and the tab completion is amazing!
It of course completely sucks at doing anything “intelligent” or complex, but if you just use it as a fancier auto-complete it’s actually halfway decent 👌
One thing that’s on my mind over the last few days about all this Twt editing and identity stuff we’ve been having hot debates over is this…
I don’t really have a problem with editing twts, or someone changing their feed’s URL.
Personally I think the folks that do are rightfully pedantic and like a good user experience, and I don’t blame ’em; I would expect the same too. Anyway, just wanted to get that out there. I believe we can support editing and identity in a way that is still simple, as long as we bring clients along for the ride with us. The old/legacy original client though will have to remain, well, ya know 😅
@cuaxolotl@sunshinegardens.org Did you recently change the url metadata key of your feed?
# url = https://sunshinegardens.org/~xj9/twtxt/tw.txt
Was this at one point # url = https://sunshinegardens.org/users/xj9/twtxt/tw.txt ?
Introduction to JuiceFS | JuiceFS Document Center – Thinking about using JuiceFS to solve a long-running problem I’ve always had:
- Be able to run services on any node in my cluster and let Docker Swarm pick whatever node it likes, instead of now where I have to pin some workloads to specific nodes because that’s where their local storage volume is (see the sketch after this list)
- Manage the scalability of data and its growth over time, instead of what I do now, which is to extend the EXT4 filesystems on my Docker Swarm nodes every few years.
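To illustrate the first point, this is roughly what the pinning looks like today (the service, node and volume paths are made up); with a shared filesystem like JuiceFS backing the data, the --constraint line simply goes away and Swarm can schedule the service on any node:

$ docker service create --name myapp \
    --constraint 'node.hostname == swarm-node-3' \
    --mount type=bind,src=/var/data/myapp,dst=/data \
    myapp:latest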
Time for work™, but I quickly hacked together a bit of a better solution here. Rolling it out to my pod so we’ll see how it actually goes. Still possible to abuse if you’re a logged-in user, etc, but at least now we delete the invalid/bad feed afterwards if it a) was not even a text/plain content-type or b) it errored out and was a new fetch of an HTTP feed.
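As a rough illustration of case a), the check boils down to looking at what the server returns for the feed (the URL here is made up); anything other than text/plain coming back means the feed is considered invalid:

$ curl -sI https://example.com/twtxt.txt | grep -i '^content-type:'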
I just realized, this is the last Saturday of the month. So Yarn.social meetup is up again tomorrow. Same time as last time if anyone is interested/around to join and hang out!
Dear OnlyDomains, part of Team Internet: do you think you could stop being so incompetent when it comes to Domains, DNS and basic HTTP? I reported this to you on Friday, and your Support is still arguing with me over the legitimacy of the claims? Seriously?! 😧
$ dig @1.1.1.1 +short onlydomains.com.au a
198.50.252.65
$ nc -vvv 198.50.252.65 443
nc: connectx to 198.50.252.65 port 443 (tcp) failed: Connection refused
@prologic@twtxt.net works
Hmmm 🧐
for url in $(jq -r '.Twters[].avatar' cache.json | sed '/^$/d' | grep -v -E '(twtxt.net|anthony.buc.ci|yarn.stigatle.no|yarn.mills.io)' | sort -u); do echo "$url $(curl -I -s -o /dev/null -w '%header{content-length}' "$url")"; done
...
😅 Let’s see… 🤔