@david@collantes.us Very nice! 👍
`--fetch-context`, which asks a Yarn pod for a twt, wouldn’t break, but jenny would no longer be able to verify that it actually got the correct twt. That’s a concrete example where we would lose functionality.
@movq@www.uninformativ.de Hmmm, not sure what I was thinking, sorry 🤦♂️ Been a long day 😂
@movq@www.uninformativ.de Am I missing something? 😅
@movq@www.uninformativ.de Precisely 👌
@movq@www.uninformativ.de Isn’t it? You read each Twt and compute its hash. It’s a simple O(1) lookup of the hash in that feed or your cache/archive, right?
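For illustration, here’s a rough Go sketch of that compute-then-lookup flow. It assumes the Twt Hash spec’s Blake2b + base32 construction; the exact separator, alphabet and truncation length used here are illustrative, so check the spec before relying on them:

```go
package main

import (
	"encoding/base32"
	"fmt"
	"strings"

	"golang.org/x/crypto/blake2b"
)

// twtHash sketches the Twt Hash construction: Blake2b over the feed URL,
// timestamp and content, base32-encoded, truncated to a short suffix.
// Illustrative, not normative.
func twtHash(feedURL, timestamp, content string) string {
	payload := strings.Join([]string{feedURL, timestamp, content}, "\n")
	sum := blake2b.Sum256([]byte(payload))
	enc := base32.StdEncoding.WithPadding(base32.NoPadding)
	h := strings.ToLower(enc.EncodeToString(sum[:]))
	return h[len(h)-7:]
}

func main() {
	h := twtHash("https://example.org/twtxt.txt", "2024-09-19T12:00:00Z", "Hello, twtxt!")
	// The O(1) lookup: a cache is just a map keyed by hash.
	cache := map[string]string{h: "Hello, twtxt!"}
	fmt.Println(h, cache[h])
}
```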
👋 Reminder that next Saturday 28th September will be our monthly online meetup! Hope to see some/all of you there 👌
I’ll try to reproduce locally later tonight
@lyse@lyse.isobeef.org I don’t think this is true.
@lyse@lyse.isobeef.org No, that’s never a problem, because we really only want to “navigate” the web anyway, not form threads of conversation 🤣
@movq@www.uninformativ.de This approach also wouldn’t work once that feed gets archived; you’ll be forced to crawl archived feeds at that point.
The important bits missing from this summary (the devil is in the details) are these requirements:
- Clients should order Twts by their timestamp.
- Clients must validate, for all `edit` and `delete` requests, that the indicated hash belongs to and came from that feed (see the sketch after this list).
- Clients should honour delete requests and delete Twts from their cache/archive.
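A minimal Go sketch of that validation rule, with hypothetical `Twt`/`Feed` types (no real client is structured exactly like this): a `delete` request is only honoured if the referenced hash can be found among the twts the same feed published.

```go
package main

import "fmt"

// Hypothetical minimal types; real clients have richer models.
type Twt struct{ Hash, Content string }
type Feed struct {
	URL  string
	Twts []Twt
}

// validateDelete honours a (delete: <hash>) request seen in a feed only if
// that hash belongs to a twt the same feed published.
func validateDelete(feed Feed, requestedHash string) bool {
	for _, twt := range feed.Twts {
		if twt.Hash == requestedHash {
			return true // the hash provably came from this feed
		}
	}
	return false // reject: someone else’s twt (or an unknown hash)
}

func main() {
	f := Feed{URL: "https://example.org/twtxt.txt", Twts: []Twt{{Hash: "5vbi2ea", Content: "hello"}}}
	fmt.Println(validateDelete(f, "5vbi2ea")) // true
	fmt.Println(validateDelete(f, "abcdefg")) // false
}
```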
@lyse@lyse.isobeef.org This is why hashes provide that level of integrity. The hash can be verified in the cache or archive as belonging to said feed.
@movq@www.uninformativ.de I think the order of the lines in a feed doesn’t matter as long as we can guarantee the order of Twts. Clients should already be ordering by timestamp anyway.
@movq@www.uninformativ.de Pretty much 👌
@lyse@lyse.isobeef.org Sorry, could you explain this differently?
Do you know what you clicked on before going back?
`yarnd` PR that upgrades the Bitcask dependency for its internal database to v2? 🙏
@eldersnake@we.loveprivacy.club Sweet thank you! 🙇♂️ I’ll merge this PR tonight I think.
@david@collantes.us I think we can!
`yarnd` PR that upgrades the Bitcask dependency for its internal database to v2? 🙏
e.g.: Shutdown `yarnd` and `cp -a yarn.db yarn.db.bak` before testing this PR/branch.
Can I get someone like maybe @xuu or @abucci@anthony.buc.ci or even @eldersnake@we.loveprivacy.club – If you have some spare time – to test this `yarnd` PR that upgrades the Bitcask dependency for its internal database to v2? 🙏
VERY IMPORTANT: If you do, please please please backup your `yarn.db` database first! 😅 Heaven knows I don’t want to be responsible for fucking up a production database here or there 🤣
`yarnd` that I think have always been there, but only recently uncovered by the Go 1.23 compiler.
nevermind; I think this might be some changes internally in Go 1.23 and a dependency I needed to update 🤞
Can someone much smarter than me help me figure out a couple of newly discovered deadlocks in `yarnd` that I think have always been there, but were only recently uncovered by the Go 1.23 compiler?
Location addressing is fine in smaller or single systems. But when you’re talking about large decentralised systems with no single point of control (kind of the point), things like independently verifiable integrity become quite important.
What is being proposed as a counter to content-addressing is called location-addressing. Two very different approaches, both with pros/cons of course. But a location cannot be verified; the content cannot be guaranteed to be authentic in any way, you just have to implicitly trust that the location points to the right thing.
For example, without content-addressing, you’d never have been able to find, let alone pull up, that ~3yr old Twt of mine (my very first); hell, I’d even thought I lost my first feed file or that it became corrupted or something 🤣 – If that were the case, it would actually be possible to reconstruct the feed and verify every single Twt against the caches of all of you 🤣
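To make that concrete, a small Go sketch of why a content address is verifiable from any cache, while a location address has to be taken on trust (Blake2b over just the content here is purely illustrative; the real Twt Hash also mixes in the feed URL and timestamp):

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/blake2b"
)

// contentAddress derives the address from the bytes themselves, so anyone
// holding a copy can re-derive and check it.
func contentAddress(content []byte) string {
	sum := blake2b.Sum256(content)
	return fmt.Sprintf("%x", sum)
}

func verify(addr string, content []byte) bool {
	return contentAddress(content) == addr
}

func main() {
	body := []byte("my very first twt")
	addr := contentAddress(body)
	fmt.Println(verify(addr, body))                       // true: the cached copy checks out
	fmt.Println(verify(addr, []byte("tampered content"))) // false: tampering is detectable

	// A location address like "https://example.org/twtxt.txt" + a timestamp
	// offers no such check: you must trust whatever the location serves.
}
```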
@david@collantes.us I really think articles like this explain the benefits far better than I can.
@david@collantes.us Oh ! 🤦♂️
@david@collantes.us Without including the content, it’s no longer really “content addressing” now, is it? You’re essentially only addressing, say, nick+timestamp or url+timestamp.
Speaking of AI tech (sorry!); Just came across this really cool tool built by some engineers at Google™ (currently completely free to use without any signup) called NotebookLM 👌 Looks really good for summarizing and talking to documents 📃
@eldersnake@we.loveprivacy.club Yeah, I’m looking forward to that myself 🤣 It’ll be great to see the technology grow to a level of maturity and efficiency where you can run the tools on your own PC or device and use them for what, so far, I’ve found them to be somewhat decent at: auto-complete, search and Q&A.
@sorenpeter@darch.dk I really don’t think we can ignore the last ~3 years and a bit of this threading model working quite well for us as a community across a very diverse set of clients and platforms. We cannot just drop something that “mostly works just fine” for the sake of “simplicity”. We have to weigh up all the options. There are very real benefits to using content addressing here that IMO shouldn’t be disregarded so lightly, as it provides a lot of implicit value that users of various clients just don’t get to see. I’d recommend reading up on the ideas behind content addressing before simply dismissing the Twt Hash spec entirely; it wasn’t even written or formalised by me, but I understand how it works quite well 😅 The guy that wrote the spec was (is?) way smarter than I was back then, probably still is now 🤣
@falsifian@www.falsifian.org Right I see. Yeah maybe we want to avoid that 🤣 I do kind of tend to agree with @xuu in another thread that there isn’t actually anything wrong with our use of Blake2 at all really, but we may want to consider all our options.
@xuu I don’t think this is a lextwt problem tbh, just the Markdown parser that `yarnd` currently uses. `twtxt2html` uses Goldmark and appears to behave better 🤣
@xuu Long while back, I experimented with using similarity algorithms to detect if two Twts were similar enough to be considered an “Edit”.
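Something like the following Go sketch, using a normalised Levenshtein distance with an arbitrary threshold (hypothetical names and threshold, not code from any actual client):

```go
package main

import "fmt"

// levenshtein computes the classic edit distance between a and b.
func levenshtein(a, b string) int {
	ra, rb := []rune(a), []rune(b)
	prev := make([]int, len(rb)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(ra); i++ {
		cur := make([]int, len(rb)+1)
		cur[0] = i
		for j := 1; j <= len(rb); j++ {
			cost := 1
			if ra[i-1] == rb[j-1] {
				cost = 0
			}
			cur[j] = min(cur[j-1]+1, prev[j]+1, prev[j-1]+cost) // Go 1.21+ builtin min
		}
		prev = cur
	}
	return prev[len(rb)]
}

// isLikelyEdit treats two twts as an edit pair when their normalised edit
// distance falls below a threshold; 0.3 is an arbitrary illustrative choice.
func isLikelyEdit(oldTwt, newTwt string) bool {
	longest := max(len([]rune(oldTwt)), len([]rune(newTwt)))
	if longest == 0 {
		return false
	}
	return float64(levenshtein(oldTwt, newTwt))/float64(longest) < 0.3
}

func main() {
	fmt.Println(isLikelyEdit("Hello wrld, this is my twt!", "Hello world, this is my twt!")) // true
	fmt.Println(isLikelyEdit("Completely different text", "Hello world"))                    // false
}
```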
Right I see what you mean @xuu – Can you maybe come up with a fully fleshed out proposal for this? 🤔 This will help solve the problem of hash collisions that result from the Twt/hash space growing larger over time, without us having to change anything about the way we construct hashes in the first place. We just assume spec-compliant clients will dynamically handle this as the space grows.
`abcdef0123456789...` – any substring of that hash after the first 6 characters will match, so `abcdef`, `abcdef012` and `abcdef0123456` all match the same twt. In the case of a collision, I think we decided on matching the newest, since we archive off older threads anyway. The third rule was about growing the minimum hash size after some threshold of collisions was detected.
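A rough Go sketch of that prefix-matching idea (hypothetical helper, not from yarnd): any query of at least the minimum length matches stored hashes by prefix, and more than one match signals a collision that would trigger growing the minimum.

```go
package main

import (
	"fmt"
	"strings"
)

// lookupByPrefix matches stored hashes by prefix. Queries shorter than the
// current minimum are rejected; multiple matches indicate a collision.
func lookupByPrefix(store []string, prefix string, minLen int) ([]string, error) {
	if len(prefix) < minLen {
		return nil, fmt.Errorf("prefix %q shorter than minimum %d", prefix, minLen)
	}
	var matches []string
	for _, h := range store {
		if strings.HasPrefix(h, prefix) {
			matches = append(matches, h)
		}
	}
	return matches, nil
}

func main() {
	store := []string{"abcdef0123456789", "abcdef9999999999", "1234567abcdef000"}
	m, _ := lookupByPrefix(store, "abcdef012", 6)
	fmt.Println(m) // exactly one match
	m, _ = lookupByPrefix(store, "abcdef", 6)
	fmt.Println(len(m) > 1) // collision at 6 chars -> grow the minimum
}
```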
@xuu I think we never progressed this idea further because we weren’t sure how to tell if a hash collision would occur in the first place right? In other words, how does Client A know to expand a hash vs. Client B in a 100% decentralised way? 🤔
Plus these so-called “LLM”(s) have a pretty good grasp of the “shape” of language, so they appear to be quite intelligent or produce intelligible responses (when they’re actually quite stupid really).
@eldersnake@we.loveprivacy.club You don’t get left behind at all 🤣 It’s hyped up so much, it’s not even funny anymore. Basically at this point (so far at least) I’ve concluded that all this GenAI / LLM stuff is just a fancy auto-complete and indexing + search reinvented 🤣
@bender@twtxt.net This is the different Markdown parsers being used. Goldmark vs. gomarkdown. We need to switch to Goldmark 😅
@quark@ferengi.one I’m guessing the quoted text should’ve been emphasized?
@slashdot@feeds.twtxt.net NahahahahHa 🤣 So glad I don’t use LinkedIn 🤦♂️
@falsifian@www.falsifian.org No you don’t, sorry. But I tend to agree with you, and I think if we continue to use hashes we should keep the remainder in mind as we choose truncation values of N.
@falsifian@www.falsifian.org Mostly because Git uses it 🤣 Known attacks that would affect our use? 🤔
@xuu I don’t recall where that discussion ended up being though?
@bender@twtxt.net wut da fuq?! 🤣
@xuu you mean my original idea of basically just automatically detecting Twt edits from the client side?
`(delete: 5vbi2ea)` … would it delete someone else’s twt?
@xuu This is where you would need to prove that the edit or delete request actually came from that feed author. Hence why integrity is much more important here.
@falsifian@www.falsifian.org Without supporting deletes properly though, you’re running into GDPR issues and the right to be forgotten. 🤣 We’ve had pretty lengthy discussions about this in past years as well, but we never came to a conclusion we’re all happy with.
@movq@www.uninformativ.de It would work, you are right; however, it has drawbacks, and I think in the long term it would create a new set of problems that we would also then have to solve.