Searching txt.sour.is

Twts matching #Ideas

Yes it works: 2024-12-01T19:38:35Z twtxt/1.2.3 (+https://eapl.mx/twtxt.txt; @eapl) :D

The .log is just a simple append of each request. The idea with the .csv is to have it tally up how many requests there have been from each client, as a way to avoid having the log file grow too big. And you can open the .csv as a spreadsheet and have an easy overview and filtering options.

Access to those files is closed to the public.
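
For illustration, here is a minimal sketch in Python of that tally step; the file names and the tab-separated log format are assumptions, not how Timeline actually stores its log:

import csv
from collections import Counter

LOG_FILE = "twtxt-access.log"   # hypothetical file names
CSV_FILE = "twtxt-clients.csv"

counts = Counter()
with open(LOG_FILE, encoding="utf-8") as log:
    for line in log:
        # assume "<timestamp>\t<user-agent>" per line
        parts = line.rstrip("\n").split("\t", 1)
        if len(parts) == 2:
            counts[parts[1]] += 1

with open(CSV_FILE, "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["user_agent", "requests"])
    for agent, total in counts.most_common():
        writer.writerow([agent, total])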

⤋ Read More

@eapl.mx@eapl.mx Yes, the idea is to add User Agent support to #Timeline.
Right now it just adds every request to a growing log file, but I have also been working on a way to analyse it, so it only saves the time of the latest request.
I’m not sure how to make it part of Timeline itself, since it requires that you redirect/rewrite from twtAgent.php to the actual twtxt.txt
Help with making Timeline send proper User Agents to others would be much appreciated :)
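
Not the real twtAgent.php, but a rough Python stand-in for the wrapper idea (the file names are assumptions): each request for the feed gets its User-Agent appended to a log before the twtxt.txt is served:

from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime, timezone

FEED_PATH = "twtxt.txt"          # assumed location of the feed
LOG_PATH = "twtxt-access.log"    # hypothetical log file

class FeedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # record who fetched the feed, then serve the feed itself
        agent = self.headers.get("User-Agent", "-")
        now = datetime.now(timezone.utc).isoformat()
        with open(LOG_PATH, "a", encoding="utf-8") as log:
            log.write(f"{now}\t{agent}\n")
        with open(FEED_PATH, "rb") as feed:
            body = feed.read()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), FeedHandler).serve_forever()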

⤋ Read More

This morning (and a little bit of the afternoon) the idea of having a fully referenced archive of twtxts on the web has consumed me a bit. I am talking about something similar to the email archives one sees online, but for twtxts, and at a more personal level. Such an archive would remain available even if the involved feeds are long gone, because feeds would be treated like received emails.

⤋ Read More

@eapl.me@eapl.me here are my replies (somewhat similar to Lyse’s and James’)

  1. Metadata in twts: Key=value is too complicated for non-hackers and hard to write by hand. So if there is a need, then we should just use #NSFW or the alt-text field in markdown image syntax ![NSFW](url.to/image.jpg) if something is NSFW

  2. IDs besides datetime: When you edit a twt, you should preserve the datetime if location-based addressing is to have any advantage over content-based addressing. If you change the timestamp, then it’s a new post. Just like any other blog CMS.

  3. Caching: Yes, all good ideas, but that is more a task for the clients, not for the serving of the twtxt.txt files.

  4. Discovery: User-Agent-based discovery can become better. I’m working on a wrapper script in PHP, so you don’t need to go into Apache’s log files to see who fetches your feed. But for Gemini and Gopher you need to rely on something else. That could be using my webmentions-for-twtxt suggestion, or simply defining an email metadata field for letting a person know you follow their feed. There is an interesting read about why WebMentions might be a bad idea. Twtxt being much simpler than a full-featured IndieWeb site, a lot of those concerns do not apply here. But that’s the issue with any open inbox. This is hard to solve without some form of (centralized or community) spam moderation.

  5. Support more protocols besides http/s: Yes, why not, if we can make clients that merge or differentiate between the same feed served at multiple URLs.

  6. Languages: If the need is big, then make a separate feed. I don’t mind seeing stuff in other languages as long as the volume is low. You have translation tools if you need to know what’s going on. And again, when there is a need for easier switching between posting to several feeds, it’s about building clients with a UI that makes it easy. Not something that should take up space in the format/protocol.

  7. Emojis: I’m not sure what this is about. Do you want to use emojis as avatars in CLI clients, or is it just about rendering emojis?

⤋ Read More

@Codebuzz@www.codebuzz.nl I have separate mailboxes for private and work, but flattened both to have a simpler structure. For work, where we use Outlook, I am using categories for organising the mail, and privately I am using Vivaldi’s label system. The main idea is to use search and grouping through dynamic saved searches instead of static folders.

⤋ Read More

Three days from today, towards the end of the day, we in the US will have an idea of who the nation’s presiding person will be for the next four years. In the 32 years I have lived here, I have never been more worried about an election outcome.

⤋ Read More
In-reply-to » More thoughts about changes to twtxt (as if we haven't had enough thoughts):

@prologic@twtxt.net

See https://dev.twtxt.net

Yes, that is exactly what I meant. I like that collection and “twtxt v2” feels like a departure.

Maybe there’s an advantage to grouping it into one spec, but IMO that shouldn’t be done at the same time as introducing new untested ideas.

See https://yarn.social (especially this section: https://yarn.social/#self-host) – It really doesn’t get much simpler than this 🤣

Again, I like this existing simplicity. (I would even argue you don’t need the metadata.)

That page says “For the best experience your client should also support some of the Twtxt Extensions…” but it is clear you don’t need to. I would like it to stay that way, and publishing a big long spec and calling it “twtxt v2” feels like a departure from that. (I think the content of the document is valuable; I’m just carping about how it’s being presented.)

⤋ Read More

Recent #fiction #scifi #reading:

  • The Memory Police by Yōko Ogawa. Lovely writing. Very understated; reminded me of Kazuo Ishiguro. Sort of like Nineteen Eighty-Four but not. (I first heard it recommended in comparison to that work.)

  • Subcutanean by Aaron Reed; https://subcutanean.textories.com/ . Every copy of the book is different, which is a cool idea. I read two of them (one from the library, actually not different from the other printed copies, and one personalized e-book). I don’t read much horror so managed to be a little creeped out by it, which was fun.

  • The Wind from Nowhere, a 1962 novel by J. G. Ballard. A random pick from the sci-fi section; I think I picked it up because it made me imagine some weird 4-dimensional effect (“from nowhere” meaning not in a normal direction) but actually (spoiler) it was just about a lot of wind for no reason. The book was moderately entertaining but there was nothing special about it.

Currently reading Scale by Greg Egan and Inversion by Aric McBay.

⤋ Read More

More thoughts about changes to twtxt (as if we haven’t had enough thoughts):

  1. There are lots of great ideas here! Is there a benefit to putting them all into one document? Seems to me this could more easily be a bunch of separate efforts that can progress at their own pace:

1a. Better and longer hashes.

1b. New possibly-controversial ideas like edit: and delete: and location-based references as an alternative to hashes.

1c. Best practices, e.g. Content-Type: text/plain; charset=utf-8

1d. Stuff already described at dev.twtxt.net that doesn’t need any changes.

  2. We won’t know what will and won’t work until we try them. So I’m inclined to think of this as a bunch of draft ideas. Maybe later when we’ve seen it play out it could make sense to define a group of recommended twtxt extensions and give them a name.

  3. Another reason for 1 (above) is: I like the current situation where all you need to get started is these two short and simple documents:
    https://twtxt.readthedocs.io/en/latest/user/twtxtfile.html
    https://twtxt.readthedocs.io/en/latest/user/discoverability.html
    and everything else is an extension for anyone interested. (Deprecating non-UTC times seems reasonable to me, though.) Having a big long “twtxt v2” document seems less inviting to people looking for something simple. (@prologic@twtxt.net you mentioned an anonymous comment “you’ve ruined twtxt” and while I don’t completely agree with that commenter’s sentiment, I would feel like twtxt had lost something if it moved away from having a super-simple core.)

  4. All that being said, these are just my opinions, and I’m not doing the work of writing software or drafting proposals. Maybe I will at some point, but until then, if you’re actually implementing things, you’re in charge of what you decide to make, and I’m grateful for the work.

⤋ Read More
In-reply-to » (#5vbi2ea) @prologic I wouldn't want my client to honour delete requests. I like my computer's memory to be better than mine, not worse, so it would bug me if I remember seeing something and my computer can't find it.

@prologic@twtxt.net Do you have a link to some past discussion?

Would the GDPR apply to a one-person client like jenny? I seriously hope not. If someone asks me to delete an email they sent me, I don’t think I have to honour that request, no matter how European they are.

I am really bothered by the idea that someone could force me to delete my private, personal record of my interactions with them. Would I have to delete my journal entries about them too if they asked?

Maybe a public-facing client like yarnd needs to consider this, but that also bothers me. I was actually thinking about making an Internet Archive style twtxt archiver, letting you explore past twts, including long-dead feeds, see edit histories, deleted twts, etc.

⤋ Read More

@prologic@twtxt.net The basic idea was to stem the hash: you have a hash abcdef0123456789... and any prefix of that hash of at least the first 6 characters will match, so abcdef, abcdef012 and abcdef0123456 all match the same twt. In the case of a collision, I think we decided on matching the newest, since we archive off older threads anyway. The third rule was about growing the minimum hash size after some threshold of collisions is detected.
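
A small Python sketch of those first two rules (the threshold rule for growing the minimum length is left out):

MIN_LEN = 6

def stem_match(reference: str, full_hash: str) -> bool:
    # a reference matches when it is a prefix of the stored hash
    # and at least MIN_LEN characters long
    return len(reference) >= MIN_LEN and full_hash.startswith(reference)

def resolve(reference: str, twts_by_hash: dict) -> dict | None:
    """twts_by_hash maps full hashes to twts carrying a 'created' timestamp."""
    candidates = [t for h, t in twts_by_hash.items() if stem_match(reference, h)]
    # in the case of a collision, prefer the newest twt
    return max(candidates, key=lambda t: t["created"], default=None)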

⤋ Read More
In-reply-to » (replyto http://darch.dk/twtxt.txt 2024-09-15T12:50:17Z) @sorenpeter I like this idea. Just for fun, I'm using a variant in this twt. (Also because I'm curious how non-hash subjects appear in jenny and yarn.)

@movq@www.uninformativ.de Agreed that hashes have a benefit. I came up with a similar example when I twted about an 11-character hash collision. Perhaps hashes could be made optional somehow. Like, you could use the “replyto” idea and then additionally put a hash somewhere if you want to lock in which version of the twt you are replying to.

⤋ Read More

@quark@ferengi.one I don’t really mind if the twt gets edited before I even fetch it. I think it’s the idea of my computer discarding old versions it’s fetched, especially if it’s shown them to me, that bugs me.

But I do like @movq@www.uninformativ.de’s suggestion on this thread that feeds could contain both the original and the edited twt. I guess it would be up to the author.

⤋ Read More

@sorenpeter@darch.dk I like this idea. Just for fun, I’m using a variant in this twt. (Also because I’m curious how non-hash subjects appear in jenny and yarn.)

URLs can contain commas, so I suggest a different character to separate the URL from the date. In this twt I’ve used a space (also after “replyto”, for symmetry).

I think this solves:

  • Changing feed identities: although @mckinley@twtxt.net points out URLs can change, I think this syntax should be okay as long as the feed at that URL can be fetched, and as long as the current canonical URL for the feed lists this one as an alternate.
  • editing, if you don’t care about message integrity
  • finding the root of a thread, if you’re not following the author

An optional hash could be added if message integrity is desired. (E.g. if you don’t trust the feed author not to make a misleading edit.) Other recent suggestions about how to deal with edits and hashes might be applicable then.

People publishing multiple twts per second should include sub-second precision in their timestamps. As you suggested, the timestamp could just be copied verbatim.
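
For anyone wanting to experiment along, here is a rough Python parser for the subject format used in this twt; the exact regex is just an illustration, not a spec:

import re
from typing import NamedTuple, Optional

class ReplyTo(NamedTuple):
    feed_url: str
    timestamp: str

# "(replyto <feed-url> <timestamp>)" with a space as separator,
# precisely because URLs may contain commas
SUBJECT_RE = re.compile(r"^\(replyto\s+(\S+)\s+(\S+)\)")

def parse_replyto(twt_text: str) -> Optional[ReplyTo]:
    m = SUBJECT_RE.match(twt_text)
    return ReplyTo(m.group(1), m.group(2)) if m else None

# e.g. parse_replyto("(replyto http://darch.dk/twtxt.txt 2024-09-15T12:50:17Z) @sorenpeter ...")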

⤋ Read More

@prologic@twtxt.net

(#w4chkna) @falsifian@www.falsifian.org You mean the idea of being able to inline # url = changes in your feed?

Yes, that one. But @lyse@lyse.isobeef.org pointed out that it suffers a compatibility issue, since currently the first listed url is used for hashing, not the last. Unless your feed is in reverse chronological order. Heh, I guess another metadata field could indicate which version to use.

Or maybe url changes could somehow be combined with the archive feeds extension? Could the url metadata field be local to each archive file, so that to switch to a new url all you need to do is archive everything you’ve got and start a new file at the new url?

I don’t think it’s that likely my feed url will change.
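
A little Python sketch of that compatibility point: under the current convention the first “# url = …” line is the hashing URL, and a hypothetical extra field (modelled here as a prefer_last flag, not a real extension) could flip that to the last one:

def hashing_url(feed_text: str, prefer_last: bool = False) -> str | None:
    """Collect '# url = ...' metadata lines and pick the hashing URL."""
    urls = []
    for line in feed_text.splitlines():
        if line.startswith("#"):
            body = line.lstrip("#").strip()
            if "=" in body:
                key, value = body.split("=", 1)
                if key.strip() == "url":
                    urls.append(value.strip())
    if not urls:
        return None
    # current convention: first url; the flag illustrates the alternative
    return urls[-1] if prefer_last else urls[0]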

⤋ Read More
In-reply-to » The tag URI scheme looks interesting. I like that it is human read- and writable. And since we already got the timestamp in the twtxt.txt it would be somewhat trivial to parse. But there is still the issue of what the name/id should be... Maybe it doesn't have to be that strict?

@mckinley@twtxt.net Thanks for the feedback.

  1. Yeah, I agree that the nick should not be part of the syntax. Any valid URL to a twtxt.txt file should be enough and is more clear, so it is not confused with an email address (one of the issues with webfinger and fediverse handles).
  2. I think any valid URL would work, since we are not bound to look for exact matches. Accepting both http and https, as well as gemini and gopher, could all work as long as the path to the twtxt.txt is the same.
  3. My idea is that you quote the timestamp as it is in the original twtxt.txt that you are referring to, so you can do it by simply copy/pasting. Also, what are the chances that the same human will make two different posts within the same second?!

Regarding the whole cryptographic-keys-for-identity thing: to me it seems like an unnecessary layer of complexity. If you move to a new house or city you tell people that you moved - you can do the same in a twtxt.txt. Just post something like “I moved to this new URL, please follow me there!” I did that with my feeds at least twice, and you guys still seem to read my posts :)

⤋ Read More

So this is a great thread. I have been thinking about this too... and what if we are coming at it from the wrong direction? Identity being tied to a given URL has always been a pain point. If I get a new URL it’s almost as if I have a new identity, because not only am I serving at a new location, but all my previous communications are broken because the hashes are all wrong.

What if instead we used this idea of signatures to thread the URLs together into one identity? We keep the URL-to-hash convention in place. Changing that now is basically a no-go. But we can create a signature chain that links identities together. So if I move to a new URL, I update the chain hosted by my primary identity to include the new URL. If I have an archived feed whose old URL is now dead, we can point to where it is now hosted and use the current convention of hashing based on the first URL.

The signature chain can also be used to rotate to new keys over time. Just sign in a new key or revoke an old one. The prior signatures remain valid within the scope of time the signatures were made and the keys were active.

The signature file can be hosted anywhere, as long as it can be fetched by a reasonable protocol. So say we use WebFinger to point to the signature file? You have an identity like frank@beans.co that will discover a feed at some URL and a signature chain at another URL. Maybe even include the most recent signing key?

From there the client can auto-discover old feeds to link them together into one complete timeline. And the signatures can validate that it’s all correct.

I like the idea of maybe putting the chain in the feed preamble and keeping a single self-contained file, but I wonder if that would cause a lot of clutter. The signature chain would be something like a log of what is changing (new key, revoke, add URL) and a signature over the change plus the previous signature.

# chain: ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w 
# sig: BEGIN SALTPACK SIGNED MESSAGE. ... 
# chain: ADDURL https://txt.sour.is/user/xuu
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: REVKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: ...
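
A very rough Python sketch of how a client might walk such a chain. Ed25519 via PyNaCl stands in for saltpack, keys are hex-encoded for simplicity (the kex1… keys above use a different encoding), and each entry is assumed to be signed over the chain line plus the previous signature:

from nacl.signing import VerifyKey
from nacl.exceptions import BadSignatureError

def verify_chain(entries, trusted_key_hex):
    """entries: list of (chain_line, signature_hex) tuples, oldest first."""
    state = {"keys": {trusted_key_hex}, "urls": set()}
    prev_sig = b""
    for line, sig_hex in entries:
        message = line.encode("utf-8") + prev_sig
        signature = bytes.fromhex(sig_hex)
        # any currently active key may sign the next entry
        if not any(_verifies(k, message, signature) for k in state["keys"]):
            raise ValueError(f"bad signature on chain entry: {line!r}")
        op, _, arg = line.partition(" ")
        if op == "ADDKEY":
            state["keys"].add(arg)
        elif op == "REVKEY":
            state["keys"].discard(arg)
        elif op == "ADDURL":
            state["urls"].add(arg)
        prev_sig = signature
    return state

def _verifies(key_hex, message, signature):
    try:
        VerifyKey(bytes.fromhex(key_hex)).verify(message, signature)
        return True
    except BadSignatureError:
        return False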

⤋ Read More

@movq@www.uninformativ.de Another idea: just hash the feed url and time, without the message content. And don’t twt more than once per second.

Maybe you could even just use the time, and rely on @-mentions to disambiguate. Not sure how that would work out.

Though I kind of like the idea of twts being immutable. At least, it’s clear which version of a twt you’re replying to (assuming nobody is engineering hash collisions).
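
A minimal sketch of that alternative, hashing only the feed URL and timestamp so the hash survives edits (this is just the idea above, not an existing twtxt convention):

import hashlib

def position_hash(feed_url: str, timestamp: str, length: int = 11) -> str:
    # content is deliberately not part of the hash, so edits don't break replies
    digest = hashlib.sha256(f"{feed_url}\n{timestamp}".encode("utf-8")).hexdigest()
    return digest[:length]

# position_hash("https://www.falsifian.org/twtxt.txt", "2024-09-15T12:50:17Z")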

⤋ Read More

In fact, maybe your public key idea is compatible with my last point. Just come up with a url scheme that means “this feed’s primary URL is actually a public key”, and then feed authors can optionally switch to that.

⤋ Read More
In-reply-to » @movq Is there a good way to get jenny to do a one-off fetch of a feed, for when you want to fill in missing parts of a thread? I just added @slashdot to my private follow file just because @prologic keeps responding to the feed :-P and I want to know what he's commenting on even though I don't want to see every new slashdot twt.

@prologic@twtxt.net How does yarn.social’s API fix the problem of centralization? I still need to know whose API to use.

Say I see a twt beginning (#hash) and I want to look up the start of the thread. Is the idea that if that twt is hosted by a yarn.social pod, it is likely to know the thread start, so I should query that particular pod for the hash? But what if no yarn.social pods are involved?

The community seems small enough that a registry server should be able to keep up, and I can have a couple of others as backups. Or I could crawl the list of feeds followed by whoever emitted the twt that prompted my query.

I have successfully used registry servers a little bit, e.g. to find a feed that mentioned a tag I was interested in. Was even thinking of making my own, if I get bored of my too many other projects :-)
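
For what it’s worth, such a tag lookup might look something like this in Python; both the registry base URL and the /api/plain/tags/<tag> path are assumptions about a registry’s API, not something defined in this thread:

import urllib.request
from urllib.parse import quote

REGISTRY = "https://registry.example.org"   # hypothetical registry server

def twts_for_tag(tag: str) -> list:
    # assumes a plain-text response with one twt per line
    with urllib.request.urlopen(f"{REGISTRY}/api/plain/tags/{quote(tag)}") as resp:
        return resp.read().decode("utf-8").splitlines()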

⤋ Read More

@prologic@twtxt.net the new product was GPTs. A way to create tailored bots for specific use cases. https://openai.com/blog/introducing-gpts (fun fact: I did an internal hackathon where we made something like this for $work onboarding. And I won a prize!)

The competing project is Poe, https://quorablog.quora.com/Introducing-creator-monetization-for-Poe which is basically the same idea: make an AI bot tailored to a specific domain of knowledge, and monetize it.

The timing fits very well, as OpenAI announced it just a few weeks ago.

⤋ Read More

Messed up the configuration of the NUT UPS monitor so badly that it actually initiated a UPS test where the device switched itself off on the reboot of the PC. No idea how that happened. So I uninstalled it again.

⤋ Read More

@abucci@anthony.buc.ci So.. the issue is that it’s showing the password by default? Would making an alias to always include the -c help? We can probably engage Jason with a PR to enable a more hardened approach when desired. I’ve spoken to him before and he is generally pretty open to ideas.

I found this app, created by the gopass author, that copies by default and has a TUI or GUI mode: https://github.com/cortex/ripasso

⤋ Read More

Ah, git-bug! I’ve chatted with the creator when he was working on the GraphQL parts. It works with git objects directly, sorta like how git-repo does code reviews. It’s a pretty neat idea for storing data alongside the branches. I believe they don’t add a disconnected branch, to avoid data getting corrupted by merging branches or something like that.

⤋ Read More

@tkanos@twtxt.net The user in question had posted information about someone’s employment in what appeared to be a threat to contact their boss. Maybe it was in jest, but we felt it was a form of doxing that we do not wish to see within our community. Yarn.Social is first and foremost a town square of ideas and should be viewed as a safe place for all.

⤋ Read More
In-reply-to » Progress! so i have moved into working on aggregates. Which are a grouping of events that replayed on an object set the current state of the object. I came up with this little bit of generic wonder.

@lyse@lyse.isobeef.org hah! I cut some out to fit into my pod’s 4k limit.

Yeah, that does stutter a bit. To be honest, I have no idea what I was thinking there. This excerpt was written a good year ago.

⤋ Read More

it’s funny, conditional on AGI (and perhaps also WBE?) not doing us in, i’m pretty bullish on this century. bio seems much less of a problem, and everything else is basically a-okay, especially with people becoming richer and needing to fight less. most other collapse narratives sound pretty unlikely (though prepping is still a good idea! you should have three months of food & water at home)

⤋ Read More

idea: upvote-only LW shortform posts: the karma isn’t counted toward the user’s karma score, but it also can’t be downvoted, which encourages more wild and possibly wrong speculations

⤋ Read More
In-reply-to » @movq would it be possible to trim the subject to, say, 100 or 140 characters? Just the subject.

@movq@www.uninformativ.de

If Subject contains the full twt, then you can skim over conversations just by reading those lines in mutt’s index pager

Yes, I do the same, true.

So I decided: Okay, let’s have mutt do it.

And Mutt does it well. I agree it was/is a good idea.

The subject lines are already “compressed”

I noticed, yes.

I am not sure why I asked to begin with; in retrospect, it was a silly request. Perhaps the OCD in me got triggered while viewing rich headers on a specific twt, when I saw the huge subject line that is otherwise always hidden.

Anyway, don’t mind me, move along. 😂

⤋ Read More

@prologic@twtxt.net sounds about right. I tend to try to build my own before pulling in libs; I learn more that way. I was looking at using it as a way to build my twt mirroring idea, and testing the lex parser with a wide-ranging corpus to find edge cases (the PGP-signed feeds, for one).

⤋ Read More

Would online dating without images lead to deeper, more human connections? I.e. only descriptions of people. If yes, is it different because of molochian reasons? More beautiful people have no problem showing their faces, so not showing one’s face is seen as a low-status signal at some point. Counter: the idea of deeper, more human connections is in itself flawed; most mating choices are the result of a combination of class/status signals and physical attractiveness anyway.

⤋ Read More

@prologic@twtxt.net My thoughts on it being: what if they switched to a different way of hosting the file, or to multiple locations for redundancy?

I have an idea of using something like SRV records, where they can define weighted URL endpoints to reach.
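
A sketch of what a client could do with such records, using dnspython; the _twtxt._tcp record name and the whole convention are hypothetical:

import random
import dns.resolver  # pip install dnspython

def pick_feed_host(domain: str) -> str:
    # look up SRV records, keep the best (lowest) priority, then pick by weight
    answers = dns.resolver.resolve(f"_twtxt._tcp.{domain}", "SRV")
    best = min(r.priority for r in answers)
    candidates = [r for r in answers if r.priority == best]
    weights = [max(r.weight, 1) for r in candidates]
    chosen = random.choices(candidates, weights=weights, k=1)[0]
    host = chosen.target.to_text().rstrip(".")
    return f"https://{host}:{chosen.port}/twtxt.txt"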

⤋ Read More

the idea would be to build and share tiny 6.5 bit programs encoded as printable ascii characters. this could then in turn be read by a virtual computer to do things like paint a picture or compose a piece of music. #halfbakedideas
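
A toy Python sketch of the encoding side: packing bytes into the 94 printable ASCII characters gives roughly 6.55 bits per character, about the “6.5 bit” figure above. The virtual computer that would run such programs is left to the imagination:

def to_printable(data: bytes) -> str:
    # base-94 encoding over the printable range '!' (33) .. '~' (126);
    # leading zero bytes are not preserved in this toy version
    n = int.from_bytes(data, "big")
    out = []
    while True:
        n, r = divmod(n, 94)
        out.append(chr(33 + r))
        if n == 0:
            break
    return "".join(reversed(out))

# to_printable(b"\x01\x02\x03") -> a short printable-ASCII string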

⤋ Read More

@tdemin@tdemin.github.io good points, though another that I’ve noticed is that it’s difficult to tell who in your network is actually reachable with your tweets. My HTTPS cert went unupdated for a brief while and now I have no idea who is still following me since I got it working again, so it’s difficult to tell where I can really have a conversation. A centralized service can tell who’s following who, but that’s basically impossible in twtxt.

⤋ Read More