In all fairness, GOG says that Forsaken is only supported on Ubuntu 16.04 – not current Arch Linux. If you ask me, this just goes to show that Linux is not a good platform for proprietary binary software.
Is it free software, do you have the source code? Then you're good to go, things can be patched/updated (that can still be a lot of work). But proprietary binary blobs? Very bad idea.
Cheers @danzin@danzin, was it you who added a PR to core #Python about pprint?
(listening to #corepy #podcast)
Update: Thank you so much for improving Python @danzin@danzin !
core.py: PyCon US 2025 Recap
Starting from: 01:32:45 https://podcasters.spotify.com/pod/show/corepy/episodes/PyCon-US-2025-Recap-e347dc3
https://anchor.fm/s/eb6edc3c/podcast/play/104100675/https%3A%2F%2Fd3ctxlq1ktw2nl.cloudfront.net%2Fstaging%2F2025-5-13%2Fb281ac3a-b0ec-49b9-b31d-7a90031e910d.mp3#t=5565
Updating my "how to install and use #py5" pages, check them out if you want to "… draw and experiment some #CreativeCoding with #Python …"
EN: https://abav.lugaralgum.com/como-instalar-py5/index-EN.html
ES: https://abav.lugaralgum.com/como-instalar-py5/index-ES.html
Speaking of Wine, Arch Linux completely fucked up Wine for me with the latest update.
- 16-bit support is gone.
- Performance of 3D games is horrible and unplayable.
Arch is shipping a WoW64 build now, which is not yet ready for prime time.
And then I realized that there's actually only one stable Wine release per year but Arch has been shipping development releases all the time. That's quite unusual. I'm used to Arch only shipping stable packages … huh.
Hopefully things will improve again. I'm not eager to build Wine from source. I'd rather ditch it and resort to my real Windows XP box for the little (retro)gaming that I do …
@kat@yarn.girlonthemoon.xyz UPDATE: getting it to run natively through a VM and other means all failed! so i did the cursed thing and tried the windows installer in wine…
update on tux racer: ofc it doesn't run on modern linux LMFAOOOOOOO i'm installing red hat in a VM right now
Unless your Terms of Use update email looks and reads the same as the one I got yesterday from mastodon.social, I don't wanna know about it, nor do I agree to it.
Hmmm, not what I thought was going on… No bug…
time="2025-06-14T15:24:25Z" level=info msg="updating feeds for 8 users"
time="2025-06-14T15:24:25Z" level=info msg="skipping 0 inactive users"
time="2025-06-14T15:24:25Z" level=info msg="skipping 0 subscribed feeds"
time="2025-06-14T15:24:25Z" level=info msg="updating 80 sources (stale feeds)"
Buying a TV these days means trying to avoid endless enshittification:
- Spyware and adware
- Shitty AI upscaling / frame interpolation
- HW that breaks after 2-3 years
- One-off OS, dead on arrival
- Android OS that starts lagging after the third update
- 8 buttons' worth of ads on your remote
You probably have to make some kind of compromise. I thought that was buying from some other brand like Hyundai, but that one also fell into some of those categories and just broke after less than 3 years of use. At this point I'll probably go back to LG and hope their HW is still reliable and the rest manageable… It has AI bullshit and, knowing LG, probably some spyware you have to try your best to get rid of. You can buy a remote with "only" 2 ads on it, and it runs some web-based OS shared between all their TVs that usually gets 4-5 years' worth of updates and works decently enough afterwards.
At this point, I'll probably settle for anything that doesn't literally fall apart, not even 3 years in, like the Hyundai did.
RIP GitHub https://github.blog/changelog/2025-05-08-updated-rate-limits-for-unauthenticated-requests/
Good thing I left long ago.
@anth@a.9srv.net happy birthday, "youngster!"
Domain Name: NETBROS.COM
Registry Domain ID: 1193243_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.cloudflare.com
Registrar URL: https://www.cloudflare.com
Updated Date: 2025-03-29T04:08:33Z
Creation Date: 1998-04-29T04:00:00Z
git checkout main && git pull && make build. Few bug fixes
@prologic@twtxt.net done! hey i got a question, you got any clue why my feeds aren't updating? maybe it has to do with the new cache flag but i messed with that a bit and didn't notice a difference. basically it's like i have to manually restart yarnd to see new posts, it's really weird lol
After yarnd v0.16 is released and the next round of specification updates are done and dusted, who wants me to have another crack at building Twtxt and ActivityPub integration support?
just for the record I didn't say I was leaving the twtxt "community" (did I?), but I do have other priorities to focus on in the following months. Please don't be condescending, it's not cool.
Development of Timeline (PHP client) has been stale for some reasons, a few of them on my side, so I think it won't be updated to the new thread model, at least not any time soon.
So it's not that I'll stop using twtxt, it's just that the client I use won't be compatible with the new model in July.
gah i've been so busy working on love4eva! TL;DR i switched image backends from the test/dev only module i was using to the S3 one, but with a catch - i'm not using S3 or cloud shit!!! i instead got it to work with minio, so it's a middle ground between self hosting the image uploads & being compatible with the highly efficient S3 module. i'm super happy with it :)
i posted a patreon update that details the changes more: https://www.patreon.com/posts/i-am-now-working-127687614
that post says i didn't update my guide yet but i actually did like right after i made that post lol so you can CTRL+F for minio stuff there!
I figure Eris is getting an update. A real certificate this time? Time will tell!
Finally, I propose that we increase the Twt Hash length from 7 to 12 and use the first 12 characters of the base32 encoded blake2b hash. This will solve two problems: the fact that all hashes today end in either q or a (oops), and the risk of collisions – increasing the Twt Hash size will ensure that we never run into a collision for eons to come. The chance of a 50% collision with 64 bits / 12 characters is roughly ~12.44B Twts. That ought to be enough! I also propose that we modify all our clients and make this change from the 1st July 2025, which will be Yarn.social's 5th birthday and 5 years since I started this whole project and endeavour! #Twtxt #Update
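For illustration, here is a minimal Go sketch of the proposed 12-character hash. The payload layout (feed URL, timestamp and twt text joined by newlines) is an assumption for illustration only; the point is the encoding step, keeping the first 12 base32 characters of the blake2b-256 digest:

package main

import (
	"encoding/base32"
	"fmt"
	"strings"

	"golang.org/x/crypto/blake2b"
)

// proposedTwtHash sketches the 12-character scheme described above:
// blake2b-256 over the twt's hash payload, base32 encoded (no padding,
// lowercased), keeping the first 12 characters instead of today's 7.
// NOTE: the payload layout below is an illustrative assumption, not the spec.
func proposedTwtHash(feedURL, timestamp, text string) string {
	payload := strings.Join([]string{feedURL, timestamp, text}, "\n")
	sum := blake2b.Sum256([]byte(payload))

	enc := base32.StdEncoding.WithPadding(base32.NoPadding)
	b32 := strings.ToLower(enc.EncodeToString(sum[:]))

	return b32[:12]
}

func main() {
	fmt.Println(proposedTwtHash(
		"https://example.com/twtxt.txt",
		"2025-07-01T00:00:00Z",
		"Hello, twtxt!",
	))
}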
Just like we don't write emails by hand anymore (See: #a3adoka), we don't manually write Twts or update our twtxt.txt feeds. Instead, we use modern Twtxt clients that conform to the specifications at Twtxt.dev for a seamless, automated experience. #Twtxt #Twt #UserExperience
Today I added support for Let's Encrypt to eris via DNS-01 challenge. Updated the gcore libdns package I wrote for Caddy, Maddy and now Eris. Added support to yarn's cache for # type = bot and optionally # retention = N, so that feeds like @tiktok@feeds.twtxt.net work like they did before, and… Updated some internal metrics in yarnd to be IMO "better", with queue depth, queue time and last processing time for feeds.
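As a rough illustration, such a bot feed's metadata preamble might then carry something like the following (the nick and the retention value here are made up):

# nick = tiktok
# type = bot
# retention = 25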
@bender@twtxt.net Time to update my machines!
@bender@twtxt.net The DM specification has been updated from time to time in response to advice from the community. For me, it is a success!
Adoption is another topic (I am working on it on my side).
@prologic@twtxt.net @bmallred@staystrong.run Ah, I just found this, didn't see it before:
https://restic.net/#compatibility
So, yeah, they do use semver and, yes, they're not at 1.0.0 yet, so things might break on the next restic update … but they "promise" to not break things too lightheartedly. Hm, well. Probably doesn't make a big difference (they don't say "don't use this software until we reach 1.0.0").
Even though I really do like the shell, I always use Dolphin to mount my digicam SD card and copy the photos onto my computer. I finally added a context menu item in Dolphin to create a forest stroll directory with the current date in order to save some typing:
The following goes in ~/.local/share/kservices5/ServiceMenus/galmkdir.desktop:
[Desktop Entry]
Type=Service
X-KDE-ServiceTypes=KonqPopupMenu/Plugin,inode/directory
Actions=Waldspaziergang;
[Desktop Action Waldspaziergang]
Name=Heutigen Waldspaziergang anlegen…
Icon=folder-green
Exec=~/src/gelbariab/galmkdir "%f"
In order to update the KDE desktop cache and make this action menu item available in Dolphin, I ran:
kbuildsycoca5
The referenced galmkdir script looks like this:
#!/bin/sh
set -e
current_dir="$1"
if [ -z "$current_dir" ]; then
echo "Usage: $0 DIRECTORY" >&2
exit 1
fi
dir="$(kdialog \
--geometry 350x50 \
--title "Heutigen Waldspaziergang anlegen" \
--inputbox "Neues Verzeichnis in „$current_dir“ anlegen:" \
"waldspaziergang-$(date +%Y-%m-%d)")"
mkdir "$current_dir/$dir"
dolphin "$current_dir/$dir"
This solution is far from perfect, though. Ideally, I'd love to have it in the "Create New" menu instead of the "Actions" menu. But that doesn't really work. I cannot define a default directory name, not to mention a dynamic one with the current date. (I would have to update the .desktop file every day or so.) I also failed to create an empty directory. I somehow managed to create a directory with some other templates in it, for some reason I do not really understand.
Let's see how that works out over the next few days. If I like it, I might define a few more default directory names.
New version release of twtxt-el!
- Fixed many bugs.
- New back buttons.
- Updated documentation.
I am currently fixing an important bug that breaks the timeline in some cases, and I am working on direct messages.
@lyse@lyse.isobeef.org Just needed to update the version of the tool I packaged as an OCI image
Hmm I think I can come up with some kind of heuristic.. Maybe if the feed is requested and hasn't updated in the last few mins it adds to the queue. So the next time it will be fresh.
@eapl.me@eapl.me According to an update of the article, others have suggested the same.
Your explanation seems fitting. I just don't get why people don't use feed readers anymore. Anyway.
if it hasn't updated in a while and I've put the request rate to once a week, it will take some time before I see an update, even if it happens today.
I need to figure out a way to back off requests to feeds that don't update often.
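One way to do that kind of back-off, sketched in Go under the assumption that the client keeps a small per-feed record (all the names here are illustrative): the polling interval doubles each time a fetch comes back unchanged, capped at a week, and snaps back to the minimum as soon as the feed shows something new.

package main

import (
	"fmt"
	"time"
)

// feedState is a hypothetical per-feed record a client could keep around.
type feedState struct {
	LastChanged time.Time     // when new twts were last seen in this feed
	Interval    time.Duration // how long to wait before the next fetch
}

const (
	minInterval = 5 * time.Minute
	maxInterval = 7 * 24 * time.Hour // back off to at most once a week
)

// nextInterval doubles the wait for feeds that keep coming back unchanged
// and resets to the minimum as soon as the feed shows something new.
func nextInterval(s *feedState, changed bool) time.Duration {
	switch {
	case changed:
		s.LastChanged = time.Now()
		s.Interval = minInterval
	case s.Interval == 0:
		s.Interval = minInterval
	default:
		s.Interval *= 2
		if s.Interval > maxInterval {
			s.Interval = maxInterval
		}
	}
	return s.Interval
}

func main() {
	s := &feedState{}
	for i := 0; i < 5; i++ {
		fmt.Println(nextInterval(s, false)) // quiet feed: 5m, 10m, 20m, 40m, 1h20m
	}
	fmt.Println(nextInterval(s, true)) // feed updated: back to 5m
}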
@kat@yarn.girlonthemoon.xyz UPDATE I DID IT!!!!!!! you will now see a cute anime girl that is behind the scenes testing if you are a bot or not in a matter of seconds before being redirected to the site :) https://superlove.sayitditto.net/
I have applied your comments, and I tried to add you as an editor but couldn't find your email address. Please request editing access if you wish.
Also, could you elaborate on how you envision migrating with a script? You mean that the client of the file owner could massively update URLs in old twts?
I have released new updates to the twtxt.el client.
- New feature: Notifications.
- Updated: Improved user interface for new posts.
- Updated: Documentation.
- Updated: Some UI elements and included information about shortcuts in each buffer.
- Minor fixes.
Source code: https://codeberg.org/deadblackclover/twtxt-el
In the next version: You will be able to send direct messages.
Enjoy!
#emacs #twtxt #twtxtel
well (insert stubborn emoji here), the word blog comes from weblog, and microblogging could derive from "smaller weblog". https://www.wikiwand.com/en/articles/Microblogging
I'd differentiate it from sharing status updates as it was done with "finger" or even a BBS. For example, being able to reply, create new threads and share them at a URL is something we could expect from "Twitter", the most popular microblogging model (citation needed).
I'd like to discuss it, since conversations usually improve if we sync on what we understand by the same words.
I have released new updates to the twtxt.el client.
- New feature: View and interact with threads.
- Optimisation of ordering for long feeds.
- Minor fixes.
In the next version you will be able to see all your mentions.
Enjoy!
@bmallred@staystrong.run I forgot one more effect of edits. If clients remember the read status of messages by hash, an edit will mark the updated message as unread again. To some degree that is even the right behavior, because the message was updated, so the user might want to have a look at the updated version. On the other hand, if it's just a small typo fix, it's maybe not worth telling the user about. But the client doesn't know, at least not without additional logic.
Having said that, it appears that this only affects me personally, no one else. I don't know of any other client that saves read statuses. But don't worry about me, all good. Just keep doing what you've done so far. I wanted to mention that only for the sake of completeness. :-)
I have released new updates to the twtxt.el client.
- Markdown to Org mode (you need to install Pandoc).
- Centred column.
- Added new logo.
- Added text helper.
In the next version I will try to finish the visual thread view. You can't see the thread yet.
#emacs #twtxt #twtxtel
I suspect the problem is that the content is updated. It looks like a design problem.
@aelaraji@aelaraji.com You can update the package
@eapl.me@eapl.me Yeah, you need some kind of storage for that. But chances are that there's already a cache in place. Ideally, the client remembers etags or last modified timestamps in order to reduce unnecessary network traffic when fetching feeds over HTTP(S).
A newsreader without read flags would be totally useless to me. But I also do not subscribe to fire hose feeds, so maybe that's a different story with these. I don't know.
To me, filtering read messages out and only showing new messages is the obvious solution. No need for notifications in my opinion.
There are different approaches with read flags. Personally, I like to explicitly mark messages read or unread. This way, I can think about something and easily come back later to reply. Of course, marking messages read could also happen automatically. All decent mail clients I've used in my life offered even more advanced features, like delayed automatic marking.
All I can say is that I'm super happy with that for years. It works absolutely great for me. The only downside is that I see heaps of new, despite years old messages when a bug causes a feed to be incorrectly updated (https://twtxt.net/twt/tnsuifa). ;-)
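A rough Go sketch of that kind of conditional fetching, assuming the client stashes the ETag and Last-Modified values it saw last time (the storage itself is left out); a 304 response means the feed is unchanged and can be skipped:

package main

import (
	"fmt"
	"io"
	"net/http"
)

// fetchFeed does a conditional GET: it sends the previously seen ETag and
// Last-Modified values and treats 304 Not Modified as "nothing new".
// It returns the body (nil if unchanged) plus the new validators to store.
func fetchFeed(url, etag, lastModified string) (body []byte, newETag, newLastModified string, err error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, etag, lastModified, err
	}
	if etag != "" {
		req.Header.Set("If-None-Match", etag)
	}
	if lastModified != "" {
		req.Header.Set("If-Modified-Since", lastModified)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, etag, lastModified, err
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusNotModified {
		return nil, etag, lastModified, nil // feed unchanged, nothing to parse
	}

	body, err = io.ReadAll(resp.Body)
	return body, resp.Header.Get("ETag"), resp.Header.Get("Last-Modified"), err
}

func main() {
	body, etag, lm, err := fetchFeed("https://example.com/twtxt.txt", "", "")
	fmt.Println(len(body), etag, lm, err)
}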
@andros@twtxt.andros.dev Awesome! I've seen the demo earlier on mastodon, things are getting better and better with each update. Good luck!
Today at work we're rolling out a big update for the CMS of the central websites. Hopefully it all goes well.
Here's a twt from @andros@twtxt.andros.dev's new version of Twtxt-el. It feels WAaaaaY better! although it freezes on me as soon as I navigate to the next page, complaining about some bad url, but the chronological sorting of the feed as well as the navigation buttons (links?) are a great addition. Looking forward to the next update already!
⨠Follow
button on their profile page or use the Follow form and enter a Twtxt URL. You may also find other feeds of interest via Feeds. Welcome! š¤
@prologic@twtxt.net @lyse@lyse.isobeef.org it seems a recent update reset my pod settings to open registration.
I updated the specification with base64, Curve25519 and more examples: https://github.com/tanrax/twtxt-direct-message-extension
although I agree that it helps, I don't think it's completely correct to leave the nick definition to the source .txt. It could be wrong from the start or become outdated over time.
I'd rather get it from the mentioned .txt's nick metadata (it could be cached for performance).
So my vote would be to make it mandatory to follow @<name url>, but only using that name/nick if the URL doesn't contain another nick.
A main advantage is that when the destination URL changes the nick, it'll be automagically updated in the thread view (as happens with some other microblogging platforms, following Jakob's Law).
That's pretty awesome, @ ! I've seen your contributions to twtxt-el and I'm wondering if you've been updating the same one or made another from scratch. Either way, I can't wait to give it a try! Cheers
@kat@yarn.girlonthemoon.xyz i'm an LXQt girlie for life and i like the convenience of apt despite that they never update their god damn packages so i guess i'm stuck on lubuntu for everything
been having fun updating my dotfiles repo as if i have anything notable to put in there
I'll be using another URL for this twtxt.
The older one will redirect to the new one for a while (I'm not sure what would happen if you follow both URLs; I assume it's better to add the new one and remove the old one).
Please update your following list to https://eapl.me/tw.txt !
Starting the week with a "sudo apt update" and seeing all the good things that are on their way :heart_cyber:
@falsifian@www.falsifian.org this one hits hard, as jenny was just updated today. :'-(
So this is a great thread. I have been thinking about this too.. and what if we are coming at it from the wrong direction? Identity being tied to a given URL has always been a pain point. If I get a new URL, it's almost as if I have a new identity, because not only am I serving at a new location, but all my previous communications are broken because the hashes are all wrong.
What if instead we used this idea of signatures to thread the URLs together into one identity? We keep the URL-to-Hash in place. Changing that now is basically a no go. But we can create a signature chain that can link identities together. So if I move to a new URL, I update the chain hosted by my primary identity to include the new URL. If I have an archived feed whose old URL is now dead, we can point to where it is now hosted and use the current convention of hashing based on the first url:
The signature chain can also be used to rotate to new keys over time. Just sign in a new key or revoke an old one. The prior signatures remain valid within the scope of time the signatures were made and the keys were active.
The signature file can be hosted anywhere as long as it can be fetched by a reasonable protocol. So say we could use a webfinger that directs to the signature file? You have an identity like frank@beans.co that will discover a feed at some URL and a signature chain at another URL. Maybe even include the most recent signing key?
From there the client can auto discover old feeds to link them together into one complete timeline. And the signatures can validate that it's all correct.
I like the idea of maybe putting the chain in the feed preamble and keeping the single self contained file.. but wonder if that would cause lots of clutter? The signature chain would be something like a log with what is changing (new key, revoke, add url) and a signature of the change + the previous signature.
# chain: ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: ADDURL https://txt.sour.is/user/xuu
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: REVKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: ...
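To make the idea a bit more concrete, here is a small Go sketch of such a chain using plain ed25519 in place of saltpack; that swap, the field names, and the single-key verification are all simplifying assumptions (real key rotation would track which key signed which entry):

package main

import (
	"bytes"
	"crypto/ed25519"
	"fmt"
)

// ChainEntry is one line of the proposed log: what changed, plus a signature
// over the change and the previous entry's signature. Names are illustrative.
type ChainEntry struct {
	Change  string // e.g. "ADDURL https://txt.sour.is/user/xuu"
	PrevSig []byte // signature of the previous entry (empty for the first)
	Sig     []byte
}

// appendEntry signs change||prevSig with the current key and returns the entry.
func appendEntry(priv ed25519.PrivateKey, prev *ChainEntry, change string) ChainEntry {
	var prevSig []byte
	if prev != nil {
		prevSig = prev.Sig
	}
	msg := append([]byte(change), prevSig...)
	return ChainEntry{Change: change, PrevSig: prevSig, Sig: ed25519.Sign(priv, msg)}
}

// verifyChain walks the log, checking each signature and the back-links.
func verifyChain(pub ed25519.PublicKey, chain []ChainEntry) bool {
	var prevSig []byte
	for _, e := range chain {
		if !bytes.Equal(e.PrevSig, prevSig) {
			return false
		}
		if !ed25519.Verify(pub, append([]byte(e.Change), e.PrevSig...), e.Sig) {
			return false
		}
		prevSig = e.Sig
	}
	return true
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)
	a := appendEntry(priv, nil, "ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w")
	b := appendEntry(priv, &a, "ADDURL https://txt.sour.is/user/xuu")
	fmt.Println(verifyChain(pub, []ChainEntry{a, b})) // true
}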
@movq@www.uninformativ.de pretty cool! Switched, and pulled. Nice update on README!
So updated. Seems to duplicate here in the UI. And what is this "Read More" on every twt now?
@prologic@twtxt.net Well ain't that grand? I'll get it updated.
I think it is a good addition. Similar to how the Fraidycat RSS reader works. Fraidyc.at also supports twtxt, but I have not seen any updates since 2021…
Going to have to reinstall my Yunohost server one of these days as the packages no longer seem to be updateable.
@lyse@lyse.isobeef.org it's a hierarchical key-value format. I designed it for the network peering tools I use.. I can grant access to different parts of the tree to other users.. kinda like directory permissions. A basic example of the format is:
@namespace
# multi
# line
# comment
root :value
# example space comment
@namespace.name space-tag
# attribute comments
attribute attr-tag :value for attribute
# attribute with multiple
# lines of values
foo :bar
:bin
:baz
repeated :value1
repeated :value2
Each @ starts the definition of a namespace, kinda like [name] in INI format. It can have comments that show up before it. Then each attribute is key :value and can have its own # comment lines.
Values can be multi-line.. and also repeated..
The namespaces and values can also have little metadata tags added to them.
The service can define webhooks/MQTT topics to be notified when the configs are updated. That way it can deploy the changes out when they are updated.
And that chill down your spine when you're running an update on the command line and it sits on the same message for more than 10 seconds, and you're there praying "forgodssake don't crash, tell me something, anything".
So now that I have a basic Twtxt form, I can also update my feed even when I am not on my PC.
By using scp I can see just how fast my updates are published to the WWW.
Anyone else keeping personal .log files updated through basic shell commands?
My home ISP has had a few prefixes allocated. They haven't rolled it out yet because their custom CRM system needs to be updated to be able to allocate/bill for it. Among other reasons they gave when I asked last.
Funny.. I would never buy an iPhone again. My wife switched back with this last phone update and I can't stand the interface.
twtxt, as I believe it was originally intended, is short little status updates – that's it.
So, basically a .plan file for finger. But on the web, like a *web*finger. We have come full circle on this loop!
@prologic@twtxt.net I have updated to kinda follow this. It now redirects to other webfingers if the resource has a different hostname. I'm still not sure what I should do about multiple services with the same domain name, like if they were to have conflicting properties.
Hashes carry a $name$ prefix, which is used to dispatch the hashing or checking to its specific format.
Circling back to the IsPreferred method. A hasher can define its own IsPreferred method that will be called to check if the current hash meets the complexity requirements. This is good for updating the password hashes to be more secure over time.
func (p *Passwd) IsPreferred(hash string) bool {
	_, algo := p.getAlgo(hash)
	if algo != nil && algo == p.d {
		// if the algorithm defines its own check for preference.
		if ck, ok := algo.(interface{ IsPreferred(string) bool }); ok {
			return ck.IsPreferred(hash)
		}
		return true
	}
	return false
}
https://github.com/sour-is/go-passwd/blob/main/passwd.go#L62-L74
example: https://github.com/sour-is/go-passwd/blob/main/pkg/argon2/argon2.go#L104-L133
Hashes carry a $name$ prefix, which is used to dispatch the hashing or checking to its specific format.
Here is an example of usage:
func Example() {
	pass := "my_pass"
	hash := "my_pass"

	pwd := passwd.New(
		&unix.MD5{}, // first is preferred type.
		&plainPasswd{},
	)

	_, err := pwd.Passwd(pass, hash)
	if err != nil {
		fmt.Println("fail: ", err)
	}

	// Check if we want to update.
	if !pwd.IsPreferred(hash) {
		newHash, err := pwd.Passwd(pass, "")
		if err != nil {
			fmt.Println("fail: ", err)
		}
		fmt.Println("new hash:", newHash)
	}

	// Output:
	// new hash: $1$81ed91e1131a3a5a50d8a68e8ef85fa0
}
This shows how one would set a preferred hashing type and, if the current version of one's password hash is not the preferred type, update it to enhance the security of the hashed password when someone logs in.
https://github.com/sour-is/go-passwd/blob/main/passwd_test.go#L33-L59
Twting to see if it will update my links list.
it uses the queries you define for add/del/set/keys, which correspond to something like INSERT INTO <table> (key, value) VALUES ($key, $value), DELETE ..., or UPDATE .... The commands are issued by using maddycli, not the running maddy daemon.
see https://maddy.email/reference/table/sql_query/
the best way to locate it in source is anything that implements the MutableTable interface… https://github.com/foxcpp/maddy/blob/master/framework/module/table.go#L38
@tiktok@sour.is Hmm, why aren't you updating?
(cont.)
Just to give some context on some of the components around the code structure.. I wrote this up around an earlier version of the aggregate code. This generic bit simplifies things by removing the need for the Crud functions for each aggregate.
Domain Objects
A domain object can be used as an aggregate by adding the event.AggregateRoot struct and finishing the implementation of event.Aggregate. The AggregateRoot implements logic for adding events after they are either Raised by a command or Appended by the eventstore Load or service ApplyFn methods. It also tracks the uncommitted events that are saved using the eventstore Save method.
type User struct {
	Identity  string `json:"identity"`
	CreatedAt time.Time

	event.AggregateRoot
}

// StreamID for the aggregate when stored or loaded from ES.
func (a *User) StreamID() string {
	return "user-" + a.Identity
}

// ApplyEvent to the aggregate state.
func (a *User) ApplyEvent(lis ...event.Event) {
	for _, e := range lis {
		switch e := e.(type) {
		case *UserCreated:
			a.Identity = e.Identity
			a.CreatedAt = e.EventMeta().CreatedDate
			/* ... */
		}
	}
}
Events
Events are applied to the aggregate. They are defined by adding the event.Meta and implementing the getter/setters for event.Event:
type UserCreated struct {
	eventMeta event.Meta

	Identity string
}

func (c *UserCreated) EventMeta() (m event.Meta) {
	if c != nil {
		m = c.eventMeta
	}
	return m
}

func (c *UserCreated) SetEventMeta(m event.Meta) {
	if c != nil {
		c.eventMeta = m
	}
}
Reading Events from EventStore
With a domain object that implements event.Aggregate, the event store client can load events and apply them using the Load(ctx, agg) method.
// GetUser populates a user from the event store.
func (rw *User) GetUser(ctx context.Context, userID string) (*domain.User, error) {
	user := &domain.User{Identity: userID}

	err := rw.es.Load(ctx, user)
	if err != nil {
		if errors.Is(err, eventstore.ErrStreamNotFound) {
			return user, ErrNotFound
		}
		return user, err
	}

	return user, err
}
OnX Commands
An OnX command will validate that the command can be performed given the current state of the domain object. If it can be applied, it raises the event using event.Raise(). Otherwise it returns an error.
// OnCreate raises an UserCreated event to create the user.
// Note: The handler will check that the user does not already exist.
func (a *User) OnCreate(identity string) error {
	event.Raise(a, &UserCreated{Identity: identity})
	return nil
}

// OnScored will attempt to score a task.
// If the task is not in a Created state it will fail.
func (a *Task) OnScored(taskID string, score int64, attributes Attributes) error {
	if a.State != TaskStateCreated {
		return fmt.Errorf("task expected created, got %s", a.State)
	}
	event.Raise(a, &TaskScored{TaskID: taskID, Attributes: attributes, Score: score})
	return nil
}
Crud Operations for OnX Commands
The following functions in the aggregate service can be used to perform creation and updating of aggregates. The Update function will ensure the aggregate exists, where the Create is intended for non-existent aggregates. These can probably be combined into one function.
// Create is used when the stream does not yet exist.
func (rw *User) Create(
	ctx context.Context,
	identity string,
	fn func(*domain.User) error,
) (*domain.User, error) {
	session, err := rw.GetUser(ctx, identity)
	if err != nil && !errors.Is(err, ErrNotFound) {
		return nil, err
	}

	if err = fn(session); err != nil {
		return nil, err
	}

	_, err = rw.es.Save(ctx, session)
	return session, err
}

// Update is used when the stream already exists.
func (rw *User) Update(
	ctx context.Context,
	identity string,
	fn func(*domain.User) error,
) (*domain.User, error) {
	session, err := rw.GetUser(ctx, identity)
	if err != nil {
		return nil, err
	}

	if err = fn(session); err != nil {
		return nil, err
	}

	_, err = rw.es.Save(ctx, session)
	return session, err
}
I have updated my eventDB to have subscriptions! It now has websockets like msgbus. I have also added an in-memory store that can be used alongside the disk-backed WAL.
even if cause X came along now, people wouldn't be able to update towards it within a year.
second, there's predictions. a prediction is Done when it's made. you could add comments, explanations, models &c, but the prediction can be Done and stand there on its own. (there is a slight problem with the fact that predictions need to be updated over time, though, so there is some Piling there as well).
going back to vim. #updates
One year ago to the day I made the latest update for #phpub2twtxt on GitHub, and now 365 days later I have published #pixelblog as its successor - let's see where things go on this trip around the sun
Love the new icons on the latest update!
updated my !now page.
@movq@www.uninformativ.de Updated. Will it be possible for the subject to be moved to the beginning instead (like Yarn and tt do)?
@movq@www.uninformativ.de Meanwhile I only restart my iPhone when an iOS update is available, which normally happens every 4-5 months or so, or more.
@prologic@twtxt.net finally updated yarnd.. FORK!? Awesome!
Once a day.. though if it hasn't updated in n-months maybe once a week?
my little travel pillow arrived today for my feet while sitting in the meditation bench. paired with my foam pad for my knees, I think I can begin trying to test this out for my future portable kneeling workspace. #updates #halfbakedideas
makes me update towards the awfulness of ancestral environment
even an ASI would defer to an even smarter version of itself. does this solve the problem of fully updated deference?
okay. I didn't get the PhD position. oh well. #updates
life decisions approaching. any day now. depending on the outcome, it'll range from major to huge. #updates
@mckinley@twtxt.net @prologic@twtxt.net I have updated the ticket with my findings.. its not what you expect! /clickbait https://github.com/jointwt/twtxt/issues/424
major difference between updating on evidence vs. morals: we know that a bayesian is converging (monotonically?) towards truth with each piece of evidence, while with moral progress, it seems likely that we are not bounded in how wrong our updates can be (even if we grant that the arc bends towards justice in the end). we might need to escape moral local maxima. therefore, it seems good to preserve option value.
@vain@www.uninformativ.de @lyse@lyse.isobeef.org @prologic@twtxt.net Nope.. i have updated my gist to include the feeds listing. feeds.txt
@prologic@twtxt.net the add function just scans everything recursively.. but the idea is to just add any new mentions and then have a cron to update all known feeds
okay. txtnish is now officially sketchy. sometimes feeds don't update, even if I run txtnish update, and this means missing replies. I gotta find something else if I'm going to make this more than a write-only experience.
thereās a zet growing in my wiki now. #updates
with the addition of crate in weewiki, I finally found an opportunity to add some words to the !sqlar page #updates
@prologic@twtxt.net yeah I do.
It seems a bit wonky that it imports from your packages in some places. I'm guessing that's some legacy bits that need updates?
initial crate words imported in to weewiki source repo. no code yet, but itās pretty clear to me what needs to happen next in order to make an MVP. #updates
import functionality now works in the !weewiki zet #updates
some good initial progress with the !weewiki zettelkasten. messages can be made and tied to previous messages by providing partial UUIDs (that then get automatically expanded). basic export also works. #updates