@movq@www.uninformativ.de I do wonder that sometimes, but I try to take notes if I'm doing something complicated. Just a few lines in a text file with some context plus the command I used. ffmpeg.txt comes in very handy.
It's 500. I never changed it, so that's the default of either Bash or my distro. It's fine for me.
@bender@twtxt.net That's what I suspected. I compared the text, including the alt text for the image. I guess I didn't read it carefully enough.
No worries @aelaraji@aelaraji.com, it happens to the best of us.
@aelaraji@aelaraji.com I'm definitely putting that in the list. I like tmux but I just can't wrap my head around the controls. This looks more like a tiling window manager.
@aelaraji@aelaraji.com Is that a terminal multiplexer? If so, which one? I suspect it says at the top but I can't quite read the text.
@bender@twtxt.net Fair point… :)
@prologic@twtxt.net Planning it ahead of time is all well and good if you have the money to buy 6 or 8 hard drives at once. I really don't, and I want to mirror the whole thing offsite anyway. Mergerfs will let me do it now, and I'll buy a drive for SnapRAID in short order.
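For reference, the kind of layout I have in mind is roughly this (mount points and drive names are made up, not my actual setup):

```
# /etc/fstab: mergerfs pools the data drives into one mount point
/mnt/disk* /mnt/pool fuse.mergerfs cache.files=off,category.create=mfs,moveonenospc=true,fsname=pool 0 0

# /etc/snapraid.conf: one parity drive, content files spread across the data drives
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3
```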
QOTD: Have you ever suffered significant data loss? If so, what went wrong?
@bender@twtxt.net Ha, we both looked it up at once. You win.
@bender@twtxt.net Synology uses single-volume Btrfs on software RAID, which seems to be pretty solid in my research, but that's less flexible than ZFS. https://kb.synology.com/en-us/DSM/tutorial/What_was_the_RAID_implementation_for_Btrfs_File_System_on_SynologyNAS
@bender@twtxt.net Exactly. It's just not an option with warnings like that all over the place. Some people have had success, but I'm not risking it. https://lore.kernel.org/linux-btrfs/20200627032414.GX10769@hungrycats.org/
@prologic@twtxt.net ZFS is fine but it's out-of-tree and extremely inflexible. If Btrfs RAID5/6 was reliable it would be fantastic. Add and remove drives at will, mix different sizes. I hear it's mostly okay as long as you mirror the metadata (RAID1), scrub frequently, and don't hammer it with too many random reads and writes. However, there are serious performance penalties when running scrubs on the full array, and random reads and writes are the entire purpose of a filesystem.
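For anyone curious, the setup people report success with (RAID5 data, RAID1 metadata, frequent scrubs) looks roughly like this; device names and the mount point are placeholders:

```
# Create the array with RAID5 data and mirrored (RAID1) metadata
mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd

# Or convert an existing filesystem's profiles in place
btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/pool

# Scrub often to catch corruption early; this is where the performance hit comes in
btrfs scrub start /mnt/pool
btrfs scrub status /mnt/pool
```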
Bcachefs has similar features (but not all of them, like sending/receiving) and it doesn't have the giant scary warnings in the documentation. I hear it's kind of slow, and it was only merged into the kernel in version 6.7. I wouldn't really trust it with my data.
I bought a couple more hard drives recently and I'm trying to figure out how I'm going to allocate them before badblocks completes. I have a few days to decide. :)
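(For the record, it's just the usual destructive write-mode test on the new, still-empty drives, along the lines of:)

```
# Destructive read-write test on a brand-new drive: wipes everything on it.
# -w write-mode test, -s show progress, -v verbose
badblocks -wsv /dev/sdX
```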
@bender@twtxt.net There's stagit, which generates static HTML files. yarnd itself is just downloading a binary and configuring it (which could also be easier).
@prologic@twtxt.net I remember running yarnd for testing on a couple of different occasions and both times I found all the required command line options to be annoying. If I remember correctly, running it with missing options would only tell you the first one that was missing, and you'd have to keep running it and adding that option before it would work.
This was a couple of years ago, so I don't know if anything's changed since then. It's really not a big problem, because it would be run with some kind of preset command line (systemd service, container entrypoint) in a production environment.
yarnd itself is just downloading a binary and configuring it (which could also be easier).
@bender@twtxt.net I avoid install scripts like the plague. This isn't Windows and they're usually poorly written. I think it's better to prioritize native packages (or at least AUR, MPR, etc.) and container images.
@prologic@twtxt.net That's good advice. I don't open any ports to the Internet if I can possibly avoid it. Everything is on WireGuard, even stuff that doesn't really need to be. It's super easy to set up on other people's computers, too. Even on Windows.
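A bare-bones client config is only a few lines, which is most of why it's so painless to roll out; keys, addresses, and the endpoint below are placeholders:

```
# /etc/wireguard/wg0.conf on the client, brought up with: wg-quick up wg0
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25
```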
@prologic@twtxt.net Both are very nice in my opinion. I don't think you could make a mistake with either, at least when it comes to looks.
36/2 = 18; at 25 Twts per page, that's about ~72% of the search/view real estate you're taking up! wow 🤩 -- I'd be very interested to hear what ideas you have to improve this? Those search filters were created so you could sift through either your own Timeline or the Discover view easily.
@prologic@twtxt.net I think this would be solved in the short to mid-term by fixing the mute function. Or, maybe, adding a "Hide this user from Discover" button.
@prologic@twtxt.net Picnic CSS is my favorite one at first glance.
@prologic@twtxt.net Are they changing unique IDs? I hate when people do that. If I ever do that with any of my feeds, feel free to mock me relentlessly.
@bender@twtxt.net Makes sense. We definitely need the ability to mute feeds from the Discover feed.
@movq@www.uninformativ.de I remember your solution. It's very simple, I like it.
Yes, my backup target is my home server. I have a hard drive dedicated to Restic repositories. It's still not a real backup as I don't have anything offsite, but it's better than my previous solution. I had two very old hard drives I kept plugged in to my desktop PC and I would (on very rare occasion) plug in another hard drive and copy all the files over to it. Luckily, I've never suffered any significant data loss and I would rather not start now. Once I have automated backups on each of my machines, the next project is getting those backups offsite.
@prologic@twtxt.net I think one-way feeds are okay and we shouldn't discourage them so strongly. On the other hand, I think it's the duty of a poderator to filter out feeds that are just noise from the Discover feed. I definitely consider a truckload of one-way posts, mostly in another language, to be noise. Did you get rid of Gopher Chat too? I'd call that noise, for sure.
@bender@twtxt.net Standard twtxt is a microblog in its purest form. A blog, but smaller. It's just a list of posts to read, and that's an echo chamber in the same way my regular blog is an echo chamber. I don't think there's anything wrong with that.
@prologic@twtxt.net I support the delisting of ciberlandia.pt in the Discover feed due to the sheer volume of posts from there and the fact that most of them are in Portuguese with this being a predominantly English-language pod.
@prologic@twtxt.net Why do we need to avoid posting to the void? That's pretty much what twtxt was made for. I don't like the "Legacy feed" terminology, either. I support the delisting of ciberlandia.pt but I think this change is heading in a bad direction.
I like @sorenpeter@darch.dk's suggestion. It gives the users the information and lets them make their own decision instead of putting a big scary warning in their face. That's what Microsoft does, and we shouldn't be Microsoft.
@prologic@twtxt.net How do you manage multiple remotes? Do you just run restic backup for each one?
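I imagine it would be something like looping over the repositories, one backup run each (repository URLs here are invented):

```
#!/bin/sh
# Run the same backup against each repository in turn
for repo in /srv/restic-local rest:https://backup.example.com/repo sftp:user@offsite.example.com:/restic; do
    restic -r "$repo" --password-file /root/.restic-pass \
        backup --exclude-caches --one-file-system /home
done
```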
I wish there was a good GUI for Restic so I could have non-technical people using the same thing I do.
QOTD: How do you back up your files?
I asked this one almost a year ago and I started using Restic shortly after that. When I started, I was only backing up my home folder to the repository over NFS. Now, I'm backing up the entire root filesystem to a repository using the REST backend so I can run Restic as root without breaking the permissions.
I'm working on automating it now and I'm trying to come up with something using pinentry, but my proof-of-concept is getting pretty obtuse. It will be spread out in a shell script, of course, but still.
systemd-inhibit --what=handle-lid-switch restic --password-command='su -c "printf '"'"'GETPIN\n\'"'"' | WAYLAND_DISPLAY=wayland-1 pinentry-qt5 | grep ^D | sed '"'"'s/^D //'"'"'" mckinley' --repository-file /root/restic-repo backup --exclude-file /root/restic-excludes --exclude-caches --one-file-system /
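Stripped of all the quoting, the password command boils down to this little exchange with pinentry (the Wayland display name will vary):

```
# pinentry speaks the Assuan protocol on stdin/stdout: send GETPIN, it pops up
# a dialog, and the entered passphrase comes back on a line starting with "D ".
printf 'GETPIN\n' \
    | WAYLAND_DISPLAY=wayland-1 pinentry-qt5 \
    | grep '^D' \
    | sed 's/^D //'
```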
I'm curious to see how everyone's backup solutions have changed since last year.
@aelaraji@aelaraji.com I've never had a use for Syncthing, but I hope I get one at some point so I can see how it works. Do three-way merges work on KeePass database files?
I use KeePassXC because I really only use one device. I imagine it would be challenging to rsync the database around if I needed my passwords on more machines. It's probably fine if you're deliberate enough, but I don't think it would take long before I'd lose a password by editing an outdated copy of the database and overwriting the main one.
I like the simple architecture of Pass, and it would indeed lend itself well to a Git repository, but I don't like that service names are visible on the filesystem. pass-tomb might mitigate this somewhat, but it seems messy and I don't know if it would work with Git without compromising the security of the tomb.
What's so good about Bitwarden? Everyone seems to love it. I like that it can be self-hosted. I certainly wouldn't want a third party in control of my password database.
@prologic@twtxt.net This seems like it would drive a wedge between Yarn.social and the people on regular old twtxt.
@prologic@twtxt.net I use LocalMonero (onion) to buy Monero with cash sent by mail. You can sell on there if you want to convert back to fiat. People also like Bisq, which is peer-to-peer software for buying and selling cryptocurrency.
To accept Monero, all you need is a wallet program. I recommend Feather Wallet. Create your wallet in there, then copy the wallet files over for monero-wallet-rpc to use with MoneroPay; see docker-compose.yaml.
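Roughly, the wallet-RPC side looks like this; paths, port, and node address are placeholders, and the real invocation is in MoneroPay's docker-compose.yaml:

```
# Serve the wallet over RPC so MoneroPay can generate addresses and watch for payments
monero-wallet-rpc \
    --wallet-file /srv/monero/shop-wallet \
    --password-file /srv/monero/shop-wallet.pass \
    --rpc-bind-port 18083 \
    --daemon-address node.example.org:18081 \
    --disable-rpc-login
```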
@prologic@twtxt.net Is it really banned? I thought the regulators just pressured the centralized exchanges to delist privacy coins without actually banning them outright.
@prologic@twtxt.net I concur. This little community of ours is here because of you, and I'm very grateful for that. :)
@movq@www.uninformativ.de It's very useful. I always start my music player in a tmux session so I can SSH in, attach it, and control the music from another computer. It's also handy for letting long-running tasks on a remote machine continue in the background even if the SSH connection is broken.
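The whole trick is just a named, detached session:

```
# Start cmus detached in a session named "music"
tmux new-session -d -s music cmus

# Later, from an SSH login on another machine
tmux attach -t music      # take over the player; Ctrl-b d detaches again
```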
@prologic@twtxt.net Monero has stayed a little more stable than Bitcoin, but it's still a cryptocurrency and it's still going to fluctuate quite a bit. It also uses a proof-of-work algorithm, so it still consumes quite a bit of electricity. I think the value of being able to send any amount of money, any time of the day, to anyone on the planet in 20 minutes (appears in 2 minutes, spendable in 20), completely privately, with near-zero transaction fees exceeds the drawbacks.
Unfortunately, the characteristics that make it useful as a global currency for day-to-day transactions also make it useful for people doing illicit things. Many exchanges, fearing regulatory action, won't accept Monero for the same reason they won't accept Bitcoin from a mixer.
Monero shouldn't be banned just because people use it for bad things. It's just a tool, and it can be used for good or evil. It's the same reasoning countries use when they ban or restrict Tor usage.
@prologic@twtxt.net Iām in if you accept XMR
Actually, kyun.host might offer container hosting at some point.
On-demand Linux containers.
Run almost anything, without having to touch the command line.
Coming Soon
@prologic@twtxt.net That sounds great. The only other container-level hosting service I've heard of is PikaPods, which seems much more managed than cas.run would be. It has customizable tier-based pricing and the minimum specs are ¼ of a CPU core, 256 MB of memory, and "about 100 MB" of storage for $1/mo, which seems awfully steep compared to a low-cost VPS. I don't know if PikaPods offers an IPv4 reverse proxy or not.
Monero uses cryptography to make transactions anonymous and the coins completely fungible. With most cryptocurrencies, including Bitcoin, the transactions associated with an address are public and you can trace those coins all the way back to their origin. This means that not all coins are the same. For example, some exchanges won't accept Bitcoin that comes from a mixer because they assume you're doing something untoward.
With Monero, it's not possible to trace any transactions with just an address. People can't see what you're spending your money on or where your coins came from. Transaction fees using Monero are also very small. It's less than the equivalent of 1 cent in USD.
Minuscule transaction fees and anonymity make it the best choice in my opinion for buying goods and services online. Monero is much more like "digital cash" than Bitcoin, which I think is better described as "digital gold".
@prologic@twtxt.net I might have mentioned this already, but you might want to look into MoneroPay for payment processing when you get to that point with cas.run. It's a completely self-hosted backend service for receiving and tracking Monero payments and it's written in Go.
@movq@www.uninformativ.de You could always keep it running in a detached tmux session and attach it when you see the spike. Processes that were recently using the network stay in the list for 10 or 15 seconds after they're finished, so you don't have to catch it in the act.
@prologic@twtxt.net $0.15 sounds great but you need to make money doing this. Is it still going to be use-based pricing or will there be tiers like conventional VPS providers?
You could get better value for money with a super cheap VPS without IPv4 connectivity, but it wouldn't be worth it if you didn't need the extra resources, since a VPS wouldn't be practical with specs this low. It would also require significantly more effort on the part of the operator.
I would understand paying a small premium for the lowest-cost tier for the sake of convenience, especially if you operated a reverse proxy with IPv4 connectivity.
@prologic@twtxt.net $0.50/month seems reasonable. Is this for cas.run?
@movq@www.uninformativ.de I use nethogs for this sort of thing: https://github.com/raboof/nethogs
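Basic usage is just pointing it at an interface (swap in whatever yours is called):

```
# Per-process bandwidth on wlan0, refreshing every 5 seconds
sudo nethogs -d 5 wlan0
```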
@prologic@twtxt.net What is an mCore? 1/1000th of a core?
@prologic@twtxt.net Plexamp has some really cool features. It's a shame it's proprietary and dependent on central services.
@movq@www.uninformativ.de Interesting. mpd + ncmpcpp seems to be a common setup among our type but I really like cmus. Whipper is my CD ripper of choice and it is excellent. It queries AccurateRip for checksums and MusicBrainz for metadata, and can encode to any format you want. It also creates a nice log file like EAC does (it can even create EAC-compatible logs with a plugin) so you can verify that it was ripped properly.
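If anyone wants to try it, a typical first run is something like this, from memory (check whipper --help for the details):

```
# One-time: find your drive's read offset against the AccurateRip database
whipper offset find

# Rip the disc with MusicBrainz metadata and AccurateRip verification
whipper cd rip
```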