Hmm: gopher://uninformativ.de/0/phlog/2025/2025-08/2025-08-18--permacomputing.txt
That’s fairly recent, but fully justified. I give up! :-D
@lyse@lyse.isobeef.org (Haha, every time I read the word “Gophers”, I have to stop and remind myself that this is about Golang. 🤪)
I should have checked the CHANGELOG first. LOL.
Fellow Gophers might find this interesting, too: https://flak.tedunangst.com/post/what-the-go-proxy-has-been-doing
@movq@www.uninformativ.de I noticed that:
gopher://uninformativ.de/0/phlog/2018/2018-06/2018-06-01.txt
Is the first non-justified, and it is when you started using Markdown. The last justified one was:
gopher://uninformativ.de/0/phlog/2018/2018-05/2018-05-27.txt
So, I might have found the mystery! :-D
Haha, fun! I browsed your gopher hole a little bit. I noticed some entries are fully justified (formatting), while others are not. I didn’t notice a pattern, though it makes sense not to use justification on entries with code. Yet, some prose entries are, and some are not. A mystery. :-)
@bender@twtxt.net The address is/was correct but probably got mangled by the Markdown renderer. Let’s try again in a code block:
gopher://uninformativ.de/0/phlog/2025/2025-09/2025-09-03--roophloch.txt
@bender@twtxt.net Yeah, the acronym is funny. 😅
Wandering through the woods for 8km … gopher://uninformativ.de/0/phlog/2025/2025-09/2025-09-03--roophloch.txt
This probably means that I can no longer host my own website. I don’t want to deploy something like Anubis, because that ruins the whole thing: I want it to be accessible from ancient browsers, like OS/2 or Windows 3.11.
I’ll keep an eye on it for a while. Maybe try to block some IPs.
Sooner or later, I’ll take the website down and shift everything to Gopher.
@dce@hashnix.club I switched over to following you on Gopher, because why not. 😅
So, in addition to HTTPS and Gemini, my twtxt should now also be available over Gopher (gopher://hashnix.club:70/0/~dce/twtxt.txt). Not sure who, if anyone, would need this; but since my tilde provides Gopher hosting, I may as well mirror my twtxt there as well.
@bender@twtxt.net `curl -s gopher://…` does that for you.
@movq@www.uninformativ.de having to go to a gopher proxy to see a text document better served on readily available web servers… 🤭, but I digress. Verbatim text:
What's Missing from "Retro"
~softwarepagan
------------------------------------------------------------------
You know, often, when I say I miss older ways of computing or
connecting online, people tell me "there's nothing stopping you
from doing that now!" and they are technically correct in most cases
(though I can't, for example, chat with friends on MSN ever
again...) However, let me explain that while this type of thing can
*sort of* fill that hole in my heart, it isn't *the same.*
Say, for example, I wanted to connect with others over a BBS. This
wouldn't offer the same types of connections it used to. While
there are BBSes around with active users, they're no longer there
to discuss movies, Star Trek, D&D, games, etc. They're there to
discuss *BBSes.* The same can be said for Gopher, old-school forums
and all sorts of revival projects (such as Escargot, Spacehey,
etc.) Retrocomputing enthusiasts, while they have a variety of
interests, are often in these spaces to discuss the medium itself
and not other topics. This exists at a stark contrast from how
things were in the past, where a non-tech-inclined person may learn
the tech to connect with likeminded others (as I did as a
Zelda-obsessed kid.)
The same can be said of old media. People will say "well, nobody is
stopping you from watching old shows/movies now!" Again, they are
technically correct. I can go home right now and watch *Star Trek:
The Next Generation* to my heart's content. It will never again,
however, be current, or new. When something is new, it serves as a
shared cultural experience. Remember how *Game of Thrones* felt in
the mid-to-late 2010s? Yeah, that.
It's sad. I sustain myself on a mixed diet of old things, new
things, and new things intended for old millennials like me who like
old things. It can be bittersweet.
What’s Missing from “Retro”: gopher://midnight.pub/0/posts/2679
Really, it won’t be long until I give the world the finger and move everything behind Gopher or Gemini. It’ll be a while until the bots find me there.
@bender@twtxt.net Yeah, well, it’s a bit like twtxt. There is a Gopher community, but it’s small. I actually don’t like that HTTP is so easily accessible. I don’t like it that much when people post links to my site on HackerNews or something like that. Too much exposure.
Gopher is a small world. It’s slow and cozy.
And much like twtxt, the protocol is simple®, so it’s easier to tinker with it.
@movq@www.uninformativ.de why Gopher to babble, and not just HTTP? I mean, may as well just write plain text files on your machine, and leave them there, right?
Gopher and Mastodon are two completely different things. That’s where my confusion comes from.
@bender@twtxt.net Both Gopher and Mastodon are a way for me to “babble”. 😅 I basically shut down Gopher in favor of Mastodon/Fedi last year. But the Fediverse doesn’t really work for me. It’s too focused on people (I prefer topics) and I dislike the addictive nature of likes and boosts (I’m not disciplined enough to ignore them). Self-hosting some Fedi thing is also out of the question (the minimalistic daemons don’t really support following hashtags, which is a must-have for me).
I’ll probably keep reading Fedi stuff, I just won’t post that much, I think.
@movq@www.uninformativ.de how does Gopher relate to Mastodon? Are you getting off the Fedi completely?
Gopher server is back online and I’ll be phasing out Mastodon.
gopher://uninformativ.de
(No, I won’t do multi-protocol twtxt again. 😅)
When I chose the MIT license for all of my software, I thought:
“Should I use GPL, which I don’t really understand? Is that worth it? Yeah, there is a theoretical possibility that some company might use my code in their proprietary product … and then what? Should I sue them to enforce the GPL? I’m not going to do that anyway, so I’ll just use the MIT license.”
And now we have those LLM scrapers and now it’s suddenly a reality that these companies (ab)use my code. I can see it in my logs. I didn’t expect that back then.
GPL wouldn’t help, either, of course. (Regardless, I now think that GPL would have been the better choice anyway.)
I’m honestly considering taking my code and website offline. Maybe make it accessible through some obscure protocol like Gopher or Gemini, but no more HTTP.
(Yes, Anubis might help. Temporarily.)
I’m just tired.
`irc.mills.io` running behind Caddy Layer 4. However I don't terminate TLS at the edge in this case.
@prologic@twtxt.net OH SHIT using this for a protocol like gopher is smart! might have to try that for gemini so i don’t have to keep a port open for that
@bender@twtxt.net Sure! 👍
{
    ...

    # Layer 4 Reverse Proxy
    layer4 {
        # Gopher
        0.0.0.0:70 {
            route {
                proxy <internal_ip>:70
            }
        }

        # IRC (TLS)
        0.0.0.0:6697 {
            route {
                proxy <internal_ip>:6697
            }
        }
    }
}
Bloody hell 🤦‍♂️🤦‍♂️
$ jq -r --arg host "gopher.mills.io" '. | select(.request.host==$host) | "\(.request.client_ip) \(.request.uri) \(.request.headers["User-Agent"])"' mills.io.log-au | while IFS=$' ' read -r ip uri ua; do asn="$(geoip -a "$ip")"; echo "$asn $ip $uri $ua"; done | grep -E '^45102.*' | sort | head
45102 47.251.70.245 /gopher.floodgap.com/0/feeds/democracynow/2015/Oct/14/0 ["Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"]
45102 47.251.84.25 /gopher.floodgap.com/0/feeds/voaheadlines/2014/Mar/09/voanews.com-content-article-1867433.html ["Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"]
45102 47.82.10.106 /gopher.viste.fr/1/OnlineTools/hangman.cgi%3F0692937396569A52972EB2 ["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 Edg/114.0.1823.43"]
45102 47.82.10.106 /gopher.viste.fr/1/OnlineTools/hangman.cgi%3F9657307A96569A52974634 ["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 Edg/114.0.1823.43"]
45102 47.82.10.106 /gopher.viste.fr/1/OnlineTools/hangman.cgi%3FB7571C7896569A529E6603 ["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 Edg/114.0.1823.43"]
45102 47.82.10.106 /gopher.viste.fr/1/OnlineTools/hangman.cgi%3FB75EF81296569A529E6617 ["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 Edg/114.0.1823.43"]
45102 47.82.10.106 /gopher.viste.fr/1/OnlineTools/hangman.cgi%3FC6564ADB96569A5A9E660C ["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 Edg/114.0.1823.43"]
@andros@twtxt.andros.dev Broke on me for having alt-urls I think 🥲
twtxt---profile-layout: Wrong type argument: char-or-string-p, ("https://aelaraji.com/twtxt.txt" "gemini://box.aelaraji.com/twtxt.txt" "gopher://box.aelaraji.com/0/twtxt.txt")
`robots.txt` file. Only noticed it because the OpenAI bot was hitting me with a lot of nonsensical requests. Here is the list from last month:
(I keep thinking that going back go Gopher or Gemini might be a good idea at this point. They don’t care about that, probably. 🫣)
EdgeGuard Update:
I am now in a position where I no longer have any ports open on my firewall at the Mills DC. 🥳 All services (Gopher, SMTP, IRC, SSH, HTTP) are being proxied through my edge network 💪
@eapl.me@eapl.me here are my replies (somewhat similar to Lyse’s and James’)
Metadata in twts: Key=value is too complicated for non-hackers and hard to write by hand. So if there is a need, then we should just use #NSFW or the alt-text field in markdown image syntax if something is NSFW.
IDs besides datetime: When you edit a twt you should preserve the datetime, if location-based addressing is to have any advantage over content-based addressing. If you change the timestamp, it’s a new post, just like in any other blog CMS.
Caching: Yes, all good ideas, but that is more a task for the clients, not for the serving of the twtxt.txt files.
Discovery: User-Agent-based discovery can be made better. I’m working on a wrapper script in PHP, so you don’t need to go through Apache’s log files to see who fetches your feed. But for Gemini and Gopher you need to rely on something else. That could be using my webmentions-for-twtxt suggestion, or simply defining an email metadata field for letting a person know you follow their feed. Interesting read about why WebMentions might be a bad idea. Twtxt being much simpler than a full-featured IndieWeb site, a lot of those concerns do not apply here. But that’s the issue with any open inbox: it is hard to solve without some form of (centralized or community) spam moderation.
Support more protocols besides http/s: Yes, why not, if we can make clients that merge or differentiate between the same feed served via multiple URLs.
Languages: If the need is big, then make a separate feed. I don’t mind seeing stuff in other languages, as the volume is low, and you have translation tools if you need to know what’s going on. And again, when there is a need for easier switching between posting to several feeds, it’s about building clients with a UI that makes it easy, not something that should take up space in the format/protocol.
Emojis: I’m not sure what this is about. Do you want to use emojis as avatars in CLI clients, or is it just about rendering emojis?
Simplified twtxt - I want to suggest some dogmas or commandments for twtxt, from which we can work our way back to how to implement different features like replies/threads:
It’s a text file, so you must be able to write it by hand (i.e. no app logic) and read it by eye. If you edit a post, you change the content, not the timestamp. Otherwise it will be considered a new post.
The order of lines in a twtxt.txt must not hold any significance. The file is a container and each line an atomic piece of information. You should be able to run `sort` on a twtxt.txt and it should still work.
The transport protocol should not matter, as long as the file served is the same. HTTP and HTTPS are preferred, so it is suggested that feeds served via Gopher or Gemini also be provided over HTTP(S).
Do we need more commandments?
IIRC, in twtxt v2 it starts out prohibited.
This is not true. There are no issues supporting fetching feeds via Gemini/Gopher. This is totally fine. What will likely happen is “recommendations” and “drawbacks of using Gemini/Gopher”
“Fu*** IRC maaan, all the cool kids are on Discord! IRC sucks”
LOL, now substitute IRC and Discord with Gopher/Gemini and Web.
I hope you get the joke 😅
(#bqor23a). It’s the same one. My pod doesn't have the Root Twt: https://twtxt.net/twt/bqor23a => 404 Not Found.
Oh, and I think I said this before, but just in case, fuck Gemini. Hell, fuck Gopher too. Bring on telnet, and UUCP. 😈
@quark@ferengi.one Mine is a little overkill 😂 but I need to do something for practice:
#!/bin/bash
set -e
trap 'echo "!! Something went wrong...!!"' ERR
#============= Variables ==========#
# Source files
LOCAL_DIR=$HOME/twtxt
TWTXT=$LOCAL_DIR/twtxt.txt
HTML=$LOCAL_DIR/log.html
TEMPLATE=$LOCAL_DIR/template.tmpl
# Destination
REMOTE_HOST=remotHostName # Host already setup in ~/.ssh/config
WEB_DIR="path/to/html/content"
GOPHER_DIR="path/to/phlog/content"
GEMINI_DIR="path/to/gemini-capsule/content"
DIST_DIRS=("$WEB_DIR" "$GOPHER_DIR" "$GEMINI_DIR")
#============ Functions ===========#
# Building log.html:
build_page() {
    twtxt2html -T "$TEMPLATE" "$TWTXT" > "$HTML"
}
# Bulk Copy files to their destinations:
copy_files() {
    for DIR in "${DIST_DIRS[@]}"; do
        # Copy both `txt` and `html` files to the Web server and only `txt`
        # to gemini and gopher server content folders
        if [ "$DIR" == "$WEB_DIR" ]; then
            scp -C "$TWTXT" "$HTML" "$REMOTE_HOST:$DIR/"
        else
            scp -C "$TWTXT" "$REMOTE_HOST:$DIR/"
        fi
    done
}
#========== Call to functions ===========#
build_page && copy_files
`url` field in the feed to define the URL for hashing. It should have been the last encountered one. Then, assuming append-style feeds, you could override the old URL with a new one from a certain point on:
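To make that concrete, here is a purely hypothetical append-style feed (URLs, timestamps and text are made up): if clients took the last `# url` metadata line encountered so far, everything above the second `url` line would keep hashing against the old URL, and everything after it would hash against the new one.

# url = https://old.example.org/twtxt.txt
2024-01-01T12:00:00Z	Hashed against the old URL.
# url = https://new.example.org/twtxt.txt
2024-06-01T12:00:00Z	Hashed against the new URL from here on.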
I was not suggesting that everyone needs to set up a working webfinger endpoint, but that we take the format of nick+(sub)domain as the base for generating the hash, together with the message date and content.
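Purely as an illustration of that idea (the separator, encoding and truncation below are made up, not a spec), a hash keyed on nick+domain instead of the feed URL could look something like this:

package main

import (
	"crypto/sha256"
	"encoding/base32"
	"fmt"
	"strings"
)

// hashTwt sketches the proposal: hash nick+(sub)domain instead of the feed URL,
// together with the twt's datetime and content.
func hashTwt(nick, domain, datetime, content string) string {
	payload := strings.Join([]string{nick + "@" + domain, datetime, content}, "\n")
	sum := sha256.Sum256([]byte(payload))
	enc := base32.StdEncoding.WithPadding(base32.NoPadding).EncodeToString(sum[:])
	return strings.ToLower(enc[len(enc)-7:]) // short, twt-style hash
}

func main() {
	fmt.Println(hashTwt("xuu", "txt.sour.is", "2024-01-01T12:00:00Z", "Hello twtxt!"))
}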
If we omit the protocol prefix from the way we do things now, will that not solve most of the problems? In the case of gemini://gemini.ctrl-c.club/~nristen/twtxt.txt they also have a working twtxt.txt at https://ctrl-c.club/~nristen/twtxt.txt … damn, I just noticed the `gemini.` subdomain.
Okay, what about defining a preferred protocol as part of the hash schema? So 1: https, 2: http, 3: gemini, 4: gopher?
More specifically, a gopher-based zine: www.mmn.ca/malware
I would love to see a world where one’s twtxt feed is defined by webfinger. So @xuu@txt.sour.is => https://text.sour.is/user/xuu/twtxt.txt. Then my identity can exist independent of the feed location. And I can host multiple protocol types for my feed, i.e. http/gopher/Gemini/IRC DCC/etc.
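A rough sketch of what such a WebFinger lookup could return (only the JRD shape follows RFC 7033; the endpoint and the rel value here are made up, not anything that exists today):

GET https://txt.sour.is/.well-known/webfinger?resource=acct:xuu@txt.sour.is

{
  "subject": "acct:xuu@txt.sour.is",
  "links": [
    {
      "rel": "https://example.org/rel/twtxt",
      "type": "text/plain",
      "href": "https://text.sour.is/user/xuu/twtxt.txt"
    }
  ]
}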
Check out the Nex Protocol. It’s designed to be even simpler than Gemini and Gopher. What do you think? Could be great to host a twtxt feed on.
Question to all you Gophers out there: How do you deal with custom errors that include more information and different kinds of matching them?
I started with a simple `var ErrPermissionNotAllowed = errors.New("permission not allowed")`. In my function I then wrap that using `fmt.Errorf("%w: %v", ErrPermissionNotAllowed, failedPermissions)`. I can match this error using `errors.Is(err, ErrPermissionNotAllowed)`. So far so good.
Now for display purposes I’d also like to access the individual permissions that could not be assigned. Parsing the error message is obviously not an option. So I thought, I create a custom error type, e.g. `type PermissionNotAllowedError []Permission`, and give it some `func (e PermissionNotAllowedError) Error() string { return fmt.Sprintf("permission not allowed: %v", e) }`. My function would then return this error instead: `PermissionNotAllowedError{failedPermissions}`.
At some layers I don’t care about the exact permissions that failed, but at others I do, at least when accessing them. A custom `func (e PermissionNotAllowedError) Is(target error) bool` could match both the general `ErrPermissionNotAllowed` as well as the `PermissionNotAllowedError`. Same with `As(…)`. For testing purposes the `PermissionNotAllowedError` would then also try to match the included permissions, so assertions in tests would work nicely. But having two different errors for different matching seems not very elegant at all.
Did you ever encounter this scenario before? How did you address this? Is my thinking flawed?
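Not an answer, just a minimal runnable sketch of the setup described above (the `Permission` type and the `assign` function are invented for illustration): giving the typed error an `Is` method that reports a match against the plain sentinel lets `errors.Is(err, ErrPermissionNotAllowed)` keep working, while `errors.As` exposes the failed permissions.

package main

import (
	"errors"
	"fmt"
)

// Permission is a stand-in type, just for this sketch.
type Permission string

// Sentinel for callers that only care about the kind of failure.
var ErrPermissionNotAllowed = errors.New("permission not allowed")

// PermissionNotAllowedError additionally carries the permissions that failed.
type PermissionNotAllowedError []Permission

func (e PermissionNotAllowedError) Error() string {
	// Convert to the underlying slice so %v doesn't re-enter Error().
	return fmt.Sprintf("permission not allowed: %v", []Permission(e))
}

// Is makes errors.Is(err, ErrPermissionNotAllowed) succeed for the typed error,
// so one returned value serves both the coarse and the detailed check.
func (e PermissionNotAllowedError) Is(target error) bool {
	return target == ErrPermissionNotAllowed
}

// assign is a made-up operation that rejects everything except "read".
func assign(requested []Permission) error {
	var failed PermissionNotAllowedError
	for _, p := range requested {
		if p != "read" {
			failed = append(failed, p)
		}
	}
	if len(failed) > 0 {
		return failed
	}
	return nil
}

func main() {
	err := assign([]Permission{"read", "admin", "billing"})

	// Coarse check: only the kind of failure matters.
	fmt.Println(errors.Is(err, ErrPermissionNotAllowed)) // true

	// Detailed check: get at the individual permissions for display.
	var pErr PermissionNotAllowedError
	if errors.As(err, &pErr) {
		fmt.Println(pErr) // permission not allowed: [admin billing]
	}
}

Whether the coarse sentinel is worth keeping at all, or whether callers should just use `errors.As` everywhere, is the open design question in the post.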
@prologic@twtxt.net It’s called “cgod” and it isn’t written in C or Go? I want my money back…
I also like Gopher more than Gemini. The problem Gemini is trying to solve is better solved by just writing static HTML 4.01 pages.
I finally got my gopherhole to mirror my site, mostly - gopher://oh.mg:70/1
also at gemini://om.gay/twtxt.txt and gopher://oh.mg:70/0/twtxt.txt
… and also Gopher, but I fixed that too
@fastidious@arrakis.netbros.com the things Gemini has going for it are mutual TLS and lack of JavaScript. Which makes for a secure albeit boring experience (much like gopher). The fake markdown is a bit of a drag.
A render mode for Gemini probably wouldn’t be too hard. There are markdown-to-Gemini libs out there.
With Web3, the whole trust-a-3rd-party-browser-ext + high fees + env impact for compute and storage are serious no-gos for me… I have heard one too many horror stories about clicking the wrong link and some script draining your MetaMask wallet.
@prologic@twtxt.net that seems to match my numbers. are you picking up the few gophers out there?
kinda makes me wonder about the ~300k you have cached. y’all got the library of alexandria over there.
@prologic@twtxt.net yeah, it reads a seed file. I’m using mine. It scans for any mention links and then scans them recursively. It reads from http/s or gopher. I don’t have much of a db yet… it just writes the feed to disk and checks modified dates, but I will add a db that has hashes/mentions/subjects and such.
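Not their actual code, but the mention-scanning step could look roughly like this (the regex and the function name are made up; `@<nick url>` is the standard twtxt mention syntax):

package main

import (
	"fmt"
	"regexp"
)

// mentionRE matches twtxt mentions of the form @<nick url>.
var mentionRE = regexp.MustCompile(`@<(\S+) (\S+?)>`)

// mentions returns nick -> feed URL for every mention found in a twt,
// which a crawler could then fetch and scan recursively.
func mentions(twt string) map[string]string {
	out := make(map[string]string)
	for _, m := range mentionRE.FindAllStringSubmatch(twt, -1) {
		out[m[1]] = m[2]
	}
	return out
}

func main() {
	twt := "Hey @<dgy gopher://tilde.team/0/~dgy/twtxt.txt> and @<prologic https://twtxt.net/user/prologic/twtxt.txt>!"
	fmt.Println(mentions(twt))
}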
btw my main feed is on gopher now too gopher://tilde.team/0/~dgy/twtxt.txt
Enough gophering and twtexting for now, taking a break to enjoy the evening
Minimal gophermap design for gopher://oevl.info #lessismore
Doing some housekeeping at gopher://oevl.info as some old folders were being published
@lyxal@twtxt.net @prologic@twtxt.net yah, the service can have a flag for allowing non-TLS for development, but it ignores non-TLS by default.
are there some users that use alternative protos for twtxt? like ftp/gopher/dnsfs 🤔
Can I interest you in the latest edition of Tales From The Dork Web when it’s about Gopher, Gemini and The Smol Internet? https://thedorkweb.substack.com/p/gopher-gemini-and-the-smol-internet
@johanbove@johanbove.info After hearing about your Gopher server and seeing a .plan link on your website, I was disappointed to not see a finger daemon running on your server
New post on my Gopher site. Back to updating it once a month.
What I need is to serve gopher and http behind a proxy and under the same domain, in a way that a unique container serves each protocol
Need to fix the infrastructure that lets me serve content via gopher and http. Sometimes https calls go to my gopher server.
Well @freemor@freemor.homelinux.net, the way I am serving my content via http and gopher may need some fixing, thanks for the follow #twtxt
@kas@enotty.dk My Gopher URL is gopher://oevl.info and my twtxt.txt file is available at gopher://oevl.info/twtxt.txt
Testing a lsyncd configuration to keep twtxt.txt in sync between the web server and the gopher server.
Using lsyncd to sync my twtxt file between my web server and gopher server roots
@kas@enotty.dk I like your gopher server’s formatting, nice and clean. How did you implement the TLS certificate?
@kas@enotty.dk [re: gopher client] If you happen to be on Windows, then Gopher Browser for Windows by Matt Owen is pretty nice, otherwise I use Lynx indeed for gopher.
Enjoying the constraints of the Gopher protocol as a minimalistic zen-mode kind of online publishing revival.
@mdom@domgoergen.com The news site at gopher://taz.de:70/ is really cool. How did you make it?
My gopher site is about 38K big - still plenty of space left on the 1.24MB floppy disk
Updated my daily journal at gopher://gopher.johanbove.info:70/notes
Mirrored on gopher://gopher.johanbove.info/0/twtxt.txt
@davebucklin@davebucklin.com Welcome to the IndieWeb! Thanks also for introducing me to #twtxt. Gophering this for sure.
Hot take: gopher is not a failed attempt to invent the WWW, but its own crystallization of a coherent philosophy – to present jump links between text files, without all the other bollocks. That makes it as valuable today as it was 25 years ago.
it’s really weird that my most popular medium post thus far this year is a rant about extensions to the gopher protocol
Avoiding the gravity-well of webbiness in gopher – John Ohno – Medium https://medium.com/@/avoiding-the-gravity-well-of-webbiness-in-gopher-68a52a1094e5?source=friends_link&sk=dace73b31966b1c112204695c03f8fa8
Pondering what’s inbetween Gopher and the web gopher://zaibatsu.circumlunar.space:70/0/~solderpunk/phlog/pondering-whats-inbetween-gopher-and-the-web.txt
Gopher - Commons Host https://gopher.commons.host/
I am also thinking about using a gopher url for that…
Running a Gopher Server in 2018 https://prgmr.com/blog/gopher/2018/11/09/setting-up-gopherserver.html
@sdk@codevoid.de Well I’ve added the special datetime to my kitbashed client. I store the URL it gets but I’m not doing anything with it right now.
@sdk@codevoid.de as for the 140 character limit: I swear I read somewhere that the limit was really more of a suggestion than anything else. I don’t think any of the clients I’ve looked at enforce it. As long as it’s on a single line, no one seems to care too much.
@sdk@codevoid.de A comment might not be in the spec, but I know several of the twtxt files I’ve looked at have them. I know my kit bashed twtxt client ignores those lines and I’m sure other clients do too.
@sdk@codevoid.de you know, the more I think about it, it might make sense to have it in the twtxt file. It would just need to be a comment line, something like “#follows sdk gopher://codevoid.de/0/tw.txt” on a single line. That way it would be easy to parse out those follows by finding the #follows.
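A rough sketch of that parsing idea (function and variable names invented; the line format is exactly the one proposed above):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// follows extracts "#follows <nick> <url>" comment lines from a twtxt file.
func follows(feed string) map[string]string {
	out := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(feed))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 3 && fields[0] == "#follows" {
			out[fields[1]] = fields[2]
		}
	}
	return out
}

func main() {
	feed := "#follows sdk gopher://codevoid.de/0/tw.txt\n2018-06-15T12:00:00Z\tHello gopherspace!\n"
	fmt.Println(follows(feed)) // map[sdk:gopher://codevoid.de/0/tw.txt]
}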
@sdk@codevoid.de a random mix into the twtxt file seems less clean to me. The former would be easier to implement and simpler for another program to get and parse.
@sdk@codevoid.de That’s an interesting thought. I know most are text files, but at one time there was someone that used a Python CGI script. That person would have had to make a script for the follows.
@sdk@codevoid.de I have to admit that’s true. While I don’t call myself an expert, I almost always wore several hats at places I’ve worked. Programmer, Server Admin, Network Admin, Cable Puller, Telephone Admin, PBX installer, Database Admin, etc
Against trendism: ipfs://QmQDqrz8Asn3wPbiTHFH9pyAXPNxwbeytJgzWUHF1PZup2 / http://ipfs.io/ipfs/QmQDqrz8Asn3wPbiTHFH9pyAXPNxwbeytJgzWUHF1PZup2 / gopher://fuckup.solutions/1enkiv2/medium-backup/2018-04-01_Against-trendism–how-to-defang-the-social-media-disinformation-complex-81a8e2635956.txt / https://medium.com/@/against-trendism-how-to-defang-the-social-media-disinformation-complex-81a8e2635956
Re: TLS in Gopher https://lists.debian.org/gopher-project/2018/02/msg00038.html
The gopher onion initiative project gopher://bitreich.org/1/onion
Alex Schroeder: 2018-01-10 Encrypted Gopher https://alexschroeder.ch/wiki/2018-01-10_Encrypted_Gopher
Formatting for Gopher with GNU troff http://davebucklin.com/play/2018/03/04/gopher-groff.html
My Mondo piece on sublims is up on gopher: gopher://fuckup.solutions/1enkiv2/sublim.txt
I now have a gopher setup at gopher://vernunftzentrum.de
Gopher: Remembering the web that wasn’t – Andrew Writing – I make websites, and write stuff. http://ajroach42.github.io/gopher-remembering-the-web-that-wasn-t/
@freemor@freemor.homelinux.net I’m cheating getting @mekon@sdf.org’s file by using my own kitbashed PHP CLI client I am playing around with
Had to update my client to use cURL so I could get @mekon@sdf.org’s twtxt file via gopher