@quark@ferengi.one wow everybody loves @prologic@twtxt.net
Had to disable support functions because I’ve received three spammy support emails today. Thanks for that feature @prologic@twtxt.net
A stopgap setting that would let me stop all calls to /external matching a particular pattern (like this damn lovetocode999 nick) would do the job. Given the potential for abuse of that endpoint, having more moderation control over what it can do is probably a good idea.
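To make that concrete, here’s roughly what I have in mind; this is not yarnd code, just a sketch of a net/http middleware that refuses /external requests whose nick matches a blocklist pattern (the names and the pattern are made up):

// Hypothetical stopgap, not yarnd's actual code: wrap the router and refuse
// /external requests whose nick matches a configurable blocklist pattern.
package main

import (
	"log"
	"net/http"
	"regexp"
)

var blockedNicks = regexp.MustCompile(`^lovetocode999$`) // example pattern

func blockExternal(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/external" && blockedNicks.MatchString(r.URL.Query().Get("nick")) {
			http.Error(w, "Not Found", http.StatusNotFound)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/external", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("external feed page would render here"))
	})
	log.Fatal(http.ListenAndServe(":8000", blockExternal(mux)))
}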
@lyse@lyse.isobeef.org Interesting. The yarnd --help currently says (for me):

-R, --open-registrations whether or not to have open user registgration

meaning it doesn’t give the default setting or warn you that you need to use -R=false and not -R false. It also leaves unclear whether --open-registrations false would work or if you need to do --open-registrations=false. It’s also unclear whether the setting change in the user interface is overridden by the command-line arguments, overrides them, or is persisted across restarts. Maybe all this is worth posting an issue for additional documentation on the git repo, if there isn’t one already. (“registgration” is misspelled that way in the help, by the way.)
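For what it’s worth, this is standard Go boolean-flag behaviour (and spf13/pflag, which I assume yarnd uses, handles booleans the same way): -R=false works, while -R false leaves the flag untouched and treats “false” as a positional argument. A tiny sketch with the standard library, not yarnd itself; the default value here is made up since the help doesn’t say what the real one is:

// Minimal sketch of Go boolean-flag parsing; not yarnd's code.
package main

import (
	"flag"
	"fmt"
)

func main() {
	// The default (true) is arbitrary here; yarnd's real default isn't shown in its help.
	open := flag.Bool("R", true, "whether or not to have open user registration")
	flag.Parse()
	// "prog -R=false" -> open=false
	// "prog -R false" -> open stays true, and "false" shows up in flag.Args()
	fmt.Println("open registrations:", *open, "remaining args:", flag.Args())
}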
There is a bug in yarnd that’s been around for a while and is still present in the current version I’m running. It lets a person hit a constructed URL like

YOUR_POD/external?nick=lovetocode999&uri=https://socialmphl.com/story19510368/doujin

and see a legitimate-looking page on YOUR_POD, with an HTTP code 200 (success). From that fake page you can even follow an external feed. Try it yourself, replacing “YOUR_POD” with the URL of any yarnd pod you know. Try following the feed.
I think URLs like this should return errors. They should not render HTML, nor produce legitimate-looking pages. This mechanism is ripe for DDoS attacks. My pod gets roughly 70,000 hits per day to URLs like this. Many are porn or other types of content I do not want. At this point, if it’s not fixed soon I am going to have to shut down my pod. @prologic@twtxt.net please have a look.
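To be concrete about “should return errors”: something like the sketch below, where a nick that doesn’t exist on the pod gets a plain 404 instead of a rendered page. This is only an illustration; lookupUser is a placeholder, not a real yarnd function.

// Sketch of the behaviour I'd like /external to have; not yarnd's actual handler.
package main

import (
	"log"
	"net/http"
)

// lookupUser is a stand-in for checking the pod's real user store.
func lookupUser(nick string) bool {
	return false // placeholder
}

func externalHandler(w http.ResponseWriter, r *http.Request) {
	nick := r.URL.Query().Get("nick")
	if nick == "" || !lookupUser(nick) {
		http.NotFound(w, r) // 404 instead of a legitimate-looking 200 page
		return
	}
	// ... render the external-feed page only for legitimate requests ...
}

func main() {
	http.HandleFunc("/external", externalHandler)
	log.Fatal(http.ListenAndServe(":8000", nil))
}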
@mckinley@twtxt.net He’s signed up three times now even though I keep deleting the account, which is enough for me to permaban this person. I don’t technically want open registrations on my pod but up till now I’ve been too lazy to figure out how to turn them off and actually do that, and there hasn’t been a pressing need. I may have to now.
@stigatle@yarn.stigatle.no @prologic@twtxt.net my /tmp is also fine now! Thanks for your help @prologic@twtxt.net!
@stigatle@yarn.stigatle.no Sweet, thank you! I’ve been shooting myself in the foot over here and want to make sure the situation is getting fixed!
@stigatle@yarn.stigatle.no @prologic@twtxt.net testing 1 2 3 can either of you see this?
Hmm, I wonder if I banned too many IPs and caused these issues for myself 😆
twts are taking a very long time to post from yarn after the latest upgrade. Like a good 60 seconds.
@prologic@twtxt.net I don’t know if this is new, but I’m seeing:
Jul 25 16:01:17 buc yarnd[1921547]: time="2024-07-25T16:01:17Z" level=error msg="https://yarn.stigatle.no/user/stigatle/twtxt.txt: client.Do fail: Get \"https://yarn.stigatle.no/user/stigatle/twtxt.txt\": dial tcp 185.97.32.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)" error="Get \"https://yarn.stigatle.no/user/stigatle/twtxt.txt\": dial tcp 185.97.32.18:443: i/o timeout (Client.Timeout exceeded while awaiting headers)"
I no longer see twts from @stigatle@yarn.stigatle.no at all.
@prologic@twtxt.net Have you been seeing any of my replies?
It shows up in my twtxt feed so that’s good.
This is a test. I am not seeing twts from @stigatle@yarn.stigatle.no and it seems like @prologic@twtxt.net might not be seeing twts from me. Do people see this?
@prologic@twtxt.net I am not seeing twts from @stigatle@yarn.stigatle.no anymore. Are you seeing twts from me?
./tools/dump_cache.sh: line 8: bat: command not found
No Token Provided
I don’t have bat on my VPS and there is no package for installing it. Is cat a reasonable alternative?
@prologic@twtxt.net Try hitting this URL:
https://twtxt.net/external?nick=nosuchuser&uri=https://foo.com
Change nosuchuser to any phrase at all.
If you hit https://twtxt.net/external?nick=nosuchuser, you’re given an error. If you hit that URL above with the uri parameter, you get a legitimate-looking page. I think that is a bug.
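If it helps to reproduce, here’s a tiny client that compares the two responses. Just a sketch; the expectation is that both requests should fail, but in practice the second one comes back 200:

// Sketch: compare status codes for /external with and without the uri parameter.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	urls := []string{
		"https://twtxt.net/external?nick=nosuchuser",
		"https://twtxt.net/external?nick=nosuchuser&uri=https://foo.com",
	}
	for _, u := range urls {
		resp, err := http.Get(u)
		if err != nil {
			fmt.Println(u, "error:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println(u, "->", resp.StatusCode)
	}
}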
@prologic@twtxt.net Hitting that URL returns a bunch of HTML even though there is no user named lovetocode999 on my pod. I think it should 404, maybe with a delay, to discourage whatever this abuse is. Basically this can be used to DDoS a pod by forcing it to generate a bunch of HTML just by doing a bogus GET like this.
I’m seeing GETs like this over and over again:
"GET /external?nick=lovetocode999&uri=https://vuf.minagricultura.gov.co/Lists/Informacin%20Servicios%20Web/DispForm.aspx?ID=8375144 HTTP/1.1" 200 35861 17.077914ms
always to nick=lovetocode999, but with different uris. What are these calls?
@stigatle@yarn.stigatle.no I used the following hack to keep my VPS from running out of space: watch -n 60 rm -rf /tmp/yarn-avatar-*, run in tmux so it keeps running.
The vast majority of this traffic was coming from a single IP address. I blocked that IP on my VPS, and I sent an abuse report to the abuse email of the service provider. That ought to slow it down, but the vulnerability persists and I’m still getting traffic from other IPs that seem to be doing the same thing.
@prologic@twtxt.net There are a lot of logs being generated by yarnd, which is also something I haven’t seen before:
Jul 25 14:32:42 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:42 (162.211.155.2) "GET /twt/ubhq33a HTTP/1.1" 404 29 643.251µs
Jul 25 14:32:43 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:43 (162.211.155.2) "GET /twt/112073211746755451 HTTP/1.1" 400 12 505.333µs
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (111.119.213.103) "GET /twt/whau6pa HTTP/1.1" 200 37360 35.173255ms
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (162.211.155.2) "GET /twt/112343305123858004 HTTP/1.1" 400 12 455.069µs
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (168.199.225.19) "GET /external?nick=lovetocode999&uri=http%3A%2F%2Fwww.palapa.pl%2Fbaners.php%3Flink%3Dhttps%3A%2F%2Fwww.dwnewstoday.com HTTP/1.1" 200 36167 19.582077ms
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (162.211.155.2) "GET /twt/112503061785024494 HTTP/1.1" 400 12 619.152µs
Jul 25 14:32:46 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:46 (162.211.155.2) "GET /twt/111863876118553837 HTTP/1.1" 400 12 817.678µs
Jul 25 14:32:46 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:46 (162.211.155.2) "GET /twt/112749994821704400 HTTP/1.1" 400 12 540.616µs
Jul 25 14:32:47 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:47 (103.204.109.150) "GET /external?nick=lovetocode999&uri=http%3A%2F%2Fampurify.com%2Fbbs%2Fboard.php%3Fbo_table%3Dfree%26wr_id%3D113858 HTTP/1.1" 200 36187 15.95329ms
I’ve seen that nick=lovetocode999 a bunch.
@prologic@twtxt.net Inspect? What’s sift? What would you like to know about the files?
@prologic@twtxt.net 10 Gbytes has accumulated since I made that last post. It’s coming in at a rate of 55 Mbits/second!
@prologic@twtxt.net I think there’s more to it than that. I’ve updated, yet hundreds of gigabytes of junk is still accumulating.
@prologic@twtxt.net I’m still getting this crap:
abucci@buc:~/yarnd/yarn$ ls -lh /tmp/yarnd-avatar-*
-rw------- 1 abucci abucci 863M Jul 25 14:19 /tmp/yarnd-avatar-1594499680
-rw------- 1 abucci abucci 7.8G Jul 25 14:19 /tmp/yarnd-avatar-2144295337
-rw------- 1 abucci abucci 9.8G Jul 25 14:19 /tmp/yarnd-avatar-2334738193
-rw------- 1 abucci abucci 10G Jul 25 14:14 /tmp/yarnd-avatar-2494107777
-rw------- 1 abucci abucci 9.5G Jul 25 13:59 /tmp/yarnd-avatar-2619243454
-rw------- 1 abucci abucci 11G Jul 25 14:04 /tmp/yarnd-avatar-2922187513
-rw------- 1 abucci abucci 7.5G Jul 25 14:14 /tmp/yarnd-avatar-349775570
-rw------- 1 abucci abucci 10G Jul 25 14:09 /tmp/yarnd-avatar-3640724243
-rw------- 1 abucci abucci 901M Jul 25 14:19 /tmp/yarnd-avatar-3921595598
-rw------- 1 abucci abucci 9.5G Jul 25 13:59 /tmp/yarnd-avatar-609094539
-rw------- 1 abucci abucci 9.3G Jul 25 14:04 /tmp/yarnd-avatar-755173392
-rw------- 1 abucci abucci 7.9G Jul 25 14:09 /tmp/yarnd-avatar-984061000
Something like 100 Gbytes of this junk has accumulated since I updated and re-started the server. I’m now running the latest version of yarnd, so the update did not fix the problem. Something else is going wrong.
How are temporary files growing to 10 Gbytes in size? The name of the file is “yarnd-avatar”, but why would avatars be so large?
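I don’t know yarnd’s internals, but names like yarnd-avatar-2494107777 are exactly what Go’s os.CreateTemp produces for a “yarnd-avatar-*” pattern, and a file like that only goes away if the code that created it removes it. A generic sketch of that pattern, purely as illustration (the download part is my assumption, not yarnd’s actual code):

// Generic sketch of how yarnd-avatar-* style temp files typically come about; not yarnd's code.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func fetchToTemp(url string) (string, error) {
	f, err := os.CreateTemp("", "yarnd-avatar-*") // e.g. /tmp/yarnd-avatar-62582554
	if err != nil {
		return "", err
	}
	defer f.Close()

	resp, err := http.Get(url)
	if err != nil {
		return "", err // without an os.Remove(f.Name()) on error, the temp file lingers
	}
	defer resp.Body.Close()

	// With no size limit, a huge (or never-ending) response grows this one file without bound.
	if _, err := io.Copy(f, resp.Body); err != nil {
		return "", err
	}
	return f.Name(), nil
}

func main() {
	name, err := fetchToTemp("https://example.com/avatar.png")
	fmt.Println(name, err)
}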
@prologic@twtxt.net Alright, running yarnd 0.15.1 now. I stopped my hack so we’ll see if the VPS gets clogged with junk 😆
abucci@buc:~/yarnd/yarn$ make preflight
Checking Go version ... [ ERR ]
Go 1.16+ is required, found go1.22.5
FATAL: 🙁 preflight failed
make: *** [Makefile:33: preflight] Error 1
🤔
@prologic@twtxt.net Aha, got it. Thanks for looking into it. I’m updating now and we’ll see if that stops it.
@prologic@twtxt.net Sure, but why would this start happening all of a sudden today? Nothing like this has happened before. Is this a known bug?
https://anthony.buc.ci/info has the deets!
@prologic@twtxt.net 0.15.1, looks like.
@bender@twtxt.net I hope so too. I’ve never seen anything like this before. Whatever it is, it’s strange.
Hack of the day: running watch -n 60 rm -rf /tmp/yarn-avatar-* in a tmux because all of a sudden, without warning, yarnd started throwing hundreds of gigabytes of files with names like yarn-avatar-62582554 into /tmp, which filled up the entire disk and started crashing other services.
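The same cleanup loop written as a few lines of Go, in case watch or tmux isn’t handy; only a sketch, using the yarnd-avatar- prefix that shows up in my directory listings:

// Sketch: periodically delete leftover yarnd-avatar-* files, like the watch(1) hack above.
package main

import (
	"log"
	"os"
	"path/filepath"
	"time"
)

func main() {
	for {
		matches, err := filepath.Glob("/tmp/yarnd-avatar-*")
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range matches {
			if err := os.Remove(m); err != nil {
				log.Println("remove", m, ":", err)
			}
		}
		time.Sleep(60 * time.Second)
	}
}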
This is completely insane!
abucci@buc:/tmp$ du -sh /tmp/yarnd-avatar-*
564M /tmp/yarnd-avatar-3024946878
7.2G /tmp/yarnd-avatar-3122347915
11G /tmp/yarnd-avatar-3533381443
445M /tmp/yarnd-avatar-441914658
I’m going to have to shut down my server soon. This looks like some kind of DDoS. Whether intentional or not it’s filling up the disk at an unsustainable rate.
There are also a bunch of log messages scrolling by. I’ve never seen this much activity in the log:
Jul 25 01:37:39 buc.ci yarnd[829]: [yarnd] 2024/07/25 01:37:39 (149.71.56.69) "GET /external?nick=lovetocode999&uri=https://pagez.co.uk/services/your-own-100-fully-owned-online-vi>
Jul 25 01:37:39 buc.ci yarnd[829]: [yarnd] 2024/07/25 01:37:39 (162.211.155.2) "GET /twt/112135496802692324 HTTP/1.1" 400 12 826.65µs
Jul 25 01:37:40 buc.ci yarnd[829]: [yarnd] 2024/07/25 01:37:40 (51.222.253.14) "GET /conv/muttriq HTTP/1.1" 200 36881 20.448309ms
Jul 25 01:37:40 buc.ci yarnd[829]: [yarnd] 2024/07/25 01:37:40 (162.211.155.2) "GET /twt/112730114943543514 HTTP/1.1" 400 12 663.493µs
Jul 25 01:37:40 buc.ci yarnd[829]: [yarnd] 2024/07/25 01:37:40 (27.75.213.253) "GET /external?nick=lovetocode999&uri=http%3A%2F%2Falfarah.jo%2FHome%2FChangeCulture%3FlangCode%3Den>
Jul 25 01:37:40 buc.ci yarnd[829]: time="2024-07-25T01:37:40Z" level=error msg="http://bynet.com.br/log_envio.asp?cod=335&email=%21%2AEMAIL%2A%21&url=https%3A%2F%2Fwww.almanacar.c>
Jul 25 01:37:40 buc.ci yarnd[829]: [yarnd] 2024/07/25 01:37:40 (162.211.155.2) "GET /twt/111674756400660911 HTTP/1.1" 400 12 545.106µs
Jul 25 01:37:40 buc.ci yarnd[829]: time="2024-07-25T01:37:40Z" level=warning msg="feed FetchFeedRequest: @<lovetocode999 http://alfarah.jo/Home/ChangeCulture?langCode=en&returnUrl>
Jul 25 01:37:41 buc.ci yarnd[829]: [yarnd] 2024/07/25 01:37:41 (162.211.155.2) "GET /twt/112507964696096567 HTTP/1.1" 400 12 838.946µs
Something really weird is going on?
I deleted them all right before I sent my previous message, and already, a few minutes later, there are two more:
abucci@buc:~$ du -sh /tmp/yarnd-avatar-3*
1.8G /tmp/yarnd-avatar-3122347915
2.4G /tmp/yarnd-avatar-3533381443
What is this?
@prologic@twtxt.net This is weird, but today, out of nowhere, yarnd filled up the disk on the VPS where I run it. It’s never done anything like this before and I have no idea why it would start. But it threw almost 700 Gbytes of data into /tmp in files like this:
yarnd-avatar-1087570772 yarnd-avatar-1599127133 yarnd-avatar-2042956376 yarnd-avatar-2562946212 yarnd-avatar-3274766535 yarnd-avatar-3931929859 yarnd-avatar-553201529
yarnd-avatar-1089125452 yarnd-avatar-1606826819 yarnd-avatar-2089122560 yarnd-avatar-2611944556 yarnd-avatar-3310922372 yarnd-avatar-3938996661 yarnd-avatar-556240195
yarnd-avatar-1101228867 yarnd-avatar-1618755765 yarnd-avatar-2104107259 yarnd-avatar-2641384948 yarnd-avatar-3326285269 yarnd-avatar-3939402047 yarnd-avatar-559344463
yarnd-avatar-1112165824 yarnd-avatar-1650827505 yarnd-avatar-2142824779 yarnd-avatar-2680659340 yarnd-avatar-3340682113 yarnd-avatar-3998621883 yarnd-avatar-570292705
yarnd-avatar-1119886894 yarnd-avatar-1656673647 yarnd-avatar-2160786463 yarnd-avatar-271923479 yarnd-avatar-3374584613 yarnd-avatar-4005102536 yarnd-avatar-595490106
yarnd-avatar-1131417623 yarnd-avatar-1685698239 yarnd-avatar-2165405940 yarnd-avatar-2793562275 yarnd-avatar-3380606954 yarnd-avatar-4016872095 yarnd-avatar-679251850
yarnd-avatar-1160959085 yarnd-avatar-1746759128 yarnd-avatar-2171489899 yarnd-avatar-2842068287 yarnd-avatar-3416352997 yarnd-avatar-4110048378 yarnd-avatar-679950970
yarnd-avatar-1231649265 yarnd-avatar-1752278279 yarnd-avatar-2251317422 yarnd-avatar-2843868670 yarnd-avatar-3468636088 yarnd-avatar-4116552474 yarnd-avatar-737874628
164 files. Some are empty, some are 7 or even 10 Gbyte.
Any idea what would cause that? And why now, after running yarnd for so long with nothing like this happening?
@movq@www.uninformativ.de This outage did affect me, though not much, via the university where my wife teaches and where I teach sometimes. They actually sent out an alert in their emergency alert system (the one they use to alert people of extreme weather events and bomb threats, mostly), telling people that all IT systems were down.
A friend of mine elsewhere pointed out that they pushed this change on a Friday, which of course no software developer with any experience would ever, ever, ever do. I have to assume there’s some toxic management at CrowdStrike, but who knows. Even more reasons to sympathize with the poor folks who are probably going to be working nights and weekends to clean up this mess.
@prologic@twtxt.net One of these days I’ll turn off registrations
@movq@www.uninformativ.de Somewhere or another, I think in a William Byrd talk, I heard it suggested that the best ideas in computer science should fit on an index card (ah yes it’s this one: https://paperswelove.org/2017/video/will-byrd-most-beautiful-program/ ). He was referring to the basic principles of LISP/the lambda calculus, which have sometimes been called the Maxwell’s equations of computer programming (by Alan Kay). Simple, short, elegant, but very densely packed with meaning–generations of people have spent their whole careers unpacking what those simple rules can do.
Much of modern software feels like the polar opposite of that. Not only can you not write it on an index card, you never will be able to because people who write software don’t seem to aspire to try. I wish more people thought this way though!
@New_scientist@feeds.twtxt.net It’s insane that a single botched software update can have worldwide impact. We’ve messed up badly.
@bender@twtxt.net I have nothing against GoToSocial, but:
GoToSocial stores statuses, accounts, etc, in a database. This can be either SQLite or Postgres.
snac is simpler. Some JSON files and that’s it. I can read them with jq and less. I can use tar to back them up. I can hand edit them in a text editor.
I think @abucci@anthony.buc.ci and @stigatle@yarn.stigatle.no are running snac? I didn’t have a closer look at snac (no intention of running it), but if that is a relatively small daemon (maybe comparable to Yarn?) that gives you access to the whole world of ActivityPub, then, well, yeah … That’s tough to beat.
Yes, I am running snac on the same VPS where I run my yarn pod. I heard of it from @stigatle@yarn.stigatle.no, so blame him 😏 snac is written in C and is one simple executable; it uses very few resources on the server and stores everything in JSON files (no databases or other integrations; easy to save and migrate your data). It’s definitely like yarn in that respect.
I haven’t been around yarn much lately. Part of that is that I’ve been very busy at work and home and only have a limited time to spend goofing off on a social network. Part of it is that I’m finding snac very useful: I’ve connected with friends I’d previously lost touch with, I’ve found useful work-related information, I’ve found colleagues to follow, and even found interesting conferences to attend. There’s a lot more going on over there.
I guess if I had to put it simply, I’d say I have limited time to play and there are more kids in the ActivityPub sandbox than this one. That’s not a ding on yarn–I like yarn and twtxt–I’m just time constrained.
@New_scientist@feeds.twtxt.net Silicon Valley’s top AI models are terrible at almost everything. They only seem otherwise because people are easily fooled into believing they have capabilities they don’t have.