Ooof
$ jq '.Feeds | keys[]' cache.json | wc -l
4402
If you both don’t mind dropping your caches, I would recommend it: Settings -> Poderator Settings -> Refresh cache.
./tools/dump_cache.sh: line 8: bat: command not found
No Token Provided
I don’t have bat on my VPS and there is no package for installing it. Is cat a reasonable alternative?
@prologic@twtxt.net No worries, thanks for working on the fix for it so fast :)
Google DeepMind’s Game-Playing AI Tackles a Chatbot Blind Spot
Google’s new advance combines a large language model with a self-learning AI. The technique could address some shortcomings with AI—although there’s a catch. ⌘ Read more
@prologic@twtxt.net Yup. Didn’t regret climbing these three hundred odd meters of elevation. :-)
@stigatle@yarn.stigatle.no Thank you! 🙏
@prologic@twtxt.net Try hitting this URL:
https://twtxt.net/external?nick=nosuchuser&uri=https://foo.com
Change nosuchuser to any phrase at all. If you hit https://twtxt.net/external?nick=nosuchuser, you’re given an error. If you hit that URL above with the uri parameter, you get a legitimate-looking page. I think that is a bug.
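A quick way to check both cases from a shell (a sketch; nosuchuser and foo.com are just the placeholder values from above):
# bare bogus nick: expect an error status
$ curl -s -o /dev/null -w '%{http_code}\n' 'https://twtxt.net/external?nick=nosuchuser'
# bogus nick plus a uri parameter: currently renders a full page (200) where a 404 seems right
$ curl -s -o /dev/null -w '%{http_code}\n' 'https://twtxt.net/external?nick=nosuchuser&uri=https://foo.com'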
@prologic@twtxt.net here you go:
https://drive.proton.me/urls/XRKQQ632SG#LXWehEZMNQWF
Russia will be forced to scale down its attacks in a month and a half, Ukrainian commander says ⌘ Read more
@stigatle@yarn.stigatle.no Ta. I hope my theory is right 😅
@prologic@twtxt.net Hitting that URL returns a bunch of HTML even though there is no user named lovetocode999 on my pod. I think it should 404, maybe with a delay, to discourage whatever this abuse is. Basically this can be used to DDoS a pod by forcing it to generate a bunch of HTML just by doing a bogus GET like this.
@prologic@twtxt.net thank you. I run it now as you said, I’ll get the files put somewhere shortly.
But just have a look at the yarnd server logs too. Any new interesting errors? 🤔 No more multi-GB tmp files? 🤔
@stigatle@yarn.stigatle.no You want to run backup_db.sh and dump_cache.sh. They pipe JSON to stdout and prompt for your admin password. Example:
URL=<your_pod_url> ADMIN=<your_admin_user> ./tools/dump_cache.sh > cache.json
I’m seeing GETs like this over and over again:
"GET /external?nick=lovetocode999&uri=https://vuf.minagricultura.gov.co/Lists/Informacin%20Servicios%20Web/DispForm.aspx?ID=8375144 HTTP/1.1" 200 35861 17.077914ms
always to nick=lovetocode999, but with different uris. What are these calls?
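One way to get a rough picture of them (a sketch, assuming your yarnd access log is in a file called yarnd.log; adjust for journalctl or wherever your setup writes):
# tally the distinct uri= values being requested, most frequent first
$ grep -o 'nick=lovetocode999&uri=[^ "]*' yarnd.log | sort | uniq -c | sort -rn | head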
@stigatle@yarn.stigatle.no Worky, worky now! :-)
Mate, these are some really nice gems! What a stunning landscape. I love it. Holy cow, that wooden church looks really sick. Even though I’m not a scroll guy and prefer simple, straight designs, I have to say that the interior craftsmanship is something to admire.
@prologic@twtxt.net So, if I’m correct, the dump tool made a pods.txt and a stats.txt file; are those the ones you want, or do you want the output it spits out in the console window?
Just thinking out loud here… With that PR merged (or if you built off that branch), you might hopefully see new errors pop up and we might catch this problematic bad feed in the act? Hmmm 🧐
@slashdot@feeds.twtxt.net I thought Sunday was the hottest day on Earth 🤦‍♂️ wtf is wrong with Slashdot these days?! 🤣
If we can figure out wtf is going on here and my theory is right, we can blacklist that feed, hell, even add it to the codebase as an “asshole”.
@stigatle@yarn.stigatle.no The problem is it’ll only cause the attack to stop and error out. It won’t stop your pod from trying to do this over and over again. That’s why I need some help inspecting both your pods for “bad feeds”.
@prologic@twtxt.net I’m running it now. I’ll keep an eye out for the tmp folder now (I built the branch you have made). I’ll let you know shortly if it helped on my end.
@abucci@anthony.buc.ci / @stigatle@yarn.stigatle.no Please git pull, rebuild and redeploy.
There is also a shell script in ./tools called dump_cache.sh. Please run this, dump your cache and share it with me. 🙏
I’m going to merge this…
@abucci@anthony.buc.ci Yeah I’ve had to block entire ASN(s) recently myself from bad actors, mostly bad AI bots actually, from Facebook and Claude AI.
@stigatle@yarn.stigatle.no I used the following hack to keep my VPS from running out of space: watch -n 60 rm -rf /tmp/yarn-avatar-*, run in tmux so it keeps running.
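A slightly safer variant of the same stop-gap, since a blind rm -rf can delete a file yarnd is still writing (a sketch; the 15-minute age cutoff is an arbitrary guess, and note the files shown later in the thread are actually named yarnd-avatar-*):
# only remove temp files that have not been modified for 15+ minutes
$ watch -n 60 "find /tmp -maxdepth 1 -name 'yarnd-avatar-*' -mmin +15 -delete"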
The vast majority of this traffic was coming from a single IP address. I blocked that IP on my VPS, and I sent an abuse report to the abuse email of the service provider. That ought to slow it down, but the vulnerability persists and I’m still getting traffic from other IPs that seem to be doing the same thing.
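For anyone following along, dropping a single abusive IP at the host firewall is a one-liner (1.2.3.4 is a placeholder, not the actual offender):
# drop all inbound traffic from the offending address
$ sudo iptables -I INPUT -s 1.2.3.4 -j DROP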
Or if y’all trust my monkey-ass coding skillz I’ll just merge and you can do a git pull and rebuild 😅
@stigatle@yarn.stigatle.no / @abucci@anthony.buc.ci My current working theory is that there is an asshole out there that has a feed that both your pods are fetching with a multi-GB avatar URL advertised in their feed’s preamble (metadata). I’d love for you both to review this PR, and once merged, re-roll your pods and dump your respective caches and share with me using https://gist.mills.io/
@prologic@twtxt.net yeah I still do have that issue, I compiled latest main, did not apply any patches or anything like that.
@stigatle@yarn.stigatle.no I’m wondering whether you’re having the same issue as @abucci@anthony.buc.ci still? Multi-GB yarnd-avatar-* files piling up in /tmp/? 🤔
@prologic@twtxt.net yeah, I ran out of space again. also have the activitypub stuff turned off (just so you know).
watch -n 60 rm -rf /tmp/yarn-avatar-* in a tmux, because all of a sudden, without warning, yarnd started throwing hundreds of gigabytes of files with names like yarn-avatar-62582554 into /tmp, which filled up the entire disk and started crashing other services.
@abucci@anthony.buc.ci So… The only way I see this happening at all is if your pod is fetching feeds which have multi-GB sized avatar(s) in their feed metadata. So the PR I linked earlier will plug that flaw. But now I want to confirm that theory. Can I get you to dump your cache to JSON for me and share it with me?
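Once a dump exists, one way to hunt for the suspect without knowing the cache’s exact schema is to scan cache.json for abnormally long URL-ish strings, which would catch e.g. a giant data: URI advertised as an avatar (a sketch; the 200-character cutoff is arbitrary):
# walk every string in the JSON recursively and print unusually long URLs
$ jq -r '.. | strings | select(test("^(https?|data):")) | select(length > 200)' cache.json | sort -u
A short avatar URL pointing at a huge remote file wouldn’t show up this way; that case needs a HEAD request (curl -sI <url>) to check the Content-Length it reports.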
@abucci@anthony.buc.ci Yeah that should be okay, you get so much crap on the web 🤦‍♂️
@abucci@anthony.buc.ci sift is a tool I use for grep/find, etc.
What would you like to know about the files?
Roughly what their contents are. I’ve been reviewing the code paths responsible and have found a flaw that needs to be fixed ASAP.
Here’s the PR: https://git.mills.io/yarnsocial/yarn/pulls/1169
Monday Was Hottest Recorded Day on Earth: ‘Uncharted Territory’
World temperature reached the hottest levels ever measured on Monday, beating the record that was set just one day before, data suggests. From a report: Provisional data published on Wednesday by the Copernicus Climate Change Service, which holds data that stretches back to 1940, shows that the global surface air temperature reached 62.87F (17.15C), co … ⌘ Read more
@prologic@twtxt.net There are a lot of logs being generated by yarnd, which is also something I haven’t seen before:
Jul 25 14:32:42 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:42 (162.211.155.2) "GET /twt/ubhq33a HTTP/1.1" 404 29 643.251µs
Jul 25 14:32:43 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:43 (162.211.155.2) "GET /twt/112073211746755451 HTTP/1.1" 400 12 505.333µs
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (111.119.213.103) "GET /twt/whau6pa HTTP/1.1" 200 37360 35.173255ms
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (162.211.155.2) "GET /twt/112343305123858004 HTTP/1.1" 400 12 455.069µs
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (168.199.225.19) "GET /external?nick=lovetocode999&uri=http%3A%2F%2Fwww.palapa.pl%2Fbaners.php%3Flink%3Dhttps%3A%2F%2Fwww.dwnewstoday.com HTTP/1.1" 200 36167 19.582077ms
Jul 25 14:32:44 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:44 (162.211.155.2) "GET /twt/112503061785024494 HTTP/1.1" 400 12 619.152µs
Jul 25 14:32:46 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:46 (162.211.155.2) "GET /twt/111863876118553837 HTTP/1.1" 400 12 817.678µs
Jul 25 14:32:46 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:46 (162.211.155.2) "GET /twt/112749994821704400 HTTP/1.1" 400 12 540.616µs
Jul 25 14:32:47 buc yarnd[1911318]: [yarnd] 2024/07/25 14:32:47 (103.204.109.150) "GET /external?nick=lovetocode999&uri=http%3A%2F%2Fampurify.com%2Fbbs%2Fboard.php%3Fbo_table%3Dfree%26wr_id%3D113858 HTTP/1.1" 200 36187 15.95329ms
I’ve seen that nick=lovetocode999 a bunch.
Eminem’s New Album Prompted Gen X to Declare a TikTok ‘War’ on Gen Z
The release of Eminem’s new album The Death of Slim Shady has led to a series of viral shitposts from Gen X. Their targets, Gen Z, remain unbothered. ⌘ Read more
Swiss court ruling: only mothers have legal say in abortion cases ⌘ Read more
@prologic@twtxt.net Inspect? What’s sift? What would you like to know about the files?
@abucci@anthony.buc.ci I believe you are correct.
@abucci@anthony.buc.ci That’s fucking insane 😱 I know what code-paths is triggering this, but need to confirm a few other things… Some correlation with logs would also help…
Do you happen to have the activitypub feature turned on btw? In fact could you just list out what features you have enabled please? 🙏
The 7 Best Folding Phones We’ve Tested and Reviewed (2024)
Ready to move on from the traditional glass slab? Introduce a hinge into your life with these folding smartphones. ⌘ Read more
@prologic@twtxt.net 10 Gbytes has accumulated since I made that last post. It’s coming in at a rate of 55 Mbits/second!
These should be getting cleaned up, but I’m very concerned about the sizes of these 🤔
Hah 😈
prologic@JamessMacStudio
Fri Jul 26 00:22:44
~/Projects/yarnsocial/yarn
(main) 0
$ sift 'yarnd-avatar-*'
internal/utils.go:666: tf, err := receiveFile(res.Body, "yarnd-avatar-*")
@abucci@anthony.buc.ci Don’t suppose you can inspect one of those files could you? Kinda wondering if there’s some other abuse going on here that I need to plug? 🔌
@prologic@twtxt.net I think there’s more to it than that. I’ve updated, yet hundreds of gigabytes of junk is still accumulating.
@abucci@anthony.buc.ci Hmm that’s a bit weird then. Lemme have a poke.
@prologic@twtxt.net I’m still getting this crap:
abucci@buc:~/yarnd/yarn$ ls -lh /tmp/yarnd-avatar-*
-rw------- 1 abucci abucci 863M Jul 25 14:19 /tmp/yarnd-avatar-1594499680
-rw------- 1 abucci abucci 7.8G Jul 25 14:19 /tmp/yarnd-avatar-2144295337
-rw------- 1 abucci abucci 9.8G Jul 25 14:19 /tmp/yarnd-avatar-2334738193
-rw------- 1 abucci abucci 10G Jul 25 14:14 /tmp/yarnd-avatar-2494107777
-rw------- 1 abucci abucci 9.5G Jul 25 13:59 /tmp/yarnd-avatar-2619243454
-rw------- 1 abucci abucci 11G Jul 25 14:04 /tmp/yarnd-avatar-2922187513
-rw------- 1 abucci abucci 7.5G Jul 25 14:14 /tmp/yarnd-avatar-349775570
-rw------- 1 abucci abucci 10G Jul 25 14:09 /tmp/yarnd-avatar-3640724243
-rw------- 1 abucci abucci 901M Jul 25 14:19 /tmp/yarnd-avatar-3921595598
-rw------- 1 abucci abucci 9.5G Jul 25 13:59 /tmp/yarnd-avatar-609094539
-rw------- 1 abucci abucci 9.3G Jul 25 14:04 /tmp/yarnd-avatar-755173392
-rw------- 1 abucci abucci 7.9G Jul 25 14:09 /tmp/yarnd-avatar-984061000
Something like 100 Gbytes of this junk has accumulated since I updated and re-started the server. I’m now running the latest version of yarnd, so the update did not fix the problem. Something else is going wrong.
How are temporary files growing to 10 Gbytes in size? The name of the file is “yarn-avatar”, but why would avatars be so large?