@bender@twtxt.net Haha 🤣
@bender@twtxt.net I heard one of the candidates promised to invest 4,000,000 bitcoin 🤣
@bender@twtxt.net It’s very muggy in the Table Tennis hall right now, I had to take my jacket off 🤣
@lyse@lyse.isobeef.org I’ll fix it tonight. Sadly I have to rebuild the index 🤦‍♂️
@lyse@lyse.isobeef.org This ☝️
Oh, I forgot again 🤦‍♂️ It’s the last Saturday of the month, so is anyone up for a friendly catch-up over video tomorrow? Same time, same place 👌
@bender@twtxt.net Weird, dunno what to say 🤣
@bender@twtxt.net Huh? 🤔
Also FWIW this is all my fault for writing shitty vulnerable code 🤣 So blame me! I’m sorry 🙏
FWIW I’m still trying to find the cause of the multi-GB avatars that both @stigatle@yarn.stigatle.no’s and @abucci@anthony.buc.ci’s pods were trying to download. The flaw has since been fixed in the code, but I’m still trying to investigate the source 🤞
Hmmm, something happened last night at ~3am (AEST) that decreased traffic to my pod quite considerably… Anyone have any ideas? 💡
/tmp is also fine now! Thanks for your help @prologic!
@abucci@anthony.buc.ci No worries! All in the name of better reliability and security 😅
@stigatle@yarn.stigatle.no Thanks! Sooo cold 🥶
@stigatle@yarn.stigatle.no no problems 👌 one problem solved at least 🤣
Anyway, I’m gonna have to go to bed… We’ll continue this on the weekend. Still trying to hunt down some kind of suspected multi-GB avatar using @stigatle@yarn.stigatle.no’s pod’s cache:
$ (echo "URL Bytes"; sort -n -k 2 -r < avatars.txt | head) | column -t
URL Bytes
https://birkbak.neocities.org/avatar.jpg 667640
https://darch.neocities.org/avatar.png 652960
http://darch.dk/avatar.png 603210
https://social.naln1.ca/media/0c4f65a4be32ff3caf54efb60166a8c965cc6ac7c30a0efd1e51c307b087f47b.png 327947
...
But so far nothing much… Still running the search…
Out of interest, are you able to block whole ASN(s)? I blocked the entirety of the AWS and Facebook ASN(s) recently.
@abucci@anthony.buc.ci Oh 🤣 Well my IP is a known subnet and static, so if you need to know what it is, Email me 😅
@abucci@anthony.buc.ci Seems to be okay now hmmm
@abucci@anthony.buc.ci Hmm I can see your twts on my pod now 🤔
@abucci@anthony.buc.ci Any interesting errors pop up in the server logs since the flaw got fixed (unbounded receiveFile())? 🤔
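For anyone following along, here’s roughly the shape of the bound the fix puts in place. This is a minimal sketch only: the receiveFile name and signature match the call site in internal/utils.go, but the 2 MB cap, error messages and cleanup here are my illustrative assumptions, not the actual yarnd code from the PR.

// Sketch of a bounded receiveFile, assuming the unbounded version simply
// io.Copy'd an HTTP response body into a temp file. The 2 MB cap and error
// handling are illustrative assumptions, not the actual yarnd implementation.
package main

import (
	"crypto/rand"
	"fmt"
	"io"
	"os"
)

const maxAvatarBytes = 2 << 20 // illustrative 2 MB cap

func receiveFile(r io.Reader, pattern string) (*os.File, error) {
	tf, err := os.CreateTemp("", pattern) // e.g. "yarnd-avatar-*"
	if err != nil {
		return nil, err
	}
	// Copy at most maxAvatarBytes+1 bytes so we can tell "exactly at the
	// limit" apart from "over the limit".
	n, err := io.Copy(tf, io.LimitReader(r, maxAvatarBytes+1))
	if err != nil || n > maxAvatarBytes {
		tf.Close()
		os.Remove(tf.Name())
		if err == nil {
			err = fmt.Errorf("body exceeds %d bytes, refusing to store it", maxAvatarBytes)
		}
		return nil, err
	}
	// Rewind so the caller can read back what was written.
	if _, err := tf.Seek(0, io.SeekStart); err != nil {
		tf.Close()
		os.Remove(tf.Name())
		return nil, err
	}
	return tf, nil
}

func main() {
	// Pretend a feed advertised a 10 MB "avatar": it should be rejected.
	huge := io.LimitReader(rand.Reader, 10<<20)
	if _, err := receiveFile(huge, "yarnd-avatar-*"); err != nil {
		fmt.Println("rejected:", err)
	}
}

The point is simply that an io.Copy over an untrusted response body needs an io.LimitReader (or equivalent) in front of it, otherwise a feed advertising a multi-GB avatar can fill /tmp.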
Hmmm 🧐
for url in $(jq -r '.Twters[].avatar' cache.json | sed '/^$/d' | grep -v -E '(twtxt.net|anthony.buc.ci|yarn.stigatle.no|yarn.mills.io)' | sort -u); do echo "$url $(curl -I -s -o /dev/null -w '%header{content-length}' "$url")"; done
...
😅 Let’s see… 🤔
@stigatle@yarn.stigatle.no The one you sent is fine. I’m inspecting it now. I’m just saying, do yourself a favor and nuke your pod’s garbage cache 🤣 It’ll rebuild automatically in a much more pristine state.
That was another source of abuse that got plugged (being able to fill up the cache with garbage data).
Ooof
$ jq '.Feeds | keys[]' cache.json | wc -l
4402
If you both don’t mind dropping your caches, I would recommend it: Settings -> Poderator Settings -> Refresh cache.
@stigatle@yarn.stigatle.no Thank you! 🙏
@stigatle@yarn.stigatle.no Ta. I hope my theory is right 😅
But just have a look at the yarnd server logs too. Any new interesting errors? 🤔 No more multi-GB tmp files? 🤔
@stigatle@yarn.stigatle.no You want to run backup_db.sh and dump_cache.sh. They pipe JSON to stdout and prompt for your admin password. Example:
URL=<your_pod_url> ADMIN=<your_admin_user> ./tools/dump_cache.sh > cache.json
Just thinking out loud here… With that PR merged (or if you built off that branch), you might hopefully see new errors pop up and we might catch this problematic bad feed in the act? Hmmm 🧐
@slashdot@feeds.twtxt.net I thought Sunday was the hottest day on Earth 🤦♂️ wtf is wrong with Slashdot these days?! 🤣
If we can figure out wtf is going on here and my theory is right, we can blacklist that feed, hell, even add it to the codebase as an “asshole”.
@stigatle@yarn.stigatle.no The problem is it’ll only cause the attack to stop and error out. It won’t stop your pod from trying to do this over and over again. That’s why I need some help inspecting both your pods for “bad feeds”.
@abucci@anthony.buc.ci / @stigatle@yarn.stigatle.no Please git pull, rebuild and redeploy.
There is also a shell script in ./tools called dump_cache.sh. Please run this, dump your cache and share it with me. 🙏
I’m going to merge this…
@abucci@anthony.buc.ci Yeah, I’ve had to block entire ASN(s) recently myself from bad actors, mostly bad AI bots actually, from Facebook and Claude AI.
Or if y’all trust my monkey-ass coding skillz I’ll just merge and you can do a git pull and rebuild 😅
@stigatle@yarn.stigatle.no / @abucci@anthony.buc.ci My current working theory is that there is an asshole out there with a feed that both your pods are fetching, with a multi-GB avatar URL advertised in their feed’s preamble (metadata). I’d love for you both to review this PR, and once merged, re-roll your pods, dump your respective caches, and share them with me using https://gist.mills.io/
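For context on what “preamble (metadata)” means here: twtxt/yarn feeds carry # key = value comment lines at the top of the feed, and one of those keys is avatar, which pods like yarnd will go and fetch. A made-up illustration of the sort of feed my theory suspects (hypothetical URLs, not the actual offending feed):

# nick = somefeed
# url = https://feeds.example.com/twtxt.txt
# avatar = https://feeds.example.com/not-actually-a-small-image.bin
# description = looks perfectly normal until your pod tries to download that avatar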
@stigatle@yarn.stigatle.no I’m wondering whether you’re having the same issue as @abucci@anthony.buc.ci still? Multi-GB yarnd-avatar-* files piling up in /tmp/? 🤔
watch -n 60 rm -rf /tmp/yarn-avatar-* in a tmux because all of a sudden, without warning, yarnd started throwing hundreds of gigabytes of files with names like yarn-avatar-62582554 into /tmp, which filled up the entire disk and started crashing other services.
@abucci@anthony.buc.ci So… The only way I see this happening at all is if your pod is fetching feeds which have multi-GB sized avatar(s) in their feed metadata. So the PR I linked earlier will plug that flaw. But now I want to confirm that theory. Can I get you to dump your cache to JSON for me and share it with me?
@abucci@anthony.buc.ci Yeah that should be okay, you get so much crap on the web 🤦♂️
@abucci@anthony.buc.ci sift is a tool I use for grep/find, etc.
What would you like to know about the files?
Roughly what their contents are. I’ve been reviewing the code paths responsible and have found a flaw that needs to be fixed ASAP.
Here’s the PR: https://git.mills.io/yarnsocial/yarn/pulls/1169
@abucci@anthony.buc.ci I believe you are correct.
@abucci@anthony.buc.ci That’s fucking insane 😱 I know what code path is triggering this, but I need to confirm a few other things… Some correlation with logs would also help…
Do you happen to have the activitypub feature turned on, btw? In fact, could you just list out what features you have enabled please? 🙏
These should be getting cleaned up, but I’m very concerned about the sizes of these 🤔
Hah 😈
prologic@JamessMacStudio
Fri Jul 26 00:22:44
~/Projects/yarnsocial/yarn
(main) 0
$ sift 'yarnd-avatar-*'
internal/utils.go:666: tf, err := receiveFile(res.Body, "yarnd-avatar-*")
@abucci@anthony.buc.ci Don’t suppose you could inspect one of those files, could you? Kinda wondering if there’s some other abuse going on here that I need to plug? 🔌
@abucci@anthony.buc.ci Hmm that’s a bit weird then. Lemme have a poke.
Hmm, removed the CPU limits on this pod; not even sure why I had ’em set tbh. We decided at my day job that setting CPU limits on containers is a bit of a silly idea too. Anyway, the pod should be much snappier now 😅