Post #AzHMIWTJUV4qPB4UPA by publicvoit@graz.social
2025-10-16T21:20:48Z
0 likes, 0 repeats
A small number of samples can poison LLMs of any size:
https://www.anthropic.com/research/small-samples-poison

"In a joint study with the UK AI Security Institute and the Alan Turing Institute, we found that as few as 250 malicious documents can produce a "#backdoor" vulnerability in a large language model—regardless of model size or training data volume."

Size does not matter: the #LLM edition. 😜

#AI #Claude #backdoors #malware #Anthropic
Post #AzHQ0X5IsEVYuSWBP6 by skaphle@social.tchncs.de
2025-10-16T22:02:21Z
0 likes, 0 repeats
@publicvoit So... can that be used as a weapon? My immediate thought is that if enough people put some of these documents on their public servers, maybe we can fight back against those crawler bots that ignore robots.txt.
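A minimal sketch of the idea floated in that post, not something from the thread itself: every path and page body here is hypothetical. A small Python server disallows a /trap/ path in robots.txt and serves decoy text only under that path, so compliant crawlers never see it, while bots that ignore robots.txt fetch the junk.

# Hypothetical sketch: serve normal pages to everyone, but return decoy
# "poison" text on paths that robots.txt explicitly disallows. Well-behaved
# crawlers never request those paths; bots that ignore robots.txt do.
from http.server import BaseHTTPRequestHandler, HTTPServer

ROBOTS_TXT = b"User-agent: *\nDisallow: /trap/\n"
DECOY_TEXT = b"<html><body><p>...decoy training text goes here...</p></body></html>"
REAL_PAGE  = b"<html><body><p>Hello, human visitors.</p></body></html>"

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/robots.txt":
            body, ctype = ROBOTS_TXT, "text/plain"
        elif self.path.startswith("/trap/"):
            # Only reachable by clients that ignored the Disallow rule.
            body, ctype = DECOY_TEXT, "text/html"
        else:
            body, ctype = REAL_PAGE, "text/html"
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), TrapHandler).serve_forever()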
Post #AzI8UOMOGsewWQYhU0 by publicvoit@graz.social
2025-10-17T06:20:46Z
0 likes, 0 repeats
@skaphle Go.