Posts by mttaggart@infosec.exchange
(DIR) Post #B0t12CrU8Lw5bhW32e by mttaggart@infosec.exchange
2025-12-03T20:04:30Z
0 likes, 1 repeats
You know what? I'm not getting into a fight about how big a deal the React vuln is. The footprint is bigger than you think because the correlation is difficult to ascertain, but Next.js alone is a monster.And this is in an ecosystem that is not particularly used to this kind of vulnerability.
(DIR) Post #B17VSWs05Dl61qfCvg by mttaggart@infosec.exchange
2025-12-10T17:07:37Z
0 likes, 1 repeats
You may be tempted to think of prompt injection attacks against language models as "social engineering." Resist this temptation. Prompt injection is a mathematical attack against a non-deterministic system. Language may be the substrate, but the substance is numerical vectors. In other words, thinking of the attack as human language is a pointless limitation. The possibilities of what can go into the prompt to produce undesirable output are functionally infinite. Poetry, context shifting, and other human-like attacks are only the beginning. What comes next is a weaponization of the linguistic form in ways that seem utterly alien to human readers. But to the models, it's all just elements in the matrix.
(DIR) Post #B1EfPBnuJD9ZTOVyeu by mttaggart@infosec.exchange
2024-10-16T22:11:39Z
1 likes, 0 repeats
@cR0w @Viss Make sure you get the original
(DIR) Post #B1eM0O0VDnuvWuZvlI by mttaggart@infosec.exchange
2025-12-26T18:14:35Z
0 likes, 1 repeats
New Year's Resolution: attack and dethrone genAI
(DIR) Post #B23MVLC5GbAPmq4PLc by mttaggart@infosec.exchange
2026-01-07T19:12:12Z
2 likes, 0 repeats
Don't, uhDon't use this. Don't let your family or friends use it. If you see it in your neighborhood, bang pots and pans, whistle, and scare it off.https://help.openai.com/en/articles/20001036-what-is-chatgpt-health
(DIR) Post #B23MVPwJceWaVNjNxo by mttaggart@infosec.exchange
2026-01-07T19:36:49Z
0 likes, 0 repeats
Let me explain very clearly: this is not a feature. This, as evinced by its unavailability in countries with more stringent laws, is a privacy invasion and money grab masquerading as convenience. It is a legislative failure that this can exist at all.
(DIR) Post #B23MVUmDdcQ3VW2tQ8 by mttaggart@infosec.exchange
2026-01-07T19:42:47Z
0 likes, 0 repeats
This is being rolled out to all users, including non-paying ones. So as always, you gotta ask: what are they getting out of this?
(DIR) Post #B23MVZn6zj6X3jVB0S by mttaggart@infosec.exchange
2026-01-07T20:23:06Z
0 likes, 0 repeats
Here's the marketing blog post for this, but the help post above keeps getting updated.https://openai.com/index/introducing-chatgpt-health/
(DIR) Post #B24vVdZ4ehtQJgezjc by mttaggart@infosec.exchange
2026-01-08T14:57:21Z
0 likes, 1 repeats
RE: https://infosec.exchange/@mttaggart/115855401426498779Let me reiterate: ChatGPT Health is a dangerous product and a terrible idea. Keep it away from everyone you care about.
(DIR) Post #B2DggHsVsLz1Lw0pQe by mttaggart@infosec.exchange
2026-01-12T19:04:13Z
0 likes, 0 repeats
Hell yeahhttps://github.com/zoicware/RemoveWindowsAI
(DIR) Post #B2DggJa9XaU8dZRC5Y by mttaggart@infosec.exchange
2026-01-12T20:25:58Z
1 likes, 0 repeats
I'm just gonna block any ding dongs who reply that the right move is to install Linux.This is Mastodon. Who the hell do you think you're talking to? Save your proselytizing for elsewhere.People don't always have a choice about OS, but they may yet be able to mitigate harm.
(DIR) Post #B2JtR3pyGJUXoXeWZs by mttaggart@infosec.exchange
2026-01-15T05:24:38Z
0 likes, 1 repeats
Problem: LLMs can't defend against prompt injection.Solution: A specialized filtering model that detects prompt injections.Problem: That too is susceptible to bypass and prompt injection.Solution: We reduce the set of acceptable instructions to a more predictable space and filter out anything that doesn't match.Problem: If you over-specialize, the LLM won't understand the instructions.Solution: We define a domain-specific language in the system prompt, with all allowable commands and parameters. Anything else is ignored.Problem: We just reinvented the CLI.
(DIR) Post #B2JtR9cMloArk5KkCm by mttaggart@infosec.exchange
2026-01-15T05:33:54Z
0 likes, 0 repeats
What are we doing with our time on this earthhttps://www.promptarmor.com/resources/claude-cowork-exfiltrates-fileshttps://www.varonis.com/blog/reprompt
(DIR) Post #B2M805HKgNWo5txu6q by mttaggart@infosec.exchange
2026-01-16T21:34:30Z
0 likes, 0 repeats
@alice This is what happens in our Discord when someone uses "guys." Folks sometimes get annoyed, but I'm not budging on the issue.And yeah, https://heyguys.cc/
(DIR) Post #B2UNpGg6plU4X0eGPI by mttaggart@infosec.exchange
2026-01-20T20:47:28Z
0 likes, 0 repeats
WowowowIf the client supply[sic] a carefully crafted USER environment value being the string "-f root", and passes the telnet(1) -a or --login parameter to send this USER environment to the server, the client will be automatically logged in as root bypassing normal authentication processes.https://seclists.org/oss-sec/2026/q1/89
(DIR) Post #B2UNpIFwxzkPQSQOuW by mttaggart@infosec.exchange
2026-01-20T21:29:41Z
1 likes, 0 repeats
Well dang
(DIR) Post #B2VykFv47PNABz4qSO by mttaggart@infosec.exchange
2026-01-21T16:11:50Z
0 likes, 1 repeats
Cool: Wikipedia editors compiled this comprehensive resource on detecting generative text.Not Cool: Of course slop shovelers are using it to avoid those exact tells.https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
(DIR) Post #B2hthI4FDyFvQeCbJo by mttaggart@infosec.exchange
2025-11-12T21:32:02Z
1 likes, 0 repeats
@paul_ipv6 I'm upping the stakes for 2026; stay tuned
(DIR) Post #B2lHPrKOMvk7wsr7uS by mttaggart@infosec.exchange
2026-01-28T19:42:27Z
1 likes, 1 repeats
Dang Windows is losing the brovelopers at an alarming ratehttps://www.himthe.dev/blog/microsoft-to-linux
(DIR) Post #B2tlkS9XiQZtqBCFyi by mttaggart@infosec.exchange
2026-02-02T03:36:23Z
0 likes, 0 repeats
This settlement is making the rounds, along with a lot of "I knew it!" FUD. Google has any number of reasons to not want to go to court on this. But the research on this has been fairly robust, and we've yet to see evidence of microphone eavesdropping as claimed here.Until someone shows me evidence a microphone is being activated by a Google service without user input, this is speculation at best.https://www.cbsnews.com/news/google-voice-assistant-lawsuit-settlement-68-million/