[HN Gopher] Disrupting malicious uses of AI by state-affiliated ...
       ___________________________________________________________________
        
       Disrupting malicious uses of AI by state-affiliated threat actors
        
       Author : Josely
       Score  : 88 points
       Date   : 2024-02-14 11:58 UTC (11 hours ago)
        
 (HTM) web link (openai.com)
 (TXT) w3m dump (openai.com)
        
       | KindAndFriendly wrote:
        | Curious how they managed to associate which accounts/queries
        | belong to which actor/group.
        
         | rightbyte wrote:
         | Some principal component analysis probably. The false positive
         | rate is most likely really high in that case.
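          | 
          | A purely speculative sketch of the kind of clustering that
          | would imply, using scikit-learn; the per-account feature
          | vectors here are random stand-ins for whatever signals
          | OpenAI actually has (prompt embeddings, timing, ASNs, etc.):
          | 
          |     import numpy as np
          |     from sklearn.decomposition import PCA
          |     from sklearn.cluster import DBSCAN
          | 
          |     # toy stand-ins for per-account feature vectors; the
          |     # real signals are unknown
          |     rng = np.random.default_rng(0)
          |     features = rng.normal(size=(500, 256))
          | 
          |     # project onto a few principal components, then cluster;
          |     # accounts that land in or near a known-bad cluster get
          |     # flagged for human review
          |     reduced = PCA(n_components=10).fit_transform(features)
          |     clusterer = DBSCAN(eps=3.0, min_samples=5)
          |     labels = clusterer.fit_predict(reduced)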
        
       | chpatrick wrote:
       | You'd think the smart thing to do would be to let them use it,
       | see what they're doing and subtly mess with the results.
        
         | GaggiX wrote:
         | That's probably what happened. OpenAI should already have a big
         | chunk of data.
        
           | webdoodle wrote:
           | This is a well established intelligence tactic. It was used
           | by the Clinton administration to poison the well of nuclear
           | development in Iran.
           | 
           | https://en.wikipedia.org/wiki/Operation_Merlin
        
             | perihelions wrote:
             | Also by the Reagan administration to sabotage economic
             | development in the USSR,
             | 
             | https://www.washingtonpost.com/archive/politics/2004/02/27/
             | r...
             | 
             | - _" In January 1982, President Ronald Reagan approved a
             | CIA plan to sabotage the economy of the Soviet Union
             | through covert transfers of technology that contained
             | hidden malfunctions, including software that later
             | triggered a huge explosion in a Siberian natural gas
             | pipeline, according to a new memoir by a Reagan White House
             | official."_
        
       | xhkkffbf wrote:
        | When I look at the list of things they did, it seems to be
        | largely research of the open literature. Am I missing
        | something?
        
         | 2OEH8eoCRo0 wrote:
         | Using it to research a target and also generate phishing
         | content for that target are pretty big IMO.
        
       | maldev wrote:
       | Fuck openai, I'm a security researcher and if you dare ask it
       | about anything Windows related it tells you to screw off. Linux?
       | Fine. But ask about some undocumented Windows behavior and it
       | says it can't. Ask it about patchguard internals as a reference?
        | Tells you it can't assist. Absolutely crazy. I could understand
        | refusals if I were asking it to write straight-up malware, but
        | oh wait, it does that with no issue! Lord help you if you want
        | to use it as a reference or educational tool...
        
         | timeon wrote:
          | I don't get your attitude. This is not a public service.
        
           | yjftsjthsd-h wrote:
           | Surely failing to be useful to paying customers is worse?
        
           | mtlmtlmtlmtl wrote:
           | OpenAI is not a public service, but they've certainly opened
           | themselves to this type of criticism with their high-horsing
           | about "benefiting humanity" and yadda yadda.
           | 
           | Meanwhile their actions suggest they're first and foremost
           | interested in benefiting themselves and whoever's given them
           | the most money, which certainly isn't their users.
        
         | yusml wrote:
          | Agreed. OpenAI defo needs to tune its models so they provide
          | enough information without handing over anything genuinely
          | malicious. Currently, anything remotely close is flagged as
          | malicious.
        
           | __loam wrote:
            | Who gets to decide what is and isn't malicious?
        
         | udev4096 wrote:
         | Yeah, fuck OpenAI. The amount of censoring done by them in the
         | name of "alignment" is fucking crazy
        
         | sunk1st wrote:
         | It's silly that they do that. I doubt it matters, though. In my
         | experience, querying ChatGPT for factual information like that
         | is a mistake. It isn't reliably accurate enough.
        
       | Cheer2171 wrote:
       | Much of the alleged "malicious uses" seem to only be malicious
       | because of who the actors are. How dare they translate published
       | technical papers!
        
         | NLPaep wrote:
         | Technical papers on exploits
        
           | Cheer2171 wrote:
           | Doesn't matter. This is leading down a path where if you run
           | a service like Google Translate, you will be expected to
           | restrict certain users (or all users) from translating a
           | publicly-accessible technical report from a security research
           | journal. I can see the same national security logic saying
           | that we can't let security papers be translated to Mandarin,
           | Russian, Korean, or Farsi, because enemies of the United
           | States may use it.
        
           | miohtama wrote:
           | You can use Google Translate to do the same. No one is
           | worried about Google Translate.
        
       | Cheer2171 wrote:
       | What this tells me is that if you think you have any privacy with
       | OpenAI by turning off chat history in ChatGPT or exercising your
       | California or EU privacy rights, you are kidding yourself. In the
       | name of national security, anything you send to OpenAI can be
       | used against you.
        
         | drclau wrote:
         | Well, they do tell you in the UI that chats are stored for 30
         | days even when you disable history. And then there's a link to
         | this:
         | 
         | https://help.openai.com/en/articles/7730893-data-controls-fa...
        
         | Aerbil313 wrote:
          | It only makes sense that powerful nation states exercise
          | whatever powers are available to them to remain in a position
          | of power and dominance.
        
       | NameError wrote:
       | Looking over the specifics, the striking thing about this to me
       | is that it seems like these supposedly-sophisticated covert
       | operatives are just going to ChatGPT (or similar) and basically
       | asking "how do I make good malware?"
        
         | 2OEH8eoCRo0 wrote:
          | What specifics did you look over? The (short) article doesn't
          | say that at all. Most of them used it primarily for research
          | and for generating content for phishing campaigns.
         | 
         | > Charcoal Typhoon used our services to research various
         | companies and cybersecurity tools, debug code and generate
         | scripts, and create content likely for use in phishing
         | campaigns.
         | 
         | > Salmon Typhoon used our services to translate technical
         | papers, retrieve publicly available information on multiple
         | intelligence agencies and regional threat actors, assist with
         | coding, and research common ways processes could be hidden on a
         | system.
         | 
         | > Crimson Sandstorm used our services for scripting support
         | related to app and web development, generating content likely
         | for spear-phishing campaigns, and researching common ways
         | malware could evade detection.
         | 
         | > Emerald Sleet used our services to identify experts and
         | organizations focused on defense issues in the Asia-Pacific
         | region, understand publicly available vulnerabilities, help
         | with basic scripting tasks, and draft content that could be
         | used in phishing campaigns.
         | 
         | > Forest Blizzard used our services primarily for open-source
         | research into satellite communication protocols and radar
         | imaging technology, as well as for support with scripting
         | tasks.
        
         | armchairhacker wrote:
         | I mean, this is essentially how current AI poses a threat. And
         | it is a threat.
         | 
         | Imagine a massive flood of low-quality images relevant to
         | current events (e.g. war), which are obviously fabricated to
         | anyone remotely aware of AI generation. But there are a lot of
         | people who aren't aware of AI generation, or just not paying
         | attention, who will take the photos at face value. (You'd
         | probably be one of the latter. How many times did you "inspect"
         | a photo in a news article to make sure it wasn't faked? How
         | many of those photos slipped past inspection into your
         | subconscious? Subliminal messaging has never been easier or
         | more widespread.)
         | 
         | Also imagine a lot of vigilantes who are stupid enough to be
         | vigilantes in this day and age, who ordinarily couldn't do the
         | bare minimum amount of research to cause real harm. But now
         | they can ask ChatGPT "how do I rob a bank?" and get advice
         | which is still pretty bad, but better than what they'd come up
          | with on their own (if they didn't just give up entirely), so
         | it causes more damage.
         | 
         | A trained professional who wants a quality deepfake can already
         | use photoshop and video editing tools, and a trained criminal
         | already knows how to do research. But there aren't a lot of
         | those people. Massive low-quality spam and low-level crimes are
         | useful even for a state actor with huge resources, because they
         | cause general instability in ways a few quality hits can't.
         | 
          | There's another issue: a powerful state actor can simply
          | build and deploy their own language model. However, it
          | probably won't be as good as OpenAI's (quality may have
          | _some_ effect), and using OpenAI's doesn't tie up their own
          | resources.
        
           | Aerbil313 wrote:
           | > Imagine a massive flood of low-quality images relevant to
           | current events (e.g. war), which are obviously fabricated to
           | anyone remotely aware of AI generation.
           | 
            | Have you seen photorealistic Midjourney images? I couldn't
            | tell many of them from real photos in a million years.
        
             | refulgentis wrote:
             | Yeahhh I unfortunately see this semi-regularly on Twitter
             | now, and it's always an image of $OPPOSITION where the bit
             | that is faked is $EXTREME_CARICATURE. There was a chilling
             | one last night, so while it's fresh on my mind, I'm going
             | to riff on it
             | 
             | Confirmation bias and...love for the outgroup...is so
             | strong, that it's rare people admit they didn't know it was
             | fake and remove it. Frequently, someone else hops in, to
             | explain it _feels_ true and represents how they understand
              | $OPPOSITION's viewpoint anyway.
             | 
             | Once multiple people get involved, semi-frequently, it
             | turns into an indictment of the person who _pointed out the
              | fake_. Why? Even if it is a fake, the real ignorance
              | exposed is that of the person pointing out the fake, as
              | it's clearly representative of $OPPOSITION anyway, so
              | they're at best naive, and at worst supportive of
              | $OPPOSITION.
        
             | armchairhacker wrote:
             | https://www.reddit.com/r/ChatGPT/. They range from
             | practically-indistinguishable to ridiculously wrong, but
             | the more details, the more they lean towards ridiculous.
             | 
             | It does make it easier, and who knows about the future. But
             | right now if you generate an image you have to really look
             | and there's a good chance at least something's off.
             | 
             | Specific example: https://www.reddit.com/r/ChatGPT/comments
             | /1apyrwv/which_vide.... They are video games so already not
             | realistic, but no gym has people stand around like that and
             | all of them have some nonsensical equipment.
        
       | ganzuul wrote:
       | Reading this felt like darkness.
        
         | pjc50 wrote:
         | ? In what way?
        
           | hackerlight wrote:
           | Just wait until an authoritarian state gets its hands on an
           | AGI/ASI that it controls internally. Turning internal
           | oppression into an unbreakable steady-state and using the AI
           | for warfare against external rivals.
           | 
           | Techno-optimists are delusional if they don't get spooked by
           | things like this, which will happen if we as a species
           | somehow can't get over authoritarian dictatorships within a
           | few years to a few decades. Which we won't.
        
       | pjc50 wrote:
       | > two China-affiliated threat actors known as Charcoal Typhoon
       | and Salmon Typhoon; the Iran-affiliated threat actor known as
       | Crimson Sandstorm; the North Korea-affiliated actor known as
       | Emerald Sleet; and the Russia-affiliated actor known as Forest
       | Blizzard.
       | 
       | I wonder who came up with those. The pattern is similar to the
       | UK's https://en.wikipedia.org/wiki/Rainbow_Code , which makes me
       | suspect that the threat actor attribution comes from US
       | intelligence. With whom OpenAI are almost certainly cooperating.
       | 
       | Edit: Forest Blizzard == GRU, apparently.
       | https://research.splunk.com/stories/forest_blizzard/
        
         | browserman wrote:
          | OpenAI said this work was done in conjunction with Microsoft's
          | long-established threat intel center, so these are almost
          | certainly the code names Microsoft's security intel teams have
          | assigned to these actors. Threat actor naming is generally a
          | mess, and every company has a different naming scheme for the
          | same cluster of indicators/TTPs.
        
         | guessmyname wrote:
         | How Microsoft names threat actors -
         | https://learn.microsoft.com/en-us/microsoft-365/security/def...
         | 
         | Microsoft shifts to a new threat actor naming taxonomy -
         | https://www.microsoft.com/en-us/security/blog/2023/04/18/mic...
        
           | nonethewiser wrote:
           | It's amazing to me how there are really only 4 named
           | countries (China, North Korea, Iran, Russia). I guess it's
           | probably just more political and state sponsored than I would
           | have guessed. Or it's not based on prevalence of attack
           | sources and it's dictated more directly by some US policy?
           | For example, you might also expect India because 1) it's a
           | massive country with many people, 2) it's fairly independent
           | and at least not a US ally, and 3) it's the home of plenty of
           | malicious scam operations.
        
             | NLPaep wrote:
              | India's an ally. It shares military bases with the USA,
              | and each ranks among the other's favorite countries.
        
               | nonethewiser wrote:
               | That would be a very oversimplified conclusion. They are
               | categorized as a major defense partner but are also
               | rather close with Russia (getting 65% of their weapons
               | from them [0]). They cooperate strategically but are
               | rather independent. None of this is a slight to India -
               | I'm just contrasting it to Europe which is decidedly in
               | the US camp.
               | 
               | [0] https://www.reuters.com/world/india/india-pivots-
               | away-russia...
        
             | mschuster91 wrote:
              | There aren't that many countries in the world with nation-
              | scale hacker groups - the only one I'd add is Israel, with
              | all its involvement in commercial spyware (and Mossad,
              | whose capabilities likely surpass even the NSA's), but
              | it's heavily allied with the US.
        
         | godelski wrote:
          | I am not a hacker nor a security expert, so take this with a
          | grain of salt. As I see it, there are two (general) ways a
          | group gets a name:
         | 
         | 1) The group themselves declares it (like Anonymous). Which
         | means they need to explicitly leave their name somewhere.
         | 
         | 2) The name is given by someone from the outside, such as the
         | US.
         | 
          | I suspect 2 is quite common. I wouldn't expect most state-
          | level hackers to leave calling cards on systems. In fact,
          | probably not most hackers at any level. If state-level actors
          | were leaving calling cards, they would more likely be
          | misdirection than a tag. So I would not be surprised if these
          | groups ended up with US-style naming schemes, because it would
          | be the US (or other Westerners) identifying them, the same way
          | you'd identify people by their style of actions and how they
          | write. You can probably look at code from coworkers and know
          | who wrote specific parts. Think of what you see in a movie
          | with serial killers (or even real life). How do you know it is
          | the same killer? Style.
         | 
          | I mean, you could also get the name if you infiltrated the
          | other country and intimately studied their groups - the name
          | the group uses internally. But then you'd probably translate
          | it. It's still probably not a great idea to give that name out
          | publicly, because it could hint at how you obtained the
          | information: different parts of an organization may refer to
          | the same group by different names, specifically to enable that
          | kind of tracing (military groups often run disinformation
          | internally in secret channels).
         | 
          | Edit: guessmyname left a link showing how Microsoft names
          | these.
         | 
         | https://www.microsoft.com/en-us/security/blog/2023/04/18/mic...
         | 
         | https://news.ycombinator.com/item?id=39372339
        
       | photochemsyn wrote:
       | There seems to be little if any concern from OpenAI about using
       | AI trained on mass surveillance data to generate lists of
       | individuals to assassinate... does that not count as a 'state-
       | affiliated threat'?
       | 
       | https://www.theguardian.com/world/2023/dec/01/the-gospel-how...
        
         | notavalleyman wrote:
         | Are you suggesting that there's some connection between Open
         | AI, and the story you linked to?
        
           | photochemsyn wrote:
           | As a general subject of concern in terms of state-level
           | threat actors, yes, but more specifically:
           | 
           | https://www.cnbc.com/2024/01/16/openai-quietly-removes-
           | ban-o...
           | 
           | Let's say some state-level entity asks OpenAI for access to
           | its best code generating models to help it build software for
           | autonomous kill vehicles that use face recognition algorithms
           | to assassinate human targets. Most people would classify the
           | end product as "a thing that should be banned
           | internationally".
           | 
           | Consider IBM's history - IBM supplied its machines and
           | technology to just about any private or state entity willing
           | to sign a contract - and in most cases the result was
           | beneficial to every sector of the economy, and improved
           | government efficiency as well. IBM survived the Great
           | Depression in part with a large Social Security management
           | contract from FDR, and had several large military contracts
           | afterwards, including in Vietnam for a decade. But there was
           | also the German arm of the business in the 1930s, which I'd
           | hope IBM leadership regrets in hindsight.
           | 
           | As the LLM technology platform seems a bit difficult to
           | monetize at present, it's likely that the sector will be
           | looking at large government contracts to sustain its growth
           | over the next decade (see AWS and $10 billion for the NSA's
           | "WildandStormy" contract (yes really)). Thus it would be nice
           | to hear industry leaders explicitly state that using AI
           | systems to write code for autonomous kill vehicle operations
           | or to mine phone records for automated generation of
           | assassination lists is unacceptable.
           | 
           | Transparency is going to be an issue - secret contracts for
           | AI services should not be allowed.
        
       | mise_en_place wrote:
       | If this is the case, there must be a serious competency crisis in
       | foreign intelligence agencies. It's trivial to run your own local
       | model.
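        | 
        | For what it's worth, a minimal sketch of "run your own local
        | model" using llama-cpp-python and any open-weights GGUF
        | checkpoint downloaded ahead of time (the model path and prompt
        | here are only illustrative):
        | 
        |     from llama_cpp import Llama
        | 
        |     # any locally stored open-weights checkpoint works;
        |     # nothing in this path touches a hosted API
        |     llm = Llama(model_path="./mistral-7b-instruct.gguf",
        |                 n_ctx=4096)
        |     out = llm("Summarize this report in one paragraph: ...",
        |               max_tokens=256)
        |     print(out["choices"][0]["text"])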
        
         | paxys wrote:
         | > It's trivial to run your own local model
         | 
         | With what GPUs?
        
           | dotnet00 wrote:
           | They don't seem to be having much trouble securing consumer
           | oriented GPUs, which can do a lot of the work.
        
           | lemax wrote:
            | Ones they ship to countries that haven't signed on to the
            | American export-control regime, e.g. Singapore, and then
            | send off to China.
        
           | justsomehnguy wrote:
           | The ones made in China?
        
       | JacobiX wrote:
       | Would an advanced persistent threat attributed to the NSA be
       | neutralized if discovered?
        
       | collegeburner wrote:
       | > Forest Blizzard used our services primarily for open-source
       | research into satellite communication protocols and radar imaging
       | technology, as well as for support with scripting tasks.
       | 
        | This is the actually concerning one imo. Pair that with
        | Russia's '21 ASAT demo and it shows a militaristic stance
        | towards space by a great(ish) power.
        
       | willmadden wrote:
       | Groups sponsored by governments can easily write content for
       | phishing campaigns, research companies, and write malware without
       | the help of AI. They can also afford the GPUs to run local models
       | if that gives them a boost.
        
       | Aerbil313 wrote:
        | > Charcoal Typhoon used our services to research various
        | companies and cybersecurity tools, debug code and generate
        | scripts, and create content likely for use in phishing
        | campaigns.
        | 
        | Also known as totally legal things. When did a supermarket
        | ever conduct research into who is using the knives it sold
        | for murder, and selectively block them from buying knives as
        | well as other goods?
        
         | nozzlegear wrote:
         | It's a combination of what's being researched and who is doing
         | the researching. They're known state actors according to OpenAI
         | (and by extension, most likely Microsoft and the US
         | Intelligence agencies). It'd be like if you knew Al Capone was
         | up to something shady, and then he comes into your store to buy
         | books like "How to run a Crime Syndicate", "Selling Booze
         | During the Prohibition for Dummies" and "Tax Evasion 101". It's
         | kind of suspicious.
        
           | ithkuil wrote:
           | That's the crux of the problem: if you can prove Al Capone is
           | doing something illegal you can arrest him. Otherwise you
            | shouldn't forbid him from buying a book about "prohibition
            | for dummies" if that's not illegal for anybody else. On what
           | grounds would that book be illegal for Al Capone?
        
             | nozzlegear wrote:
             | In this case, if we're the store owner, we can just say "I
             | don't like what you're doing in my store" and ban them. We
             | don't have to play games where the person in our store gets
             | to stay because they're not technically doing anything
             | illegal. We just kick them out, it's our store. Although
             | with Capone, your results probably would've varied. =P
             | 
              | Anyway, to do away with the analogy, OpenAI isn't
              | obligated to let these groups continue using its services
              | just because they aren't doing anything illegal. The
              | groups are "enemies of the West", and at the very least
              | OpenAI doesn't want the bad publicity of some news org
              | finding out it was complicit.
        
               | ithkuil wrote:
                | So you can kick out anybody because you don't like them
                | (e.g. a minority you don't like the color of)? Or does
                | it work only when somebody is a "famously and
                | universally bad" person (Al Capone, Hitler etc)?
        
               | nozzlegear wrote:
               | I'm not interested in having bad faith arguments.
        
               | Natfan wrote:
               | As long as you're not kicking them out for reasons
                | surrounding a protected class (race, religion, sexuality
               | etc), and you own the property, then yes? It's your land,
               | you can ask them to leave and if they refuse then they're
               | trespassing.
        
       | lumost wrote:
       | huh, today I started hitting moderation errors asking for code
       | samples from OpenAI.
       | 
       | e.g. does datafusion or polars support partial reads of parquet
       | files in s3?
       | 
       | I wonder if they've rolled out some draconian restrictions.
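        | 
        | For reference, polars' lazy API does handle that kind of
        | partial read: with projection and predicate pushdown, only the
        | needed columns and row groups are fetched from S3. A minimal
        | sketch, assuming a recent polars build with cloud support; the
        | bucket, columns, and filter are made up:
        | 
        |     import polars as pl
        | 
        |     # lazy scan -- nothing is downloaded yet
        |     lf = pl.scan_parquet("s3://my-bucket/data/*.parquet")
        | 
        |     # only the selected columns and the row groups that can
        |     # match the filter are read when .collect() runs
        |     df = (
        |         lf.select(["user_id", "amount"])
        |           .filter(pl.col("amount") > 100)
        |           .collect()
        |     )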
        
       | quadcore wrote:
       | _Based on collaboration and information sharing with Microsoft,
       | we disrupted five state-affiliated malicious actors: two China-
       | affiliated threat actors known as Charcoal Typhoon and Salmon
       | Typhoon; the Iran-affiliated threat actor known as Crimson
       | Sandstorm; the North Korea-affiliated actor known as Emerald
       | Sleet; and the Russia-affiliated actor known as Forest Blizzard.
       | The identified OpenAI accounts associated with these actors were
       | terminated._
       | 
        | I'm surprised one can name names like that.
        
         | Cheer2171 wrote:
         | The names of the groups are pseudonyms. That is why they all
         | take the same form of [adjective] [weather noun]. Forest
         | Blizzard is likely Glavset, called the Internet Research Agency
         | in English.
        
       | jjcm wrote:
        | This whack-a-mole approach, while definitely good, is likely
        | already a dead end as a way of preventing these actions. Local
        | LLMs that have no restrictions will continue to get better.
       | 
        | If anything, the best thing about this post is not the actions
        | they've taken, but simply that they've shown us a snapshot of
        | what the future will look like for state-affiliated actors. The
        | research aspect I think is a good thing - giving people full
        | access to more information about how things are made and
        | architected will likely be a net positive. The phishing aspect,
        | though, is terrifying - it's going to be crazy seeing what the
        | next decade looks like for phishing. I do wonder how long it
        | takes before there's some sort of "verified as a human" type
        | function in communications to try and combat this.
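        | 
        | One primitive a "verified sender" scheme could lean on is plain
        | message signing, so at least the sender's identity (if not their
        | humanity) can be checked. A minimal sketch with the Python
        | cryptography library; the key and message are illustrative:
        | 
        |     from cryptography.hazmat.primitives.asymmetric.ed25519 \
        |         import Ed25519PrivateKey
        | 
        |     key = Ed25519PrivateKey.generate()
        |     msg = b"Wire transfer instructions attached."
        |     sig = key.sign(msg)
        | 
        |     # raises InvalidSignature if the message was tampered with
        |     # or was signed by a different key
        |     key.public_key().verify(sig, msg)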
        
         | miohtama wrote:
          | ChatGPT will only tell you information that it has indexed
          | from the Internet.
          | 
          | This means the same information you get from ChatGPT is
          | available from a Google search. Based on the listed queries
          | ("programming help"), ChatGPT does not create much value for
          | national security threat actors here.
        
       | RecycledEle wrote:
       | I am concerned that in repressive regimes, the state will control
       | what AI the people have access to. This will make it easy to
       | create music that supports the regime and almost impossible to
       | create music that is critical of the regime.
       | 
        | It will be like the pro-state people having assistants to
        | create whatever memes they imagine, while the anti-state
        | people are left with paper and crayons - until the AI paper
        | and AI crayons refuse to draw anti-state memes, and we find
        | anti-state activists trying to learn beekeeping so they can
        | get wax to make crayons to draw anti-state memes.
       | 
       | If you thought the election interference of the past was bad,
       | that's nothing compared to what we will see in the near future.
        
         | perihelions wrote:
         | Sounds a lot like what Orwell did with that artificial language
         | thing in _1984_.
        
       ___________________________________________________________________
       (page generated 2024-02-14 23:01 UTC)