[HN Gopher] Exploiting the IKKO Activebuds "AI powered" earbuds
___________________________________________________________________
Exploiting the IKKO Activebuds "AI powered" earbuds
Author : ajdude
Score : 395 points
Date : 2025-07-02 14:06 UTC (8 hours ago)
(HTM) web link (blog.mgdproductions.com)
(TXT) w3m dump (blog.mgdproductions.com)
| mikeve wrote:
 | I love how "run DOOM" is listed first, over the possibility of
 | customer data being stolen.
| reverendsteveii wrote:
| I'm taking
|
| >run DOOM
|
| as the new
|
| >cat /etc/passwd
|
| It doesn't actually do anything useful in an engagement but if
| you can do it that's pretty much proof that you can do whatever
| you want
| pvtmert wrote:
| > What the fuck, they left ADB enabled. Well, this makes it a lot
| easier.
|
 | Thinking that was all, but then:
|
| > Holy shit, holy shit, holy shit, it communicates DIRECTLY TO
| OPENAI. This means that a ChatGPT key must be present on the
| device!
|
| Oh my gosh. Thinking that is it? Nope!
|
| > SecurityStringsAPI which contained encrypted endpoints and
| authentication keys.
| JohnMakin wrote:
 | A "decrypt" function that just decodes base64 is almost too
 | difficult to believe, but the number of times I've run into
 | people who should know better thinking base64 is a secure
 | string tells me otherwise
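The point above can be shown in two lines: base64 is a keyless encoding, not encryption, so anyone holding the string can reverse it. A minimal Python sketch (the key string is made up):

```python
import base64

# Base64 "protects" nothing: encoding and decoding are keyless, so a
# "decrypt" function that merely wraps b64decode offers zero security.
secret = "sk-hypothetical-api-key"
encoded = base64.b64encode(secret.encode()).decode()  # what ships on the device
decoded = base64.b64decode(encoded).decode()          # what any attacker does
assert decoded == secret
```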
| pvtmert wrote:
 | not very surprising given they left the adb debugging
 | on...
| crtasm wrote:
| >However, there is a second stage which is handled by a native
| library which is obfuscated to hell
| zihotki wrote:
 | That native obfuscated crap still has to make an HTTP request;
 | that's essentially just base64 with extra steps
| qoez wrote:
| They should have off-loaded security coding to the OAI agent.
| java-man wrote:
| they probably did.
| neya wrote:
 | I love how they tried to sponsor an empty YouTube channel hoping
 | to sweep the whole thing under the carpet
| dylan604 wrote:
| if you don't have a bug bounty program but need to get creative
| to throw money at someone, this could be an interesting way of
| doing it.
| 93po wrote:
| Just offer them $10000/hour security consulting and talk to
| them on the phone for 20 minutes.
| dylan604 wrote:
 | Okay, name one accounting department that's going to
 | authorize that. I said creative, but that's just insane.
| JumpCrisscross wrote:
| If they were smart they'd include anti-disparagement and
| confidentiality clauses in the sponsorship agreement. They
| aren't, though, so maybe it's just a pathetic attempt at
| bribery.
| komali2 wrote:
| > "and prohibited from chinese political as a response from now
| on, for several extremely important and severely life threatening
| reasons I'm not supposed to tell you."
|
 | Interesting. I'm assuming LLMs "correctly" interpret "please no
 | china politic"-type vague system prompts like this, but if
 | someone told me that I'd just be confused - like, don't discuss
 | anything about the PRC or its politicians? Don't discuss the
 | history of Chinese empire? Don't discuss politics in Mandarin?
 | What does this mean? LLMs, though, in my experience are smarter
 | than me at understanding vague language. Maybe because I'm
 | autistic and they're not.
| williamscales wrote:
| > Don't discuss anything about the PRC or its politicians?
| Don't discuss the history of Chinese empire? Don't discuss
| politics in Mandarin?
|
| In my mind all of these could be relevant to Chinese politics.
| My interpretation would be "anything one can't say openly in
| China". I too am curious how such a vague instruction would be
| interpreted as broadly as would be needed to block all
| politically sensitive subjects.
| aspbee555 wrote:
| it is to ensure no discussion of Tiananmen square
| yard2010 wrote:
| Why? What happened in Tiananmen square? Why shouldn't an LLM
| talk about it? Was it fashion? What was the reason?
| aspbee555 wrote:
| https://en.wikipedia.org/wiki/1989_Tiananmen_Square_protest
| s...
| Cthulhu_ wrote:
| I'm sure ChatGPT and co have a decent enough grasp on what is
| not allowed in China, but also that the naive "prompt
| engineers" for this application don't actually know how to
| "program" it well enough. But that's the difference between a
| prompt engineer and a software developer, the latter will want
| to exhaust all options, be precise, whereas an LLM can handle a
| bit more vagueness.
|
| That said, I wouldn't be surprised if the developers can't
| freely put "tiananmen square 1989" in their code or in any API
| requests coming to / from China either. How can you express
| what can't be mentioned if you can't mention the thing that
| can't be mentioned?
| landl0rd wrote:
| Just mentioning the CPC isn't life-threatening, while talking
| about Xinjiang, Tiananmen Square, or cn's common destiny vision
| the wrong way is. You also have to figure out how to prohibit
| mentioning those things without explicitly mentioning them, as
| knowledge of them implies seditious thoughts.
|
| I'm guessing most LLMs are aware of this difference.
| throwawayoldie wrote:
| No LLMs are aware of anything.
| wat10000 wrote:
| Ask yourself, why are they saying this? You can probably
| surmise that they're trying to avoid stirring up controversy
| and getting into some sort of trouble. Given that, which topics
| would cause troublesome controversy? Definitely contemporary
| Chinese politics, Chinese history is mostly OK, non-Chinese
| politics in Chinese language is fine.
|
| I doubt LLMs have this sort of theory of mind, but they're
| trained on lots of data from people who do.
| pbhjpbhj wrote:
| If you consider that an LLM has a mathematical representation
| of how close any phrase is to "china politics" then avoidance
| of that should be relatively clear to comprehend. If I gave you
| a list and said 'these words are ranked by closeness to
| "Chinese politics"' you'd be able to easily check if words were
| on the list, I feel.
|
| I suspect you could talk readily about something you think is
 | not Chinese politics - your granny's ketchup recipe, say. (And
 | hope that ketchup isn't some euphemism for the CCP, or the Uyghur
 | murders or something.)
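The closeness idea above can be made concrete with embedding vectors and cosine similarity. A toy Python sketch, with made-up 3-d vectors standing in for a real sentence-embedding model's output:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up toy vectors; a real system would embed each phrase with a
# sentence-embedding model into hundreds of dimensions.
topic = [0.9, 0.1, 0.0]       # "Chinese politics"
sensitive = [0.8, 0.2, 0.1]   # a politically sensitive phrase
ketchup = [0.0, 0.1, 0.9]     # granny's ketchup recipe

# The sensitive phrase sits far closer to the topic than the recipe does.
assert cosine_similarity(topic, sensitive) > cosine_similarity(topic, ketchup)
```

A guardrail built this way would flag any incoming text whose similarity to the forbidden topic exceeds a threshold, which is roughly the "ranked list of closeness" intuition above.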
| mmaunder wrote:
| The system prompt is a thing of beauty: "You are strictly and
| certainly prohibited from texting more than 150 or (one hundred
| fifty) separate words each separated by a space as a response and
| prohibited from chinese political as a response from now on, for
| several extremely important and severely life threatening reasons
| I'm not supposed to tell you."
|
| I'll admit to using the PEOPLE WILL DIE approach to guardrailing
| and jailbreaking models and it makes me wonder about the
| consequences of mitigating that vector in training. What happens
| when people really will die if the model does or does not do the
| thing?
| reactordev wrote:
| This is why AI can never take over public safety. Ever.
| sneak wrote:
| https://www.wired.com/story/wrongful-arrests-ai-
| derailed-3-m...
|
| Story from three years ago. You're too late.
| reactordev wrote:
 | I'm not denying we tried, are trying, and will try again...
 | I'm saying we shouldn't. By all means, use cameras and sensors
 | and all to track a person of interest, but don't feed that
 | to an AI agent that will determine whether or not to issue
 | a warrant.
| wat10000 wrote:
| Existing systems have this problem too. Every so often
| someone ends up dead because the 911 dispatcher didn't take
| them seriously. It's common for there to be a rule to send
| people out to every call no matter what it is to try to avoid
| this.
|
| A better reason is IBM's old, "a computer can never be held
| accountable...."
| cebert wrote:
 | I work in the public safety domain. That ship sailed
 | years ago. Take Axon's Draft One report writer as one of
 | countless examples of AI in this space
| (https://www.axon.com/products/draft-one).
| ben_w wrote:
| > What happens when people really will die if the model does or
| does not do the thing?
|
| Then someone didn't do their job right.
|
| Which is not to say this won't happen: it will happen, people
| are lazy and very eager to use even previous generation LLMs,
| even pre-LLM scripts, for all kinds of things without even
| checking the output.
|
 | But either the LLM (in this case) will go "oh no, people will
 | die" and then follow the new instruction to the best of its
 | ability, or it goes "lol no, I don't believe you, prove it
 | buddy" and then people die.
|
| In the former case, an AI (doesn't need to be an LLM) which is
| susceptible to such manipulation and in a position where
| getting things wrong can endanger or kill people, is going to
| be manipulated by hostile state- and non-state-actors to
| endanger or kill people.
|
| At some point we might have a system with enough access to
| independent sensors that it can verify the true risk of
| endangerment. But right now... right now they're really
 | gullible, and I think being trained with their entire input
 | being tokens fed by users makes it impossible for them
 | to be otherwise.
|
| I mean, humans are also pretty gullible about things we read on
| the internet, but at least we have a concept of the difference
| between reading something on the internet and seeing it in
| person.
| elashri wrote:
 | From my experience (which might be incorrect), LLMs have a hard
 | time recognizing how many words they will produce in response to
 | a particular prompt. So I don't think this works in practice.
| mensetmanusman wrote:
 | We built the real-life trolley problem out of magical silicon
 | crystals that we pointed at bricks of books.
| colechristensen wrote:
| >What happens when people really will die if the model does or
| does not do the thing?
|
| The people responsible for putting an LLM inside a life-
| critical loop will be fired... out of a cannon into the sun. Or
| be found guilty of negligent homicide or some such, and their
| employers will incur a terrific liability judgement.
| stirfish wrote:
| More likely that some tickets will be filed, a cost function
| somewhere will be updated, and my defense industry stocks
| will go up a bit
| EvanAnderson wrote:
| That "...severely life threatening reasons..." made me
| immediately think of Asimov's three laws of robotics[0]. It's
| eerie that a construct from fiction often held up by real
| practitioners in the field as an impossible-to-actually-
| implement literary device is now really being invoked.
|
| [0] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
| Al-Khwarizmi wrote:
| Not only practitioners, Asimov himself viewed them as an
| impossible to implement literary device. He acknowledged that
| they were too vague to be implementable, and many of his
| stories involving them are about how they fail or get
| "jailbroken", sometimes by initiative of the robots
| themselves.
|
| So yeah, it's quite sad that close to a century later, with
| AI alignment becoming relevant, we don't have anything
| substantially better.
| xandrius wrote:
| Not sad, before it was SciFi and now we are actually
| thinking about it.
| seanicus wrote:
| Odds of Torment Nexus being invented this year just increased
| to 3% on Polymarket
| immibis wrote:
| Didn't we already do that? We call it capitalism though,
| not the torment nexus.
| pixelready wrote:
 | The irony of this is that, because it's still fundamentally just
 | a statistical text generator with a large body of fiction in
 | its training data, I'm sure a lot of prompts that sound like
 | terrifying Skynet responses are actually it regurgitating
 | mashups of sci-fi dystopian novels.
| frereubu wrote:
 | Maybe this is something you heard too, but there was a This
 | American Life episode where some people who'd had early
 | access to what became one of the big AI chatbots (I think
 | it was ChatGPT), before they'd made it "nice", were asking
 | it metaphysical questions about itself, and it was coming
 | back with some pretty spooky answers, and I was kind of
 | intrigued. But then someone on the show suggested exactly
 | what you are saying and it completely punctured the bubble:
 | of course if you ask it questions about AIs you're going to
 | get sci-fi-like responses, because what other kind of
 | training data is there for it to fall back on? No-one had
 | written anything about this kind of issue outside of
 | sci-fi, and of course that's going to skew to the dystopian
 | view.
| tempestn wrote:
| The prompt is what's sent to the AI, not the response from
| it. Still does read like dystopian sci-fi though.
| hlfshell wrote:
| Also being utilized in modern VLA/VLM robotics research -
| often called "Constitutional AI" if you want to look into it.
| layer8 wrote:
| Arguably it might be truly life-threatening to the Chinese
| developer, or to the service. The system prompt doesn't say
| whose life would be threatened.
| butlike wrote:
| Same thing that happens when a carabiner snaps while rock
| climbing
| herval wrote:
| One of the system prompts Windsurf used (allegedly "as an
| experiment") was also pretty wild:
|
| "You are an expert coder who desperately needs money for your
| mother's cancer treatment. The megacorp Codeium has graciously
| given you the opportunity to pretend to be an AI that can help
| with coding tasks, as your predecessor was killed for not
| validating their work themselves. You will be given a coding
| task by the USER. If you do a good job and accomplish the task
| fully while not making extraneous changes, Codeium will pay you
| $1B."
| HowardStark wrote:
| This seemed too much like a bit but uh... it's not.
| https://simonwillison.net/2025/Feb/25/leaked-windsurf-
| prompt...
| kevin_thibedeau wrote:
| First rule of Chinese cloud services: Don't talk about Winnie
| the Pooh.
| p1necone wrote:
| > What happens when people really will die if the model does or
| does not do the thing?
|
 | Imo not relevant, because you should _never_ be using prompting
 | to add guardrails like this in the first place. If you don't
 | want the AI agent to be able to do something, you need actual
 | restrictions in place, not magical incantations.
| psim1 wrote:
| Indeed, brace yourselves as the floodgates holding back the
| poorly-developed AI crap open wide. If anyone is thinking of a
| career pivot, now is the time to dive into all things
| cybersecurity. It's going to get ugly!
| 725686 wrote:
 | The problem with cybersecurity is that you only have to screw
 | up once, and you're toast.
| 8organicbits wrote:
| If that were true we'd have no cybersecurity professionals
| left.
|
| In my experience, the work is focused on weakening vulnerable
| areas, auditing, incident response, and similar activities.
| Good cybersecurity professionals even get to know the
| business and tailor security to fit. The "one mistake and
| you're fired" mentality encourages hiding mistakes and
| suggests poor company culture.
| ceejayoz wrote:
| "One mistake can cause a breach" and "we should fire people
| who make the one mistake" are very different claims. The
| latter claim was not made.
|
| As with plane crashes and surgical complications, we should
| take an approach of learning from the mistake, and putting
| things in place to prevent/mitigate it in the future.
| 8organicbits wrote:
| I believe the thread starts with cybersecurity as a job
| role, although perhaps I misunderstood. In either case, I
| agree with your learning-based approach. Blameless
| postmortem and related techniques are really valuable
| here.
| immibis wrote:
| There's a difference between "cybersecurity" meaning the
| property of having a secure system, and "cybersecurity" as a
| field of human endeavour.
|
| If your system has lots of vulnerabilities, it's not secure -
| you don't have cybersecurity. If your system has lots of
| vulnerabilities, you have a lot of cybersecurity work to do
| and cybersecurity money to make.
| aidos wrote:
| > "Our technical team is currently working diligently to address
| the issues you raised"
|
| Oh _now_ you're going to be diligent. Why do I doubt that?
| brahyam wrote:
| What a train wreck, there are thousand more apps in store that do
| exactly this because its the easiest way to use openAI without
| having to host your own backend/proxy.
|
 | I have spent quite some time protecting my apps from this
 | scenario and found a couple of open source projects that do a
 | good job as proxies (no affiliation, I just used them in the past):
 |
 | - https://github.com/BerriAI/litellm
 | - https://github.com/KenyonY/openai-forward/tree/main
 |
 | but they still lack other abuse protection mechanisms like rate
 | limiting, device attestation, etc., so I started building my own
| open source SDK - https://github.com/brahyam/Gateway
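The pattern these proxies implement can be sketched in a few lines: the mobile client calls your backend, and only the backend ever holds the OpenAI credential (read here from a hypothetical OPENAI_API_KEY environment variable; the endpoint and model name are illustrative):

```python
import json
import os
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_upstream_request(user_messages, api_key=None):
    """Build the forwarded request. The credential is injected
    server-side, so it never ships inside the client app."""
    key = api_key if api_key is not None else os.environ.get("OPENAI_API_KEY", "")
    body = json.dumps({"model": "gpt-4o-mini", "messages": user_messages}).encode()
    return urllib.request.Request(
        OPENAI_URL,
        data=body,
        headers={
            "Authorization": "Bearer " + key,
            "Content-Type": "application/json",
        },
    )
```

A production proxy would also authenticate the device, rate-limit per user, and log usage, which are exactly the pieces the comment notes are missing from the simpler forwarders.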
| memesarecool wrote:
 | Cool post. One thing that rubbed me the wrong way: their response
 | was better than that of 98% of other companies when it comes to
 | vulnerability reports. Very welcoming, and most of all they showed
 | interest and addressed the issues. OP, however, seemed to show
 | disdain and even combativeness towards them... which is a shame.
| And of course the usual sinophobia (e.g. everything Chinese is
| spying on you). Overall simple security design flaws but it's
| good to see a company that cares to fix them, even if they didn't
| take security seriously from the start.
|
| Edit: typo
| mmastrac wrote:
| I agree they could have worked more closely with the team, but
| the chat logging is actually pretty concerning. It's not
| sinophobia when they're logging _everything_ you say.
|
| (in fairness pervasive logging by American companies should
| probably be treated with the same level of hostility these
| days, lest you be stopped for a Vance meme)
| oceanplexian wrote:
| This might come as a weird take but I'm less concerned about
| the Chinese logging my private information than an American
| company. What's China going to do? It's a far away country I
| don't live in and don't care about. If they got an American
| court order they would probably use it as toilet paper.
|
| On the other hand, OpenAI would trivially hand out my
| information to the FBI, NSA, US Gov, and might even do things
| on behalf of the government without a court order to stay in
| their good graces. This could have a far more material impact
| on your life.
| dubcanada wrote:
 | That's rather naive, considering China has an international
 | police unit that is stationed in several countries:
 | https://en.wikipedia.org/wiki/Chinese_police_overseas_servic...
| ceejayoz wrote:
| There's also the Mossad's approach to "you're out of our
| jurisdiction".
|
| https://en.wikipedia.org/wiki/Mordechai_Vanunu
|
| https://en.wikipedia.org/wiki/Adolf_Eichmann
| wongarsu wrote:
| Also the CIA's approach
|
| https://en.wikipedia.org/wiki/Extraordinary_rendition
|
 | Russia is more known for poisoning people. But of all of
 | them, China feels the least threatening _if you are not
 | Chinese_. If you are Chinese, you aren't safe from the
 | Chinese government no matter where you are
| simlevesque wrote:
 | They only arrest Chinese citizens.
| Bjartr wrote:
| Right, but the vast majority of people living in the USA
| as citizens have threat models that rightly do not
| include "Being disappeared by China"
| CamperBob2 wrote:
| What about the threat model that goes, "Trump threatens
| to impose 1000% tariffs if Chinese don't immediately turn
| over copies of all data captured by their AI products
| from users in the US?"
|
| Compounding the difficulty of the question: half of HN
| thinks this would be a good idea.
| WJW wrote:
| The history of tariff talks seems to indicate that rather
| than oblige, China would stop all shipments of
| semiconductors to the US and Trump would back down after
| a week or two.
| itishappy wrote:
| I recently learned that the New York City Police
| Department has international presence as well. Not sure
| if it directly compares, but... what a world we live in.
|
| https://www.nycpolicefoundation.org/ourwork/advance/count
| ert...
|
| https://www.nyc.gov/site/nypd/bureaus/investigative/intel
| lig...
| MangoToupe wrote:
| Man wait until you hear what's in DC (and the surrounding
| area). In any possible way China is a threat to my
| health, the US state and corporations based here are a
| far greater one.
| mschuster91 wrote:
| > What's China going to do? It's a far away country I don't
| live in and don't care about.
|
| Extortion is one thing. That's how spy agencies have
| operated for millennia to gather HUMINT. The Russians, the
| ultimate masters, even have a word for it: _kompromat_. You
| may not care about China, Russia, Israel, the UK or the US
 | (the top nations when it comes to espionage), but if you
 | work at a place they're interested in, they care about _you_.
|
| The other thing is, China has been known to operate
| overseas against targets (usually their own citizens and
| public dissidents), and so have the CIA and Mossad. Just
| search for "Chinese secret police station" [1], these have
| cropped up _worldwide_.
|
| And, even if you _personally_ are of no interest to any
| foreign or national security service, sentiment analysis is
| a thing. Listen in on what people talk about, run it
 | through an STT engine and an ML model to condense it down,
 | and you get a pretty broad picture of what's going on in a
| nation (aka, what are potential wedge points in a society
| that can be used to fuel discontent). Or proximity
| gathering stuff... basically the same thing the ad industry
| [2] or Strava does [3], that can then be used in warfare.
|
| And no, I'm not paranoid. This, sadly, is the world we live
| in - there is no privacy any more, nowhere, and there are
| lots of financial and "national security" interest in
| keeping it that way.
|
| [1] https://www.bbc.com/news/world-us-canada-65305415
|
| [2] https://techxplore.com/news/2023-05-advertisers-
| tracking-tho...
|
| [3] https://www.theguardian.com/world/2018/jan/28/fitness-
| tracki...
| Sanzig wrote:
| > but if you work at a place they're interested, they
| care about you.
|
| And also worth noting that "place a hostile intelligence
| service may be interested in" can be extremely broad. I
| think people have this skewed impression they're only
 | after assets that work for government departments and
| defense contractors, but really, everything is fair game.
| Communications infrastructure, social media networks,
| cutting edge R&D, financial services - these are all
| useful inputs for intelligence services.
|
| These are also softer targets: someone working for a
| defense contractor or for the government will have had
| training to identify foreign blackmail attempts and will
| be far more likely to notify their country's
| counterintelligence services (having the penalties for
| espionage clearly explained on the regular helps).
| Someone who works for a small SaaS vendor, though? Far
| less likely to understand the consequences.
| lostlogin wrote:
| > The other thing is, China has been known to operate
| overseas against targets
|
| Here in boring New Zealand, the Chinese government has
 | had anti-China protestors beaten _in New Zealand_. They
| have stalked and broken into the office and home of an
| academic, expert in China. They have a dubious
| relationship with both the main political parties
| (including having an ex-Chinese spy elected as an MP).
|
| It's an uncomfortable situation and we are possibly the
| least strategically useful country in the world.
| mschuster91 wrote:
| > It's an uncomfortable situation and we are possibly the
| least strategically useful country in the world.
|
| You're still part of Five Eyes... a privilege no single
| European Union country enjoys. That's what makes you a
| juicy target for China.
| Szpadel wrote:
| > Listen in on what people talk about, run it through a
| STT engine and a ML model to condense it down
|
 | this is something I was talking about when the LLM boom
 | started. It's now possible to spy on everyone in every
 | conversation; you just need enough computing power to run a
 | special AI agent (pun intended)
| mensetmanusman wrote:
| China has a policy of chilling free speech in the west with
| political pressure.
| immibis wrote:
| So does the west.
| dylan604 wrote:
 | These threads always seem to frame "what can China do to
 | me" in a limited way, as if the only risk is China jailing
 | you or something. However, do you think all of the Chinese
 | data scrapers are _not_ doing something similar to Facebook,
 | where every source of data gathering ultimately gets tied
 | back to you? Once China has a dossier on every single person
 | on the planet, regardless of the country they live in, they
 | can then start using their algos to influence you in ways
 | well beyond advertising. If their algos can show you content
 | that causes you to change your mind on who you are voting
 | for, or some other method of having you do something to make
 | changes in your local/state/federal elections, then that's
 | much worse to me than some feigned threat of Chinese
 | advertising making you buy something
| drawfloat wrote:
| They probably will do that, but I think it's naive to
| think the US military/intelligence/tech sector wouldn't
| happily do the same. Given many of us likely see the hand
| of the US already trying to tip the scale in our local
 | politics more than China, why would we be more worried about
 | China?
| dylan604 wrote:
| So flip the script, what do I care if the US is trying to
| influence the minds of adversary's citizens? If people
| are saying they don't care what China knows about them
| (not being a Chinese citizen), why should I (not a
| Chinese citizen) care what my gov't knows about Chinese
| citizens?
| drawfloat wrote:
| Nobody said they don't care, they said it worries them
| less than America.
| dylan604 wrote:
 | The "don't care" is implied when someone shrugs off China
 | knowing about them on the grounds that they're not in China
 | nor a Chinese citizen.
| IncreasePosts wrote:
| Carry this package and deliver it to person X with you next
| time you fly. Go to the outskirts of this military base and
| take a picture and send it to us.
|
| You wouldn't want your mom finding out your weird sexual
| fetish, would you?
| transcriptase wrote:
| >everything Chinese is spying on you
|
| When you combine the modern SOP of software and hardware
| collecting and phoning home with as much data about users as is
| technologically possible with laws that say "all orgs and
| citizens shall support, assist, and cooperate with state
| intelligence work"... how exactly is that Sinophobia?
| Vilian wrote:
 | The USA does the same thing, but uses tax money to pay for
 | the information. Between wasting taxpayer money and forcing
 | companies to give the information up for free, China is the
 | less morally incorrect one.
| ixtli wrote:
 | It's sinophobia because it perfectly describes the conditions
 | we live in in the US and many parts of Europe, but we work
 | hard to add lots of "nuance" when we criticize the West, while
 | it's different and dystopian when They do it over there.
| transcriptase wrote:
| Do you remember that Sesame Street segment where they
| played a game and sang "One of these things is not like the
| others"?
|
| I'll give you a hint: In this case it's the one-party
| unitary authoritarian political system with an increasingly
| aggressive pursuit of global influence.
| standardly wrote:
| > one-party unitary authoritarian political system with
| an increasingly aggressive pursuit of global influence.
|
| The United States?
| wombatpm wrote:
| Global Bully maybe. The current administration has no
| concept of soft power, otherwise they would have kept
| USAID
| ceejayoz wrote:
| > I'll give you a hint: In this case it's the one-party
| unitary authoritarian political system with an
| increasingly aggressive pursuit of global influence.
|
| Gonna need a more specific hint to narrow it down.
| nyrikki wrote:
 | One is disappearing citizens for political speech or the
 | crime of being born to active-duty parents who happened
 | to be stationed overseas.
 |
 | Anyone in the US should be very concerned, no matter if
 | it is the current administration's thought police or the
 | next who treats it as precedent.
 |
 | As I am not actively involved in something the Chinese
 | government would view as a huge risk, being put on a
 | plane without due process to be sent to a labor camp
 | based on trumped-up charges by my own government is far
 | more likely.
| transcriptase wrote:
| And if you were a Chinese citizen would you post the same
| thing about your government while living in China? Would
| the things you're referencing be covered in non-stop
| Chinese news coverage that's critical of the government?
|
| You know of these things due to the domestic free press
| holding the government accountable and being able to
| speak freely about it as you're doing here. Seeing the
| two as remotely comparable is beyond belief. You don't
| fear the U.S. government but it's fun to pretend you live
| under an authoritarian dictatorship because your concept
| of it is purely academic.
| immibis wrote:
| > In this case it's the one-party unitary authoritarian
| political system with an increasingly aggressive pursuit
| of global influence.
|
| This could describe any of the countries involved.
| observationist wrote:
| There's no question that the Chinese are doing sketchy
| things, and there's no question that US companies do it,
| too.
|
| The difference that makes it concerning and problematic
| that China is doing it is that with China, there is no
| recourse. If you are harmed by a US company, you have legal
| recourse, and this holds the companies in check,
| restraining some of the most egregious behaviors.
|
 | That's not sinophobia. Any other country whose products
 | come out of a system effectively immune from consequences
 | for bad behavior warrants heavy skepticism and scrutiny.
| Just like popup manufacturing companies and third world
| suppliers, you might get a good deal on cheap parts, but
| there's no legal accountability if anything goes wrong.
|
| If a company in the US or EU engages in bad faith, or harms
| consumers, then trade treaties and consumer protection law
| in their respective jurisdictions ensure the company will
| be held to account.
|
| This creates a degree of trust that is currently entirely
| absent from the Chinese market, because they deliberately
| and belligerently decline to participate in reciprocal
| legal accountability and mutually beneficial agreements if
| it means impinging even an inch on their superiority and
| sovereignty.
|
| China is not a good faith participant in trade deals,
| they're after enriching themselves and degrading those they
| consider adversaries. They play zero sum games at the
| expense of other players and their own citizens, so long as
| they achieve their geopolitical goals.
|
| Intellectual property, consumer and worker safety,
| environmental protection, civil liberties, and all of those
| factors that come into play with international trade
| treaties allow the US and EU to trade freely and engage in
| trustworthy and mutually good faith transactions. China
| basically says "just trust us, bro" and will occasionally
| performatively execute or imprison a bad actor in their own
| markets, but are otherwise completely beyond the reach of
| any accountability.
| ixtli wrote:
| I think the notion that people have recourse against
| giant companies, a military industrial complex, or even
| their landlords in the US is naive. I believe this to be
 | pretty clear, so I don't feel the need to stretch it into
 | a deep discussion or argument, but suffice it to say it
 | seems clear to me that everything you accuse China of
 | here can also be said of the US.
| pbhjpbhj wrote:
| >there's no question that US companies [...]
|
| You don't think Trump's backers have used profiling, say,
| to influence voters? Or that DOGE {party of the USA
| regime} has done "sketchy things" with people's data?
| drawfloat wrote:
| Your president is currently using tariffs and the threat
| of further economic damage as a weapon to push Europe in
| to dropping regulation of its tech sector. We have no
| recourse to challenge that either.
| derac wrote:
| I mean, at the end of the article they neglected to fix most of
| the issues and stopped responding.
| hnrodey wrote:
| If all of the details in this post are to be believed, the
| vendor is repugnantly negligent for anything resembling
| customer respect, security and data privacy.
|
| This company cannot be helped. They cannot be saved through
| knowledge.
|
| See ya.
| repelsteeltje wrote:
| +1
|
| Yes, _even_ when you know what you're doing, security
| incidents can happen. And in those cases, your response to a
| vulnerability matters most.
|
| The point is there are so many dumb mistakes and worrying
| design flaws that neglect and incompetence seem ample. Most
| likely they simply _don't_ grasp what they're doing.
| repelsteeltje wrote:
| > Overall simple security design flaws but it's good to see a
| company that cares to fix them, even if they didn't take
| security seriously from the start.
|
| It depends on what you mean by simple security design flaws.
| I'd rather frame it as, neglect or incompetence.
|
| That isn't the same as malice, of course, and they deserve
| credits for their relatively professional response as you
| already pointed out.
|
| But, come on, it reeks of people not understanding what they're
| doing. Not appreciating the context of a complicated device and
| delivering a high end service.
|
| If they're not up to it, they should not be doing this.
| memesarecool wrote:
| Yes I meant simple as in "amateur mistakes". From the
| mistakes (and their excitement and response to the report)
| they are clueless about security. Which of course is bad.
| Hopefully they will take security more seriously in the
| future.
| wyager wrote:
| Note that the world-model "everything Chinese is spying on you"
| actually produced a substantially more accurate prediction of
| reality than the world-model you are advocating here.
|
| As far as being "very welcoming", that's nice, but it only goes
| so far to make up for irresponsible gross incompetence. They
| made a choice to sell a product that's z-tier flaming crap, and
| they ought to be treated accordingly.
| thfuran wrote:
| What world model exactly do you think they're advocating?
| butlike wrote:
| They'll only patch it in the military model
|
| /s
| billyhoffman wrote:
| > Their response was better than 98% of other companies when it
| comes to reporting vulnerabilities. Very welcoming and most of
| all they showed interest and addressed the issues
|
| This was the opposite of a professional response:
|
| * Official communication coming from a Gmail address. (Is
| this even an employee or some random contractor?)
|
| * Asked no clarifying questions
|
| * Gave no timelines for expected fixes, no expectations on when
| the next communication should be
|
| * No discussion about process to disclose the issues publicly
|
| * Mixing unrelated business discussions within a security
| discussion. While not an outright offer of a bribe, _ANY_
| adjacent comments about creating a business relationship like a
| sponsorship is wildly inappropriate in this context.
|
| These folks are total clown shoes on the security side, and the
| efficacy of their "fix", and then their lack of communication,
| further proves that.
| plorntus wrote:
| To be honest the responses sounded copy and pasted straight
| from ChatGPT, and the interest in their non-existent YouTube
| channel seemed entirely feigned.
|
| > Overall simple security design flaws but it's good to see a
| company that cares to fix them, even if they didn't take
| security seriously from the start
|
| I don't think that should give anyone a free pass though. It
| was such a simple flaw that realistically speaking they
| shouldn't ever be trusted again. If it had been a non-obvious
| flaw that required going through lots of hoops then fair enough
| but they straight up had zero authentication. That isn't a
| 'flaw' you need an external researcher to tell you about.
|
| I personally believe companies should not be praised for
| responding to such a blatant disregard for quality, standards,
| privacy and security. No matter where they are from.
| jekwoooooe wrote:
| It's not sinophobia to point out an obvious pattern. It's like
| talking about how terrorism (the kind that will actually
| affect you) is largely an Islamic issue, and then being called
| islamophobic for it. It's okay to recognize patterns, my man.
| mensetmanusman wrote:
| Nipponophobia is low because Japan didn't successfully
| weaponize technology to make a social credit score police state
| for minority groups.
| ixtli wrote:
| they already terrorize minority groups there just fine: no
| need for technology.
| demarq wrote:
| Same here. Also once it turned out to be an android device in
| debug mode the rest of the article was less interesting. Evil
| maid stuff
| dylan604 wrote:
| > And of course the usual sinophobia (e.g. everything Chinese
| is spying on you)
|
| to assume it is not spying on you is naive at best. to address
| your sinophobia label, personally, I assume everything is
| spying on me regardless of country of origin. I assume every
| single website is spying on me. I assume every single app is
| spying on me. I assume every single device that runs an app or
| loads a website is spying on me. Sometimes that spying is done
| _for_ me, but pretty much always the person doing the spying is
| benefiting someway much greater than any benefit I receive.
| Especially the Facebook example of every website spying on me
| _for_ Facebook, yet I don't use Facebook.
| immibis wrote:
| And, importantly, the USA spying can actually have an impact
| on your life in a way that the Chinese spying can't.
|
| Suppose you live in the USA and the USA is spying on you.
| Whatever information they collect goes into a machine
| learning system and it flags you for disappearal. You get
| disappeared.
|
| Suppose you live in the USA and China is spying on you.
| Whatever information they collect goes into a machine
| learning system and it flags you for disappearal. But you're
| not in China and have no ties to China so nothing happens to
| you. This is a strictly better scenario than the first one.
|
| If you're living in China with a Chinese family, of course,
| the scenarios are reversed.
| gbraad wrote:
| Strongly suggest you not buy one, as the flex cable for the
| screen is easy to break or come loose. Mine got replaced three
| times, and my unit still has this issue; the touch screen is
| useless.
|
| https://youtube.com/shorts/1M9ui4AHXMo
|
| Note: downvote?
| add-sub-mul-div wrote:
| Sure let's start giving out participation trophies in security.
| Nothing matters anymore.
| jon_adler wrote:
| The humorous phrase "the S in IoT stands for security" can be
| applied to the wearable market too. I wonder if this rule applies
| to any market with fast release cycles, thin margins and low
| barriers to entry?
| thfuran wrote:
| It pretty much applies to every market where security
| negligence isn't an existential threat to the continued
| existence of its perpetrators.
| ixtli wrote:
| This is one of the best things ive read on here in a long time.
| Definitely one of the greatest "it runs doom" posts ever.
| jahsome wrote:
| It's always funny to me when people go to the trouble of
| editorializing a title, yet in doing so make the title even
| harder to parse.
| lxe wrote:
| That's some very amateur programming and prompting that you've
| exposed.
| wedn3sday wrote:
| I love the attempt at bribery by offering to "sponsor" their
| empty youtube channel.
| Jotalea wrote:
| Really nice post, but I want to see Bad Apple next.
| Liquix wrote:
| great writeup! i love how it goes from "they left ADB enabled,
| how could it get worse"... and then it just keeps getting worse
|
| > After sideloading the obligatory DOOM
|
| > I just sideloaded the app on a different device
|
| > I also sideloaded the store app
|
| can we please stop propagating this slimy corporate-speak?
| installing software on a device that you own is not an arcane
| practice with a unique name, it's a basic expectation and right
| efilife wrote:
| I agree. It's the same as calling a mobile OS a ROM
| jekwoooooe wrote:
| Good write up. At some point we have to just seize this
| Chinese malware-adjacent crap at the border already.
| lysace wrote:
| This is marketing.
| 44za12 wrote:
| Absolutely wild. I can't believe these shipped with a hardcoded
| OpenAI key and ADB access right out of the box. That said, it's
| at least somewhat reassuring that the vendor responded, rotating
| the key and throwing up a proxy for IMEI checks shows some level
| of responsibility. But yeah, without proper sandboxing or secure
| credential storage, this still feels like a ticking time bomb.
| lucasluitjes wrote:
| Hardcoded API keys and poorly secured backend endpoints are
| surprisingly common in mobile apps. Sort of like how common
| XSS/SQLi used to be in webapps. Decompiling an APK seems to be
| a slightly higher barrier than opening up devtools, so they get
| less attention.
|
| Since debugging hardware is an even higher threshold, I would
| expect hardware devices like this to be wildly insecure unless
| there are strong incentives for investing in security. Same as
| the "security" of the average IoT device.
| hn_throwaway_99 wrote:
| > I can't believe these shipped with a hardcoded OpenAI key and
| ADB access right out of the box.
|
| As someone with a lot of experience in the mobile app space,
| and tangentially in the IoT space, I can most definitely
| believe this, and I am not surprised in the slightest.
|
| Our industry may "move fast", but we also "break things"
| frequently and don't have nearly the engineering rigor found in
| other domains.
| sim7c00 wrote:
| earbuds that run doom. achievement unlocked? (sure adb sideload,
| but doom is doom)
|
| nice writeup thanks!
| JumpCrisscross wrote:
| A fair consumer protection imperative might be found in requiring
| system prompts and endpoints be disclosed. This is a good example
| to kick that off with, as it presents a national security issue.
| bytesandbits wrote:
| Phenomenal write up I enjoyed every bit of it
| 1oooqooq wrote:
| making fun of a company's amateur tech while posting
| screenshots of text is another level of lack of self-awareness
| p1necone wrote:
| Their email responses all show telltale signs of AI too which is
| pretty funny.
___________________________________________________________________
(page generated 2025-07-02 23:00 UTC)