[HN Gopher] An iframe from googlesyndication.com tries to access...
       ___________________________________________________________________
        
       An iframe from googlesyndication.com tries to access the Camera and
       Microphone
        
       Author : authed
       Score  : 391 points
       Date   : 2021-12-19 15:28 UTC (7 hours ago)
        
 (HTM) web link (techsparx.com)
 (TXT) w3m dump (techsparx.com)
        
       | zinekeller wrote:
        | Well, at least that requires clicking (not to diminish this
        | report), but five years ago it was a zero-click proposition (like
        | Forbes: https://www.networkworld.com/article/3021113/forbes-
        | malware-...). While I don't want to diminish their revenue, the
        | fact that blocking online ads significantly strengthens your
        | security posture is not lost on private companies and governments
        | (like US CISA:
        | https://www.cisa.gov/sites/default/files/publications/Capaci...)
        | alike.
        
       | EastSmith wrote:
       | Serious question - why do we need iframes? Can't we disable them?
        
         | jefftk wrote:
         | An iframe allows one party to securely embed something from
         | another party. Ads are one example of this, but so are embedded
         | videos, tweets, etc.
         | 
         | In this case, having the ad in a cross-origin iframe is what
         | keeps it from being able to read the content of the page, which
         | is definitely something you'd want from a privacy/security
         | perspective.
         | 
         | (Disclosure: I work on ads at Google, speaking only for myself)
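          | 
          | A minimal sketch (hypothetical page and URL, not Google's
          | actual markup) of why the cross-origin boundary matters - the
          | embedded document simply cannot reach into the embedding page:
          | 
          |   <!-- on the publisher's page -->
          |   <iframe src="https://ads.example.com/creative.html"
          |           sandbox="allow-scripts"></iframe>
          | 
          |   // inside creative.html, which is a different origin:
          |   try {
          |     window.parent.document.body;  // throws a SecurityError
          |   } catch (e) {
          |     console.log('cannot read the host page:', e.name);
          |   }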
        
       | d4mi3n wrote:
       | I have to wonder if anti-fingerprinting is the wrong approach to
        | privacy-invading advertising. There's an inherent asymmetry
       | between the resources available to those who build these systems
       | and those who try to stop them.
       | 
       | I'd love to see more stuff like CCPA. As a California resident I
       | can simply tell Google that my data is not for sale, and they're
       | obligated to respect that regardless of what fingerprinting
       | happens.
       | 
       | This isn't an ideal solution, but the whole issue of privacy
       | seems like a people/politics problem we keep trying to solve with
       | technology.
        
         | greenyoda wrote:
         | Laws take a long time to enact, especially when there are
         | companies with almost unlimited money lobbying against them.
         | Technology (such as browser updates and ad-blocking extensions)
         | can be deployed immediately and can adapt quickly to changing
         | threats.
        
       | throwaway81523 wrote:
        | The laptop camera can be disabled with a small bit of black
        | electrical tape, but I don't understand whose crazy idea it was
        | to put microphones into laptops in the first place, especially
        | without hardware kill switches like the Librems have. The same
        | goes for modern cell phones, of course.
        
         | charcircuit wrote:
         | It's not that crazy of an idea. Laptop users want to be able to
         | record videos, call people, and talk to others.
        
       | bennyp101 wrote:
       | I wonder what would happen if you visited in a browser with it
        | set to allow video/audio with no prompts? Would it bail out with
        | an "oh crap, they actually let us" or would it actually try to do
       | something?
        
       | archsurface wrote:
        | They recently changed their Meet application for some reason. I
        | use it without video, voice only; recently it has started trying
        | to force me to use both - I have to refuse both, then, once in
        | the meeting, go and enable voice only. I loathe all things Google
        | but am forced to use Meet for work.
        
         | userbinator wrote:
         | I doubt you actually work there given your last sentence, but
         | have you tried proposing an alternative (along with some
         | reasons why it would be better) the next time someone asks you
         | to use it?
        
           | Ensorceled wrote:
           | > I doubt you actually work there given your last sentence,
           | 
           | What is your logic? My company uses Meet and it's "mandated"
           | in the sense that it is used for all company meetings. I'm in
           | a position that I might get us to switch if I pushed it, but
           | Teams and Zoom aren't much better. I assure you I work at my
           | company.
        
       | alkonaut wrote:
        | I really don't care what advertisers think is necessary in order
        | to ensure targeting/fingerprinting or to counter fraud, but
        | running any third-party script isn't anything I consider remotely
        | acceptable from an ad. Worst case, I could accept that some
        | generic script from the ad network is run - but the ad network
        | piping through the advertiser's third-party script should mean
        | they are ad-blocked by the browser without even requiring a
        | plugin.
        
       | htunnicliff wrote:
        | This sounds like one small piece of a common fingerprinting
        | technique.
       | 
       | It would have been nice to see the author address that
       | possibility, but it seems fingerprinting is not mentioned.
        
         | masswerk wrote:
         | IMHO, fingerprinting would explain the enumeration attempt, but
         | not the attempt to access these devices.
        
           | kingcharles wrote:
           | Does accessing them give you extra fingerprinting data
           | though? I would imagine that you can then enumerate at least
           | the resolution of the camera.
        
             | masswerk wrote:
             | This only allows you to specify a preferred resolution, but
             | doesn't return the resolution actually available. Any
             | device info is returned in the MediaDeviceInfo object [1]
             | on enumeration (which doesn't include resolution or similar
             | data).
             | 
             | [1] https://developer.mozilla.org/en-
             | US/docs/Web/API/MediaDevice...
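              | 
              | A short sketch of what enumeration actually exposes (per
              | the MediaDevices API; labels stay empty until a permission
              | has been granted at least once):
              | 
              |   navigator.mediaDevices.enumerateDevices()
              |     .then(function (devices) {
              |       devices.forEach(function (d) {
              |         // MediaDeviceInfo: kind ("videoinput",
              |         // "audioinput", "audiooutput"), deviceId,
              |         // groupId, label - no resolution anywhere
              |         console.log(d.kind, d.deviceId, d.label);
              |       });
              |     });
              | 
              |   // Constraints only express a preference; the call
              |   // does not report what the camera can really do.
              |   navigator.mediaDevices.getUserMedia({
              |     video: { width: { ideal: 1920 } }
              |   });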
        
       | foota wrote:
       | Is this just click bait? I don't know the intricacies of Google's
       | ad serving, but is this not just someone (e.g., an ads customer)
       | slipping a request for camera and mic access into an ad script?
       | But the title seems to suggest Google is doing something
       | malicious here.
        
         | bogwog wrote:
         | > But the title seems to suggest Google is doing something
         | malicious here.
         | 
         | Aren't they? They're quite literally distributing malware.
        
         | smt88 wrote:
         | Why does it matter whether Google did it or Google spread it
         | around the world? It is equally repulsive behavior on their
         | part.
        
         | johncena33 wrote:
         | Well, it's HN. HN has become a FUD machine.
        
         | oblak wrote:
         | You're right. Google would never compromise anyone to make some
         | money. Never
        
         | kadoban wrote:
         | Why the _hell_ should a Google ads customer be able to "slip"
         | in a request for camera and mic? That that is even possible is
         | a large problem.
        
         | hyperman1 wrote:
         | Isn't Google supposed to vet whatever ads they send into the
         | world?
         | 
         | I don't know if this looks any better if Google is
         | negligent/incompetent instead of malicious.
        
           | dubbelboer wrote:
            | They don't vet anything; anyone can execute any code on
            | tpc.googlesyndication.com. See:
           | https://blog.dubbelboer.com/2016/06/10/embed-into-tpc-
           | google...
        
         | CrazyStat wrote:
         | Allowing an ads customer to "just" slip a request for camera
         | and mic access into an ad script is malicious.
        
           | ehnto wrote:
           | Negligent, I wouldn't call it malicious but I would call it
           | negligent.
        
             | CrazyStat wrote:
             | The first time google served malicious JS on behalf of a
             | customer was negligent. Maybe the second and third and
             | fourth and fiftieth times too.
             | 
             | It's not 2003 anymore, we're far past negligence at this
             | point.
        
       | lemoncookiechip wrote:
        | Quite ironic that the website posting the article actually loads
        | googlesyndication.com itself.
        
         | Tepix wrote:
         | Why, did you expect their techs to turn it off just for the one
         | article?
        
         | bibinou wrote:
         | > These messages appeared in the JavaScript console on Safari
         | while browsing multiple pages on techsparx.com. At first I saw
         | it on one page, then checked other pages and got the same
         | messages. This site is using Ezoic's advertising system, which
         | in turn uses Google Ad Manager for some advertising.
         | 
         | Yeah he noticed it... on his own website.
        
       | ok123456 wrote:
        | On modern Android devices there's a "Quick settings developer
        | tiles" option called "Sensors Off" that's available after you
        | enable developer mode.
        | 
        | After you enable that, a "Sensors Off" button will appear when
        | you pull down your notification/quick-settings menu. It disables
        | the microphone, camera, fingerprint reader, accelerometer and
        | other sensors.
        
       | streptomycin wrote:
       | Shitty ad code barfing errors onto the console is typical,
       | unfortunately. The JS is not written by Google, it's written by
       | the individual advertiser, with very limited oversight.
        
         | jasonjayr wrote:
          | Google (and Apple) control their mobile platforms and take
          | responsibility for apps served through their respective stores.
          | 
          | Why can't they do the same with their ad networks? Transpile
          | any JS code served from their network into a safe execution
          | environment/API that only exposes the resources it can permit
          | safely?
        
           | streptomycin wrote:
           | The SafeFrame stuff is an attempt to move in that direction,
           | but it is far from perfect. Hopefully that will continue to
           | improve.
        
         | kadoban wrote:
         | > "with very limited oversight"
         | 
         | I think I found the problem.
        
           | streptomycin wrote:
           | It's a problem, but it's not "the" problem.
           | 
           | The problem is that letting advertisers write their own JS
           | means advertisers are willing to pay more for the ad. If
           | Google banned that practice, or put in a lot of oversight,
           | people would pay less for ads through Google. But some other
           | ad networks would still allow the bad practices, and thus be
           | able to pay higher rates. So sites would just move more ads
           | to those other networks.
           | 
           | That doesn't absolve Google of responsibility, but it does
           | mean that we can't actually solve the problem just by being
           | mad at Google.
        
             | mattgreenrocks wrote:
             | I don't think expecting Google to take responsibility for
             | the ads they serve is unreasonable.
             | 
             | It is profoundly strange that we extol the virtues of
             | succeeding in society and yet also act like said success
             | doesn't come with heavy responsibility.
        
             | luckylion wrote:
              | The value in AdSense is absolutely not in the ability to
              | run unvetted JavaScript served from a Google-hosted domain.
              | The value is in the tracking and retargeting.
              | 
              | Serving JavaScript from a well-connected CDN isn't
              | something that sets anyone apart; you can just sign up with
              | Cloudflare and have that working in a few minutes.
        
             | jasonjayr wrote:
             | Those other networks would start getting blocked hard with
             | adblock/DNS block/public shaming of sites using them.
        
               | streptomycin wrote:
               | Ad blockers already block all ad networks. And the status
               | quo is that all ad networks have ads with shitty JS, and
               | public shaming doesn't seem to have done much about it.
        
         | Rygian wrote:
         | It is served by Google. Google has an enormous amount of
         | resources to vet the code that Google serves. They are skirting
         | their obligation of due diligence.
        
           | streptomycin wrote:
           | It's hard to change the status quo when people's livelihoods
           | are at stake. It's not just about Google, it's about all the
           | sites running Google ads, and all the companies advertising
           | through Google. If Google makes things sufficiently less
           | profitable for all those parties, they'll just move
           | elsewhere. Incremental progress is being made, like
           | SafeFrames, but it's never going to be completely solved all
           | at once.
        
             | yjftsjthsd-h wrote:
             | Oh no, how _will_ the poor advertisers survive if someone
             | tries to stop them from shipping malware?
        
               | streptomycin wrote:
               | The problem isn't that advertisers should be allowed to
               | ship malware, it's that it's hard to distinguish malware
               | from non-malware. Also the adjacent problem of "not quite
               | malware, but shitty code that spams error logs and runs
               | way slower than it should".
        
               | hetspookjee wrote:
               | Why should ads be so free to run arbitrary code? It seems
               | to me that in the end Google should be held responsible
               | for anything they serve to others. If they'd be fined for
               | this lack of oversight, perhaps they'd block javascript
               | as a whole until they do have proper oversight in place?
        
               | streptomycin wrote:
               | Because advertisers pay more for it, and publishers like
               | making money. If Google didn't offer it, someone else
               | would, and they might do even worse than Google at
               | sandboxing the code and trying to filter out abusive ads.
               | 
               | Not saying it's a great situation, just explaining why
               | there's not an easy solution.
        
               | hetspookjee wrote:
                | It's an odd argument to say that Google must do it in
                | this poor way because it is at least better than how the
                | others would do it. If Google stopped offering it, so as
                | not to waste other people's compute cycles or invade too
                | much, that would be a good choice. It seems Google would
                | rather not lead by example but instead take the profits
                | and treat any fine as a cost of doing business. If Google
                | stopped doing it, at least >30% of worldwide ads would no
                | longer run arbitrary code. A massive improvement for both
                | user privacy and the climate. The wasted cycles are not
                | free.
                | 
                | The solution is easy on Google's side: just don't do it
                | and accept the reduction in revenue. But I guess the
                | economic incentives are just too big, like you say.
        
               | voakbasda wrote:
                | Shitty code should be rejected too, if they have any kind
                | of quality standard. If it runs slow during testing, how
                | do you think users will feel when they run it on their
                | systems?
               | 
               | Code that makes your system appear infected with malware
               | is indistinguishable from actual malware.
        
               | streptomycin wrote:
                | They already try to detect and ban poorly performing
                | ads, but it is hard to do perfectly.
        
             | not2b wrote:
             | Banning malware won't make things significantly less
             | profitable. Quite the opposite: it will save their
             | business, because if they allow these practices to persist,
             | or even get worse, everyone will be forced to block ads to
             | protect themselves.
        
               | streptomycin wrote:
               | Malware is already banned. The problem is detecting it
               | with 100% accuracy, or sandboxing it with no ability to
               | break out. A lot of people are working on it, both at
               | Google and other companies.
        
       | overshadow wrote:
        | Another reason to disable JS globally whilst doing heavy surfing
        | and to only temporarily whitelist/enable it on sites you trust.
        
       | YetAnotherNick wrote:
        | The proof here is extremely weak, and the crowd has a very
        | predictable anti-Chrome response even though this has nothing to
        | do with Chrome. I think the domain is checking whether a camera
        | and mic are present, not turning them on and accessing content.
        
         | denton-scratch wrote:
         | I don't use Chrome, and I don't give a sh*t about it. What I
         | care about is ad networks, and the websites that vomit those
         | ads into the browsers of their visitors.
         | 
         | <mode style="grumpy-old-man"> I simply won't have it. At the
         | moment, with FF, an adblocker and a JS blocker, I think I'm
         | hard to track (but certainly not impossible). If my blockers
         | get blocked, I can live without the WWW. Be careful, Goo! You
         | may own the web, but the web doesn't own us.
         | 
         | As someone once said, "It's just a fad". </mode>
        
         | luckylion wrote:
         | What are you talking about? chrome is being mentioned by _two_
         | people besides yourself and neither mention is negative. Did
         | you even read the comments before commenting about their
         | predictability?
        
       | bhauer wrote:
       | Does googlesyndication.com serve anything that is in the user's
       | interest? I've had that domain blocked for several years and
       | don't think I've ever noticed it hindering any experience.
        
         | userbinator wrote:
         | It's always been in my HOSTS file too. Ditto for their
         | analytics and "tag manager" domains.
        
       | tehlike wrote:
        | This is not Google, but a third-party ad network serving ads
        | through Google.
        | 
        | Google tries to sandbox the creatives in an attempt to prevent
        | issues exactly like this, and develops browser features to
        | prevent issues exactly like this.
        | 
        | This is likely a script that somehow avoided Google's malware
        | scanning pipelines.
        | 
        | This is definitely not malice on Google's part.
        | 
        | Disclaimer: ex-Googler, worked in ads, dealt with problems like
        | this all the time.
        
         | dubbelboer wrote:
          | Not just ads; anyone can execute any code they like on
          | tpc.googlesyndication.com. See:
         | https://blog.dubbelboer.com/2016/06/10/embed-into-tpc-google...
        
         | matsemann wrote:
          | Why allow anything in an ad besides text or images? Why let
          | others run arbitrary code through your network? Inexcusable
          | by Google imo.
        
           | ma2rten wrote:
           | That would be anti-competitive by Google. These are different
           | ad networks, not advertisers. Ad networks need to be able to
           | do their own attribution and click spam detection.
        
             | tomrod wrote:
             | You missed the question.
             | 
              | Why allow anything in any advertisement, regardless of
              | network, that is not an image or text? Anything else opens
              | up security issues.
        
               | charcircuit wrote:
               | He just told you?
               | 
               | "Ad networks need to be able to do their own attribution
               | and click spam detection."
        
               | tomrod wrote:
               | This doesn't answer why the end product (the person
               | receiving the ad) is exposed to anything but images and
               | text, which was the question.
               | 
               | Step outside technical approach mode and look at it from
               | the highest level possible -- why are end users ever
               | given more than images and text, or, why are ads anything
               | more than a very simple _a href_ tag.
               | 
               | Spam detection and other protection of the ad platform
               | should be done long before this information ever goes to
               | the eyeball product owners.
        
         | eganist wrote:
         | > This is likely a script that somehow avoided google's malware
         | scanning pipelines.
         | 
         | I can't think of a good reason for scripts through google ad
         | syndication to be asking for camera and microphone permissions.
         | I'd assume Google runs these scripts in something like a lab
         | environment to see what's ultimately invoked before deploying
         | them to production? If so, would this be indicative of both a
         | deliberate controls bypass and a ToS violation by the ad
         | network?
         | 
         | Sounds like Google Syndication may have taken care of this by
         | enabling Permissions Policies(1) across its domains? I can't
         | tell because the article references Feature Policy (a
          | predecessor to Permissions Policies(2)) "_in Safari_" even
          | though Feature and Permissions Policies, as best as I
          | understand them, are delivered _from the origin_ for
          | implementation by the browser. So I'm kinda confused.
         | 
         | And if they don't implement it, it tells me they're totally
         | fine with this kind of fingerprinting.
         | 
         | (1)https://developer.mozilla.org/en-
         | US/docs/Web/HTTP/Feature_Po...
         | (2)https://www.w3.org/TR/permissions-policy-1/
         | 
         | ---
         | 
         | A prior version of this comment suggested Google should add
         | permissions policies. I since edited it to clarify that I'm
         | quite confused over whether it's something Google already
         | implemented or something Safari overlaid on top of Google
         | syndication origins since I can't verify using the origins
         | themselves. The article seems to suggest it's something Safari
         | specific even though the spec for both FP and PP involves
         | receiving a set of permissions from the origin as a header and
         | implementing them in the browser.
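          | 
          | For reference, a sketch of what such a lockdown looks like in
          | both spellings (the header names are real; the values below
          | are only an illustration):
          | 
          |   Permissions-Policy: camera=(), microphone=()
          |   Feature-Policy: camera 'none'; microphone 'none'
          | 
          | or per frame, via the allow attribute on the iframe itself:
          | 
          |   <iframe src="https://tpc.googlesyndication.com/..."
          |           allow="camera 'none'; microphone 'none'"></iframe>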
        
           | soared wrote:
           | Yes - banner ads are constantly targeted by malicious actors.
           | My employer pays a vendor something like $200k/mo for
           | creative scanning to avoid issues like this. Google certainly
           | spends tens of millions a year trying to avoid issues like
           | this.
           | 
           | See vendors like "the media trust"
        
             | VRay wrote:
             | Hey, if you want, you can give me $200k/month and I'll scan
             | your ads to make sure they're just flat fucking image files
             | without any arbitrary bullshit code
        
           | tehlike wrote:
           | "In a lab environment" -> Certainly happens, but what if the
           | script targets "specific devices" like "samsung galaxy s10"
           | which google won't be able list exhaustively?
           | 
           | What if bad actors figured out a way to identify google's
           | emulators and avoid doing bad stuff in that situation?
           | 
           | The part i said "google develops browser solutions to prevent
           | issues like this" is exactly what features policy will end up
           | doing. But google's ad systems and chrome features don't
           | always move at the same speed, but you can be sure that
           | whatever ad malware team is finding will help chrome team to
           | strengthen their defense.
        
             | d0gbread wrote:
             | I thought they meant more like sandbox environment. Why
             | would the API to access those things exist at all?
        
               | tehlike wrote:
               | Sandbox is provided through things like
               | ContentSecurityPolicy and Feature Policies. See my
               | comment on this.
        
               | jimbojet wrote:
                | The API exists because zoom.com requesting microphone and
                | camera through an iframe is a legit use case. As OP said
                | below, CSPs exist so that enterprises can lock down those
                | vulnerabilities. But it wouldn't make sense to lock it
                | down for all consumers.
        
           | tinus_hn wrote:
           | Javascript is Turing complete so it is impossible to
           | determine what the script is going to do in all environments
           | without running it in all environments.
           | 
           | The script could just detect the test environment and avoid
           | triggering its malicious behavior.
        
             | charcircuit wrote:
             | >Javascript is Turing complete so it is impossible to
             | determine what the script is going to do in all
             | environments without running it in all environments.
             | 
             | No, it just means that it is impossible to do for every
             | program. Google could just make it reject scripts it is
             | unable to handle.
        
           | jimbojet wrote:
           | Malicious script can (1) fingerprint and detect lab machines
           | and therefore not do bad thing -- this is arbitrarily easy if
           | Google test lab is the first to ever execute their script,
           | (2) over time build a graph of IPs, geos, test machine
           | characteristics that essentially allow them to avoid the
           | world's entire test lab infrastructure (there's only a fixed
           | number of test lab providers in the world since it's so
           | expensive to set up this infrastructure), (3) yes, bypassing
           | testing like this is a violation of ToS, but that is fairly
           | meaningless as entities and IPs are cheap to incorporate, (4)
           | not sure what you're saying about permission policies across
           | its domains? Google does have such policies unless I'm
           | missing something.
        
         | musikele wrote:
          | I think you're right; also, the script's URL clearly shows the
          | name of the company that owns this particular script. If you
          | know adtech companies well enough, it should be easy to spot.
          | 
          | Note: I also work in adtech, and my daily job is to maintain a
          | library that has to load inside Google's SafeFrame...
        
         | _fat_santa wrote:
         | > This is likely a script that somehow avoided google's malware
         | scanning pipelines.
         | 
          | Why wouldn't Google just block access to those APIs? I mean I
          | guess that's what this sandbox did.
        
           | tehlike wrote:
            | Some of these APIs are not overridable from JavaScript -
            | overriding is pretty much the only way this can be prevented,
            | short of browser features like CSP and Feature Policy. That
            | needs to be the first thing that happens, but there's no
            | standard API to run in your browser to do that, so things
            | have flaws.
            | 
            | Static analysis is often relatively easy to circumvent;
            | something like base64Decode(encodedMaliciousScript) can
            | bypass it.
            | 
            | Google does various runtime/dynamic analyses to figure out
            | issues, but scripts can do interesting things to circumvent
            | those too (like targeting specific devices through the user
            | agent and so on).
            | 
            | It's an arms race, often, where Google catches up pretty
            | fast, but bad actors move faster.
            | 
            | The Feature Policy check addresses these, but the ad system
            | and Chrome features don't always move at the same speed, and
            | often there are trade-offs that need to be addressed before
            | it can be widely deployed.
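            | 
            | A toy example of the kind of obfuscation meant above
            | (hypothetical, just to show why string scanning falls
            | short):
            | 
            |   // A scanner grepping for "getUserMedia" sees neither the
            |   // literal string nor a direct call; atob() rebuilds the
            |   // name at runtime.
            |   var name = atob('Z2V0VXNlck1lZGlh'); // "getUserMedia"
            |   navigator.mediaDevices[name]({ video: true });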
        
       | vklmn wrote:
        | I don't think it's Google's fault. Google sometimes trades ads on
        | auctions, meaning they issue an HTTP request to partners asking
        | "Hey, do you want to show an ad here?", the partners respond with
        | a price and HTML code, the highest bidder wins, and that HTML is
        | inserted.
        | 
        | The HTML contains JavaScript, and theoretically anything can be
        | executed within the browser (I've seen people mining bitcoin!).
        | 
        | Google can't monitor and execute every HTML snippet, but they do
        | a pretty great job sampling responses and evaluating some of
        | them. Fraudsters are smart and try to figure out whether the code
        | is executing on Google's servers, but overall they are losing.
        | 
        | It seems like a case where Google's system didn't work.
        | 
        | By the way, all Google partners are listed here:
        | https://developers.google.com/third-party-ads/adx-vendors.
        | Usually, it's possible to track down exactly who's responsible by
        | looking at the dev console.
        
         | andybak wrote:
         | If Google can't guarantee no malicious javascript then they
         | should strip all javascript.
         | 
         | If I serve any content to my users, then I'm responsible for
         | any malware it contains.
        
         | driverdan wrote:
         | > I don't think that it's google's fault
         | 
         | Of course it is. It's their ad network.
         | 
         | > Google can't monitor an execute every HTML snippet
         | 
         | Of course they can. There's no excuse for allowing this
         | nonsense on their network.
        
           | coffeefirst wrote:
            | Well, they _do_ monitor snippets. There's a lot more going on
            | here than meets the eye.
           | 
           | The problem is bad actors are really good at evading
           | detection through obfuscation and dynamically serving
           | different code depending on the IP address so the creative
           | behaves normally if it thinks you're a server Chrome instance
           | and does bad stuff for real people.
           | 
           | To make matters worse bad actors have automated their
           | process, so when they discover they're blocked everywhere,
           | they rotate to a new account, domain, change their obfuscated
           | code to look different, and are back up in a few hours. This
           | leaves everyone else playing whack-a-mole.
           | 
           | And even if Google sees through all of that, the code might
           | never actually touch Google, but come from one of the many
           | marketplaces or resellers being rendered through Google's Ad
           | Server. For any given site, the list of what markets they
           | work with is usually public. This site,
           | https://techsparx.com/ads.txt, is doing business with way too
            | many markets - 680 of which are _resellers_ of other markets'
            | inventory.
           | 
           | This means if you're a bad actor, you can evade anyone
           | capable of seeing through your obfuscation entirely, select
           | for marketplaces that have extremely poor quality control (I
           | see a few), and wind up on this website.
        
         | Malcx wrote:
          | That still is Google's fault as far as I'm concerned as an end
          | user.
        
         | IsThisYou wrote:
         | Legally it probably isn't Google's fault. But then again, I
         | don't care. I protect myself.
         | 
         | I use a /etc/hosts file[0] that blocks all of Goog's ad empire
         | and thousands more. I never see YT ads or any other ads
         | anymore. It sucks for the creators, but "safety first" as they
         | say.
         | 
         | Maybe one day Goog starts to care about serving ads that are
         | safe and not intrusive, then I may unblock them again.
         | 
         | Also, I just bought my first iPhone.
         | 
         | [0]
         | https://raw.githubusercontent.com/StevenBlack/hosts/master/h...
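          | 
          | For anyone curious, the relevant entries in such a hosts file
          | look roughly like this (the linked list carries thousands of
          | similar lines):
          | 
          |   0.0.0.0 googlesyndication.com
          |   0.0.0.0 tpc.googlesyndication.com
          |   0.0.0.0 pagead2.googlesyndication.com
          |   0.0.0.0 googleadservices.com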
        
       | tomudding wrote:
       | I think this sounds more like some sort of fingerprinting
        | attempt. It's good to see that random access to these kinds of
       | resources fails due to new(er) browser controls. However, this
       | does not mean that the fingerprinting actually failed.
       | 
       | There is probably some way to determine if the request was denied
       | automatically by the browser or manually by the user (e.g., time
       | to get "response"), which is definitely something which can be
       | used for fingerprinting.
       | 
       | Which reminds me of fingerprinting by tiny differences in the
       | audio API provided by browsers [0]. Super interesting, but also a
       | bit depressing. Also works for things like canvases and WebGL.
       | 
       | EFF allows you to check how fingerprintable your browser is [1].
       | Do note that the results may not be very accurate.
       | 
       | [0]: https://fingerprintjs.com/blog/audio-fingerprinting/
       | 
       | [1]: https://coveryourtracks.eff.org
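        | 
        | A sketch of the timing idea above (assuming getUserMedia is the
        | probe; the thresholds are made up):
        | 
        |   var t0 = performance.now();
        |   navigator.mediaDevices.getUserMedia({ video: true })
        |     .catch(function () {
        |       var ms = performance.now() - t0;
        |       // a rejection after a few ms suggests the browser or a
        |       // permissions policy denied it automatically; one after
        |       // several seconds suggests a human clicked "Block"
        |       console.log('denied after', ms.toFixed(1), 'ms');
        |     });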
        
         | adtehcmadness1 wrote:
          | As someone working on exactly this type of stuff, you're
          | absolutely right. *.safeframe.googlesyndication.com is Google's
          | implementation of the IAB's SafeFrame standard[0], which is
          | basically a cross-origin iframe with an API that's exposed to
          | the embedded 3rd-party code (the ad). This is what its HTML
         | looks like (some attributes removed for readability):
         | <iframe src="https://*.safeframe.googlesyndication.com/safefram
         | e/1-0-38/html/container.html" title="3rd party ad content"
         | sandbox="allow-forms allow-popups allow-popups-to-escape-
         | sandbox allow-same-origin allow-scripts allow-top-navigation-
         | by-user-activation" allow="attribution-reporting"></iframe>
         | 
         | As you can see, it has both sandbox[1] and allow[2] attributes.
         | The former restricts certain behaviors of the embedded code
         | (most notably, navigating the top window without user
         | activation), and the latter restricts it from accessing certain
          | APIs - this is why the author saw errors in the console.
         | 
         | The script at https://cdn.js7k.com/ix/talon-1.0.37.js is an ad
         | verification library developed by Verizon Media (formerly
          | Oath), and it does, among other things, fingerprinting for bot
          | detection purposes (because they want to prevent ad fraud). It
          | was served together with the actual ad media (the so-called
          | "creative") into the SafeFrame.
         | 
          | This is a relatively benign case. I've seen much more terrible
          | stuff, from fingerprinting for user tracking to straight-out
          | malware being served in ads. It's a wild west (or web).
         | 
         | [0]: https://www.iab.com/guidelines/safeframe/
         | 
         | [1]: https://developer.mozilla.org/en-
         | US/docs/Web/HTML/Element/if...
         | 
         | [2]: https://developer.mozilla.org/en-
         | US/docs/Web/HTML/Element/if...
        
           | wavefunction wrote:
           | >allow-popups-to-escape-sandbox
           | 
           | That setting is exactly the sort of reason I'm locked in a
           | war to block ads from Google and others. What good is an
           | escapable sandbox, other than for Google?
        
             | adtechmadness2 wrote:
              | Well, while I definitely block ads as well (when I don't
             | reverse engineer them), this directive does have a good
             | reason. It means:
             | 
             | "Allows a sandboxed document to open new windows without
             | forcing the sandboxing flags upon them".
             | 
              | If it were absent, when the user clicks the ad and it opens
              | a new tab with the advertiser's website, that tab would
              | inherit the sandbox directives from the SafeFrame, which
              | might break it. To be clear, "sandbox" in this context
              | refers to the iframe sandbox[0], not to be confused with
              | the renderer process sandbox[1].
             | 
             | [0]: https://developer.mozilla.org/en-
             | US/docs/Web/HTML/Element/if...
             | 
             | [1]: https://chromium.googlesource.com/chromium/src/+/refs/
             | heads/...
        
             | NavinF wrote:
             | The iframe sandbox is not for you or google. It's for sites
             | that want to protect themselves from ads they embed on the
             | page. You'll also see this used on proxy websites that
             | scrape your requested URL and embed the contents of that
             | page in an iframe.
        
           | dillondoyle wrote:
            | Great post.
            | 
            | That Verizon JS is surprisingly not very obfuscated, so if
            | anyone is interested or just curious to hack around, this is
            | a great one to look at!
            | 
            | It looks like they are checking notificationPermission for
            | notifications and storing it: (this.permissionStatus = "") &
            | (this.notificationPermission = "")
            | 
            | I don't see any requestPermission() in the Verizon JS. So
            | it's probably not the culprit?
            | 
            | I also don't think it would make sense for them to do it.
            | It's probably a bad-faith advertiser.
            | 
            | I'm not sure if cross-origin permission requests can be
            | blocked by the parent safe frame yet? It looks like Chrome
            | is proposing it, but I can't find any info on whether it has
            | been implemented. [1] [2]
            | 
            | -------
            | 
            | I really enjoy fingerprinting. It just feels like 'hacking'
            | in the basic sense of poking around with things, since I
            | don't know enough to do actual complicated vulnerability
            | hacking. I've built a pretty big JS file for our own ads
            | analytics & tracking.
            | 
            | The Verizon JS has most of the basic common things, but one
            | that sticks out as cool is cssSelectorCheck & cssRuleCheck:
            | they check a few selectors like div:dir(ltr), probably for
            | eastern languages, and stuff like -moz-osx-font-smoothing:
            | grayscale.
            | 
            | I also like the idea of adding HONEYPOT_TAGS; it looks like
            | they are adding a button to check for auto-click publisher
            | fraud. But man, they should have obfuscated that name....
            | 
            | One interesting idea to expand on the CSS testing they have
            | started to use a small amount, which I've played with, is
            | placing actual unique CSS features and @supports rules in
            | styles and then measuring them. Maybe use variables passed
            | to JS. Also a couple of @media sizes to see if it's lying
            | about size. You can also measure whether CSS/SVG animation
            | is paused, for viewability.
            | 
            | There are a ton of new CSS features that are implemented in
            | different browser versions, so likely high entropy. I'd also
            | love to learn paintWorklet just to know it for design; it
            | also seems like a big surface area (SVG too).
            | 
            | I'm kind of surprised they aren't doing an RTCPeerConnection
            | to try and get any IPs, and it doesn't look like they are
            | doing actual WebGL / audio prints.
            | 
            | Seeing the MIME type checks is validating to me. That's the
            | latest check I added; it's pretty fast to execute. I have
            | something like 150 different codes/MIME types to loop
            | through, lol. Verizon is more sensible in checking only a
            | couple, lmfao.
           | 
           | [1] https://docs.google.com/document/d/1iaocsSuVrU11FFzZwy7En
           | JNO... [2] https://dev.chromium.org/Home/chromium-
           | security/deprecating-...
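            | 
            | A rough sketch of the @supports-style probing mentioned
            | above, done straight from JS (the feature list is arbitrary):
            | 
            |   // each CSS.supports() answer is one bit of entropy that
            |   // varies with browser engine and version
            |   var probes = [
            |     ['display', 'grid'],
            |     ['backdrop-filter', 'blur(2px)'],
            |     ['-moz-osx-font-smoothing', 'grayscale'],
            |     ['overflow-anchor', 'auto']
            |   ];
            |   var bits = probes.map(function (p) {
            |     return CSS.supports(p[0], p[1]) ? 1 : 0;
            |   }).join('');
            |   console.log('css feature bits:', bits);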
        
         | zagrebian wrote:
         | > this does not mean that the fingerprinting actually failed
         | 
         | When does fingerprinting ever fail? Maybe in Brave and/or Tor,
         | but I wouldn't bet on it.
        
           | [deleted]
        
         | seedie wrote:
         | The result of [1] surprises me. I'm on a very standard Android
         | device with a standard Browser and nevertheless I'm told that
         | my USER AGENT and the HTTP_ACCEPT HEADERS are unique.
         | 
         | [1]: https://coveryourtracks.eff.org/
        
         | aasasd wrote:
         | IIRC FF puts the domain on the 'blocklist' after the first
         | manual choice, at least if the user selects 'block' instead of
         | one-time 'deny' (haven't seen the dialog in a while).
         | 
          | Hopefully _The Browser_ doesn't pester the user each time,
         | either.
        
         | ravenstine wrote:
         | In my experience, with tools like Cover Your Tracks (apparently
         | this is the new name for Panopticlick), the more you try and
         | thwart fingerprinting, the more unique you appear. Although I
         | still do everything I can to block and filter everything
         | conceivable, I've given up on trying to figure out how
         | identifiable I am on the web because it seems useless. If you
         | don't try then you're identifiable, and if you do then you are
         | probably more identifiable. Whatever.
        
           | fsflover wrote:
           | > the more you try and thwart fingerprinting, the more unique
           | you appear.
           | 
           | Not if you use Tor Browser.
        
             | dustymcp wrote:
             | Then i just get put on another list :)
        
               | beckingz wrote:
               | You get put on a different list each time. :)
        
           | WalterBright wrote:
           | One thing you can do is use different computers for different
           | purposes.
        
             | LVDOVICVS wrote:
              | Just the other day I created a VM of the Tails .iso on my
              | Proxmox server. It makes it much easier to fire up than
              | rebooting something from a USB.
        
           | bombcar wrote:
           | Those anti-fingerprinting tools should make you appear as the
           | most common iPhone as much as possible.
        
             | _hyn3 wrote:
             | What is the most common iPhone? Should the common iPhone
             | browser experience be scaled up to a desktop resolution, or
             | should the desktop browser limit itself to the common
             | iPhone resolution? What about mobile Safari bugs or
             | misfeatures, such as webRTC shortcomings, or CSS bugs, or
             | viewport resizing/scaling/zoom bugs?
             | 
             | There are so many possible variations that it seems like
             | preventing fingerprinting by pretending you're something
             | you're not would be an impossible task and makes you even
             | more unique, not less.
        
               | ipaddr wrote:
               | You could pretend to be something you are not but if you
               | keep giving the same info everything will be grouped
               | together.
        
             | vmception wrote:
             | the basicest of basic
             | 
             | live laugh love as the user-agent
        
           | noduerme wrote:
           | Fingerprinting with any accuracy is hard. As a legitimate use
           | case, I had a corporate client who wanted their management
           | software only accessible to sub-management employees from
           | certain on-site locations. And they wanted this without
           | sending those employees through a VPN or having a static IP
           | for each location. So what I allowed them to do was to let a
           | manager clear a given device's browser fingerprint (e.g. on
           | the computer at a certain desk, or the employee's laptop) and
           | be able to manage or revoke access for a limited number of
           | those at a time.
           | 
           | This was fairly secure because even the same employee was
           | unlikely to get the same fingerprint twice - it was only
           | occasionally more convenient than generating a random hash
            | every time they opened the browser. It became a huge pain for
            | managers to be called constantly on the weekend to remotely
            | reauthorize the devices they'd just authorized a few hours
            | ago, or when Chrome suddenly updated itself for half the
           | employees, so eventually we switched to a looser hybrid of
           | fingerprints and local storage.
        
           | codedokode wrote:
           | > the more unique you appear
           | 
           | If your fingerprint is unique and doesn't change then yes,
           | you stand out. But if your fingerprint changes on every page
           | load, then you become indistinguishable from other users.
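            | 
            | A rough sketch of that idea (not Brave's actual
            | implementation, and ignoring edge cases like tainted
            | canvases): perturb one fingerprinting surface slightly on
            | every load so the derived hash never repeats:
            | 
            |   var realToDataURL = HTMLCanvasElement.prototype.toDataURL;
            |   HTMLCanvasElement.prototype.toDataURL = function () {
            |     var ctx = this.getContext('2d');
            |     if (ctx && this.width && this.height) {
            |       var img = ctx.getImageData(0, 0, this.width,
            |                                  this.height);
            |       // flip a few low bits; invisible, but enough to
            |       // change a canvas-fingerprint hash per page load
            |       for (var i = 0; i < img.data.length; i += 4096) {
            |         img.data[i] ^= Math.round(Math.random());
            |       }
            |       ctx.putImageData(img, 0, 0);
            |     }
            |     return realToDataURL.apply(this, arguments);
            |   };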
        
           | vbezhenar wrote:
           | Mimic latest iPhone. They are very hard to fingerprint.
        
           | tomudding wrote:
           | I think this can also be a good thing, as long as you also
           | use software which makes sure you have a distinct 'unique'
           | fingerprint for each session.
           | 
           | Not that I am a huge fan of Brave, but I think they have
           | implemented something like this for certain (or all) APIs.
           | You will still have a unique fingerprint, but it should not
            | match any fingerprint you had in the past.
           | 
           | Edit: see https://brave.com/privacy-updates/3-fingerprint-
           | randomizatio...
        
           | 1vuio0pswjnm7 wrote:
           | "In my experience, with tools like Cover Your Tracks
           | (apparently this is the new name for Panopticlick), the more
           | you try and thwart fingerprinting, the more unique you
           | appear."
           | 
           | In the interest of fair balance, I have had the opposite
           | experience.
           | 
           | "I've given up..."
           | 
           | That's probably what "tech" companies are hoping you will do.
           | I see this response repeatedly on HN when the fingerprinting
           | topic comes up. I am wondering if the persons submitting
           | these replies want others to "give up".
           | 
           | Is there a difference between users wanting to appear "the
           | same" and a desire by users to stop supplying maximum amounts
           | of free data/information to "tech" companies and exacerbating
           | the problem of online advertising and associated
           | surveillance.
           | 
           | If a user sends no fingerprinting data/information, then she
           | might be "unique" because most users are sending excessive
           | amounts of fingerprinting data/information. However, IMO,
           | that is hardly a sound argument for continuing to send
           | excessive amounts of fingerprinting data/information. I
           | subscribe to the general principle of sending the least
           | amount of information possible to successfully retrieve a
           | page. This might be "unique" user behaviour, but I am
           | confident it is the correct approach. The big picture IMHO is
           | that "tech" companies, generally, are trying to collect
           | data/information about users to inform online advertising.
           | Uniquely identifying users is only a part of what they are
           | trying to do.
           | 
           | It is a bit like telling a user to use/not use an ad blocker
           | based on what other users are doing, so as to avoid being
           | "unique". This might help with avoiding "uniqueness" but
           | clearly there are gains to be had from using an ad blocker
           | that are greater than the value of trying to appear "the
           | same" as every other user.
           | 
           | Imagine users are all trying to appear exactly the same, so
           | they embark upon coordinating with each other to make the
           | exact same choices. It stands to reason that the number of
           | choices each user has to make is going to be a factor in
           | whether this is successful.
           | 
           | If every user is choosing to send large amounts of
           | data/information (e.g., using browser defaults), then every
           | user has to coordinate their choices on every single data
           | point or bit of information. The higher the number of
           | "correct" choices each user has to make, the less likely that
           | all users succeed in being uniform. There are more chances
           | for error. Whereas if we reduce the number of data points and
           | bits of information so that every user is only sending one or
           | two headers, with no Javascript, CSS, etc.,^1 then that is
           | far easier for users to coordinate.
           | 
           | 1. This has been tested heavily by yours truly for decades.
           | One does not need a graphics layer or graphical browser
           | features to make successful HTTP requests. I am not
           | interested in being "invisible", I am interested in reducing
           | the amount of free data/information I give to "tech"
           | companies. Perhaps there is a difference between wanting to
           | "blend in" and wanting to stop "feeding the beast".
           | 
           | "We do not know anything about User A. It looks like she is
           | using TOPS-20 to browse the internet."
           | 
           | Is User A less or more likely to be unique. Probably more. Is
           | User A a more or less viable target for online advertising.
           | To me, it is the second question that matters the most.
        
           | _fat_santa wrote:
            | I attack it from a different direction. All these companies
           | want to fingerprint your device and track you for really one
           | reason at the very end: showing you a targeted ad. Now what
           | happens if they can't deliver that ad (because you have an
           | adblocker installed), well all that tracking and
           | fingerprinting they just did is moot, because there's nothing
           | actionable they can do with it.
           | 
           | That's my rather naive opinion, idk am I just being naive?
        
             | caymanjim wrote:
             | I care a lot less about whether or not I see an ad than I
             | do about the shadow dossier being compiled about me based
             | on my browsing habits. So no, I don't think all the
             | fingerprinting is moot. I'd rather see untargeted
             | advertising than have my personal profile bought and sold.
        
               | judge2020 wrote:
               | Do you have ads blocked on google.com then? Because all
               | ads there are contextual, not personalized.
        
               | AJ007 wrote:
               | That is wrong. Not sure where you got that idea from.
        
               | judge2020 wrote:
               | Search 'hr platform' and you only get ads for HR
               | platforms. At no point will you see totally unrelated ads
               | for stuff you didn't search for, since those will do much
               | worse than contextual ones in the search context.
        
               | shukantpal wrote:
               | Yeah but if you search for something generic, Google will
               | infer what you are searching for based on your profile.
        
               | ghusbands wrote:
               | But the context/question is whether or not the adverts
               | are different, not whether the search results are
               | different.
        
             | kevin_thibedeau wrote:
             | The problem is all activity gets sold to data brokers who
             | build up a profile on you for future targeting.
        
         | dc3k wrote:
         | > https://fingerprintjs.com/blog/audio-fingerprinting/
         | 
         | > It is particularly useful to identify malicious visitors
         | attempting to circumvent tracking
         | 
         | Ah yes, the visitor trying to not be tracked is the malicious
         | one. Barf.
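          | 
          | For anyone curious, a minimal sketch of the kind of audio
          | fingerprinting that post describes (assuming the standard Web
          | Audio API): no microphone is involved; it renders a fixed
          | signal offline and uses tiny floating-point differences in
          | the output as an identifier.
          | 
          |   // Render a known tone through a compressor, then reduce
          |   // the samples to a number that varies per audio stack.
          |   async function audioFingerprint(): Promise<number> {
          |     const ctx = new OfflineAudioContext(1, 44100, 44100);
          |     const osc = ctx.createOscillator();
          |     osc.type = "triangle";
          |     osc.frequency.value = 10000;
          |     const comp = ctx.createDynamicsCompressor();
          |     osc.connect(comp);
          |     comp.connect(ctx.destination);
          |     osc.start(0);
          |     const rendered = await ctx.startRendering();
          |     const samples = rendered.getChannelData(0);
          |     let sum = 0;
          |     for (let i = 4500; i < 5000; i++) {
          |       sum += Math.abs(samples[i]);
          |     }
          |     return sum; // differs slightly across browsers/devices
          |   }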
        
           | kevin_thibedeau wrote:
           | The nerve of some people protecting themselves against
           | browser exploits.
        
         | [deleted]
        
       | ajsnigrutin wrote:
       | The best way to solve this would be to have a physical switch
       | that cuts power to the webcam and microphone... sadly I don't
       | know of any laptops that actually implement this.
       | 
       | At least some (e.g. Lenovo) have physical shutters to cover the
       | webcam lens, so even if the cam turns on, it records only a
       | piece of black plastic.
        
       | jessaustin wrote:
       | We have a _chance_ to notice this when it's mediated by a
       | browser. Native apps don't barf all over the console and thus
       | escape such scrutiny.
       | 
       | I first considered this when a friend told me about a brand of
       | lawnmower of which I had never heard let alone searched (mowing
       | lawns is the least interesting activity I can imagine), and one
       | minute later a podcast app had a big banner at the top by which I
       | could purchase a lawnmower of that exact brand. I don't have
       | important conversations in the vicinity of mobile phones anymore.
        
         | SahAssar wrote:
         | I've heard similar stories many times but every time it seems
         | to be baader-meinhof effect/frequency illusion or that the
         | linking is not via audio but something else. People have man-
         | in-the-middle checked advertising traffic to see if they either
         | stream audio or send spoken keywords to ad servers and they do
         | not seem to do that. It's more likely that it saw your phone
         | and your friend's at similar locations, and your friend had
         | searched for the brand before, thereby linking your ad
         | profile to the brand.
         | 
         | Do you think the apps on your phone real-time stream all mic
         | audio or that they run speech-to-text on your device?
        
           | colordrops wrote:
           | Would it have to be audio? It could do TTS on the phone then
           | send the text back.
        
             | SahAssar wrote:
             | I said "run speech-to-text on your device". Also TTS would
             | be the other way around as it means text to speech.
        
               | colordrops wrote:
               | Sorry I meant STT
        
           | eloisius wrote:
           | That's almost scarier. Seems like it could leak embarrassing
           | information about what you've been searching or buying to
           | your friends.
        
             | SahAssar wrote:
             | > it could leak embarrassing information about what you've
             | been searching or buying to your friends.
             | 
             | Yes. Say no to tracking, regardless of if it listens to
             | your voice. Even if it does not leak this way it's probable
             | that one of the major ad brokers will leak data in the
             | future.
        
         | marcellus23 wrote:
         | But mobile apps are required to ask for mic/camera permissions.
        
           | kzrdude wrote:
           | There are exploits that circumvent this, of course.
        
             | andybak wrote:
             | I don't think you'd waste an exploit like that to serve a
             | lawnmower ad.
        
             | marcellus23 wrote:
             | Sure, but those exist for the web too...
        
         | guptaneil wrote:
         | This is a common belief that phones must be listening to us
         | because ads are so targeted, but the scary truth is they
         | aren't[0] because they don't need to. They have far more
         | effective ways of targeting ads. For example, the reason you
         | probably saw the lawnmower ad is because your friend searched
         | for lawnmowers, and google knows they are friends with you, so
         | they showed you targeted ads too because you might recommend
         | that brand if you talk or you might subconsciously register
         | that brand and reaffirm their decision to buy with something
         | like "oh yeah I've heard of X, they're supposed to be the best"
         | without remembering where you saw that. It's even possible you
         | got the ad before your chat but didn't notice because you had
         | no reason to pay attention to a lawnmower ad.
         | 
         | This is why data privacy is so important even if you feel like
         | you have nothing to hide.
         | 
         | 0: as you note, they technically have the ability to do so and
         | random apps could be, but the amount of effort it would take to
         | record, transcribe, and evaluate that much data just isn't
         | worth it when most users voluntarily give their info anyway.
         | This is why I don't use a phone, browser, or email service
         | created by an ad tech company, though, and it boggles my mind
         | how many people are ok with that.
        
           | jessaustin wrote:
           | In this case, the conservative explanation seemed unlikely
           | for several reasons. This friend would be more precisely
           | described as the elderly friend of my elderly parents.
           | Neither of us are on social media, she doesn't know what
           | podcasts are, and neither of us had even used a web browser
           | in the preceding several hours. Neither of us was in the
           | market for a new riding mower nor had been in the preceding
           | decade. I was on her (remote, inconsistent-cell-reception)
           | farm to help with some cattle. I had never heard of this
           | brand before, and now over a year later I can't think of it.
           | 
           | Is it so unlikely that an ad network sketchy enough to pay
           | its way onto random Android apps would also be sketchy enough
           | to monitor conversations for keywords that can get it paid? I
           | don't think that's unlikely at all.
        
         | avsteele wrote:
         | Has anyone seen any well done research showing these effects?
        
           | goldenkey wrote:
           | I've heard it from more folks than I'd like. And counting
           | them all as crazy or paranoid is less believable than the
           | alternative.
        
             | ricardobayes wrote:
             | I've only experienced it first-hand on YouTube. After
             | trying to learn Spanish and talking Spanish on the phone
             | (but searching for nothing on YouTube in Spanish), YouTube
             | offered recommendations in Spanish. Also, after I sneezed a
             | couple of times it recommended something related to hay
             | fever. Lots more. I find these way more than circumstantial.
        
             | andybak wrote:
             | > And counting them all as crazy or paranoid
             | 
             | Nobody is saying that. It's simply a quirk of human
             | psychology. We are pattern matching machines with a poor
             | intuitive grasp of probability.
        
             | pumnikol wrote:
             | A while ago, out of the blue, I've been diagnosed with a
             | very rare medical condition. Online ads and recommended
             | videos then started showing relevant and very specific
             | drugs, treatments, therapies, self-help groups etc. for me.
             | I had not performed any online or offline research about
             | the topic, had not talked to anyone about it at all, except
             | the MD, and had bought all drugs with cash. What are the
             | odds? Then at work, in a web meeting, I made a completely
             | spontaneous pun about having a divorce. Minutes later, my
             | social media were completely plastered with ads for divorce
             | lawyers, which they had never been before. I've never been
             | married, don't want to be, and I don't even know anyone who
             | has gotten a divorce in the last five years. And more, and
             | more. If social media companies are selling health and
             | other personal data to advertisers, what's stopping them
             | from selling it to insurers and recruiters? Maybe I am
             | all wrong about this, and maybe many others are also. Or
             | maybe we'll start seeing a big wave of whistleblowing and
             | revelation books once the current crop of software
             | engineers, managers and directors retire and/or are
             | sufficiently strapped for cash.
        
           | keanebean86 wrote:
           | Not a study, but I've heard explanations that big companies
           | have so much data their models are just that good. For
           | instance, they know who you're friends with, both your search
           | histories, and location data. Knowing your friend searched
           | for lawnmowers recently and that y'all are now physically
           | close, it's possible that brand was discussed.
           | 
           | Although I wouldn't be shocked at all if mics were being
           | used. I just feel like that would have been leaked by someone
           | by now.
        
             | clairity wrote:
             | no, they have so much data because their models are so bad.
             | the vast data is used to give the illusion of good models
             | by sheer volume, so little random matches are made more
             | likely. that's why google and the like want to hoover up
             | our data, they're desperate to keep the gravy train rolling
             | long enough to create the good models and keep it going
             | some more.
        
       | 1cvmask wrote:
       | Injecting malware into ads is as old as ad networks. There are
       | even ad networks that hijack the ads of other networks and
       | replace the original ads with their own.
       | 
       | There are also many different types of clickjacking:
       | 
       | https://en.m.wikipedia.org/wiki/Clickjacking
        
       | jefftk wrote:
       | The author is concerned that an ad might be able to
       | surreptitiously turn on the camera or microphone, but these are
       | not accessible by default. In this case, it isn't even getting as
       | far as a permissions prompt because the default Feature Policy
       | doesn't allow camera or mic access in cross-origin iframes. (Ex,
       | for Chrome:
       | https://sites.google.com/a/chromium.org/dev/Home/chromium-se...)
       | Instead, I think the most likely thing happening here is that an
       | advertiser is running a script that is trying to do
       | fingerprinting, and which is blocked by the browser protection.
       | 
       | (That it's an iframe running on
       | https://[random].safeframe.googlesyndication.com tells us it's an
       | ad served through Google Ad Manager, and the contents of the
       | iframe are supplied by the advertiser.)
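       | 
       | For concreteness, a minimal sketch (assuming standard Permissions
       | Policy behaviour and a hypothetical ad.example frame) of what
       | delegation would involve; without the explicit allow attribute,
       | the call fails before any prompt is shown:
       | 
       |   // The embedding page would have to delegate explicitly, e.g.:
       |   //   <iframe src="https://ad.example/frame.html"
       |   //           allow="camera; microphone"></iframe>
       |   // Without that, script in a cross-origin frame can't even
       |   // trigger a prompt; getUserMedia() simply rejects:
       |   async function probe(): Promise<void> {
       |     try {
       |       const stream = await navigator.mediaDevices.getUserMedia({
       |         audio: true,
       |         video: true,
       |       });
       |       stream.getTracks().forEach((t) => t.stop()); // release
       |     } catch (e) {
       |       // Typically a NotAllowedError, plus a console message
       |       // like the one the article describes.
       |       console.log("blocked:", (e as Error).name);
       |     }
       |   }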
       | 
       | Disclosure: I work for Google, speaking only for myself
        
         | [deleted]
        
         | IsThisYou wrote:
         | Is it a good idea for Google to allow random advertisers to use
         | privacy-sensitive code (like audio access or fingerprinting) in
         | advertiser-supplied iframe content?
         | 
         | People may be visiting a trusted site and then are asked to
         | allow audio and video, not realising that it is an iframe
         | asking for the permission.
        
         | [deleted]
        
         | reaperducer wrote:
         | _The author is concerned that an ad might be able to
         | surreptitiously turn on the camera or microphone_
         | 
         | You are correct, that is the author's concern.
         | 
         | The reason the rest of us are concerned is because the general
         | public has been conditioned by Google and others to just press
         | "Accept" any prompt that pops up, no matter how dangerous.
        
           | jefftk wrote:
           | In this case there isn't a prompt: access to the camera and
           | microphone is disallowed by default in cross-origin iframes.
           | The site would have to specifically delegate permission to
           | the ad before it could even trigger a prompt.
        
           | denton-scratch wrote:
           | > just press "Accept" any prompt that pops up
           | 
           | I auto-press [Accept]. I use an ad-blocker. I reject 3rd-
           | party cookies. I disable JS by default, and re-enable it
           | selectively for sites that refuse to work without JS. If that
           | re-enablement involves more than a few clicks, I'll close the
           | site - there are other fish in the sea.
           | 
           | What am I doing wrong?
        
           | ethbr0 wrote:
           | This is a catch-22 though.
           | 
           | If something dangerous to privacy is being widely used in the
           | world, then putting it behind a prompt creates an avalanche
           | of prompts, and results in user apathy.
           | 
           | But not prompting requires you to choose a default, which
           | either defaults to block and breaks things (if the feature
           | was actually required) or defaults to allow.
        
             | Hallucinaut wrote:
             | I would happily set all browsers to always deny all ads and
             | untrusted domains the ability to use microphone and camera
             | at all times. So there's no need for an avalanche of
             | alerts, it just shouldn't be permissible for a resource
             | from an untrusted source.
             | 
             | It may be widely used, but for a highly concentrated set of
             | sites. I can't think of an occasion I've used it beyond
             | Google, Microsoft, and Zoom properties. Perhaps Slack and
             | Discord too? So there must be a better way.
        
               | noahtallen wrote:
               | > deny... untrusted domains the ability to use microphone
               | and camera at all times
               | 
               | I believe this is the default for most browsers now,
               | right? (Or have I been spoiled by Firefox?) This is the
               | best way, where the camera is always inaccessible, unless
               | you enable it for a domain which needs it. Since I rarely
               | ever see the camera/mic request option, I don't think
               | we've been conditioned to allow this type of request.
               | (Compared to cookies, which show up on nearly every
               | site.)
               | 
               | If you want to disable permissions dialogues altogether,
               | then what's a trusted domain? Just zoom and a handful of
               | others? If you write yourself a nice app which uses the
               | mic, do you have to email Google to get yourself added to
               | the list of trusted domains? That would be pretty bad for
               | the open web, so permissions dialogues are the
               | alternative.
        
             | smichel17 wrote:
             | The solution to this is simple, but not easy: block
             | everything by default and do not prompt to enable it (but
             | do show an indicator of what has been blocked).
             | 
             | This is how Firefox tracker protection, uMatrix, noscript,
             | and a plethora of ad blockers and other privacy tools work.
        
           | jeffbee wrote:
           | What? In what way have you been conditioned to accept
           | prompts?
        
             | fault1 wrote:
             | I think there is most definitely conditioning going on. I
             | watched my fiance click on one of the "accept all cookies"
             | GDPR-prompts (it's become an antipattern) a few days ago.
             | She almost automatically did it without thinking. I went to
             | the same site on my laptop and if you clicked decline, it
             | immediately brought up modal dialogs that made that site
             | unusable. I can see why 99% of people would be conditioned
             | to just hit yes.
        
               | jeffbee wrote:
               | And you blame Google for that, instead of some brain-dead
               | Eurocrat?
        
               | mrtranscendence wrote:
               | Who said anything about blaming google for users clicking
               | "ok" without thinking about it?
        
               | jeffbee wrote:
               | The person to whom I replied!
        
               | theWreckluse wrote:
               | To be fair, he said Google and others. I still don't know
               | how much Google is responsible though.
        
               | hetspookjee wrote:
               | Google goes way further and doesn't even serve a prompt.
               | Instead they show you an opt-out plugin that you can
               | download to run in your browser. The most blatant
               | disregard of the rules, and "we're better than the rest"
               | behaviour, if you ask me.
        
               | denton-scratch wrote:
               | Which Eurocrat required stupid popups? And since when did
               | US corporations kow-tow to EU bureaucrats? This is stupid
               | corporations, hiding behind GDPR to plant cookies.
        
         | veeti wrote:
         | Curiously, there is no explanation of why this sort of malicious
         | behavior is accepted by "Google Ad Manager" in the first place.
         | 
         | If you haven't already installed:
         | https://addons.mozilla.org/en-US/firefox/addon/ublock-origin...
         | https://chrome.google.com/webstore/detail/ublock-origin/cjpa...
        
           | jefftk wrote:
           | I'm not sure it _is_ allowed; that's not a part of the
           | business I know much about. Since ads can run arbitrary JS
           | it's hard to enforce policy programmatically.
           | 
           | On the other hand, it's not clear to me that whatever this
           | advertiser is trying to do is having any real effect, aside
           | from causing a console message that it is being blocked.
           | Access to the mic and camera from cross-origin iframes is
           | blocked by default, and you can't even trigger a permissions
           | prompt.
           | 
           | As for installing an ad blocker, even with all the messiness
           | of advertising, I still prefer it to paywalls.
        
             | ineptech wrote:
             | > Since ads can run arbitrary JS it's hard to enforce
             | policy programmatically.
             | 
             | Letting ads run arbitrary JS _is_ the policy, right? It's
             | not like that's a requirement to make the internet work,
             | that's just a Google policy that trades money for user
             | experience.
        
               | samhw wrote:
               | > Letting ads run arbitrary JS is the policy, right?
               | 
               | I mean, anything on the Web can run arbitrary JS. The
               | entire point is that it's a sandbox environment where
               | arbitrary JS can't do any harm (excluding cases where
               | vulnerabilities are found).
               | 
               | If you're not comfortable with arbitrary JS running on
               | your computer, you'd have to either (a) not use the Web,
               | (b) disable JavaScript, or (c) only visit sites which you
               | have vetted and deem to be trustworthy. None of those are
               | particularly practicable.
               | 
               | Most of us operate on the generally-reasonable assumption
               | that the sandbox is effective, and therefore that we're
               | OK to [click on that random link from HN like you did
               | just now / open that news site which pulls in a bunch of
               | tracking scripts / etc].
               | 
               | Either way, this is not somehow a problem that's specific
               | to Google Ad Manager in any way at all. I don't know what
               | else you could really expect of them.
        
               | smolder wrote:
               | > (a) not use the Web, (b) disable JavaScript, or (c)
               | only visit sites which you have vetted and deem to be
               | trustworthy.
               | 
               | > I don't know what else you could really expect of them.
               | 
               | Your option C is basically how it must work, and mostly
               | does. To be safe online we go to sites we trust. When
               | Google delivers malicious JS through ads, the site
               | operator probably doesn't know Google has harmed the user
               | on their behalf, so their trustworthiness becomes moot.
               | Is there some "safe ads" codeless option for site
               | operators who want to protect their users while still
               | showing ads? Has Google made site operators aware they
               | occasionally deliver malicious JS to users?
        
               | jefftk wrote:
               | Ads are allowed to run arbitrary JS, but that doesn't
               | mean they are allowed to do arbitrary things by policy.
               | That is, the technical restrictions are not able to be as
               | strict as the policy.
               | 
               | A bunch of us were working on a project where ads would
               | be fully declarative, and so no longer able to run
               | arbitrary JavaScript, but this received very little
               | interest outside of Google (advertisers didn't want to
               | move to a new format, publishers didn't care) and we
               | moved on.
               | 
               | (Still speaking only for myself)
        
               | barneygale wrote:
               | I do wish google engineers would do something positive
               | for society and switch to a career in subsistence
               | farming. No one needs ads. Not arbitrary JS ads, not
               | declarative ads, not personalised ads, not any ads.
        
               | jefftk wrote:
               | But what do you see as the alternative for funding sites?
               | The site we're on is funded by (declarative, non-
               | personalized, non-obtrusive) ads. I would rather have ads
               | than paywalls.
        
               | denton-scratch wrote:
               | Once upon a time, children, people made websites with no
               | ads, and no paywalls. Sure, there weren't as many
               | websites; and sure, they didn't have as many sliding
               | panels and other gimmicks.
               | 
               | And for sure, I appreciate being able to buy stuff
               | online. But I seem to be able to do that without viewing
               | ads! Amazing!
        
               | luckylion wrote:
               | "Made for Adsense" sites have no value, if they get shut
               | down, nothing is lost. Normal sites that switched to
               | Adsense have become worse, because now they need clicks
               | and engagement above all else, incentivizing click-bait
               | and low-effort content. Nothing would be lost if they
               | switched to subscription models and provided valuable
               | content. There's little in between in my opinion, it's
               | either "site doesn't use Adsense, it's a company site /
               | personal blog / some institution", "site was made only to
               | make money by adsense with the lowest price content
               | possible, had a bunch of links bought and now provides a
               | passive income to some SEO" and "site used to provide
               | quality content, loses readership to spam-sites because
               | Google has fucked up their sorting algos and now switches
               | to low-value content as well, because anything else can't
               | be financed".
               | 
               | Twitch shows that people are very willing to pay for
               | content. I'd very much be happy to if that removed all of
               | the spam.
        
               | ineptech wrote:
               | > I would rather have ads than paywalls.
               | 
               | No offense but I've heard people who work at ad companies
               | repeat this like a mantra, and it's a false dichotomy,
               | akin to a coal company who dumps slag in rivers saying,
               | "Well we think it's better than letting everyone freeze
               | to death." We're not asking Google to stop advertising
               | altogether and close up shop, just to make the internet
               | ad ecosystem a little less awful and Orwellian.
        
               | jefftk wrote:
               | _> We're not asking Google to stop advertising
               | altogether_
               | 
               | I think my parent was: "No one needs ads. Not arbitrary
               | JS ads, not declarative ads, not personalised ads, not
               | any ads."
        
               | ineptech wrote:
               | Fair enough, though the suggestion that Google engineers
               | take up farming was probably not very serious. However,
               | the idea that we have a choice between no ads or the most
               | pernicious ads imaginable is certainly a false one.
        
               | jefftk wrote:
               | _> the idea that we have a choice between no ads or the
               | most pernicious ads imaginable is certainly a false one_
               | 
               | I agree with you. I spent a large part of 2018-2019
               | trying to make ads declarative, and am now working
               | (among other things) on increasing the isolation of
               | conventional ads [1] and implementing cross-site
               | advertising without cross-site identity leakage [2].
               | 
               | But it sounds like you and my parent have very different
               | views: there's a lot of space between "ads should be a
               | lot better" and "ads should not exist".
               | 
               | [1] https://github.com/WICG/webpackage/issues/624
               | 
               | [2]
               | https://github.com/WICG/turtledove/blob/main/FLEDGE.md
        
               | Ansil849 wrote:
               | If a website cannot survive without ads, maybe it doesn't
               | need to actually exist in the first place. The world will
               | go on.
        
               | jefftk wrote:
               | I do think the world would go on, but it's a world I
               | would like less. Some things would move behind paywalls,
               | others would move to the boundary of whatever was
               | considered advertising (sponsored content? Product
               | placement?)
        
               | Ansil849 wrote:
               | I think of it as a bit of social selection for websites.
               | If a website has content people value, they find ways to
               | support the website. If not, well, maybe the site not
               | existing is not a terrible loss.
        
               | shukantpal wrote:
               | The world can move on without having the site shut down,
               | then. Don't visit those sites and pretend they don't
               | exist.
        
               | _hyn3 wrote:
               | Why is it up to the advertisers and publishers?
        
               | fault1 wrote:
               | how do you think google, an adtech company, makes money?
        
               | jsnell wrote:
               | If they tried to unilaterally make such a change and ban
               | the old formats, both the advertisers and publishers
               | would complain to competition regulators around the
               | world. And whether you think there would be any merit to
               | those complaints or not, the outcome would still be
               | another round of lawsuits with billions on the line.
        
               | denton-scratch wrote:
               | > with billions on the line
               | 
               | Not my money. Why should I care about lawsuits between
               | advertising networks and their advertisers, regulators
               | and so on?
        
               | Tsiklon wrote:
               | I suppose they're the paying customers.
        
             | williamtwild wrote:
             | How about the company you work for disallow arbitrary JS in
             | the ads they serve? We already know the answer though.
             | Bottom line over doing what is right.
        
               | jefftk wrote:
               | As I wrote above, this was something we tried:
               | https://news.ycombinator.com/item?id=29615583
               | 
               | This was the main thing I worked on in 2018-2019, along
               | with many other engineers. I wrote about it some in
               | https://www.jefftk.com/p/value-of-working-in-ads If this
               | is something that users were demanding I could see
               | picking it up again, but as far as we could tell at the
               | time there was minimal interest externally.
        
             | oauea wrote:
             | > Since ads can run arbitrary JS
             | 
             | What a fantastic idea to create a platform where anyone can
             | pay to have code run on millions of end-user machines,
             | embedded in random websites. What could possibly go wrong?
        
             | denton-scratch wrote:
             | > Since ads can run arbitrary JS it's hard to enforce
             | policy programmatically.
             | 
             | It's not that hard, at least not at my end. I just don't
             | run ads.
             | 
             | Why on earth does goo think that running arbitrary JS on
             | their visitors' computers is OK? I mean, I know this is the
             | policy, and I assume the policy of other ad networks is at
             | least as "liberal". So I'm sorry, chaps, but no ads run on
             | this screen.
             | 
             | I wonder if this is a race to the bottom? I've noticed that
             | TV ads these days are all for animal charities, equity
             | release schemes, and incontinence pads. I don't know what
             | they're running in web ads, but I'm pretty sure that the TV
             | ads are so dire because everyone but old fogeys and poor
             | people skip the ads. I'd assume the same old/poor people
             | are the ones that have to see web ads.
             | 
             | Advertising to poor people has traditionally been a pretty
             | bad pitch. So why isn't the online/TV ad industry
             | crumbling? And how do I bet against their shares?
        
               | javajosh wrote:
               | _> Advertising to poor people has traditionally been a
               | pretty bad pitch_
               | 
               | Is that true? It seems like poor people spend, in
               | aggregate, more than rich people _and_ they tend to buy
               | the cheaper, more mass-produced stuff. The grocery
               | business alone _must_ be built on the commerce of poor
               | people, right?
        
             | hetspookjee wrote:
             | Haha, easy way to create plausible deniability if you just
             | allow everything to run :D. Doesn't that also make it easy
             | for anyone running an ad to run hivemind in the ad itself?
             | 
             | In addition, why is it hard to enforce a policy that
             | disallows any ad from reaching out to the camera and
             | microphone? I don't understand why that is hard to enforce.
        
               | jefftk wrote:
               | _> Doesn't that also make it easy for anyone running an
               | ad to run hivemind in the ad itself?_
               | 
               | Are you talking about crypto mining? That's a good
               | example of something which is against policy but
               | difficult to fully prevent technically. The core problem
               | from someone trying to exploit this, however, is that
               | crypto mining in the browser is minimally profitable, so
               | you need to do a huge amount before seeing noticeable
               | returns. The more you try to do the more likely you are
               | to get caught, so while it does take some scrutiny from
               | publishers and ad networks, it doesn't take very much.
               | 
               | (still speaking only for myself; this isn't something I
               | know very much about)
        
               | hetspookjee wrote:
               | Yes, I am indeed talking about crypto mining. I am also
               | talking mostly about the aspect of wasting compute cycles
               | and thus harming the environment through Google's means.
               | Yes, it is minimally profitable, but that it is possible
               | is weird if you ask me.
        
               | jefftk wrote:
               | Sorry, what I meant was that it is minimally profitable
               | at the best of times. Which means it doesn't take very
               | much enforcement to shift it over to being negatively
               | profitable, because it costs you more in time,
               | engineering, etc. than you would make. And once it is a
               | money losing proposition, the people trying to exploit
               | users are no longer interested.
        
               | mschuster91 wrote:
               | Because ads can use JavaScript, which is notorious for how
               | hard it is to vet code and how easy it is to hide
               | functionality.
        
               | hetspookjee wrote:
               | But why do they need that? What purpose does an ad with
               | javascript serve?
        
               | mschuster91 wrote:
               | Animations, tracking and fraud detection are the big
               | areas. Ads were historically one of the strongest drivers
               | of Flash, given how easy it made it for creatives to
               | implement animations without needing a frontend developer
               | - ad buyers these days would instantly protest against
               | any attempt to remove any of the three use cases.
               | 
               | And given that we are talking about sometimes eight
               | figures worth of ad buying... no network will want to
               | risk offending such clients.
        
               | hetspookjee wrote:
               | I still don't understand it. I was under the impression
               | that animations are also able to be served through HTML5?
               | Both CSS and HTML should be sufficient. We're in the age
               | where we're able to serve an entire sqlite database over
               | a static website, yet an animation does require
               | javascript?
        
               | danachow wrote:
               | > We're in the age where we're able to serve an entire
               | sqlite database over a static website
               | 
               | Making use of the SQLite database is entirely client side
               | and requires JavaScript or WASM (the distinction is
               | unimportant) - it requires running code on the frontend.
               | This is not a great way to state your case.
        
               | jefftk wrote:
               | Animations can be CSS, yes. That's what you do in
               | https://amp.dev/documentation/guides-and-
               | tutorials/learn/a4a... (declarative ad format, no
               | advertiser-written JS)
        
       | Nextgrid wrote:
       | I wonder if it's a fingerprinting attempt gone wrong? I can't see
       | a reasonable, even malicious, reason for eavesdropping on
       | mic/camera at scale like that: you'll be capturing a ton of data
       | you need to manually process/clean up, which takes
       | time/resources; most people won't stay on the page long enough
       | to capture enough sensitive info; and even then, eavesdropped
       | conversations seem pretty useless unless you also have the whole
       | context and information on the person you're targeting to be
       | able to effectively misuse that data.
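       | 
       | If it is fingerprinting, one plausible shape for it (a sketch
       | under that assumption, not a claim about what this particular ad
       | does) is simple device enumeration, which records nothing and
       | needs no prompt:
       | 
       |   // Device kinds/counts add a few bits to a fingerprint; labels
       |   // and IDs stay hidden (or the list may be stripped entirely)
       |   // until the user actually grants camera/mic permission.
       |   async function deviceBits(): Promise<string> {
       |     const devs = await navigator.mediaDevices.enumerateDevices();
       |     return devs
       |       .map((d) => `${d.kind}:${d.deviceId !== ""}`)
       |       .join("|"); // e.g. "audioinput:true|videoinput:true"
       |   }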
        
         | drclau wrote:
         | > eavesdropped conversations seem pretty useless unless you
         | also have the whole context and information on the person
         | you're targeting to be able to effectively misuse that data
         | 
         | Keywords (i.e. "perfume", "car", "phone", "notebook",
         | "flowers", whatever) would probably be enough to "improve"
         | targeted ads.
         | 
         | However, this would be a huge scandal if true, so your first
         | suggestion, fingerprinting gone wrong, is more likely.
        
       | akersten wrote:
         | This is Exhibit A for why no one will ever convince me to turn
         | off my ad blocker or switch away from Firefox. It's a great
         | feeling to just not have to worry about this entire class of
         | exploits.
        
         | underscore_ku wrote:
         | also tape your laptop's camera
        
           | tmsbrg wrote:
           | That doesn't protect your microphone from being exposed
           | though.
        
             | 5e92cb50239222b wrote:
             | Disable it in system settings, browser settings, and
             | prevent access through something like firejail. It is not
             | as reliable as a hardware kill switch, but it puts up a
             | lot of barriers to overcome.
        
             | reaperducer wrote:
             | _That doesn't protect your microphone from being exposed
             | though._
             | 
             | And ripping out your microphone doesn't stop evildoers from
             | viewing the camera. What's your point?
        
             | newbamboo wrote:
             | Don't talk to your computer. When you do talk, talk about
             | stuff that you want them to look into.
        
             | beeboop wrote:
             | this is why we need hardware switches for microphones
        
               | fsflover wrote:
               | https://puri.sm/products/librem-14 - this device has the
               | kill switch.
        
               | michael-ax wrote:
               | fwiw, there's a switch in every ext. mic. jack.
               | 
               | plug in an un-wired connector, cut off the wiring post,
               | smooth it with a nail file or put on a crowning drop of
               | glue so it won't rip your bag, and you're done.
        
               | deadbunny wrote:
               | A software switch. You can test by plugging in your
               | unwired plug, then going to any program and selecting
               | the internal mic; it'll work fine.
        
               | myself248 wrote:
               | Often these are software, though. Yes there's a switch,
               | but it just tells the audio subsystem to automatically
               | select the plugged-in mic. You can still tell it to
               | select the screen-frame mic instead.
        
             | kadoban wrote:
             | An icepick/paperclip does.
        
               | criddell wrote:
               | You should also destroy any speakers.
               | 
               | https://arxiv.org/ftp/arxiv/papers/1611/1611.07350.pdf
        
               | kadoban wrote:
               | Oof, that's unfortunate. Thanks for the tip.
        
               | hutzlibu wrote:
               | But that hack requires deep OS access and is not
               | trivial; still, for those with a nation-state agency
               | behind them, it is good to keep in mind.
        
               | russh wrote:
               | Yes, but travel is expensive and time consuming. Not to
               | mention the effort it would take to track down the
               | responsible party.
        
             | aasasd wrote:
             | Well tape the microphone too. With thick soft tape.
        
               | bogwog wrote:
               | Or open up your laptop and (carefully) destroy it.
               | 
               | I can't remember the last time I used the built-in mic on
               | a laptop, much less the last time I bought a laptop with
               | a mic that was actually worth using.
        
               | denton-scratch wrote:
               | > Or open up your laptop and (carefully) destroy it.
               | 
               | OK, I destroyed it (carefully). Now it won't boot. What
               | do I do next?
        
               | bogwog wrote:
               | Get a refund from the guy who sold you that janky ass
               | laptop that won't boot without a microphone.
        
               | denton-scratch wrote:
               | OIC. Well, I destroyed the laptop, not the microphone,
               | because that's what OP said to do. I guess I didn't read
               | the directions carefully enough. /s
        
           | ratww wrote:
           | My company is now giving us those to use in our company
           | laptops:
           | 
           | https://m.media-
           | amazon.com/images/I/61l+gnZORVL._AC_SY355_.j...
           | 
           | They're pretty convenient and look nice.
        
             | zulln wrote:
             | Those are too thick for my laptop, so I use these adhesive
             | stickers instead: https://supporters.eff.org/shop/laptop-
             | camera-cover-set-ii
        
             | jspash wrote:
             | Be careful with those on laptops. I had a 2016 MacBook Pro
             | and put one of those on it. About a month later I had a
             | nice big crack in the display straight down the middle of
             | the screen.
             | 
             | I'm not 100% certain that was the cause. But the guy at the
             | Apple store seemed to think it was, which meant they
             | wouldn't pay for it. And a quick google shows others who
             | are convinced.
             | 
             | The bezel on that laptop was really tiny and I can easily
             | see it might have contributed to the crack. I now have a
             | 2021 model and the bezel is much thicker. But I'm not
             | taking the chance.
        
             | vultour wrote:
             | Our HP laptops have that built in
        
               | [deleted]
        
           | jdavis703 wrote:
           | My laptop is mostly closed and plugged in to an external
           | monitor. The mic is so muffled by then that it would take
           | expert audio recovery to understand what was being said. (In
           | other words, it would be hard to scale it for ad purposes, but
           | obviously a targeted attack could still be devastating).
        
         | iszomer wrote:
         | Or use a computer without a mic/webcam permanently embedded or
         | attached. I'm glad I'm constantly reminded that not having such
         | peripherals can be a good thing.
        
       ___________________________________________________________________
       (page generated 2021-12-19 23:00 UTC)