[HN Gopher] Google, Meta, others will have to explain algorithms...
___________________________________________________________________
Google, Meta, others will have to explain algorithms under new EU
legislation
Author : niklasmtj
Score : 255 points
Date : 2022-04-23 09:56 UTC (13 hours ago)
(HTM) web link (www.theverge.com)
(TXT) w3m dump (www.theverge.com)
| jasfi wrote:
| If you read the article, they're asking for more than
| explaining algorithms. Overall, they want the tech providers to
| be responsible.
|
| Explaining algorithms could, in theory, give away a competitive
| advantage. However, fairness to users seems to be a priority in
| this decision.
| omegalulw wrote:
| I would love to first see a technical definition of fairness
| from the EU that can be used to evaluate algorithms. That is a
| non-trivial detail often overlooked in these discussions.
| rtsil wrote:
| Not technical, but fairness is the opposite of "our algorithm
| is so complicated that we can't prevent it from penalizing
| you even if you are not at fault. Unless you reach the top of
| HN, in which case we will manually intervene to fix things."
| pwdisswordfish9 wrote:
| > Explaining algorithms could, in theory, give away a
| competitive advantage.
|
| Which is good. We could use some more competition on the
| market.
| otherotherchris wrote:
| Things like "fairness" aren't defined in the legislation and
| will be determined in smoke-filled rooms by shadowy moneyed
| interests.
|
| Ordinary users will get censored. By the courts, by unelected
| regulators, and by Big Tech AI zealously nuking content to
| avoid arbitrary fines. It's content ID on steroids.
| jasfi wrote:
| I agree that it could get out of hand. We'll have to wait and
| see how it turns out. Since this is an EU law I wonder if it
| applies to content hosted on EU servers only, or any content
| that shows up in their users' results.
| otherotherchris wrote:
| Platforms are responsible for everything shown to a user
| inside the EU.
|
| I suspect that Google and Facebook will not offer country
| specific blocklists like they do for Nazi content in
| Germany. If Hungary bans LGBTQIA content, it'll disappear
| in France. Europe can then have an argument about how they
| "really really not really" believe in free speech.
| anothernewdude wrote:
| _If_ they do business in the EU. Otherwise this is entirely
| without teeth.
| barrucadu wrote:
| I mean, yes? That seems obvious?
|
| EU law applies to companies which operate in the EU.
| jasfi wrote:
| I am worried about the term "disinformation", since that can
| be really subjective. On the other hand, anti-vax content is
| harmful, to me, so there's no easy answer.
| [deleted]
| ethbr0 wrote:
| >> _" Large online platforms like Facebook will have to make
| the working of their recommender algorithms (e.g. used for
| sorting content on the News Feed or suggesting TV shows on
| Netflix) transparent to users. Users should also be offered a
| recommender system "not based on profiling.""_
|
| Both of those seem like good ideas and progress. The non-
| profiled recommender system option especially!
|
| It's also really bothered me that tech companies of sufficient
| size can discriminate against legally-protected classes because
| "algorithms are complicated" and government regulators haven't
| pushed.
|
| I'm not a fan of regulating design or use, but I'm a huge
| proponent of requiring transparency and detail on demand.
|
| We'll see how willing the EU is to levy fines for breaches.
|
| It's no doubt a consequence of most huge tech companies being
| American, but it's been refreshing to see the repeated "We have
| a law; You clearly broke it; Here's your fine" follow-through
| thus far from EU enforcement.
| dmitriid wrote:
| > We'll see how willing the EU is to levy fines for breaches.
|
| It has been very slow with GDPR, I expect it to be even
| slower here.
| ohgodplsno wrote:
| Google is rolling out one-click cookie rejection as a
| result of gigantic fines threatened by the French CNIL.
| Having already been slapped with 90M and 60M, seems like
| there's not much of a need for fines. They know Europe
| isn't playing around.
| ethbr0 wrote:
| https://www.privacyaffairs.com/gdpr-fines/
| pmoriarty wrote:
| _" Explaining algorithms could, in theory, give away a
| competitive advantage."_
|
| Why should anyone care if they have a competitive advantage?
|
| If anything I want them to have a disadvantage, lose money, and
| go out of business.
| vmception wrote:
| The Nevada Gaming Control Board requires the source code of
| all casino games.
|
| It's easy to see this concept expanding.
| leksak wrote:
| This deserves more attention, as it sets a good precedent.
| Irishsteve wrote:
| The new regulation requires a hand-wavy explanation, i.e.
| build a retrieval/ranking/matching algorithm that learns from
| customer clicks and considers blah blah.
|
| There will be no explanation of the actual algorithm.
| postsantum wrote:
| No explanation, no market. If you don't like it, you can run
| your business in another jurisdiction. I hope these companies
| will taste their own medicine; we desperately need
| competition.
| johnny22 wrote:
| The person you're replying to is saying that their
| explanation will be trivial and thus useless for determining
| the responses.
|
| I don't know if that's true or not, though, since I haven't
| read it myself.
| rm_-rf_slash wrote:
| The probability ranking principle will always be the
| foundation for recommender systems. Any statistical model
| flowing from PRP will be more or less the same regardless of
| whether the architecture is neural, boosted trees, etc.
|
| However, if regulation required companies to disclose all of
| the data that goes into those models, how they acquire it
| (tracking browser/app behavior, purchases from 3rd parties),
| and so on, that would be the real game changer for consumer
| privacy and protection.
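For illustration, the probability ranking principle mentioned above amounts to presenting items in descending order of their estimated probability of relevance; the item names and scores below are invented.

```python
# Probability Ranking Principle (PRP): present candidates in
# descending order of estimated relevance probability. Any model
# (neural, boosted trees, ...) just supplies the probabilities.

def rank_by_prp(items):
    """Sort (item, p_relevance) pairs by estimated relevance, highest first."""
    return [item for item, p in sorted(items, key=lambda pair: pair[1], reverse=True)]

# Hypothetical candidates with model-estimated click probabilities.
candidates = [("cat video", 0.12), ("news article", 0.85), ("ad", 0.40)]
print(rank_by_prp(candidates))  # → ['news article', 'ad', 'cat video']
```

The ranking is the same no matter which model architecture produced the scores, which is the commenter's point: disclosing the architecture reveals little; disclosing the inputs would reveal much more.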
| throwaway4323r wrote:
| Unless you can reproduce it, this is not going to cut it. I
| don't think this is going to help. It only creates more useless
| software jobs.
| FpUser wrote:
| I am curious how they would "explain" AI algorithms where it
| is impossible to explain how/why a decision has been made.
| kazamaloo wrote:
| It's not that the algorithms are impossible to explain, but in
| some cases the real explanations might require explanations
| themselves. I think companies will probably get away with
| hand-wavy explanations like "you got this recommendation
| because you watched this movie", neglecting all the
| sourcing/ranking/filtering workflows.
| neatze wrote:
| If I had to guess, it's probably similar to the explanation
| the SEC or NYSE requires when you make suspicious trades.
| narrator wrote:
| "We run it through our deep learning model. Here's 50 gigabytes
| of neural net weights."
| mnau wrote:
| > Here's 50 gigabytes of neural net weights.
|
| ...from a few months ago. Weights change daily, most likely
| updated by another NN.
|
| I guess it's nice that lawmakers understand that at some point
| these companies used algorithms to search or sort stuff, but
| the industry has already moved to another level. We might be
| able to explain a specific result of a neural network (Shapley
| values or something like that), but the actual algorithm
| (=NN)... no way.
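The Shapley-value idea mentioned here can be sketched exactly for a tiny model: average each feature's marginal contribution over all feature orderings. The model, its weights, and the zero baseline below are invented for illustration; real systems use approximation libraries because exact enumeration is exponential in the number of features.

```python
from itertools import permutations

def shapley_values(f, x, n_features):
    """Exact Shapley values: average each feature's marginal contribution
    over every ordering; absent features are set to 0 (a simplistic,
    assumed baseline)."""
    phi = [0.0] * n_features
    orderings = list(permutations(range(n_features)))
    for order in orderings:
        present = [0.0] * n_features
        prev = f(present)
        for i in order:
            present[i] = x[i]     # add feature i to the coalition
            cur = f(present)
            phi[i] += cur - prev  # its marginal contribution here
            prev = cur
    return [v / len(orderings) for v in phi]

# Toy "model" with an interaction term; weights are made up.
def model(x):
    return 2 * x[0] + x[0] * x[1]

x = [1.0, 3.0]
phi = shapley_values(model, x, 2)
print(phi)  # → [3.5, 1.5]
# Efficiency property: attributions sum to f(x) - f(baseline).
print(sum(phi), model(x) - model([0.0, 0.0]))  # → 5.0 5.0
```

This explains one prediction after the fact; it says nothing about the network itself, which is the commenter's point.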
| ernirulez wrote:
| The EU is becoming more like an authoritarian state. They put
| constraints on companies but allow governments to have full
| control and surveillance over their citizens. It's so
| hypocritical.
| naoqj wrote:
| Not trying to defend the EU here, but isn't this true of every
| government?
| smrtinsert wrote:
| You're saying governments shouldn't have the ability to govern
| its people, or inquire as to how global scale companies impact
| their citizens? That doesn't make any sense whatsoever.
| frereubu wrote:
| "... allow government to have full control and surveillance
| over their citizens."
|
| This is hyperbole.
|
| This suggestion is the logical next step from the part of GDPR
| which says that citizens should be able to understand how
| automated decisions are made about them and their data. This
| is about transparency for citizens, not governments dictating
| how algorithms should work.
| rtsil wrote:
| As an EU citizen, I fully expect my country and the EU to "put
| constraints" (i.e. regulate) on companies, so everything seems
| to be working as expected.
| Epa095 wrote:
| I don't see how it's hypocritical to give democratically
| elected governments different possibilities than random
| private companies.
|
| It's the government's job to put constraints on companies,
| stopping them from becoming the absolute assholes they become
| if they have no limitations. That does not make it
| authoritarian.
| mediascreen wrote:
| Sure. On the other hand, you can usually choose which
| companies to interact with. When it comes to governments, the
| relationship is not optional. Your government usually has more
| ways to affect your life than most private companies have
| (like no-fly lists).
|
| A few years ago the true extent of the Swedish program for
| tracking left-wing sympathisers became known. It ran from the
| sixties up until 1998. For example, if your car was seen
| outside a left-wing publication, you could end up on a list
| somewhere. That caused you to be automatically excluded from
| 5-10% of all jobs without you ever finding out about it until
| 20-30 years afterwards. Imagine wanting to become a police
| officer, a pilot, or an engineer and never understanding that
| the reason you didn't get an interview was because you had
| parked in the wrong spot one day years before. Or that your
| sister briefly dated a left-wing journalist at some point.
| netizen-936824 wrote:
| Some people are sad that they can't set up businesses which
| exploit the populace as easily in the US I guess?
| rafale wrote:
| The tyranny of majority is a real threat. Power should be
| "shared" (or under contention). Companies just want to take
| your cash, governments can take your freedom.
| arnvald wrote:
| > The greater the size, the greater the responsibilities of
| online platforms
|
| > as a rule, cancelling subscriptions should be as easy as
| signing up for them
|
| Overall I like these principles, but we'll see in a few years how
| they're enforced in practice. It's been 4-5 years since we've had
| GDPR and I still see sites that require tens of clicks to disable
| all advertising cookies (and the most I've seen was 300+ clicks).
| Even Google only this week announced they'll add a "reject
| all" button to their cookie banners.
|
| I expect it'll be similar in this case: companies will do the
| bare minimum to try to stay compliant with the regulation, and
| it will take a few years to see real differences, but I hope
| it's at least a step in the right direction.
| mijamo wrote:
| 4-5 years is nothing for the law. You have murders from 15
| years ago still being processed in the courts. But eventually
| things settle. It just takes time. It's a bit like Ents.
| FollowingTheDao wrote:
| > as a rule, cancelling subscriptions should be as easy as
| signing up for them
|
| Before I sign up for any service, this is the first thing I
| check.
| wave-creator wrote:
| Do you think the EU will enforce the law for non-US and non-EU
| companies like TikTok? Will they disclose their algorithms? It
| will be interesting to see if the law is upheld equally for
| all.
| manquer wrote:
| All the big players roughly know what kind of pipelines
| everyone else has; they hire from each other, etc.
|
| The level of disclosure is not going to break a lot of
| competitive advantage.
|
| They basically need to say what input sources and feedback they
| use, and in modular blocks what different steps go into the
| pipeline; nobody is asking them to expose the actual weights of
| the billion-parameter ML models they all probably have.
|
| Even if hypothetically they did expose that level of detail, it
| would be useless for regulators, as they don't have the
| resources to run the model, and testing a model for side
| effects in depth is hard.
| jdrc wrote:
| Requiring an alternative to algorithmic sorting (chronological)
| is good even though most sites do it already. "Explaining the
| algorithms" sounds like an impossible-to-implement, feel-good
| clause.
|
| Requiring transparency for bans and censorship, though, will
| probably have a major effect if people start asking nosy
| questions and exposing corporate and government abuses of
| power. Many EU governments will regret that users can expose
| them; that will be fun to watch. It will also make it very
| hard for companies like Reddit to function: could Reddit be
| legally liable for the actions of its moderators?
|
| The other clauses are the typical wishful thinking of EU
| legislators who think that you can legislate the solution to
| unsolved or unsolvable tech problems.
| frereubu wrote:
| IANAL, so happy to be corrected, but my understanding is that
| EU and US law work in quite different ways. EU law sets general
| rules, and courts decide what that means with reference to
| existing legal precedents. US law is very, very specific about
| what each clause means and how it should be interpreted.
|
| Every time I see these kinds of discussions I wonder if quite a
| few of the disagreements are due to e.g. US commenters worried by
| the relative lack of specific details.
| krastanov wrote:
| Did you flip EU and US in your comment? My understanding is the
| exact opposite of what you wrote:
|
| - US, common law, https://en.wikipedia.org/wiki/Common_law
|
| - EU, civil law,
| https://en.wikipedia.org/wiki/Civil_law_(legal_system)
|
| Citing: Civil law is a legal system originating in mainland
| Europe and adopted in much of the world. The civil law system
| is intellectualized within the framework of Roman law, and with
| core principles codified into a referable system, which serves
| as the primary source of law. The civil law system is often
| contrasted with the common law system, which originated in
| medieval England, whose intellectual framework historically
| came from uncodified judge-made case law, and gives
| precedential authority to prior court decisions.
| dontblink wrote:
| Regarding the rules surrounding fake information, I wonder why
| the EU hasn't taken a similar stance against Fox News
| equivalents?
| dehrmann wrote:
| Rules around fake information scare me because they're a limit
| on speech, and as Russia has recently shown, fake information
| is anything a dictator doesn't approve of.
| Broken_Hippo wrote:
| _Russia has recently shown, fake information is anything a
| dictator doesn 't approve of._
|
| This isn't an issue of "limits on speech", but rather,
| another reminder that one shouldn't enable folks to become
| dictators. Not having some reasonable limits on actual
| misinformation makes us all less free, however, because we
| cannot put our trust in some organizations.
| dehrmann wrote:
| > Not having some reasonable limits on actual
| misinformation makes us all less free
|
| This is a step towards "freedom is slavery."
| hnbad wrote:
| If I had a penny for every time someone referenced George
| Orwell without understanding his politics or even having
| read his books, I would have a lot of pennies.
|
| I take it you believe the tolerance paradox also gives
| off 1984 vibes?
| walkhour wrote:
| Because we wouldn't have any media whatsoever to consume.
| vampiretooth1 wrote:
| What actually constitutes a full explanation of the algorithm?
| The article doesn't get into this enough; it mentions that a
| high-level overview is required, but not much else. I imagine
| it's not going to require sharing the codebase or IP, of
| course.
| somewhereoutth wrote:
| A good start. However, let's go further and simply ban
| personal tracking and personalized algorithmic feeds. This
| would combat the echo chamber effect, and social media could
| become a broad community experience, like TV and newspapers.
| It would also cripple tech advertising revenues, thus
| redressing the balance with traditional media.
| tester89 wrote:
| > personalized algorithmic feeds
|
| But there are good uses, like for music. I can't really think
| of a downside for music, tbh; it's not like music tends to
| spread extremism, and on the upside, lesser-known artists have
| a better shot at being discovered through the algorithm.
| vimsee wrote:
| I usually have few reasons for being concerned about the
| future.
|
| I think wars (even with the ongoing war that Russia started),
| climate issues (even with the high consumption present today)
| and poverty (even with many countries still in it) will all
| trend downward. However, this echo chamber fueled with
| misinformation is one of the things I worry about.
|
| I am so happy the EU has the power and will to make good
| changes that benefit everyone when other parts of the world do
| not.
| marcinzm wrote:
| So should I be prevented from following only anti-capitalist
| people on Twitter, or only following right-wing subreddits?
| Should people also be banned from subscribing to only a single
| newspaper versus a mix of newspapers with different political
| leanings? What about looking only at socialist web pages that
| link to other such web pages? Should web pages be forced to
| link to pages with other political leanings?
| somewhereoutth wrote:
| No, that's not what I meant. More that it should be a
| conscious decision to knowingly consume content with a
| particular bias - as you do when you pick up a certain
| newspaper or turn over to a specific tv channel, as opposed
| to being algorithmically presented with a stream of content
| that may veer in a direction without you being aware.
| hairofadog wrote:
| One of the things that frustrates me about discussions of
| censorship here on HN is that there's a _lot_ of intense focus
| on censorship via deleting a tweet or Facebook post, but no
| focus given to the more insidious problem of censorship by
| algorithm.
|
| I am wholeheartedly in favor of a free marketplace of ideas
| where (we would hope) good ideas win out over bad, but as it
| is, once you're deemed by an algorithm to be susceptible to a
| certain category of extremist information, that's all you're
| ever going to see again; the competing ideas are never going to
| have a chance.
|
| Algorithmic distribution of ideas is sorta like distributing
| ideas via gasoline-powered leaf blower directly to the face. I
| am free to speak my competing ideas, and so technically I
| haven't been censored, but no audience is going to hear me over
| the leaf blower.
| visarga wrote:
| We need to get some level of control over the criteria for
| ranking and filtering. A third area is the UI - it is the
| place where all sorts of dark patterns hide.
|
| I'd like to see the browser put in a sandbox and its
| inputs/outputs sanitised and de-biased before being presented
| to the user. It could also protect privacy more. We need more
| browser innovation. A neural net should be in every browser,
| ready to apply semantic rules.
| pessimizer wrote:
| I don't think people should be forced into the public square by
| law. If you want to live in an echo chamber, you should be able
| to. We don't forcibly close convents. If I want to choose the
| "Smart Feed(tm)", although I can choose not to, that should be
| my choice to make.
|
| I don't know TikTok, but people seem to like its choices.
| somewhereoutth wrote:
| I think the distinction is that generally it should be a
| _conscious_ choice to be in the echo chamber - and not the
| easy unknowing default choice (if you even have a choice) for
| smart feeds.
| ben_w wrote:
| I've seen the difference between what YouTube presents to me
| when I'm logged in vs. when I'm on a clean computer it can't
| associate with me, and I do value the personalisation -- when
| not logged in it shows me a hundred duds for every one thing I
| care about; when logged in it's about 50:50.
|
| How much of this improvement is a mysterious machine learning
| algorithm and how much is it just looking for new things from
| my subscription list? I'm not sure, and that's important: being
| trapped in a torrent of self-reinforcing falsehoods is
| something I fell for in my teenage-goth-New-Age phase, which
| Carl Sagan condemned in _The Demon-Haunted World_, and which
| people in general have been falling for with every sycophant
| and propagandist from soothsayers to tabloids telling them
| what they want to be true.
| PolygonSheep wrote:
| > I'm not sure, and that's important: being trapped in a
| torrent of self-reinforcing falsehoods is something I fell
| for in my teenage-goth-New-Age phase, which Carl Sagan
| condemned in The Demon-Haunted World.
|
| Genuinely curious here: how can you tell you've escaped one
| set of self-reinforcing falsehoods while being sure you
| haven't fallen into another, different set?
| ben_w wrote:
| You can't be certain of not falling into a different set of
| false beliefs, but you can look for inconsistencies in your
| beliefs and be less and less wrong.
|
| Spotting inconsistencies in my beliefs is what pushed me
| out of New Age mode, and ironically what pushed me into it
| in the first place (from Catholicism).
| EnderShadow8 wrote:
| An alternative might be for personalisation to be opt-in
| rather than opt-out even when signed in with an account
| (which shouldn't even be necessary for many services anyway)
| cush wrote:
| I think it's fair to use personal data collected on the same
| site. Without it, most sites would be rendered useless.
| somewhereoutth wrote:
| Indeed - that would be a legitimate use of cookies etc. In
| fact, if that were enforced, we could get rid of the annoying
| cookie warnings.
| BeFlatXIII wrote:
| How does a ban on personalized algorithmic feeds work if each
| user is subscribed to a different set of others?
| somewhereoutth wrote:
| "news from friends" might be ok if it is presented without
| algorithmic curation - i.e. strictly on time order or
| similar.
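The curation-free option described here is essentially a one-liner: every user with the same subscriptions sees the same reverse-chronological list. The post data below is invented for illustration.

```python
from datetime import datetime

# A "recommender not based on profiling": strict reverse-chronological
# order, identical for all users who follow the same accounts.
posts = [
    {"author": "alice", "text": "hello", "ts": datetime(2022, 4, 23, 9, 0)},
    {"author": "bob",   "text": "hi",    "ts": datetime(2022, 4, 23, 11, 30)},
    {"author": "carol", "text": "hey",   "ts": datetime(2022, 4, 22, 18, 15)},
]

# Newest first; no engagement scores, no per-user weighting.
feed = sorted(posts, key=lambda p: p["ts"], reverse=True)
print([p["author"] for p in feed])  # → ['bob', 'alice', 'carol']
```

The contrast with a profiled feed is that nothing here depends on who is looking, which is what makes the ordering trivially explainable.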
| tjbiddle wrote:
| > "Dark patterns" -- confusing or deceptive user interfaces
| designed to steer users into making certain choices -- will be
| prohibited. The EU says that, as a rule, cancelling subscriptions
| should be as easy as signing up for them.
|
| This is an excellent addition.
| eli wrote:
| It's a great idea but my understanding is that they have not
| yet defined the term. And that sounds very hard to define.
| aaaaaaaaaaab wrote:
| "I know it when I see it."
| bryanrasmussen wrote:
| In common law systems, "I know it when I see it" is good
| enough, but I believe most of the EU is under a Napoleonic
| system where you should define what you mean.
| [deleted]
| jamesrr39 wrote:
| Nitpick: most of it falls under the Civil law system,
| some of which is Napoleonic. Wikipedia has a pretty nice
| map of the breakdown: https://en.wikipedia.org/wiki/Civil
| _law_(legal_system)#/medi...
| XorNot wrote:
| The requirement of "subscribe is as easy as unsubscribe"
| is a metric which you could argue about in court, but
| would be very hard to game.
|
| i.e. if signup is "email and credit card number" then
| you're going to be hard pressed to explain why a similar
| option to cancel does not exist and isn't accessible in
| as many clicks, with equivalent screen real-estate usage.
| q-big wrote:
| > i.e. if signup is "email and credit card number" then
| you're going to be hard pressed to explain why a similar
| option to cancel does not exist
|
| So you argue that to cancel a subscription, you should
| have to provide your credit card number again. If a check
| on the credit card fails for some obscure reason, you
| cannot cancel your subscription.
|
| This is what "subscribe is _as easy as_ unsubscribe" also
| means.
| kaba0 wrote:
| I mean, that is precisely the court's job to interpret
| the law. Your take is just a deliberately twisted one, it
| wouldn't stand a chance in a court setting the same way
| as a willful offense can't be defined that precisely, yet
| there is generally no problem with it.
| XorNot wrote:
| Sure: but if it fails, then the card is invalid, and the
| card no longer can be billed. Again - you wouldn't get
| away with saying "well we couldn't verify the number" as
| standard practice - you'd just get sued and then
| punitively fined if it was found to be a lie.
| eli wrote:
| I'm just not a big fan of laws where just about everyone
| could arguably be breaking them in some small way. That's a
| lot of faith to put in regulators to always act honorably.
|
| Vaguely worded laws can also lead conservative corporate
| counsels to make decisions like geoblocking all of the EU.
| ugjka wrote:
| I hope this makes Google Pay app subscription cancellations
| actually cancel them instead of postponing them for 3 months or
| so
| bratwurst3000 wrote:
| Windows is the fucking worst at this. The whole system
| experience is at some point "please log in to the Microsoft
| product you never signed up for" or whatever new uninteresting
| feature they have. That there isn't a Windows version stripped
| of that stuff is a shame.
|
| Oh, and the firewall or Defender puts a big "!!" everywhere,
| so it seems my system will explode at any time.
|
| Are they aware that people use it for working?
| bgro wrote:
| Remember when Xbox let you sign up for Live online, but you had
| to do a 3 hour interrogation on the phone to cancel? And calls
| would cost like 25 cents a minute or something crazy.
|
| Or the auto-renewing subscriptions that either cancel your
| service immediately the second you turn off auto-renew, even
| if you paid for the current time allotment, or just prevent or
| ignore your request not to renew.
|
| I feel like reverse charging didn't exist back then.
|
| There are also entitled devs who say your email domain or VoIP
| number isn't good enough when signing up for their service.
| There's no reason for anybody to use an email outside their
| perfect whitelist of Gmail or Microsoft domains... And why
| would anybody ever have a VoIP number unless they were a
| terrorist?
| Terretta wrote:
| Or DirecTV recently:
|
| _"Hey we couldn't process your card due to a temporary error
| so we went ahead and cancelled your $59 for AllTheThings plan
| you had for the last 10 years as a loyal customer. We're very
| much not at all sorry that plan isn't available any more. Now
| AllTheThings costs $129, but don't worry, just click to
| reactivate, we'll try your card again." ... "AllTheThings
| processed successfully for $129, thank you for your custom."_
| SantalBlush wrote:
| There's also this classic attempt at cancelling Comcast
| service. [1] What a nightmare.
|
| [1] https://m.youtube.com/watch?v=yYUvpYE99vg
| jpz wrote:
| Sounds extremely subjective. How do you measure it and where do
| you draw the line? All marketing is somewhat coercive.
|
| How do you get economic and business growth (things which are
| good for people - jobs and employment) without marketing and
| advertising?
| nathias wrote:
| It's objective, it's just very widespread. Amazon is probably
| the greatest offender, but most of the platforms and Big Tech
| are just dark patterns all the way down.
| einszwei wrote:
| I am very tired of the cookie/tracking popups on many websites
| that don't have an option to "reject all", just "accept all"
| and "customise". The main example being Google Search.
|
| Looking at this, I am hopeful but not too optimistic.
| bratwurst3000 wrote:
| The "I don't care about cookies" plugin for Firefox is superb
| at getting rid of that problem.
| layer8 wrote:
| I wouldn't count on sites not tracking you until you
| actually saved your "custom preferences".
| alwayslikethis wrote:
| Of course you can't.
|
| My recommendation:
|
| 1. Install "I don't care about cookies"
|
| 2. Install "Temporary containers"
|
| This requires that you use special containers for things you
| do wish to keep cookies for, such as HN for the login. Other
| than that, you can safely click accept on all websites, since
| nothing will persist anyway.
| nottorp wrote:
| > I am very tired of the cookie/tracking popups on many
| websites that don't have option to "reject all" but just
| "accept all" and "customise". Main example being Google
| Search.
|
| And The Verge on this very article :)
| andrei_says_ wrote:
| The people writing the articles are different from the MBAs
| forcing the financial and technological decisions.
|
| "Integrity" has different meanings for each group. For the
| latter, the meaning is likely closer to "bring in enough
| revenue to keep the publication running." Applying dark
| patterns does not conflict with this.
| nottorp wrote:
| Well it's fine with me. I only open Verge links when
| they're on HN and the title feels interesting. Which is
| pretty rare.
| adam0c wrote:
| Use a VPN, any incognito browser, and stop using Google.
| Simple.
| jsnell wrote:
| From a couple of days ago: https://blog.google/around-the-
| globe/google-europe/new-cooki...
| einszwei wrote:
| Wow, best UX change from Google in some time.
| hnbad wrote:
| Don't let the prose fool you. They're doing this because
| what they did before was in violation and the walls were
| closing in.
|
| This reminds me of supermarkets in Germany loudly
| announcing that they would abandon plastic bags to save
| the environment ... a few weeks before legislation came
| into effect banning them from selling plastic bags.
|
| Why wait until you're potentially facing fines if you can
| move slightly ahead and sell it as a voluntary good thing
| you do for your users/customers?
| lin83 wrote:
| Amazon's attempts to get users to unwittingly sign up for
| Prime are among the most egregious examples I encounter on a
| regular basis. As a European, I cannot wait to see them gone.
| psyc wrote:
| After the last iOS update, Apple nagged the shit out of me to
| set up Apple Pay, for two days. No way to say 'fuck off' --
| only 'remind me'. No obvious way to stop the nagging. Finally
| I gave them just the tip, and then pulled out before the
| money shot, and that seems to have shut them up for now.
| cyral wrote:
| Apple Pay is legitimately useful though. You can use it to
| pay at physical businesses if you forget your wallet
| (double click and face ID to turn your phone into a "tap to
| pay" card basically). There are also lots of apps/sites
| that support it so you don't have to type in your card
| number or even your shipping info sometimes.
| wrboyce wrote:
| How is that even relevant? Your phone was trying to onboard
| you to a _free to use_ feature. If you can't see the
| difference here, then I suspect there probably was a button
| labelled "fuck off" and you didn't see that either.
| Honestly.
| joe_guy wrote:
| Apple makes money off the interchange fee. It might be
| "free" to the end user, but the corporate motivation is
| the same as Prime's -- money.
|
| And please don't ad hominem attack people you're
| responding to.
| wrboyce wrote:
| Yeah you're right, there was no need for the last bit.
| I'm still struggling to see the relevance though, trying
| to get me to buy things is very different from trying to
| get me to use a feature you profit from (in my opinion).
| You also have to bear in mind that HN represents the more
| technical users; plenty of people probably do need the popups
| to discover these features. That said, a "no thanks, don't
| remind me again" button would be a nice inclusion - perhaps
| with a secondary confirmation.
| tjoff wrote:
| Feature you profit from?
|
| You feel you profit from facebook tracking as well?
|
| Regardless, these dark patterns are truly disgusting and
| how some can defend them so mindlessly just because they
| apparently found a use for a product is quite disturbing.
| wrboyce wrote:
| "You" in that context was the entity pushing the feature.
| I'm not sure what point you think you're making about
| Facebook tracking tbh, but I don't use Facebook so you're
| asking the wrong person; to claim I am mindlessly
| defending dark patterns is nonsense.
| psyc wrote:
| Windows regularly tries to nag me into free features I
| don't want too.
|
| At no time has the term 'dark pattern' ever been
| necessarily dependent on getting you to pay money.
|
| Your argument is that I sound stupid, so I must be wrong?
|
| There's no button.
|
| https://www.cultofmac.com/538999/apple-under-fire-apple-
| pay-...
|
| https://www.wsj.com/articles/apple-insists-iphone-users-
| enro...
|
| My other peeve is when streaming apps put a button in the
| bottom-right of an ad, same size and style as the 'skip'
| button one reflexively clicks. Except it turns out to be
| an 'engage even moar' button.
| wrboyce wrote:
| Apologies for the implication you're stupid, I didn't
| really mean that and it was uncalled for at any rate.
|
| I don't disagree regards dark patterns, your example just
| felt a bit irrelevant to the specific topic being
| discussed (Amazon pushing a paid for product / cancelling
| a paid subscription).
| psyc wrote:
| I can understand why you would make the distinction.
| Making distinctions is good, in general. However from my
| perspective as a frustrated user being antagonized by
| 'my' devices, it's all the same battle to me.
| KMag wrote:
| "Free to use", but presumably comes with a user agreement
| that opens you to some financial liability. There's a
| (granted small) chance that a bug, security incident, or
| fraud lands you in a Kafkaesque debt nightmare.
|
| I had a bit of a nightmare where one of the credit
| reporting agencies was convinced my residential address
| was inside my bank. Their online system referred me to
| their phone system or sending them mail. Their phone
| system referred me to their online system or sending them
| mail. I sent them mail 3 times and got no reply. An
| online cheat guide for getting to an actual human through
| their phone system didn't work, and I eventually just
| started hitting random keys in their phone system and got
| to a human who was able to sort it out.
|
| You can't even get a secured credit card (backed by a
| cash deposit) without a credit check (I looked into it),
| which is going to fail if your residential address is
| wrong.
|
| Opening a financial account that might misreport
| something to a credit agency shouldn't be taken lightly.
| pessimizer wrote:
| I feel like I remember it being pretty easy to cancel Prime,
| though. Have things changed?
| fmx wrote:
| Yes: https://www.msn.com/en-us/autos/news/yes-it-seems-
| amazon-did...
| [deleted]
| tjoff wrote:
| Doesn't excuse tricking people into it.
| bcrosby95 wrote:
| If it's easy to accidentally sign up for something, does that
| mean it has to be easy to accidentally cancel something?
| Because that would be hilarious.
| haswell wrote:
| I'm imagining a scenario where you're about to check out,
| and decide not to finish the transaction because you wanted
| to add something else to your cart first.
|
| So you click the cancel button.
|
| Only you find out you've cancelled Prime.
| jumpifzero wrote:
| In principle it looks good, but there's lots of potential for it
| to go wrong.
|
| I just hope this doesn't backfire. The cookie law was also
| something the EU created with good intentions after some
| politicians decided "omg cookies are bad", and we ended up still
| using cookies, but with pop-ups on every single website basically
| forcing you to accept them.
| FollowingTheDao wrote:
| "They have electrolytes."
| LightG wrote:
| Good. More.
| TavsiE9s wrote:
| That's going to be interesting to see if the whole credit scoring
| industry will have to disclose their algorithms as well.
|
| Looking at you, Schufa.
| daniel-cussen wrote:
| tester89 wrote:
| I didn't think I would see the day with a troll on HN. I
| doubt it's an AI too because of the coherency of what they're
| saying.
| daniel-cussen wrote:
| Trolling whom? Trolling whom?
|
| These guys send letters every month about artificial debt
| they self-inflicted and demanding payment under threat of
| theft, and I'm the troll?
|
| International debt was under 1% before Ukraine, these dudes
| were asking 20% a year, like the sixteenth power of what
| the international market was asking. They can perfectly
| well get credit at under 1%, they can perfectly well
| function with lower rates.
|
| They just love usury.
|
| They love money.
| [deleted]
| MeteorMarc wrote:
| It will be fun to see Google's algorithm for ranking search
| results.
| otherotherchris wrote:
| "Here's 100PiB of unlabeled neural net weights. Knock
| yourselves out."
| jasfi wrote:
| They want to know how the algorithms work, not the data
| itself.
| otherotherchris wrote:
| Google doesn't know what the algorithm is anymore. The
| whole site is a black box.
| notacoward wrote:
| Same at FB as far as I could tell while I was there. "The
| algorithm" is a misnomer, popularized by the press but
| really kind of silly. There are really thousands of
| pipelines and models developed by different people
| running on different slices of the data available. Some
| are reasonably transparent. Others, based on training,
| are utterly opaque to humans. Then the weights are all
| combined to yield what users see. And it all changes
| _every day_ if not every hour. Even if it could all be
| explained in a useful way, that explanation would be out
| of date as soon as it was received.
|
| I'm not saying that to defend anyone BTW. This complexity
| and opacity (which is transitive in the sense that a
| combined result including even one opaque part itself
| becomes opaque) is very much _the problem_. What I'm
| saying is that it's likely impossible for the companies
| to comply without making fundamental changes ... which
| might well be the intent, but if that's the case it
| should be more explicit.
| manquer wrote:
| What needs to be shared is a high level arch not nuts and
| bolts.
|
| At a broad level:
|
| what are the input sources like IP address , clicks on
| other websites etc you use to feed the model.
|
| What is the overall system optimized for , like some
| combination of engagement , view time etc, just listing
| them if possible in a order of preference is good enough
|
| Alternatively what does your human management measure and
| monitor as the business metrics of success .
|
| I want to know what behaviors (not necessarily how ) are
| used , I want to know what is feed trying to optimize for
| , more engagement, more view time to etc
|
| This is not adversarial, knowing this helps as modify
| user behavior to make the model work better.
|
| Users already have some sense of this and work around it
| blindly , for example YouTube has heavy emphasis on
| resent views and search . I (and am sure others) would
| use signed out user to see content way outside my
| interest area so my feed isn't polluted with poor
| recommendations. I may have watched 1000's hours of
| educational content but google would still think some how
| to video I watched once means I need to only see that
| kind of content.
|
| Google knows it is me sure even am signed out, but they
| don't use it change my feed that's the important part and
| knowing that can help improve my user experience
| jasfi wrote:
| I doubt it, they should know what the various algorithms
| are, especially the most important ones that drive most
| of the ranking. But their competitive advantage would be
| on the line.
| [deleted]
| ekianjo wrote:
| > Google doesn't know what the algorithm is anymore
|
| You are an insider?
| [deleted]
| tyingq wrote:
| They haven't talked much detail since Matt Cutts left,
| but over time they did sort of outline the basics. That
| the core ranking is still some evolution of PageRank,
| weighting scoring of page attributes/metadata and flowing
| it down/through inbound links as well. But then altered
| via various waves of ML, like Vince (authority/brand
| power), Panda (content quality), Penguin (inbound link
| quality), and many others that targeted other attributes
| (page layout, ad placement, etc).
|
| Even if some of that is off, the premise of a chain of
| some ML, and some not ML, processors means they probably
| can't really tell you exactly why anything ranks where it
| does.
| zo1 wrote:
| Those sound like awesome potential features. Allow users
| to assign 0-100% weights for each of those scoring
| adjustments during search, and show them the calcs (if you
| can).
| tyingq wrote:
| Supposedly there's thousands of different features that
| are scored, and those are just the rolled-up categories
| that needed their own separate ML pipeline step.
|
| Like, maybe, for example, a feature is _"this site has a
| favicon.ico that is unique and not used elsewhere"_ (page
| quality). Or _"this page has ads, but they are below the
| fold"_ (page layout). Or _"this site has > X amount of
| inbound links from a hand-curated list of 'legitimate
| branded sites'"_ (page/site authority).
|
| Google then picks a starting weight for all these things,
| and has human reviewers score the quality of the results,
| order of ranking, etc, based on a Google written how-to-
| score document. Then tweaks the weights, re-runs the ML
| pipeline, and has the humans score again, in some
| iterative loop until they seem good.
|
| There's a never-acted-on FTC report[1] that describes how
| they used this system to rank their competition
| (comparison shopping sites) lower in the search results.
|
| [1] http://graphics.wsj.com/google-ftc-report/
|
| Edit: Note that a lot of detail is missing here. Like
| topic relevance, where a site may rank well for some
| niche category it specializes in. But that it wouldn't
| necessarily rank well for a completely different topic,
| even with good content, since it has no established
| signals it should.
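[Editor's note: the tune-weights/human-rate/re-run loop described above can be sketched roughly as follows. Every feature name, page field, and the rating function are invented for illustration; this is not Google's actual system.]

```python
import random

# Hypothetical sketch of the loop described above: score pages as a
# weighted sum of per-page feature values, collect a human quality
# rating for the resulting ranking, and keep weight tweaks that the
# raters score at least as well.

FEATURES = ["unique_favicon", "ads_below_fold", "branded_inlinks"]

def score(page, weights):
    # Weighted sum of feature values (0.0 to 1.0 here).
    return sum(weights[f] * page[f] for f in FEATURES)

def rank(pages, weights):
    return sorted(pages, key=lambda p: score(p, weights), reverse=True)

def human_rating(ranking):
    # Stand-in for raters following a how-to-score document: reward
    # rankings that place high-quality pages near the top.
    return sum(p["quality"] / (i + 1) for i, p in enumerate(ranking))

def tune(pages, weights, rounds=50):
    best = human_rating(rank(pages, weights))
    for _ in range(rounds):
        # Perturb one weight; keep the change only if the rated
        # quality of the new ranking does not drop.
        trial = dict(weights)
        f = random.choice(FEATURES)
        trial[f] += random.uniform(-0.1, 0.1)
        r = human_rating(rank(pages, trial))
        if r >= best:
            weights, best = trial, r
    return weights, best
```

In this toy version the "gradient" is just accept/reject hill climbing; the real pipeline presumably retrains ML stages between iterations.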
| dehrmann wrote:
| > and those are just the rolled-up categories that needed
| their own separate ML pipeline step.
|
| AKA ensemble models.
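[Editor's note: a minimal sketch of the ensemble idea named above, with sub-models scoring a document independently and a combiner blending them. The sub-models and weights are toy stand-ins, not anything Google-specific.]

```python
# Each sub-model produces its own score; the ensemble output is a
# fixed weighted blend of those scores.

def authority_model(doc):
    # Toy "authority" signal: saturating function of inbound links.
    return min(doc["inlinks"] / 100.0, 1.0)

def layout_model(doc):
    # Toy "page layout" signal: penalize ads above the fold.
    return 0.0 if doc["ads_above_fold"] else 1.0

ENSEMBLE = [(authority_model, 0.7), (layout_model, 0.3)]

def combined_score(doc):
    return sum(weight * model(doc) for model, weight in ENSEMBLE)
```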
| dehrmann wrote:
| It's clear the public and lawmakers like the idea of
| knowing how the algorithm works, but what you posted is
| about as deep as people can reasonably understand at a
| high level. I don't think they realize how complex a
| system built over 20 years that's a trillion-dollar
| company's raison d'être can be.
| amelius wrote:
| "Bad programmers worry about the code. Good programmers
| worry about data structures and their relationships."
|
| -- Linus Torvalds
| ohgodplsno wrote:
| Linus never had to deal with hundreds of petabytes of
| search data, nor ML black boxes, to be fair.
| wetpaws wrote:
| Data is already an algorithm
| thedeadfish wrote:
| Google manually adjusts its results for censorship reasons.
| This is probably why google has gotten so much worse, they
| don't want information to be freely accessible, they only
| want things they approve of to be seen.
| otherotherchris wrote:
| I reckon you're right, but I doubt that it's manual or
| under Google's control. Google is too important a tool of
| control to be left in the hands of Silly Valley idealists.
|
| I've always wondered why Sergey Brin and Larry Page retired
| when they did, it coincides almost exactly with the
| beginning of the SERP quality decline. Wonder what sort of
| conversation they had with intelligence to quietly walk to
| the door, cash out, and say nothing about the company
| since.
| blihp wrote:
| What happened was they got what they wanted: full control
| of running the business. Then they quickly learned that
| was actually a lot of work and not very much fun, made
| some fairly unpopular decisions (business, product and
| policy) with a fair amount of public backlash, put Sundar
| in charge and backed away.
| simion314 wrote:
| >"Here's 100PiB of unlabeled neural net weights. Knock
| yourselves out."
|
| You need to give the user an explanation of why you blocked
| their account, but if Google is kind enough to also hand over
| the secret neural network, then some people would be happy to
| have a look at it and find even more garbage in it.
| NullPrefix wrote:
| >Violation of guidelines and community standards.
| ohgodplsno wrote:
| Which is itself detected by a 80PiB neural network, based
| on the 60TB output of new rules that another neural
| network spits out every week based on the temperature
| outside of the corner office and the taste of Sundar's
| coffee this morning.
| [deleted]
| swayvil wrote:
| I see a vast technical writing documentation project in their
| future.
| jdrc wrote:
| But they must be short and easy to understand by users. Like
| this:
|
| "Our algorithms use gradient descent. Data flows through our
| connected tubes, slowly wiggling their size until the data
| starts flows back and forth faster."
| ivrrimum wrote:
| Proven wrote:
| anothernewdude wrote:
| "Linear Algebra"
___________________________________________________________________
(page generated 2022-04-23 23:00 UTC)