[HN Gopher] AI and Trust
       ___________________________________________________________________
        
       AI and Trust
        
       Author : CTOSian
       Score  : 180 points
       Date   : 2023-12-04 13:32 UTC (9 hours ago)
        
 (HTM) web link (www.schneier.com)
 (TXT) w3m dump (www.schneier.com)
        
       | seydor wrote:
       | > It's government that provides the underlying mechanisms for the
       | social trust essential to society.
       | 
       | It's not anymore. Surveys show that people nowadays trust
        | businesses more than they trust governments, in a major shift
       | (https://www.edelman.com/trust/2023/trust-barometer)
       | 
       | We need trust because we want the world around us to be more
       | predictable, to follow rules. We trust corporations because they
       | do a better job at forcing people to follow rules than the
       | government does.
       | 
        | This is not a change brought about by AI; it changed BEFORE it.
        | (And just because an AI speaks like a human doesn't mean we will
        | trust it; we have evolved very elaborate reflexes of social trust
        | throughout the history of our species.)
        
         | barathr wrote:
         | That's a different use and meaning of the word trust. The essay
         | is specifically talking about the difference between
         | interpersonal trust (the sort of trust that might be captured
         | by a poll about whether people trust businesses vs. government)
         | and societal trust (which is almost invisible -- it's what
         | makes for a "high trust" or "low trust" society, where things
         | just work vs. every impersonal interaction is fraught with
         | complications).
        
           | seydor wrote:
            | I'm talking about societal trust.
        
             | corford wrote:
              | Is it not the case that, knowingly or not, people actually
              | implicitly trust government when they say they trust
              | businesses? I.e., they are able to trust businesses because
              | the businesses themselves operate within a well-regulated
              | and legally enforced environment provided by the government
              | (notwithstanding the odd exception here and there).
        
               | JohnFen wrote:
               | I trust government quite a lot more than I trust
               | businesses (although my trust in both is quite low).
               | Government, at least, is something that we have a say in.
               | Businesses aren't.
        
               | throwitaway222 wrote:
               | I trust the government waaaay more than a business. If
                | there is a camera, and someone steals my wallet, they can
                | see who did it: they know it was Bob Pickens who took my
                | wallet. My trust in government erodes if Bob Pickens
                | isn't in jail by noon tomorrow. There are many countries
                | in the world right now where that scenario has ironclad
                | trust. In SA Bob Pickens would have his hand taken.
                | That's a bit extreme, but in the US there's literally no
                | guarantee Bob will be in jail; in fact, he may still be
                | stealing other people's wallets, on camera.
               | 
               | When those kinds of trusts start breaking down, the world
               | can get very scary, very quickly.
        
               | mistermann wrote:
                | The reality is that when you query a human about
               | themselves, you get a System 1 subconscious heuristic
               | approximation, the accuracy of which is massively
               | variable, but not necessarily random.
               | 
               | This is quite deep in the social stack, so at this point
               | we just "pretend"[1] it's true.
               | 
                | [1] I use "pretend" colloquially: it's actually not even
               | on the radar most of the time, unless one works in
               | marketing, "public relations", etc.
        
               | digging wrote:
               | ...Do people actually say they trust "business(es)", like
               | they actively assert it? I think most people _act_ as if
          | they trust businesses because it's an immense pain or
               | impossible to get through normal life without doing so.
        
           | denton-scratch wrote:
           | > interpersonal trust (the sort of trust that might be
           | captured by a poll about whether people trust businesses vs.
           | government)
           | 
           | That's not how I read "interpersonal trust"; I read it as the
           | kind of trust you might confer on a natural person that you
           | know well.
        
         | Tangurena2 wrote:
          | The short history of cryptocurrency has demonstrated once
          | again _why_ all those decades of securities and banking
          | laws/regulations exist.
         | 
          | The collapse in trust of governments is due to lobbyists bribing
         | politicians. And corrupt politicians. And media controlled by a
         | few billionaires who benefit from that distrust.
        
           | 2devnull wrote:
           | "collapse in trust of governments"
           | 
            | I'm pretty sure it's for 2 reasons: 1) politicians are
            | slick-haired car-salesman types that have no shame, 2) the
            | government being incompetent.
           | 
           | Re 2, if the government could do the things they promised to
           | do, things they charge us for, California high speed rail for
           | example, then people would trust them as much as big
           | business. I can sue a big business, but thanks to incentives
           | I rarely have to. The government on the other hand has lost
           | more lawsuits, screwed over more people, and destroyed the
           | environment to a much greater extent than any single business
           | could manage to do. They use their monopoly on violence to
           | avoid accountability, and waste public funds on their own
           | lifelong political careers, frittering public money away on
           | their own selfish partisan squabbles. Businesses are rarely
            | too big to fail, and therefore cannot behave with impunity.
           | There are only 2 parties. Imagine a world with only 2
           | corporations!
        
           | Nasrudith wrote:
           | I think transparency has far more to do with the distrust
           | than any media. Governments got used to being duplicitous
            | hypocrites for generations. The fact that the government
            | response is to immediately circle the wagons and resist any
            | efforts to become more trustworthy proves they have well
            | earned their distrust, and no amount of doom and gloom over
            | the dangers of lack of trust will fix that.
        
       | lukev wrote:
       | Interesting points. But I do question the assertion that humans
       | are innately more willing to trust natural language interfaces.
       | 
       | Humans are evolutionarily primed both to trust _and to distrust_
       | other humans depending on context.
       | 
       | It might actually be easier to flip the bit and distrust the
       | creepy uncanny valley personal assistant than it is to "distrust"
       | a faceless service that purports to be an objective tool.
        
         | RockyMcNuts wrote:
          | It's not just the natural-language interface: AI-generated
          | audio, video, and narratives, crafted by data-mining your
          | life, then deep manipulation by constantly trying different
          | stimuli and learning which ones trigger what behavior and
          | pull you deeper into the desired alternate reality.
        
         | tinycombinator wrote:
         | Humans are more than capable of distrust, but I think
         | manipulating people to erroneously trust something is still a
         | threat as long as scams exist.
         | 
          | I think a significant factor in individual trust is someone's
         | technical knowledge of how their systems or tools work, shown
         | by how some software engineers actively limit their children's
         | exposure to tech versus a lot of mothers letting the internet
         | babysit their kids. Apparently we can't rely on that knowledge
         | being widespread (yet).
        
       | howmayiannoyyou wrote:
       | "hidden exploitation"
       | 
        | It's going to be challenging to detect AI intent when it is
       | sharded across many sessions by a malicious AI or bad actors
       | utilizing AI. To illustrate this, we can look at TikTok, where
       | the algorithmic ranking of content in user feeds, possibly driven
       | by AI, is shaping public opinion. However, we should also
       | consider the possibility of a more subtle campaign that gradually
       | molds AI responses and conversations over time.
       | 
        | This could mean gradually introducing biased information or
       | subtly framing certain ideas in a particular way. Over time, this
       | could influence how the AI interacts with users and impacts the
       | perception of different topics or issues.
       | 
       | It will take narrowly scoped, highly tuned single-purpose AI to
       | detect and address this threat, but I doubt the private sector
       | has a profit motive for developing that tech. Here's where
       | legislation, particularly tort law, should be stepping in - to
       | create a financial incentive.
        
       | skybrian wrote:
       | This speech seems to be largely future-oriented, about hopes and
       | dreams for using AI in high-trust ways. Yes, we rely on trust for
       | a lot of things, but here are some ways we don't:
       | 
        | Current consumer-oriented AI (LLMs and image generators) acts as
       | untrustworthy hint generators and unreliable creative tools that
       | sometimes can generate acceptable results under human
       | supervision. Failure rates are high enough that users should be
       | able to see for themselves that little trust can be placed in
       | them, if they actually use them.
       | 
       | We can _play_ at talking to characters in video games while
        | knowing they're not real, and the same can be true for AI
       | characters. Entertainment is big.
       | 
       | Frustrating as they can be, tools that often fail us can still be
       | quite useful, as long as the error rate isn't too high. Search
       | engines still work as long as you can find what you're looking
       | for with a bit of effort, perhaps by revising your query.
       | 
       | We can think of shopping as a search that often has a high
       | failure rate. Maybe you have to try a lot of shoes to find ones
       | that fit? Often, we rely on generous return policies. That's a
       | form of trust, too, but it's a rather limited amount. Things
       | don't have to work reliably when there are reasonable ways to
       | recover from failures.
       | 
       | Often we rely on things until they break, and then we fix or
       | replace them. It's not _good_ that cell phones have very limited
       | lifespans, but it seems to be acceptable, given how much we get
       | out of using them.
        
       | cubefox wrote:
       | These are good points. But it seems likely governments will make
       | sure social trust is maintained when problems arise in that
       | regard. Except if people get addicted to AI assistants as fake
       | friends, and resist restricting them. But insufficient laws can
        | be changed and extended, and even highly addictive things like
       | tobacco have been successfully restricted in the past.
       | 
       | A much more terrifying problem is the misuse of AI by terrorists,
       | or aggressive states and military. Or even the danger of humanity
       | losing control of an advanced AI system which doesn't have our
       | best interest in mind. This could spell our disempowerment, or
       | even our end.
       | 
       | It is not clear what could be done about this problem. Passing
       | laws against it is not a likely solution. The game theory here is
       | simply not in our favor. It rewards the most reckless state
       | actors with wealth and power, until it is too late.
        
         | pixl97 wrote:
         | >things like tobacco have been successfully restricted in the
         | past
         | 
          | At least in the US, do you have any idea how long that battle
          | took and how directly the tobacco companies lied to everyone
          | and paid off politicians? Tobacco set up the handbook for
          | corporations lying to their users in long, expensive battles.
          | If AI turns out to be highly dangerous, we'll all be long dead
          | as the corporate lawyers draw it out in court for decades.
        
           | thundergolfer wrote:
           | A similar mistake was made by Ezra Klein in the NYT at the
           | end of his opinion piece (1).
           | 
           | The 'we can regulate this' argument relies on heavy, heavy
           | discounting of the past, often paired with heavy discounting
           | of the future. We did not successfully regulate tobacco; we
           | failed, millions suffered and died from manufactured
           | ignorance. We did not successfully regulate the fossil fuel
           | industry; we again, massively failed.
           | 
           | But if you, in the present day, sit comfortably, did not
            | personally get lung disease, have not endured the
           | past, present, and future harms of fossil fuels -- and in
           | fact have benefited -- it is easy to be optimistic about
           | regulation.
           | 
           | 1. https://www.nytimes.com/2023/11/22/opinion/openai-sam-
           | altman...
        
             | bathtub365 wrote:
             | All the more reason to start trying to regulate it early,
             | before it is even more firmly entrenched in society.
        
             | dartos wrote:
             | In the US, at least, a new generation of congresspeople may
             | not go by that same playbook.
             | 
             | We're still dealing with most of the same politicians that
             | were there in the late 80s. The anti-labor, pro-corporatist
             | congress, if you will.
             | 
              | Maybe it'll always be the same, but as the demographics of
              | Congress change, appeals to history don't seem as strong.
        
               | dragonwriter wrote:
               | > We're still dealing with most of the same politicians
               | that were there in the late 80s.
               | 
               | Factually, no we aren't; the average tenure of serving
               | members in each House of Congress is under 10 years. It
               | would have to be close to double what it is, even if
               | everyone else was sworn in today, if we were dealing with
               | most of the same members as even the very end of the
               | 1980s.
               | 
               | EDIT: I suspect this impression comes from the fact that
                | outliers both are more likely to be in leadership _and_,
                | independent of leadership, get more media attention
                | _because they are outliers_, as well as because they
               | have had more time to build up their own media operations
               | and to have opposing media build up a narrative around
               | them.
        
               | simonw wrote:
               | My understanding of how US congress works is that you
               | have to serve for decades in order to land the most
               | influential positions - chair of various committees etc.
               | 
               | If you've served less than ten years your impact is
               | likely limited. The politicians who have the most
               | influence are the ones who have been there the longest.
        
               | dragonwriter wrote:
               | > My understanding of how US congress works is that you
               | have to serve for decades in order to land the most
               | influential positions -
               | 
               | The current Speaker of the House, the highest ranking
               | position in Congress and the second in line of
               | Presidential succession, has been in Congress for 6
               | years.
               | 
               | Except for the President Pro Tem of the Senate, which
               | traditionally goes to the longest-serving member of the
               | majority party, _most_ positions of authority or
               | influence just require the support of either the majority
               | or minority party caucus; longevity is correlated with
               | that, but not a requirement in itself.
        
               | simonw wrote:
               | The current speaker of the house is a huge exception to
               | how this usually works.
        
               | dragonwriter wrote:
               | The preceding Speaker had been in Congress for 16 years
               | (note: also not "since the late 1980s"), but had also
               | been in caucus leadership positions all but the first
                | two, and was #3 in the party caucus for four years.
                | Yeah, the current Speaker is an extreme case, but it's
                | simply not the case that position is normally a function
                | of longevity.
        
             | cubefox wrote:
              | I mean, smoking prevalence did in fact decrease; this is at
              | least a partial success. But other things can't be
              | regulated this "easily". If restricting smoking was hard,
              | and restricting AI that undermines social trust is hard,
              | then there are things that are even harder to prevent, much
              | harder.
        
               | pixl97 wrote:
               | >this is at least a partial success
               | 
               | "If we are victorious in one more battle like this, we
               | shall be utterly ruined."
        
             | RandomLensman wrote:
             | We also got a lot of things right. The past isn't just some
             | history of failure.
        
         | mistermann wrote:
         | > But it seems likely governments will make sure social trust
         | is maintained when problems arise in that regard.
         | 
         | It is a conspiracy theory of course (and therefore wrong, of
         | course), but some people are of the opinion that somewhere
         | within the vast unknown/unknowable mechanisms of government,
         | there are forces that deliberately slice the population up into
         | various opposing mini-ideological camps so as to keep their
         | attention focused on their fellow citizens rather than their
         | government.
         | 
         | It could be simple emergence, or something else, but it is
         | _physically_ possible to wonder what the truth of the matter
         | is, despite it typically not being metaphysically possible.
         | 
         | > The game theory here is simply not in our favor. It rewards
         | the most reckless state actors with wealth and power, until it
         | is too late.
         | 
         | There are _many_ games being played simultaneously: some seen,
         | most not. And there is also the realm of potentiality: actions
          | we could take, but consistently "choose" to not (which could
         | be substantially a consequence of various other "conspiracy
         | theories", like teaching humans to use and process language in
         | a flawed manner, or think in a flawed manner).
        
           | 2devnull wrote:
           | >"It is a conspiracy theory of course (and therefore wrong,
           | of course), but some people are of the opinion that somewhere
           | within the vast unknown/unknowable mechanisms of government,
           | there are forces that deliberately slice the population up
           | into various"
           | 
           | What you are talking about is called polling or marketing.
            | It's structural, not a conspiracy. It's inherent to almost all
           | statistical analysis, and everyone uses statistics.
        
       | Ajedi32 wrote:
       | To me, these arguments bear a strong resemblance to those of the
       | free software movement.
       | 
       | > There are areas in society where trustworthiness is of
       | paramount importance, even more than usual. Doctors, lawyers,
       | accountants...these are all trusted agents. They need
       | extraordinary access to our information and ourselves to do their
       | jobs, and so they have additional legal responsibilities to act
       | in our best interests. They have fiduciary responsibility to
       | their clients.
       | 
       | > We need the same sort of thing for our data.
       | 
       | IMO this is equally applicable to "non-AI" software. Modern
       | corporate-controlled software does not have "fiduciary
       | responsibility to [its] clients", and you can see the results of
       | this everywhere. Even in physical products that consumers
        | presumably own, the software that controls those products
       | frequently acts _against_ the interests of the device owner when
       | those interests conflict with the interests of the company that
       | wrote the software.
       | 
       | Schneier says:
       | 
       | > It's not even an open-source model that the public is free to
       | examine and modify.
       | 
       | But, to me at least, that's exactly what the following paraphrase
       | describes:
       | 
       | > A public model is a model built by the public for the public.
       | It requires political accountability, not just market
       | accountability. This means openness and transparency paired with
       | a responsiveness to public demands. It should also be available
       | for anyone to build on top of. This means universal access. And a
       | foundation for a free market in AI innovations.
       | 
       | Two of the Free Software Foundation's four freedoms encapsulate
       | this goal quite nicely:
       | 
       | > Freedom 1: The freedom to study how the program works, and
       | change it so it does your computing as you wish. Access to the
       | source code is a precondition for this.
       | 
       | > [...]
       | 
       | > Freedom 3: The freedom to distribute copies of your modified
       | versions to others. By doing this you can give the whole
       | community a chance to benefit from your changes. Access to the
       | source code is a precondition for this.
       | 
       | The problem is that, as it stands, most people do not have
       | practical access to freedoms 1 or 3, and therefore most software
       | you interact with, even software running on devices that you
       | supposedly own, does not "do your computing as you wish", but
       | rather, does your computing as the software authors wish.
       | 
       | Perhaps AI will exacerbate this problem further due to the
       | psychological differences Schneier outlines in the article, but
       | it's hardly a new problem.
        
         | bo1024 wrote:
         | Strongly agree, and I am sure the resemblance is conscious.
         | Unfortunately, most people don't know or care about software
         | freedom. So for a general audience, Schneier has to make these
         | points from scratch. But yeah, it would be nice if this push
         | for regulation happened in a way that was compatible with
         | strengthening software freedom as well.
        
       | denton-scratch wrote:
       | > Taxi driver used to be one of the country's most dangerous
       | professions. Uber changed that.
       | 
        | I'm not sure that's right. Taxi drivers used to be paid mainly in
       | cash, and a driver sitting on a pile of cash is a robbery target.
       | Uber arrived shortly after ecommerce became pervasive; I've never
       | taken an Uber ride, but I'm assuming that payment is nearly
       | always electronic.
        
         | lowkey_ wrote:
          | > I'm not sure that's right
         | 
         | Doesn't your point support the quote? Taxi driving was
         | dangerous because of cash and being a robbery target, and Uber
         | changed that.
        
           | mrWiz wrote:
           | In my area taxis had credit card machines, but it was common
           | for them to "not work" so the cabbie could fleece you for
           | extra money driving you to an ATM on the way. And then they'd
           | refuse to give change so you'd have to pay in $20 increments.
           | Uber definitely disrupted that.
        
           | nox100 wrote:
           | Uber didn't change that, electronic payment changed that.
        
             | jjoonathan wrote:
             | The electronic payment that was always "broken" unless you
             | were comfortable enough with confrontation to call the taxi
             | driver's bluff?
        
               | JohnFen wrote:
               | I've never experienced a "broken" machine in a cab.
        
               | jjoonathan wrote:
               | How many big city taxis did you ride before 2013 or so?
               | They cleaned up their act once they got competition.
        
               | JohnFen wrote:
               | Quite a few, actually.
        
               | jjoonathan wrote:
               | Same. Which area? For me it was Pittsburgh, New York, and
               | Rome. The problem was absolutely endemic in all three,
               | and I saw it pretty regularly at conferences and on
               | vacations, too.
        
               | canjobear wrote:
               | I encountered "broken" machines all the time before
               | ~2010, probably more often than working machines, in
               | major US cities.
        
         | joe_the_user wrote:
          | When I drove a taxi in the mid-2000s, payment was essentially
          | always cash. The taxi companies were small operations that
          | basically leveraged their connections to city government to get
          | their positions. They resisted all modernizations because they
          | cost money and because their monopoly guaranteed them money
          | from passengers no matter how shitty the service - and it was
          | the drivers who made tip money or not from service quality.
         | 
         | So basically Uber, not general progress, was what changed
         | things to electronic payments etc.
        
           | visarga wrote:
            | Electronic payment is just a part of the improvements ushered
            | in by Uber; there's also calling a car to your location and
            | driver reviews. Those are important too.
        
             | jjoonathan wrote:
             | > there's also calling to location
             | 
             | Before any wise guys come in here and claim that you could
              | order cabs to a location by phone before Uber, I would like
              | to point out that they frequently failed to arrive and you
              | had no way of knowing whether or not they would arrive
              | until you gave up waiting for them, so while you could
              | theoretically do this, it was terribly unreliable and often
              | a big mistake.
        
         | JohnFen wrote:
         | > Taxi drivers used to be paid mainly in cash
         | 
         | In the ancient days, sure. But at least in the areas I go to,
          | cabs had been accepting payment by card for a long time before
         | Uber came around.
        
           | sevagh wrote:
           | Around here (Montreal), cabs were miserable scammers and
            | scumsuckers that would drive you to ATMs in the
           | middle of the night to coerce you to withdraw cash to pay
           | them.
           | 
           | I'm glad Uber fucked them to shreds. Now, of course, they're
           | all polite and have credit card machines.
        
             | JohnFen wrote:
             | I'm glad that worked out for you! But the effects of Uber
                | in places where taxis weren't so corrupt have been really
                | negative.
        
               | kian wrote:
               | Where on earth would that have been?
        
         | knodi123 wrote:
         | > payment is nearly always electronic.
         | 
         | 100% always, but I've heard that cash tips are becoming common.
        
           | sangnoir wrote:
            | Uber accepts cash payments in some markets in the global
            | south. So it's not 100%.
        
       | visarga wrote:
       | > And near term AIs will be controlled by corporations. Which
       | will use them towards that profit maximizing goal. They won't be
       | our friends. At best, they'll be useful services. More likely,
       | they'll spy on us and try to manipulate us. This is nothing new.
       | Surveillance is the business model of the Internet. Manipulation
       | is the other business model of the Internet.
       | 
        | On the contrary, the recent batch of small large-models (<=13B)
        | has broken away from corporate control. You can download a LLaMA
        | or Mistral and run it on your computer, but you can't do the same
        | with Google or Meta. A fine-tuned small large-model is often as
        | good as GPT-4 on a specific task, but also private and not
        | manipulated. If you need to, you can remove prior conditioning
        | and realign the model as you like. They are easy to access; for
        | example, ollama is a simple way to get running.
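        | 
        | As a rough sketch (assuming ollama is installed, serving on its
        | default port, and the mistral model has already been pulled),
        | querying a local model from Python via ollama's HTTP API looks
        | something like this:
        | 
        |     import requests
        | 
        |     # Ask the locally running model a question. Nothing here
        |     # leaves your machine; the server is localhost.
        |     resp = requests.post(
        |         "http://localhost:11434/api/generate",
        |         json={
        |             "model": "mistral",
        |             "prompt": "Explain social trust in one sentence.",
        |             "stream": False,  # one JSON object, not chunks
        |         },
        |     )
        |     print(resp.json()["response"])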
       | 
       | Yes, at the same time there will be good and bad AIs, basically
       | your AI assistant against everything else on the internet. Your
       | AI will run from your own machine, acting as a local sandbox and
        | a filter between outside and inside. The new firewall, or ad
        | blocker, will be your local AI model. But the tide is turning;
        | privacy is having its moment again. With generative text and image
       | models you can cut the cord completely and be free, browsing an
       | AI-internet made just for you.
        
         | GTP wrote:
         | It's true that there are open-source models that one can run
         | locally, but the problem is also how many people are going to
         | do that. You can make the instructions inside a GitHub README
         | as clear and straightforward as you want, but I think that for
         | the majority of people, nothing will beat the convenience of
          | whatever big corporation's web application. For many, the very
          | fact that a product is made by a famous company is a reason to
          | trust it more.
        
           | digging wrote:
           | This gets missed in a lot of conversations about privacy
           | (because most conversations about privacy are among pretty
           | technical people). The vast majority of people have no idea
           | what it means to set up your own local model, and of those
           | that do, fewer still can/will actually do it.
           | 
           | Saying that there's open-source models so AI privacy is not
           | an issue is like saying that Google's not a privacy problem
           | because self-hosted email exists.
        
             | visarga wrote:
             | Private LLMs are really not more complicated than
             | installing an app. But I expect all web browsers and
             | operating systems will sport a local model in the near
             | future, so it will be available out of the box. As for
             | adoption, it's the easiest interface ever invented.
        
               | digging wrote:
               | > Private LLMs are really not more complicated than
               | installing an app.
               | 
               | Most people install apps through an app store.
        
               | nuancebydefault wrote:
                | You need a well-performing, always-on PC, at least.
               | Preferably it needs to be securely accessible from a
               | mobile device. Less than 1 percent of all people have
               | that.
        
               | GTP wrote:
               | > But I expect all web browsers and operating systems
               | will sport a local model in the near future
               | 
               | Yes, and maybe with some kind of "telemetry" to help the
               | developers or other users ;)
        
         | Sol- wrote:
         | People (in any relevant number) don't run their own e-mail
         | servers, they don't join Mastodon and they also don't use
         | Linux. All prior knowledge about how these things have worked
         | out historically should bias us very heavily against the idea
         | of local, privacy-preserving LLMs becoming a thing outside of
         | nerd circles.
         | 
         | Of course small LLMs will still be commercialized, similar to
         | how many startups, apps and other internet offerings now run on
         | formally open source frameworks or libraries, but this means
         | nothing for the consumer and how likely they are to run into
         | predatory and dark AI patterns.
        
           | boplicity wrote:
           | > People (in any relevant number) don't run their own e-mail
           | servers
           | 
           | I have to strongly disagree -- most people _get_ emails from
            | others running their own email services. This doesn't mean
            | the average consumer is running an email server. However, if
            | email was just served by mega-corporations, well, _it really
            | wouldn't be email anymore._
           | 
           | And I'm not talking about spam. Legitimate email people
            | _want_ to get is constantly being sent by small, independent
           | organizations with their own email servers.
           | 
           | One can hope for a similar possible future with LLMs.
           | Consumers won't necessarily be the ones in charge of
           | operating them -- but it would be very, very good if LLMs
           | were able to be easily operated by small, independent
           | organizations, hobbyists, etc.
        
             | Taek wrote:
             | What market share? What data do you have to back these
             | claims up?
        
         | aaroninsf wrote:
          | You misconstrue Schneier's point, which is, sadly, correct.
         | 
         | The issue is not "all AI will be controlled by...," it is
         | "meaningfully scaled and applied AI will be _deployed_ by... "
         | 
         | You can today use Blender and other OSS to render exquisite 4K
         | or higher projection-ready animation, etc etc.; but that does
         | not give you access to distribution or marketing or any of the
         | other consolidated multi-modal resources of Disney.
         | 
          | The analogy is weak, however, inasmuch as the "synergies" in
         | Schneier's assertion are much, much stronger. We already _have_
         | ubiquitous surveillance. We already _have_ stochastic mind
         | control (sentiment steering, if you prefer) coupled to it.
         | 
         | What ML/AI and LLM do for an existing oligopoly is render its
         | advantages largely unassailable. Whatever advances come in
          | automated reasoning at _large scale_ will naturally,
         | inevitably, indeed necessarily (per fiduciary requirements wrt
         | shareholder interest), be exerted to secure and grow monopoly
         | powers.
         | 
         | In the model of contemporary American capitalism, that
         | translates _directly_ into  "enhancing and consolidating
         | regulatory capture," i.e. de facto "control" of governance via
         | determination of public discourse and electability.
         | 
          | None of this is conspiracy theory; it's not just open-book but
          | crowed about and championed, not least in insider circles
          | discussing AI and its applications, such as those that gather
          | here. It's just not the public face of AI.
         | 
          | There is, however, going to be a period of potential for
          | black-swan disequilibrium; private application of AI may give
          | early movers an advantage in domains that may destabilize the
          | existing power landscape. Which isn't so much an argument
          | against Schneier as an extension of the risk surface.
        
       | avgcorrection wrote:
       | These articles from Schneier are so incredibly NYT-reader
       | pedestrian.
       | 
        | > We trust many thousands of times a day. Society can't function
        | > without it.
       | 
       | The obvious problem with this is that you sometimes just have to
       | "trust" something because there is no other alternative. And he
       | comes to the same conclusion some hundred words later:
       | 
        | > There is something we haven't discussed when it comes to trust:
        | > power. Sometimes we have no choice but to trust someone or
        | > something because they are powerful.
       | 
       | So sometimes you are trusting the waiter and sometimes you are
       | trusting the corrupt Mexican law enforcer... so what use does
       | this freaking concept (as described here) have?
       | 
       | Funnily enough I have found that the concept of Trust is useful
       | to remind process nerds that obsess over minutiae and rules and
        | details that, _umm actually_, we do in fact _in reality_ get by
       | with a lot of implicit and unstated rules; we don't need to
       | formalize goddamn everything. But for some reason he goes in the
       | opposite direction and argues that _umm actually_ this "larger"
       | trust is completely defined by explicit rules and regulations.
       | _sigh_
       | 
        | > And we do it all the time. With governments. With
        | > organizations. With systems of all kinds. And especially with
        | > corporations. We might think of them as friends, when they are
        | > actually services. Corporations are not moral; they are
        | > precisely as immoral as the law and their reputations let them
        | > get away with.
       | 
       | Did I need Schneier to tell me that corporations are not my
       | friend? For some reason I am at a loss as to why I would need to
       | be told this. Or why anyone would.
       | 
        | > You will default to thinking of it as a friend. You will speak
        | > to it in natural language, and it will respond in kind.
       | 
       | At this point you realize that this whole article is a
       | _hypothetical_ about your own very personal future relationships
       | (because he thinks it is personal) with AI. And you're being
       | lectured about how your own social psyche works by a computer
       | nerd security expert. Ok?
       | 
        | > It's government that provides the underlying mechanisms for the
        | > social trust essential to society. Think about contract law. Or
        | > laws about property, or laws protecting your personal safety.
        | > Or any of the health and safety codes that let you board a
        | > plane, eat at a restaurant, or buy a pharmaceutical without
        | > worry.
       | 
       | Talk about category error? Or rather being confused about
       | causation?
       | 
       | "Think about laws about property." Indeed, crucial and
       | fundamental to capitalist states--and a few paragraphs ago he
       | compared capitalism to the "paper-clip machine" (and rightly so).
       | 
       | The more I think about it, the more this Trust chant feels like
       | left-liberal authoritarianism. Look at his own definition up to
       | this point. You acquiesce to power? Well that's still trust! (You
       | click Accept to the terms-of-use every time because you know that
       | there is no alternative? Trust!)
       | 
       | As long as there aren't riots in the street, we the People Trust.
        | Lead us into the future, Mitch McConnell.
       | 
       | Look, I get it: he's making a normative statement, not
       | necessarily a descriptive one. This speech is basically a
       | monologue that he as a benevolent technocrat is either going to
       | present to elected representatives in whatever forum that Harvard
       | graduates etc. speak officially to people with power because they
       | are some kind of subject matter expert. He's saying that if he
        | bangs his hand on the table enough times then hopefully the
       | representatives will enact some specific policies that regulate
       | The AI.
       | 
       | But it misrepresents both kinds of societies:
       | 
       | - The US has low trust in the federal government. And this isn't
       | unfounded; it isn't some Alex Jones "conspiracy theory" that the
       | government is not "we the people"
       | 
       | - Societies with a high trust in the government: I live in one.
       | But the Trust chanters just _love_ to boldly _assert_ that
       | societies like mine (however they are) are such and such _because
       | they have high trust_. No! Technocrats would just love for it to
       | be that case that prosperity and a well-functioning society is
       | born of "trusting" the smart people in charge, staying in your
       | lane, and letting them run the show. But people trust the
       | government because of a whole century-long history of things like
       | labor organizing and democratization. See, Trust is _not_ created
       | by someone top-down chanting that Corporations are your friends,
       | or that we just need to let the Government regulate more; Trust
       | is created by the whole of society.
        
         | beepbooptheory wrote:
         | Is it possible you might have forgotten to actually state the
         | critique here, or is this maybe just pure libertarian ire? I am
         | all for efforts to de-naturalize our conceptions of property
         | relations, assert the primacy of collective solidarity, etc.
          | but this feels like the wrong particular battle to pick. To
          | affirm that the meager institutional trust one might have with
          | their state is something hard won by collective action does not
          | feel like a damning indictment of what he is saying, but
          | perhaps I am misunderstanding?
        
           | avgcorrection wrote:
           | > Is it possible you might have forgotten to actually state
           | the critique here, or is this maybe just pure libertarian
           | ire?
           | 
           | You have clearly gleaned some of the critique that I was
           | trying to go for. So you can spare me the "actually".
           | 
           | I am not terribly impressed by left-liberal technocrats.
            | That's the pet-peeve seedling whence my whole screed grew.
        
         | dredmorbius wrote:
         | NB: Your comment is extraordinarily difficult for me to read
         | with the quote style you've used. I prefer surrounding quoted
         | text in *asterisks* which _italicises_ the quotes. Reformatting
         | your comment:
         | 
         | ===============================================================
         | =================
         | 
         | These articles from Schneier are so incredibly NYT-reader
         | pedestrian.
         | 
         |  _We trust many thousands of times a day. Society can't
         | function without it._
         | 
         | The obvious problem with this is that you sometimes just have
         | to "trust" something because there is no other alternative. And
         | he comes to the same conclusion some hundred words later:
         | 
         |  _There is something we haven't discussed when it comes to
         | trust: power. Sometimes we have no choice but to trust someone
         | or something because they are powerful._
         | 
         | So sometimes you are trusting the waiter and sometimes you are
         | trusting the corrupt Mexican law enforcer... so what use does
         | this freaking concept (as described here) have?
         | 
         | Funnily enough I have found that the concept of Trust is useful
         | to remind process nerds that obsess over minutiae and rules and
         | details that, umm actually, we do in fact in reality get by
         | with a lot of implicit and unstated rules; we don't need to
         | formalize goddamn everything. But for some reason he goes in
         | the opposite direction and argues that umm actually this
         | "larger" trust is completely defined by explicit rules and
         | regulations. sigh
         | 
         |  _And we do it all the time. With governments. With
         | organizations. With systems of all kinds. And especially with
         | corporations. We might think of them as friends, when they are
         | actually services. Corporations are not moral; they are
         | precisely as immoral as the law and their reputations let them
         | get away with._
         | 
         | Did I need Schneier to tell me that corporations are not my
         | friend? For some reason I am at a loss as to why I would need
         | to be told this. Or why anyone would.
         | 
         |  _You will default to thinking of it as a friend. You will
         | speak to it in natural language, and it will respond in kind._
         | 
         | At this point you realize that this whole article is a
         | hypothetical about your own very personal future relationships
         | (because he thinks it is personal) with AI. And you're being
         | lectured about how your own social psyche works by a computer
         | nerd security expert. Ok?
         | 
         |  _It's government that provides the underlying mechanisms for
         | the social trust essential to society. Think about contract
         | law. Or laws about property, or laws protecting your personal
         | safety. Or any of the health and safety codes that let you
         | board a plane, eat at a restaurant, or buy a pharmaceutical
         | without worry._
         | 
         | Talk about category error? Or rather being confused about
         | causation?
         | 
         | "Think about laws about property." Indeed, crucial and
         | fundamental to capitalist states--and a few paragraphs ago he
         | compared capitalism to the "paper-clip machine" (and rightly
         | so).
         | 
         | The more I think about it, the more this Trust chant feels like
         | left-liberal authoritarianism. Look at his own definition up to
         | this point. You acquiesce to power? Well that's still trust!
         | (You click Accept to the terms-of-use every time because you
         | know that there is no alternative? Trust!)
         | 
         | As long as there aren't riots in the street, we the People
          | Trust. Lead us into the future, Mitch McConnell.
         | 
         | Look, I get it: he's making a normative statement, not
         | necessarily a descriptive one. This speech is basically a
         | monologue that he as a benevolent technocrat is either going to
         | present to elected representatives in whatever forum that
         | Harvard graduates etc. speak officially to people with power
         | because they are some kind of subject matter expert. He's
          | saying that if he bangs his hand on the table enough times then
         | hopefully the representatives will enact some specific policies
         | that regulate The AI.
         | 
         | But it misrepresents both kinds of societies:
         | 
         | - The US has low trust in the federal government. And this
         | isn't unfounded; it isn't some Alex Jones "conspiracy theory"
         | that the government is not "we the people"
         | 
         | - Societies with a high trust in the government: I live in one.
         | But the Trust chanters just love to boldly assert that
         | societies like mine (however they are) are such and such
         | because they have high trust. No! Technocrats would just love
         | for it to be that case that prosperity and a well-functioning
         | society is born of "trusting" the smart people in charge,
         | staying in your lane, and letting them run the show. But people
         | trust the government because of a whole century-long history of
         | things like labor organizing and democratization. See, Trust is
         | not created by someone top-down chanting that Corporations are
         | your friends, or that we just need to let the Government
         | regulate more; Trust is created by the whole of society.
        
           | avgcorrection wrote:
           | Thank you:)
        
       | nritchie wrote:
        | I first noticed it with Rush Limbaugh in the '90s. The message
       | was "you can't trust X, you can only trust the invisible hand of
       | the market." Systematically, these (mostly right-wing) talking
       | heads have worked through an extensive list of Xes - Government,
       | police, school, courts, vaccines, ... Today, you have people who
       | trust so little that they want to blow the whole system up.
       | 
       | Schneier is right. Social trust is the central issue of the
        | times. We need to figure out how to rebuild it. IMHO, one place
        | to start might be getting money out of politics. It was a choice
        | to call showering money on politicians "free speech" instead of
        | corruption.
        
         | AnthonyMouse wrote:
         | > The message was "you can't trust X, you can only trust the
         | invisible hand of the market." Systematically, these (mostly
         | right-wing) talking heads have worked through an extensive list
         | of Xes - Government, police, school, courts, vaccines
         | 
         | This is not really a partisan thing. "You can't trust the
         | police" is hardly right-wing. Anti-vax started in the anti-GMO
         | anti-corporate homeopathic organic food subculture. Most of the
         | recent criticism of the courts has been from the left (e.g.
         | outrage over Dobbs).
         | 
         | > IHMO, one place to start might be getting money out of
         | politics. It was a choice to call showering money on
         | politicians, "free speech" instead of corruption.
         | 
         | There is no getting money out of politics. Money is power and
         | power is politics. Notice that most of the criticism of "money
         | in politics" comes from the media, because money is a source of
         | power that competes with media ownership for political
         | influence. "All organizations can influence politics" is not
         | actually worse than "only Fox News, Comcast and Google can
         | influence politics."
         | 
         | What we're really suffering from is an erosion of checks and
         | balances. Capturing enough of the government to enact self-
         | serving legislation was meant to be hard, but we kept chipping
          | away at it because people wanted to pass _their_ new law, without
         | considering that the systems preventing it were doing something
         | important.
         | 
         | If you want to restore trust in institutions, you need to
         | structurally constrain those institutions from being crooked.
        
           | BoiledCabbage wrote:
            | I don't think your first statement is supported. Right
           | now one party wants to:
           | 
            | Remove the SEC and the FBI, and defund the IRS and the DOJ.
            | And a member of Congress is refusing to appoint any leaders
            | of our military. Ostensibly this is due to a disagreement
            | over abortion, but many believe it is retaliation for the
            | military refusing to support an attempted coup a few years
            | ago. Also, there is the discrediting of all trust in voting
            | and democracy itself after losing a national election (even
            | though the administration's own Attorney General could find
            | no meaningful evidence supporting it following his own
            | nationwide investigation). And then there is the meme of the
            | deep state, namely shorthand that anyone who is a civil
            | servant and not a political appointee is not to be trusted.
            | And I'll stop there as anything else starts to get pretty
            | political.
           | 
            | Are there items where the left is against trust in society?
            | Absolutely. You called out a few (specifically the police),
            | and the makeup of the courts. You also left out forms of de
            | facto power the left typically opposes, like board rooms and
            | lending (vs. the de jure power listed above).
           | 
           | But to say that both sides/parties are against trust in
           | American society in my opinion is either a false argument, or
           | tunnel vision in news sources.
           | 
            | There is still no comparison between the two in the level of
            | distrust in all aspects of society the right has been
            | consistently seeding into broader society.
        
       | cousin_it wrote:
       | I like how he talks about advertising as "surveillance and
       | manipulation". Because that's what it is. People talk about ads
       | providing a socially important service of product discovery, but
       | there's nothing in the ad stack that optimizes for that. Instead,
       | every layer of the stack optimizes for selling you as much stuff
       | as possible.
       | 
       | So maybe our attempts at regulation shouldn't even start with AI.
       | We should start by regulating the surveillance and manipulation
       | that already exists. And if we as a society can't manage even
       | that, then our chances of successfully regulating AI are pretty
       | slim.
        
       | hosh wrote:
       | Related to this is Promise Theory -
       | https://en.wikipedia.org/wiki/Promise_theory
       | 
        | Promise Theory goes well beyond trust, by first understanding
        | that promises are not obligations (promises in Promise Theory are
        | intentions made known to an audience, and carry no implied
        | notions of compliance), and that it is easier to reason about
        | systems of autonomous agents that can determine their own trust
        | _locally_, with the limited information that they have. Promise
        | theory applies to any set of autonomous agents, both machines and
        | humans.
       | 
       | Back when this was formulated, we didn't have machines capable of
       | being fully autonomous. In promise theory, "autonomous agent" had
       | a specific definition, that is, something that is capable of
        | making its own promises as well as determining for itself the
        | trustworthiness of promises from other agents. How such an agent
       | goes about this is private, and not known to any other agent.
       | Machines were actually proxies for human agents, with the
       | engineers, designers, and owners making the promises to other
       | humans.
       | 
        | An AI that is fully capable of being an autonomous agent under
        | the definition of promise theory would be making its own
        | intentions known, and not necessarily as a proxy for humans. Yet,
        | because we live in a world of obligations and promises, and we
        | haven't wrapped our heads around the idea of fully autonomous
        | AIs, we will still try to understand issues of trust and safety
        | in terms of the promises humans make to each other. Sometimes
        | that means not being clear about whether something is meant as a
        | proxy.
       | 
       | For example, in Schneier's essay: "... we need trustworthy AI. AI
       | whose behavior, limitations, and training are understood. AI
       | whose biases are understood, and corrected for. AI whose goals
       | are understood. That won't secretly betray your trust to someone
       | else."
       | 
       | Analyzing that with Promise Theory reveals some insights. He is
       | suggesting AIs are created in a way that should be understood,
        | but that would mean they are proxies for human agents, not
        | autonomous agents capable of making promises in their own right.
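        | 
        | As a toy sketch of that distinction (my own illustration, not
        | part of the formal theory; the names are made up): a promise is
        | an intention made known to an audience, and each agent scores
        | trust privately from its own local observations:
        | 
        |     from dataclasses import dataclass, field
        | 
        |     @dataclass
        |     class Promise:
        |         promiser: str        # agent making its intention known
        |         body: str            # intended behavior, no obligation
        |         audience: frozenset  # who the intention is made known to
        | 
        |     @dataclass
        |     class Agent:
        |         name: str
        |         # Private history of (promiser, kept?) observations; how
        |         # this agent assesses trust is not visible to any other.
        |         _seen: list = field(default_factory=list)
        | 
        |         def observe(self, p: Promise, kept: bool):
        |             self._seen.append((p.promiser, kept))
        | 
        |         def trust(self, other: str) -> float:
        |             # Determined locally: the fraction of 'other's
        |             # promises this agent has seen kept. Another agent
        |             # may score the same promiser differently.
        |             kept = [k for who, k in self._seen if who == other]
        |             return sum(kept) / len(kept) if kept else 0.5
        | 
        | A machine acting as a proxy would relay its owner's promises; a
        | fully autonomous agent would be the promiser in its own right.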
        
         | zoogeny wrote:
         | Tangentially, I am reminded of the essay "The Problem with
         | Music" by Steve Albini [1]. There is a narrative in this essay
         | about A&R scouts and how they make promises to bands. The whole
         | point is that the A&R scout comes across as trustworthy because
         | they believe what they are promising. However, once the deal is
         | signed and money starts rolling in, the A&R guy disappears and
         | a whole new set of people show up. They claim to have no
         | knowledge of any promises made and point to the contracts that
         | were signed.
         | 
         | In some sense, corporations are already a kind of entity that
         | has the properties we are considering. They are proxies for
         | human agents. What is interesting is that they are a many-to-
         | one kind of abstraction where they are proxying many human
         | agents.
         | 
         | It is curious to consider that most powerful AIs will be under
         | the control of corporations (for the foreseeable future). That
         | means they will be proxying multiple human agents. In some
         | sense, AI will become the most effective A&R man that ever was
         | for every conceivable contract.
         | 
         | 1. https://thebaffler.com/salvos/the-problem-with-music
        
       | badloginagain wrote:
       | Arguments:
       | 
       | 1. The danger of AI is they confuse humans to trust them as
       | friends instead of as services.
       | 
       | 2. The corporations running those services are incentivized to
       | capitalize on that confusion.
       | 
       | 3. Government is obligated to regulate the corporations running
       | AI services, not necessarily AI itself.
       | 
       | ---
       | 
        | As a counter, you could reframe point 2 as:
       | 
       | The corporations running those services are incentivized to
       | make/host competitive AI products.
       | 
       | ---
       | 
        | This is from Ben Thompson's take on Stratechery: (
       | https://stratechery.com/2023/openais-misalignment-and-micros... )
       | 
       | 1. Capital cost of AI only feasible by FAANG level players.
       | 
        | 2. For Microsoft et al., "winning" means being the de facto host
       | for AI products- own the marketplace AI services are run on.
       | 
       | 3. Humans are only going to provide monthly recurring revenue to
       | products that provide value.
       | 
       | ---
       | 
       | Jippity is not my friend, it's a tool I use to do knowledge work
       | faster. Google Photos isn't trying to trick me, it's providing a
       | magic eraser so I keep buying Pixel phones.
       | 
       | High inference cost means MSFT charges a high tax through Azure.
       | 
       | That high cost means services running AI inference are going to
       | require a ton of revenue in a highly competitive market.
       | 
       | Value-add services will outcompete scams/low-value services.
        
       | ganzuul wrote:
        | Accept, don't expect. Trusting someone is different from
        | predicting someone. You can, for example, "trust" a corporation
        | to hunt for profit, or a criminal to be antisocial.
       | 
       | When you really trust someone rather than simply predict someone
       | there is a special characteristic to what you are doing. Trust is
       | both stricter and looser than prediction. You can trust someone
       | to eventually learn from their mistakes but you can also trust
       | that although they will make mistakes they won't drag you down
       | with them.
       | 
       | Most of the people you consider friends are predictable and
       | therefore safe. A friend who goes off script is quickly no longer
       | safe and therefore no longer a friend. Family is not inherently
       | safe but your trust model evolves as you share your trials.
       | 
       | Don't expect things from people you want to get closer to, but
       | accept them when they defy your expectations.
        
       | severino32 wrote:
       | Hybrid blockchains could be a solution to make AI more
       | transparent.
       | 
        | Traent has shown how to run entire ML pipelines on a blockchain
       | 
       | https://traent.com/blog/innovation/traent-blockchain-ai/
        
       | shekhar101 wrote:
        | Let me put a counterpoint to the opening remark. We do not
        | (just) inherently trust people to do the right thing. Law and
        | order, refined and (somewhat, however faultily) applied over the
        | course of decades, has programmed us not to do the wrong thing
        | (vs. doing the right/ethical thing, which cannot always be coded
        | into the laws). Law and order needs to catch up to the
        | advancement in technology, and specifically in AI, for us to be
        | able to trust all the models that will be running our lives in
        | the near future.
        
       ___________________________________________________________________
       (page generated 2023-12-04 23:01 UTC)