[HN Gopher] AI and Mass Spying
       ___________________________________________________________________
        
       AI and Mass Spying
        
       Author : hendler
       Score  : 349 points
       Date   : 2023-12-05 14:09 UTC (8 hours ago)
        
 (HTM) web link (www.schneier.com)
 (TXT) w3m dump (www.schneier.com)
        
       | naveen99 wrote:
       | The limiting factor is usually, someone has to pay for the spying
       | and the punishment. A lot (most?) of troublemakers just aren't
       | worth the trouble of spying or punishing.
        
         | sonicanatidae wrote:
         | I suspect this will be the hoovering approach. Suck it all up,
         | then figure out what you care to act on.
         | 
         | The government is a lot of things, and none of them are subtle.
         | 
         | Source: The ironically named PATRIOT ACT and similar.
        
         | ilovetux wrote:
         | The problem is that if AI enables mass spying, then the costs
         | will no longer be prohibitive to target an individual because
         | the infrastructure could be built out once and then reused to
         | target individuals at scale with AI doing all of the
         | correlation behind the scenes.
        
           | ben_w wrote:
           | That would resolve the spying cost, but not the punishment
           | cost. My go-to example here is heroin: class A drug in the
           | UK, 7 years for possession and life for supply, so far as I
           | can see nobody has anything good to say about it, and it has
           | around three times as many users in the UK as the entire UK
           | prison population.
           | 
            | You could implement punishment for that specific crime, at
            | huge cost, but you can't expand that to all crimes. Well, I
           | suppose you could try feudalism mark 2 where most of the
           | population is determined to be criminal and therefore spends
            | their life "having to work off their debt to society", but
           | then you have to find out the hard way why we stopped doing
           | feudalism mark 1.
        
             | barrysteve wrote:
              | A computer-owned society doesn't really need jails. You can
             | deny 90% of services to a criminal and track and limit
             | their movement digitally.
             | 
             | We are already in the jail.
        
               | ben_w wrote:
               | I think prisoners are usually denied voting rights? Might
               | be wrong about that.
               | 
               | Certainly don't get many travel opportunities.
        
           | hruzgar wrote:
           | they most likely already do this and honestly it's really
           | really scary
        
         | kozikow wrote:
          | I don't think it's too far-fetched. To see where it's going,
          | look at the social credit system in China.
         | 
         | You say something wrong about a party, suddenly you can't board
         | a plane, take a mortgage, enter some buildings, ...
         | 
         | Your credit score would look at how compliant you are with
         | policies that can get increasingly nonsensical.
        
           | stillwithit wrote:
           | Humans existed under religious nonsense, and other forms of
           | nonsense (sure, sure legal racism and sexism up until the
           | last 30-40 years and obviously politically contrived social
           | norms means the "right people won" free market capitalism)
           | 
           | What's one more form of BS hallucination foisted upon the
           | meat based cassettes we exist as?
        
           | datadrivenangel wrote:
           | Same thing happens in the US. Post a tweet too critical of
           | the government, and you might get investigated and added to a
           | no-fly list. Background checks can reveal investigations, so
           | you may end up not getting a job because the government
           | didn't like your tweet...
        
         | iAMkenough wrote:
          | AI reduces the labor involved, lowering the barrier to
          | investing time or money.
         | 
         | Spying isn't just for troublemakers either. It's probably worth
         | the trouble to the vindictive ex-husband willing to install a
          | hidden microphone in his ex-wife's house in order to have
         | access to a written summary of any conversations related to
         | him.
        
         | nathanfig wrote:
         | Fining offenses is a great way to fund finding more offenses.
        
       | sambull wrote:
        | next time they try to root out whatever 'vermin' is defined, it will
       | be a quick natural language prompt trained on ingested data from
       | the last 2 decades to get that list of names and addresses /
       | networks. AI is going to make targeting groups with differing
       | ideologies dead simple. It will be used.
        
       | Spivak wrote:
       | This happened during the last big technological advancement --
       | search. Suddenly it became possible for a government to
        | theoretically sift through all of our communications, and people
        | online made constant reference to it by talking directly to
        | their "FBI agent."
       | 
       | But it was and still is a nothingburger and this will be the same
       | because it doesn't enable anything except "better search." We've
       | had comparable abilities for a decade now. Yes LLMs are better
       | but semantic search and NLP have been around a while and the
       | world didn't end.
       | 
       | All the examples of what an LLM could do are just querying
       | tracking databases. Uncovering organizational structure is just a
       | social graph, correlating purchases is just querying purchase
       | databases, listing license plates is just querying the camera
       | systems. You don't need an LLM for any of this.
        
         | theodric wrote:
         | It will eventually end, though, accompanied by the chatter of a
         | gaggle of naysayers chicken-littling the people trying to raise
         | the alarm. I'm delighted to be here to witness the death of
         | liberty and descent of the West into the throes of everything
         | it once claimed to represent the polar opposite of, and also
         | delighted to be old enough that I'll likely die before it
         | becomes Actual Big Brother levels of oppressive.
        
         | ryanackley wrote:
         | Search has become a mass surveillance tool for the government.
         | That is the article's point. If you think it's a nothingburger,
         | you aren't aware of how often a person's Google searches are
         | used to establish criminal intent in criminal trials. Also,
         | they can be used to bolster probable cause for search warrants
         | and arrests.
         | 
          | Also, check out geofence and reverse keyword warrants.
          | Essentially, the government can ask Google for the IPs of
          | people who searched for particular terms within a geographic
          | area.
         | 
         | Of course, don't commit crimes but this behavior by the
         | government raises the spectre of wrong search, wrong place,
         | wrong time. This is one of the article's points, it causes
         | people to self-censor and change their searches out of fear of
         | their curiosity being misconstrued as criminal intent.
        
           | Spivak wrote:
            | > a person's Google searches are used to establish criminal
            | intent in criminal trials.
            | 
            | > can ask google for the IP's of people who searched for
            | particular terms within a geographic area
           | 
           | These aren't mass surveillance. The threat of search is
           | government systems passively sifting through all information
           | in existence looking for "criminal activity" and then
           | throwing the book at you.
           | 
           | In both of these cases the government is asking Google to run
           | a SQL query against their database that wouldn't be aided by
           | an LLM or even the current crop of search engines.
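The kind of lookup being described - IPs matching a search term within an area - is, mechanically, an ordinary database query. A toy sketch with an invented schema and invented data (not any provider's actual system):

```python
import sqlite3

# Hypothetical schema standing in for a search provider's query logs.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE search_log (
        ip TEXT, query TEXT, lat REAL, lon REAL, ts TEXT
    )
""")
conn.executemany(
    "INSERT INTO search_log VALUES (?, ?, ?, ?, ?)",
    [
        ("203.0.113.5", "some term", 40.71, -74.00, "2023-12-01"),
        ("198.51.100.9", "other term", 40.72, -74.01, "2023-12-01"),
        ("203.0.113.7", "some term", 51.50, -0.12, "2023-12-01"),
    ],
)

# The "warrant": distinct IPs that searched a term inside a bounding box.
rows = conn.execute(
    """
    SELECT DISTINCT ip FROM search_log
    WHERE query LIKE ?
      AND lat BETWEEN 40.0 AND 41.0
      AND lon BETWEEN -75.0 AND -73.0
    """,
    ("%some term%",),
).fetchall()
print(rows)  # [('203.0.113.5',)] - term matched AND inside the box
```

The point of the sketch is that nothing here needs an LLM; it is a filter over data already collected.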
        
             | ryanackley wrote:
             | It is mass surveillance. It's just not being looked at by
             | anyone until you are targeted by the government. If you are
             | targeted, your entire life is within keystrokes of the
             | authorities. This is the same thing the article is saying.
             | 
             | The article is making the point that it's not feasible to
             | spy on every person to monitor them for wrongdoing
             | currently. It doesn't scale and it's not cost effective.
             | With AI that will change because it can be automated. The
             | AI can listen to voice, monitor video cameras, and read
             | text to discern a level of intent.
        
               | Spivak wrote:
               | > it's not feasible to spy on every person to monitor
               | them for wrongdoing currently
               | 
               | Sure it is! That's the whole point of search being the
               | previous big technical hurdle. YouTube monitors every
               | single video posted in real time for copyright
               | infringement. We've had the capability to do this kind of
               | monitoring for huge swaths of crimes for a decade and it
                | hasn't turned into anything. We could, for example, catch
                | every driver speeding in real time, all across the
                | country, but we don't.
               | 
               | Mass is the opposite of targeted surveillance. If you
               | need to be targeted and get a warrant to look at the data
               | then it's not mass. And AI isn't going to change the
               | system that prevents it right now which is the rules
               | governing our law enforcement bodies.
        
               | ryanackley wrote:
               | I get the impression you didn't bother reading the
               | article.
               | 
               | Your two examples are flawed and don't address what the
               | article is saying. The algorithm to check for copyright
              | violations is relatively simple and dumb. Speed cameras:
              | many countries do use speed cameras (e.g. Australia, UK).
               | The problem with speed cameras is that once you know
               | where they are, you simply slow down when approaching.
               | 
               | Again, mass vs. targeted surveillance is irrelevant now.
               | You've already been surveilled. It's just a matter of
               | getting access to the information.
        
       | HackerThemAll wrote:
       | Soon in the name of "security" you'll have your face scanned on
       | average every few minutes and it's going to be mandatory in many
       | aspects of our lives. That's the pathetic world IT has helped to
       | build.
        
         | brandall10 wrote:
         | Some of us have this already w/ our cell phones.
         | 
         | I know that's not what you mean, but in a way it may have
         | preconditioned society.
        
         | acuozzo wrote:
         | > That's the pathetic world IT has helped to build.
         | 
         | It's inevitable, I reckon, but it would have taken much longer
         | without F/OSS.
        
         | Taylor_OD wrote:
         | Ha. People have been scanning their finger print or face to
         | open their phone for years.
        
       | RandomLensman wrote:
       | Not sure why this sidesteps the potentially quite different legal
       | settings for spying and surveilling.
        
       | miyuru wrote:
       | The TV show "Person of Interest" portrayed this beautifully and
       | it came out 12 years ago.
       | 
       | Strange and scary how fast the world develops new technology.
        
         | cookiengineer wrote:
          | The amazing part is that so many large-scale cyber attacks
          | have happened in the meantime that were 1:1 fiction in the
          | series back then.
         | 
          | The Solarwinds incident, for example, used the identical
          | attack and deployment strategy as the Rylatech hack in the
          | series, from execution to even the parties involved. It's like
         | some foreign state leaders saw those episodes and said "yep
         | that's a good idea, let's do that".
        
         | forward1 wrote:
         | It's far worse still: films like Enemy of the State (1998)
         | actually inspired spy technology.
         | 
         | https://slate.com/technology/2019/06/enemy-of-the-state-wide...
        
         | rambambram wrote:
         | This TV show immediately captured me and has always been in the
         | back of my mind since. Then, it seemed like a future far far
         | away, but now you remind me of it... I think it's scarily close
         | already.
         | 
         | Around 2013 I came up with some hardware ideas about offline
          | computing and even contemplated naming some versions after the
         | characters in 'Person of Interest'.
         | 
          | I can really recommend this series: it has a good story and
          | good actors, and fits the zeitgeist very well.
         | 
         | edit: I also think it's time for me to get a malinois shepherd.
         | ;)
        
         | salawat wrote:
         | Hell, Stargate SG-1 had a few episodes that touched on the
         | absolute hell of a Federal Government that had access to
          | everything, or a computer system with RW access to people's
          | gray matter and its own unknown optimization function (a
          | shrinking environmental protection dome resulting in live
          | updates of people's consciousness on a societal scale to keep
          | them in the dark as to its happening).
        
       | jacobwilliamroy wrote:
       | A friend of mine was recently a witness for the FBI. He was
       | working in a small office in the middle of nowhere and happened
       | to have a very loud argument with the suspect. A few minutes
       | later he left the building and when he was about to start his
       | car, he got a call from an agent asking him if he wanted to be a
       | witness in the case they were working on.
        
         | jdthedisciple wrote:
          | This stopped too soon.
         | 
         | What happened next?
        
           | jacobwilliamroy wrote:
           | The suspect was allegedly embezzling covid relief money and
           | the argument was about things like "why are we using company
           | time to go to your house and install the new flat screen TV
           | you just bought?"
           | 
           | The moral of the story is that you should never steal money
           | from the U.S. government because that is one thing that they
           | will not tolerate and I do not know the limits of what they
           | will do in order to catch you.
           | 
           | Also the suspect was convicted (so they probably aren't a
           | suspect anymore) and last I heard was being flown to
           | Washington D.C. for sentencing. That person is probably in
           | some kind of prison now but I haven't been following the
           | story very closely.
        
         | sonicanatidae wrote:
         | Mine was walking into the client's site. This was many years
          | ago. They had Novell Server issues; that's how long ago this
          | was.
         | 
         | I walked in, cops everywhere. Man in a suit waves an FBI badge
         | at me and asks why I'm there. I explained the ongoing work and
         | he said, "Not today" and forced me off the premises.
         | 
         | The next day I was called back by the client to "rebuild their
         | network". When I got there, every single piece of hardware that
         | contained anything remotely like storage had been disassembled
         | and the drives imaged, then just left in pieces. lol
         | 
         | I spent that day rebuilding it all, did get the Novell server
         | working again.
         | 
         | A week later, they were closed forever and I believe the owner
         | and CFO got nailed for healthcare fraud.
         | 
          | I was asked to testify in a deposition. My stuff was pretty
          | basic: mostly what I knew about how they used the tech, what I
          | saw around there, and whether I saw any big red signs
          | declaring FRAUD COMMITTED HERE!
        
       | troupo wrote:
       | Will? It already has. China has had its surveillance for ages.
       | And it's been spreading in other countries, too. Example:
       | https://www.404media.co/fusus-ai-cameras-took-over-town-amer...
        
       | gumballindie wrote:
       | I beg to differ. The correct term is not "will" but "is".
        
       | yonaguska wrote:
        | It's already happening. See this DHS memo issued on August 8th -
       | page 3.
       | 
       | https://www.dhs.gov/sites/default/files/2023-09/23_0913_mgmt...
       | 
        | Fortunately the DHS has put together an expert team of non-
        | partisan, honest Americans to spearhead the effort to protect
        | our democracy. Thank you, James Clapper and John Brennan, for
        | stepping up to the task.
       | 
       | https://www.dhs.gov/news/2023/09/19/secretary-mayorkas-annou...
       | 
        | And just in time for election season in the US, AI is going to
        | be employed to fight disinformation - for our protection, of
        | course.
       | https://www.thedefensepost.com/2023/08/31/ussocom-ai-disinfo...
        
         | lp0_on_fire wrote:
         | That James Clapper and John Brennan continue to be lauded by
         | the media and their sycophants in government is one of the most
         | disappointing things to happen in my lifetime. Both should be
         | frog marched straight to prison along with their enablers.
        
           | whamlastxmas wrote:
           | Everything the media does is disappointing. It's all wildly
           | dishonest and damaging and done at the direction of a few
           | billionaires.
        
       | px43 wrote:
       | Never in the history of humanity has such powerful privacy tech
       | existed for anyone who wants to use it.
       | 
       | Using common off the shelf, open source, heavily audited tools,
       | it's trivial today, even for a non-technical 10 year old, to
       | create a new identity and collaborate with anyone anywhere in the
       | world. They can do research, get paid, make payments, and
       | contribute to private communities in such a way that no existing
       | surveillance infrastructure can positively link that identity to
       | their government identity. Every day privacy tech is improving
       | and adding new capabilities.
        
         | hackeman300 wrote:
         | Care to elaborate?
        
           | maxrecursion wrote:
            | That guy has clearly never been around 10 year olds, and
            | vastly overestimates their intelligence.
            | 
            | In fact, all evidence points to younger generations being
            | less tech savvy because they don't have to troubleshoot like
            | the older generations did. Everything works, and almost
            | nothing requires any technical configuration.
        
         | whelp_24 wrote:
          | Never before in history has it been necessary. It used to be
          | possible to travel like a hundred miles and disappear. Before
          | credit was ubiquitous, money was hard to trace, and before
          | that it was essentially untraceable. And cameras didn't use
          | to be everywhere, tracking faces for criminals and frequent
          | shoppers. I don't know what privacy technologies you are
          | talking about that are super effective, and I have been a bit
          | older than 10 for a while.
        
           | 127361 wrote:
           | Now here in the UK they are using people's passport photos
           | for facial recognition, at least to stop shoplifting. It
           | won't be long before this is expanded to other things due to
           | feature creep.
        
         | crazygringo wrote:
         | > _Never in the history of humanity has such powerful privacy
         | tech existed for anyone who wants to use it._
         | 
         | True.
         | 
          | > _it's trivial today, even for a non-technical 10 year old_
         | 
         | Not even close. It's difficult even for a technical 30 year
         | old.
         | 
          | You're talking about acquiring cash that has passed through
          | several people's hands without touching an ATM that recorded
          | its serial numbers. Using it to acquire Bitcoin from a
          | stranger. Making use of multiple VPNs, and making _zero_
          | mistakes where _any_ outgoing traffic from your computer can be
          | used to identify you -- browser fingerprinting, software
          | updates, analytics, MAC address. Which basically means a brand-
          | new computer you've purchased in cash somewhere without
          | cameras, that you use for nothing else -- or _maybe_ you could
          | get away with a VM, but are you _really_ sure its networking
          | isn't leaking _anything_ about your actual hardware? Receiving
          | Bitcoin, and then once again finding a stranger to convert that
          | back into cash.
         | 
         | That is a _lot_ of effort.
        
           | 127361 wrote:
           | Also stylometric analysis of your writing can be used to
           | identify you.
        
             | JohnFen wrote:
             | This is one thing AI really can help with: rewriting what
             | you wrote in order to make stylometric analysis worthless.
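For context on what such rewriting defeats: classic stylometry leans on distributions of function words, which a rewrite scrambles. A toy sketch with made-up snippets (a crude illustration, not a real attribution tool):

```python
from collections import Counter
import math

# A handful of common English function words, a classic stylometric feature.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is"]

def profile(text):
    # Relative frequency of each function word in the text.
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

author_a  = "the cat sat on the mat and the dog slept in the sun"
author_a2 = "the rain fell on the roof and the wind howled in the night"
author_b  = "run fast jump high win big"

sim_same = cosine(profile(author_a), profile(author_a2))
sim_diff = cosine(profile(author_a), profile(author_b))
print(sim_same > sim_diff)  # True: the same "author" aligns more closely
```

Real systems use far richer features (character n-grams, syntax), but the fingerprint idea is the same, which is why an LLM paraphrase disrupts it.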
        
               | willismichael wrote:
               | Feed your writing into AI so that it can rewrite it so
               | that AI can't identify you by your writing?
               | 
               | Sounds like a startup idea to me. When we're ready for
               | the evil phase, let's classify everybody by their inputs
               | to the system and then sell the results to the highest
               | bidder.
        
             | Der_Einzige wrote:
             | That's why I run everything I write through a random open
             | source LLM with random settings and a custom decoder /s
             | 
              | I'm kidding, but the reality is such techniques will fool
              | almost all stylometric analysis.
              | 
              | Also most actual stylometric analysts work for spooks or
              | are spooks.
        
         | yoyohello13 wrote:
         | I think "trivial" is a stretch.
        
         | zxt_tzx wrote:
         | > They can do research, get paid, make payments, and contribute
         | to private communities in such a way that no existing
         | surveillance infrastructure can positively link that identity
         | to their government identity.
         | 
         | I can't help but wonder if we live in the same universe. If
         | anything, in my part of the world, I am seeing powerful
         | surveillance tech going from the digital sphere and into the
         | physical sphere, often on the legal/moral basis that one has no
         | expectation of privacy in public spaces.
         | 
         | Would love for OP to elaborate and prove me wrong!
        
       | darklycan51 wrote:
       | Think about it this way.
       | 
        | Every service has access to the IPs you've used to log on; most
        | services require an email, a phone number, debit/credit cards,
        | and/or similar personal info. Link that with government
        | databases on addresses/real names/ISP customers and you can
        | basically get most people's accounts on virtually any service
        | they use.
       | 
        | We also have things such as the Patriot Act in effect; the
        | government could, if they wanted, run a system to do this
        | automatically, where every message is scanned by an AI that
        | catalogues it.
       | 
       | I have believed for some time now that we are extremely close to
       | a complete dystopia.
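The account-linking step described above is, mechanically, just a join on shared identifiers. A toy sketch, with invented records standing in for service databases and a government registry:

```python
# Toy records from two "services" and a "registry", keyed on a shared
# identifier (email). All data here is invented.
service_a = [
    {"user": "alice99", "email": "a@example.com"},
    {"user": "bob_x", "email": "b@example.com"},
]
service_b = [
    {"handle": "al1ce", "email": "a@example.com"},
]
registry = {"a@example.com": "Alice Smith"}

# Join: collapse accounts across services onto one real-world identity.
linked = {}
for rec in service_a + service_b:
    name = registry.get(rec["email"])
    if name:
        account = rec.get("user") or rec.get("handle")
        linked.setdefault(name, []).append(account)

print(linked)  # {'Alice Smith': ['alice99', 'al1ce']}
```

Anyone whose identifier appears in the registry has every pseudonymous account folded into one profile; the only work left is collecting the tables.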
        
       | Nilrem404 wrote:
       | _Surprised Pikachu Face_
       | 
       | Seriously nothing new or shocking about this piece. Spying is
       | spying. Surveillance is surveillance. If you've watched the news
       | at all in the past 2 decades, you know this is happening.
       | 
       | Anyone who assumes that any new technology isn't going to be used
       | to target the masses by increasingly massive and powerful
       | authoritarian regimes is woefully naive.
       | 
       | Another post stating what we all already know isn't helping or
       | fostering any meaningful conversation. It will just be rehashes.
       | Let me skip to the end here for you:
       | 
       | There is nothing we can do about it. Nothing will change for the
       | better.
       | 
       | Go make a coffee or tea
        
       | thesuperbigfrog wrote:
       | The new Google, Meta, Microsoft, etc. bots won't just crawl the
       | web or social networks--they will crawl specific topics and
       | people.
       | 
       | Lots of cultures have the concept of a "guardian angel" or
       | "ancestral spirits" that watch over the lives of their
       | descendants.
       | 
        | In the not-so-distant technofeudalist future you'll have a
       | "personal assistant bot" provided by a large corporation that
       | will "help" you by answering questions, gathering information,
       | and doing tasks that you give it. However, be forewarned that
       | your "personal assistant bot" is no guardian angel and only
       | serves you in ways that its corporate creator wants it to.
       | 
       | Its true job is to collect information about you, inform on you,
       | and give you curated and occasionally "sponsored" information
       | that high bidders want you to see. They serve their creators--not
       | you. Don't be fooled.
        
         | JohnFen wrote:
          | > In the not-so-distant technofeudalist future you'll have
         | [...]
         | 
         | I guarantee that I won't. That, at least, is a nightmare that I
         | can choose to avoid. I don't think I can avoid the other
         | dystopian things AI is promising to bring, but I can at least
         | avoid that one.
        
           | justinclift wrote:
            | Wonder if some kind of AI-agent thing(s) will become so
            | widely used by people that government services come to
            | assume you have them?
           | 
           | Like happened with mobile phones.
        
             | JohnFen wrote:
             | At least in my part of the US, it's not hard to do without
             | smartphones at all. Default assumptions are that you have
             | one, but you can still do everything you want to do if you
             | don't.
        
           | lurker_jMckQT99 wrote:
            | I guarantee that you will. That is a nightmare that you
            | cannot choose to avoid unless you are willing to sacrifice
            | your social life.
           | 
            | Remember how raising awareness about smartphones, always-on
            | microphones, and closed-source communication services/apps
            | worked? I do not.
           | 
            | I run an Android (Google-free) smartphone with a custom ROM
            | and only use free software apps on it.
           | 
            | How does it help when I am surrounded by people using these
            | kinds of technologies (privacy-violating ones)? It does not.
           | How will it help when everyone will have his/her personal
           | assistant (robot, drone, smart wearable, smart-thing,
           | whatever) and you (and I) won't? It will not.
           | 
           | None of my friends, family, colleagues (even the
           | security/privacy aware engineers) bother. Some of them
           | because they do not have the technical knowledge to do so,
           | most of them because they do not want to sacrifice any bit of
           | convenience/comfort (and maybe rightfully so, I am not
           | judging them - life is short, I do get that people do not
           | want to waste precious time maintaining arcane infra,
           | devices, config,... themselves).
           | 
           | I am a privacy and free software advocate and an engineer;
           | whenever I can (and when there is a tiny bit of will on their
            | side or when I have leverage), I try to get people off
            | surveillance/ad-backed companies' services.
           | 
            | It rarely works or lasts. Sometimes it does though, so it is
            | worth it (to me) to keep on trying.
           | 
            | It generally works or lasts when I have leverage: I manage
            | various sports teams and only share schedules etc. via
            | Signal; if family wants to get pictures from me, I will only
            | share the link (to my Nextcloud instance) or the photos
            | themselves via Signal, etc.
           | 
            | Sometimes it sticks with people because it's close enough to
            | whatsapp/messenger/whatever if most (or all) of their
            | contacts are there. But as soon as you have that one person
            | who will not or can not install Signal, alternative groups
            | get created on whatsapp/messenger/whatever.
           | 
           | Overcoming the network effect is tremendously hard to
           | borderline impossible.
           | 
            | Believing that you can escape it is a fallacy. That does not
            | mean it is not worth fighting for our rights, but believing
            | that you can escape it altogether (without becoming a
            | hermit) would be setting, I believe, an unachievable goal
            | (with all the psychological impact that it can/will have).
           | 
           | Edit: fixed typos
        
             | asdff wrote:
             | Think about it in terms of what is rational. If there were
             | serious costs to having your data leaked out like this
             | people would rationally have a bit more trepidation. On the
             | other hand, we are in the era where everyone by now has
             | probably been pwned a half dozen times or more, to no
             | effect usually on your real life. You might get disgusted
             | that instagram watches what you watch to serve you more of
             | that stuff and keep you on longer, other people love that
             | sort of content optimization, I literally hear them gloat
             | how their social media content feeds at this point have
             | been so perfectly honed to show them whatever hobbies or
             | sports they are interested in. Take a picture and it pushes
             | to 5 services and people love that. Having an app already
             | pull your contacts for you and match them up to existing
             | users is great in the eyes of most people.
             | 
             | You are right that on the one hand these things could be
             | used for really bad purposes, but they are pretty benign.
             | Now if you start going "well social media posts can
             | influence elections," sure, but so can TV, newspapers, the
             | radio, a banner hauled by a prop plane, whatever, not like
             | anything's changed. If anything it's a safer environment for
             | combating a slip to fascism now vs in the mid century when
             | there were like three channels on TV and a handful of radio
             | programs carefully regulated by the FCC and that's all the
             | free flow of info you have short of smuggling the printed
             | word like its the 1400s.
             | 
             | Given all of this, I can't really blame people for
             | accepting the game they didn't create for how it is and
             | gleaning convenience from it. Take smartphones out of the
             | equation, take the internet out, take out computers, and
             | our present dystopia is still functionally the same.
        
         | yterdy wrote:
         | That's just your phone.
        
           | thesuperbigfrog wrote:
           | >> That's just your phone.
           | 
           | That is how most people will interface with their "personal
           | assistant bot".
           | 
           | Don't be surprised if it listens to all your phone
           | conversations, reads all your text messages and email, and
           | curates all your contacts in order to "better help you".
           | 
           | When you login to your $LARGE_CORPORATION account on your
           | laptop or desktop computer, the same bot(s) will be there to
           | "help" and collect data in a similar manner.
        
             | passion__desire wrote:
             | It already does. I asked a friend about a medical condition
              | on WhatsApp. I started getting ads for quack remedies on
              | Instagram immediately.
        
               | pacifika wrote:
               | Your life insurance just went up.
        
         | otteromkram wrote:
         | This could be applied to any gadget with a "smart" prefix in
         | its name (eg - smartphone, smart TV, smart traffic signals)
         | today.
         | 
         | I wish people would stop believing that "smart" things are
         | _always_ better.
         | 
         | But, we're basically being trained for the future you
         | mentioned. Folks are getting more comfortable talking to their
         | handheld devices, relying on mapping apps for navigation (I'm
         | guilty), and writing AI query prompts.
        
         | kaibee wrote:
         | You're just describing TikTok/Youtube algorithm.
        
           | thesuperbigfrog wrote:
           | That's only a small piece of it.
        
         | tech_ken wrote:
         | Poetic as this is, I always feel like if we can imagine it then
         | it won't happen. The only constant is surprise, we can only
         | predict these types of developments accidentally
        
           | thesuperbigfrog wrote:
           | It's starting to happen now.
           | 
           | Here is one example: https://www.microsoft.com/en-
           | us/microsoft-copilot
           | 
           | "AI for everything you do"
           | 
           | "Work smarter, be more productive, boost creativity, and stay
           | connected to the people and things in your life with Copilot
           | --an AI companion that works everywhere you do and
           | intelligently adapts to your needs."
           | 
           | If Microsoft builds them, then Google, Apple, and Samsung
           | will too. How else will they stay competitive and relevant?
        
             | tech_ken wrote:
             | I mean by this definition I'd say it happened when they
             | introduced Siri or Hey Google. The creation of these tools
             | and their massive/universal adoption a la web-crawlers is
              | still a large gap though. Getting to the point where you
             | consider them as a dark "guardian angel" or "ancestral
             | spirit" goes even a step farther I think
        
               | thesuperbigfrog wrote:
               | >> The creation of these tools and their
               | massive/universal adoption a la web-crawlers is still a
               | large gap though.
               | 
               | It only takes a decade or so.
               | 
               | Consider people who are young children now in "first
               | world nations". They will have always had LLM-based tools
               | available and voice assistants you can ask natural
               | language questions.
               | 
               | It will likely follow the same adoption curves as
               | smartphones, only faster because of existing network
               | effects.
               | 
                | If you have a smartphone with a reasonably fast connection,
               | you have access to LLM tools. The next generations of
               | smartphones, tablets, laptops, and desktops will all have
               | LLM tools built-in.
        
         | notnullorvoid wrote:
         | Big companies like Google are already doing this without AI.
         | Will AI make the services more tempting? Yes, but there's also
         | a lot of headway in open source AI and search, which could
         | serve to topple people's reliance on big tech.
         | 
         | If everyone had a $500 device at home that served as their own
         | self hosted AI, then Google could cease to exist. That's a
         | future worth working towards.
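As a sketch of what "your own self-hosted AI" could look like in practice: the snippet below builds a query to a local model server. Everything here is an assumption for illustration, not a real deployment: it presumes a local server speaking the widely used OpenAI-style chat endpoint (which llama.cpp and Ollama both expose), and the URL and model name are placeholders.

```python
import json
import urllib.request

# Placeholder endpoint for a hypothetical local model server.
LOCAL_URL = "http://localhost:8080/v1/chat/completions"

def build_request(prompt: str, model: str = "local-model"):
    # The payload never has to leave localhost, so there is no third
    # party positioned to aggregate or resell what you ask it.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Summarize my notes")
print(req.full_url)  # http://localhost:8080/v1/chat/completions
```

The point of the sketch is the trust boundary, not the API: the same request shape sent to a cloud provider hands the prompt to someone else's logs.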
        
       | 127361 wrote:
       | Time to decentralize everything. I think we are already in the
       | early stages of this new trend. We can run AI locally and hard
       | drives are so large we can have a local copy of an entire
       | library, with millions of ebooks, in our own home now.
       | 
       | That is in addition to generating our own energy off grid (so no
       | smart meter data to monitor), thanks to the low cost of solar
       | panels as well.
       | 
       | Bye bye Big Brother.
        
         | JohnFen wrote:
         | I don't see how that leads to the reduction of the problem,
         | though. Governments and corporations will still use AI for the
         | things they want to use AI for.
        
         | jtbayly wrote:
         | Until you walk out your front door...
         | 
         | Or use the internet for anything...
        
         | jodrellblank wrote:
         | > " _That is in addition to generating our own energy off grid
         | (so no smart meter data to monitor), thanks to the low cost of
         | solar panels as well._ "
         | 
         | Terence Eden is in the UK:
         | https://shkspr.mobi/blog/2013/02/solar-update/
         | 
         | This says his house uses 13kWh/day and you can see from the
          | graph, by dividing the monthly amount by 31 days, that the
          | solar panels on the roof generate around 29 kWh/day during
          | summer and 2.25 kWh/day in winter. They would need five or
          | six roofs of solar panels to generate enough to be off-grid.
          | And that's not practical or low cost.
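The back-of-the-envelope sizing above can be checked directly. The figures (in kWh/day) are the ones quoted in the comment, and the calculation assumes roof output scales linearly with panel area:

```python
# Figures quoted above, all in kWh per day.
daily_use = 13.0      # household consumption
summer_gen = 29.0     # one roof of panels, summer
winter_gen = 2.25     # one roof of panels, winter

# Off-grid sizing without seasonal storage is driven by the worst
# season, not the average: winter output sets the panel count.
roofs_needed = daily_use / winter_gen
print(round(roofs_needed, 1))  # 5.8, i.e. "five or six roofs"
```

In summer a single roof overproduces (29 > 13), which is exactly why the winter figure dominates unless you add months of battery storage.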
        
            | pacifika wrote:
            | You "just" need a few sheds full of batteries
        
       | 1-6 wrote:
       | AI allows companies to skirt laws. For example, a company may be
       | forbidden from collecting information on individual people but
       | that rule doesn't apply for aggregated data.
       | 
       | AI can be a deployed 'agent' that does all the collection and
       | finally sends scrubbed info to its mothership.
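A minimal sketch of that collect-then-scrub pattern: observe individual events locally, but report only aggregates, suppressing groups small enough to identify a person. The function name, data shape, and group-size threshold are illustrative assumptions, not any real system.

```python
from collections import Counter

def scrub(events, min_group_size=5):
    """Return per-category counts only, dropping any category whose
    count is small enough to point at individuals (a crude
    k-anonymity-style cutoff)."""
    counts = Counter(e["category"] for e in events)
    return {cat: n for cat, n in counts.items() if n >= min_group_size}

# Seven people trigger one category; a single person triggers another.
events = [{"user": i, "category": "visited_clinic"} for i in range(7)]
events.append({"user": 99, "category": "rare_condition"})

print(scrub(events))  # {'visited_clinic': 7} -- the singleton is dropped
```

The legal sleight of hand the comment describes lives entirely in that last filter: the "mothership" only ever receives the aggregate, even though every individual record was observed on the way in.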
        
       | _Nat_ wrote:
       | Seems inevitable enough that we may have to accept it and try to
       | work within the context of (what we'd tend to think of today as)
       | mass-spying.
       | 
       | I mean, even if we pass laws to offer more protections, as
       | computation gets cheaper, it ought to become easier-and-easier
       | for anyone to start a mass-spying operation -- even by just
       | buying a bunch of cheap sensors and doing all of the work on
       | their personal-computer.
       | 
       | A decent near-term goal might be figuring out what sorts of
       | information we can't reasonably expect privacy on (because
       | someone's going to get it) and then ensuring that access to such
       | data is generally available. Because if the privacy's going to be
       | lost anyway, then may as well try to address the next concern,
       | i.e. disparities in data-access dividing society.
        
         | 127361 wrote:
         | Living off-grid is how I'm dealing with the whole situation
         | nowadays.
        
           | potsandpans wrote:
           | How's it working out for you? I have similar plans and have
           | most of the big pieces budgeted / ideated. But realistically
           | I'm still 1 to 2 years out.
        
           | floxy wrote:
           | Except for the posting on HN?
        
         | iainmerrick wrote:
         | _even by just buying a bunch of cheap sensors and doing all of
         | the work on your personal-computer._
         | 
          | The cynical response: _you_ won't be able to do that, because
         | buying that equipment will set off red flags. Only existing
         | users -- corporations and governments -- will be allowed to
         | play.
        
         | JohnFen wrote:
         | > we may have to accept it and try to work within the context
         | of (what we'd tend to think of today as) mass-spying.
         | 
         | We do have to live in the nightmare world we're building (and
         | as an industry, we have to live with ourselves for helping to
         | build it), but we don't have to accept it at all. It's worth
         | fighting all this tooth and nail.
        
       | jillesvangurp wrote:
       | Both were always going to be kind of inevitable as soon as the
       | technology would get there. Rather than debating how to stop this
       | (which is mostly futile and requires all of us to be nice, which
       | we just aren't), the more urgent debate is how to adapt to this
       | being the reality.
       | 
       | Related to this is the notion of ubiquitous surveillance. Where
       | basically anywhere you go, there is going to be active
       | surveillance literally everywhere and AIs filtering and digging
       | through that constantly. That's already the case in a lot of our
       | public spaces in densely populated areas. But imagine that just
       | being everywhere and virtually inescapable (barring Faraday
       | cages, tin foil hats, etc.).
       | 
       | The most feasible way to limit the downsides of that kind of
       | surveillance is a combination of legislation regulating this, and
        | counter-surveillance to ensure any would-be illegal surveillance
       | has a high chance of being observed and thus punished. You do
       | this by making the technology widely available but regulating its
       | use. People would still try to get around it but the price of
       | getting caught abusing the tech would be jail. And with
       | surveillance being inescapable, you'd never be certain nobody is
       | watching you misbehaving. The beauty of mass, multilateral
        | surveillance is that you could never be sure nobody is
        | watching you abuse your privileges.
       | 
       | Of course, the reality of states adopting this and monopolizing
       | this is already resulting in 1984 like scenarios in e.g. China,
       | North Korea, and elsewhere.
        
         | conductr wrote:
         | > Both were always going to be kind of inevitable as soon as
         | the technology would get there
         | 
         | This is my take on everything sci-fi or futuristic. Once a
         | human conceives something, its existence is essentially
         | guaranteed as soon as we figure out how to do it.
        
           | broscillator wrote:
           | Its demise is also inevitable, so it would be a matter of
           | being wise in figuring out how long it takes us to see/feel
           | the downsides, or how long until we (or it) build something
           | "better".
        
           | Der_Einzige wrote:
           | Yup. AI is the ultimate "life imitates art" technology.
           | That's what it is by definition!
        
         | Syonyk wrote:
         | > _...the more urgent debate is how to adapt to this being the
         | reality._
         | 
         | Start building more offline community. Building things that are
         | outside the reach of AI because they're in places you entirely
         | control, and start discouraging (or actively evicting...) cell
         | phones from those spaces. Don't build digital-first ways of
         | interacting.
        
           | asquabventured wrote:
           | This is the way.
        
           | pixl97 wrote:
           | Might work, might not. If someone keeps their cell phone
           | silenced in their pocket, unless you're strip searching you
           | won't know it's there. Does the customer have some app on it
           | listening to the environment and using some kind of voice
            | identification to figure out who's there? Do you have smart
            | TVs up on the walls at this place? Because, hell, they're
            | probably monitoring you too.
           | 
           | And that's only for cell phones. We are coming to the age
           | where there is no such thing as an inanimate object. Anything
           | could end up being a spying device feeding data back to some
           | corporation.
        
             | Syonyk wrote:
             | > _Does the customer have some app on it listening to the
             | environment and using some kind of voice identification to
              | figure out who's there._
             | 
             | This is no different from "So-and-so joined the group, but
             | is secretly an FBI informer!" sort of problems, in
             | practice. It's fairly low on my list of things to be
             | concerned about, but as offline groups grow and are then,
             | of course, talked about by a compliant media as "Your
             | neighbor's firepit nights could be plotting terrorist
             | activities because they don't have cell phones!" when
             | prompted, it's a thing to be aware of.
             | 
             | Though you don't need a strip search. A decent NLJD (non-
             | linear junction detector) or thermal imager should do it if
             | you cared.
             | 
             | I'm more interested in creating (re-creating?) the norms
             | where, when you're in a group of people interacting in
             | person, cell phones are off, out of earshot. It's possibly
             | a bit more paranoid than needed, but the path of consumer
             | tech is certainly in that direction, and even non-technical
             | people are creeped out by things like "I talked to a friend
             | about this, and now I'm seeing ads for it..." - it may be
             | just noticing it since you talked about it recently (buy a
             | green car, suddenly everyone drives green cars), or you may
             | be predictable in ways that the advertising companies have
             | figured out, but it's not a hard sell to get quite a few
             | people to believe that their phones are listening. And,
              | hell, I sure can't prove they _aren't_ listening.
             | 
             | > _Do you have smart TVs up on the walls at this place..._
             | 
             | I mean, I don't. But, yes, those are a concern too.
             | 
             | And, yes. Literally everything can be listening. It's quite
             | a concern, and I think the only sane solution, at this
             | point, is to reject just about all of that more and more.
             | Desktop computers without microphones, cell phones that can
             | be powered off, and flat out turning off wireless on a
             | regular basis (the papers on "identifying where and what
             | everyone is doing in a house by their impacts on a wifi
             | signal" remain disturbing reads).
             | 
             | I really don't have any answers. The past 30 years of tech
             | have led to a place I do not like, and I am not at all
             | comfortable with. But it's now the default way that a lot
             | of our society interacts, and it's going to be a hard sell
             | to change that. I just do what I can within my bounds, and
             | I've noticed that while I don't feel my position has
             | changed substantially in the past decade or so (if
             | anything, I've gotten further out of the center and over to
             | the slightly paranoid edge of the bell curve), it's a lot
             | more crowded where I stand, and there are certain issues
             | where I'm rather surprisingly in the center of the bell
             | curve as of late.
        
           | mindslight wrote:
           | > _Building things that are outside the reach of AI because
            | they're in places you entirely control_
           | 
           | This sounds great in principle, but I'd say "outside the
           | reach of AI" is a much higher bar than one would naively
           | think. You don't merely need to avoid its physical nervous
           | system (digital perception/control), but rather prevent _its
           | incentives_ leaking in from outside interaction. All the
           | while there is a strong attractor to just give in to the
           | "AI" because it's advantageous. Essentially regardless of how
           | you set up a space, humans themselves become agents of AI.
           | 
           | There are strong parallels between "AI" and centralizing
           | debt-fueled command-capitalism which we've been suffering for
           | several decades at least. And I haven't seen any shining
           | successes at constraining the power of the latter.
        
             | Syonyk wrote:
             | Oh, I'm aware it's a high bar. Like most people here, I've
             | worked my life in tech, and I'm in the deeper weeds of it.
             | 
             | But I don't see an alternative unless, as you note, one
             | just gives into the "flow" of the AI, app based, "social"
             | media, advertising and manipulation driven ecosystem that
             | is now the default.
             | 
             | I'm aware I'm proposing resisting exactly that, and that
             | it's an uphill battle, but the tradeoff is retaining your
             | own mind, your own ability to think, and to not be
             | "influenced" by a wide range of things chosen by other
             | people to cross your attention in very effective ways.
             | 
             | And I'm willing to work out some of what works in that
             | space, and to share it with others.
        
           | asdff wrote:
            | Good luck building things without leaving an AI-reachable
           | paper trail. You'd have to grow your own trees, mine your own
           | iron and coal, refine your own plastic from your own oil
           | field.
        
             | Syonyk wrote:
             | Sounds fun to me and my social group. We not-quite-joke
             | about the coming backyard refineries. I'm working on the
             | charcoal production at the moment (not a joke, I have some
             | small retorts in weekly production, though I'm mostly
             | aiming for biochar production instead of fuel charcoal
             | production).
             | 
             | Realistically, though, if all you have to work with are my
             | general flows of materials in and out, I'm a lot less
             | worried than if you have, say, details of home audio, my
             | social media postings, etc (nothing I say here is
             | inconsistent with my blog, which is quite public). And
             | there are many things I don't say in these environments.
        
         | stvltvs wrote:
         | I don't agree with the fatalistic attitude. At the very least
         | it makes it too easy to strengthen surveillance. Legal
         | restrictions can be imposed.
        
       | bonyt wrote:
       | I think another aspect of this is mass criminal law enforcement
       | enabled by AI.
       | 
       | Many of our criminal laws are written with the implicit
       | assumption that it takes resources to investigate and prosecute a
       | crime, and that this will limit the effective scope of the law.
       | Prosecutorial discretion.
       | 
       | Putting aside for the moment the (very serious) injustice that
       | comes with the inequitable use of prosecutorial discretion, let's
       | imagine a world without this discretion. Perhaps it's contrived,
       | but one could imagine AI making it at least possible. Even by the
       | book as it's currently written, is it a better world?
       | 
       | Suddenly, an AI monitoring public activity can trigger an AI
       | investigator to draft a warrant to be signed by an AI judge to
       | approve the warrant and draft an opinion. One could argue that
       | due process is had, and a record is available to the public
       | showing that there was in fact probable cause for further
       | investigation or even arrest.
       | 
        | Maybe a ticket just pops out of the wall like in _Demolition
        | Man_, but listing in writing clearly articulated probable cause and
       | well-presented evidence.
       | 
       | Investigating and prosecuting silly examples suddenly becomes
       | possible. A CCTV camera catches someone finding a $20 bill on the
        | street, and the system finds that they didn't report it on
        | their tax return.
       | The myriad of ways one can violate the CFAA. A passing mention of
       | music piracy on a subway train can become an investigation and
       | prosecution. Dilated pupils and a staggering gait could support a
       | drug investigation. Heck, jaywalking tickets given out as though
       | by speed camera. Who cares if the juice wasn't worth the squeeze
       | when it's a cheap AI doing the squeezing.
       | 
       | Is this a better world, or have we just all subjected ourselves
        | to a life hyper-analyzed by a motivated prosecutor?
       | 
       | Turning back in the general direction of reality, I'm aware that
       | arguing " _if we enforced all of our laws, it would be chaos_ "
       | is more an indictment of our criminal justice system than it is
       | of AI. I think that AI gives us a lens to imagine a world where
       | we actually do that, however. And maybe thinking about it will
       | help us build a better system.
        
         | yterdy wrote:
          | The software that already exists along these lines
          | exhibits bias against marginalized groups. I have no trouble
         | foreseeing a filter put on the end of the spigot that exempts
         | certain people from the inconvenience of such surveillance.
         | Might need a new law (it'll get passed).
        
           | whythre wrote:
           | Sounds like the devil is in the details. Often the AI seems
           | to struggle with darker skin... are you suggesting we sift
           | who can be monitored/prosecuted based on skin darkness? That
           | sounds like a mess to try to enshrine in law.
           | 
           | Strong (and unhealthy) biases already exist when using this
           | tech, but I am not sure that is the lever to pull that will
           | fix the problem.
        
         | otteromkram wrote:
         | If it increases ticket issuance for passenger vehicle noise
         | violations (eg - "sport" exhausts, booming stereo system,
         | motorcycles), I'm down.
        
           | namaria wrote:
           | "If it hurts people I hate I accept"
           | 
           | - Every endorsement of authoritarian rule ever
        
             | pmg102 wrote:
             | Feels pretty legit though. My freedom-from is impacted by
             | other people's freedom-to: by curtailing their freedom,
             | mine is expanded. Sure they won't like it - but I don't
             | like it the other way round either.
        
               | AlexandrB wrote:
               | This doesn't add up. _At best_ your overall freedom
               | remains the same. You gain quiet, you lose the freedom to
               | make noise yourself. Seems like a net-negative to me.
               | 
               | Consider how little freedom you would have if laws were
               | enforced to the lowest common denominator of what people
               | find acceptable.
        
               | anigbrowl wrote:
               | I can go into the countryside and make noise all day. I
               | don't see that there's a pre-existing freedom to inflict
               | loud noises on my neighbors for no useful purpose.
        
               | 0134340 wrote:
               | I'd argue that if we want to support individual growth
               | and creativity, freedom-to should have higher priority
                | than freedom-from, which, consciously or not, seems to
                | be the traditional default in the US, perhaps due to its
                | culture of supporting innovation and its break-away past.
                | I believe some refer to these as positive and negative
                | freedoms, respectively.
        
               | zdragnar wrote:
               | This is also why a number of people truly revolt against
               | the idea of higher density living. If the only way to
               | have your freedom-from is to be free from other people,
               | then you move away from other people.
               | 
               | I've watched it play out on my mother-in-law's street.
               | What was once a quiet dead end street is now a noisy,
               | heavily trafficked road because a large apartment
               | building was put up at the end.
               | 
                | The freedom-to people have significantly decreased her
                | quality of life, blasting music as they walk or drive
                | by at all hours, along with a litany of other
               | complaints that range from anti-social to outright
               | illegal behavior. Even setting aside the illegal stuff,
               | she is significantly less happy living where she is now.
        
             | okasaki wrote:
             | Effectively enforcing laws we agreed to is hardly
             | authoritarian.
        
               | pixl97 wrote:
               | You'd disagree about 10 seconds after they did...
               | 
               | If suddenly you could be effectively found and prosecuted
                | for every single law that existed, there is near a 100%
               | probability that you'd burn the government to the ground
               | in a week.
               | 
               | There are so many laws no one can even tell you how many
               | you are subject to at any given time at any given
               | location.
        
             | newscracker wrote:
             | Reminds me of this quote attributed to a past Peruvian
             | president and general, Benavides:
             | 
             | "For my friends, everything; for my enemies, the law."
        
             | anigbrowl wrote:
             | False equivalence. GP complained about a specific behavior,
             | not about specific people.
        
         | trinsic2 wrote:
         | Yea this is a good point. If justice is executed by systems,
         | rather than people (the end result from this scenario), we have
         | lost the ability to challenge the process or the people
         | involved in so many ways. It will make challenging how the law
         | is executed almost impossible because there will be no person
         | there to hold responsible.
        
           | bonyt wrote:
           | I think that's a good reason to question whether this would
           | be due process.
           | 
           | Why do we have due process? One key reason is that it gives
           | people the opportunity to be heard. One could argue that
           | being heard by an AI is no different from being heard by a
           | human, just more efficient.
           | 
           | But why do people want the opportunity to be heard? It's
           | partly the obvious, to have a chance to defend oneself
           | against unjust exercises of power, and of course against
           | simple error. But it's also so that one can feel heard and
           | not powerless. If the exercise of justice requires either
           | brutal force or broad consent, giving people the feeling of
           | being heard and able to defend themselves encourages broad
           | consent.
           | 
           | Being heard by an AI then has a brutal defect, it doesn't
           | make people feel heard. A big part of this may come from the
           | idea that an AI cannot be held accountable if it is wrong or
           | if it is acting unfairly.
           | 
           | Justice, then, becomes a force of nature. I think we like to
           | pretend justice is a force of nature anyway, but it's really
           | not. It's man-made.
        
             | zbyte64 wrote:
             | "it doesn't make people feel heard" isn't a real emotion,
             | it includes a judgement about the AI. According to
             | "Nonviolent Communication" p235; "unheard" speaks towards
             | the feelings "sad, hostile, frustrated" and the needs
             | "understanding" & "consideration". Everyone agrees AI would
             | be more efficient, but people are concerned that the AI
             | will not be able to make contextual considerations based on
             | a shared understanding of what it's like to live a human
             | life.
        
               | bonyt wrote:
               | That's true! I suspect it will be difficult to convince
               | people that an AI can, as you suggest, make contextual
               | considerations based on a shared understanding of what
               | it's like to live a human life.
        
             | fn-mote wrote:
             | > Being heard by an AI then has a brutal defect, it doesn't
             | make people feel heard.
             | 
             | This is a hypothesis.
             | 
             | I would say that the consumers of now-unsexed "AI" sex-
             | chat-bots (Replika) felt differently. So there are actually
             | people who feel heard talking to an AI. Who knows, if it
             | gets good enough maybe more of us would feel that way.
        
           | tempsy wrote:
           | It's not that "justice is executed by systems", it's that
           | possible crimes will be flagged by AI systems for humans to
           | then review.
           | 
           | eg AI will analyze stock trades for the SEC and surface
           | likely insider trading. Pretty sure they already use tools
           | like Palantir to do exactly this, it's just that advanced AI
           | will supercharge this even further.
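            | A toy sketch (my own illustration, not how the SEC or
            | Palantir actually work) of that kind of screening: flag
            | trades by registered insiders placed shortly before a large
            | price move, and leave the final call to a human reviewer:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Trade:
    trader: str
    ticker: str
    day: date
    is_insider: bool  # trader is a registered insider at the ticker's company

@dataclass
class PriceMove:
    ticker: str
    day: date
    pct_change: float  # one-day move, e.g. 0.18 = +18%

def flag_for_review(trades, moves, window_days=5, move_threshold=0.10):
    """Surface insider trades placed shortly before a large price move.

    This only flags candidates; a human investigator reviews each one.
    """
    flags = []
    for t in trades:
        if not t.is_insider:
            continue
        for m in moves:
            days_before = (m.day - t.day).days
            if (m.ticker == t.ticker
                    and abs(m.pct_change) >= move_threshold
                    and 0 < days_before <= window_days):
                flags.append((t, m))
    return flags

trades = [
    Trade("alice", "ACME", date(2023, 6, 1), is_insider=True),
    Trade("bob", "ACME", date(2023, 5, 1), is_insider=False),
]
moves = [PriceMove("ACME", date(2023, 6, 3), 0.18)]
for t, m in flag_for_review(trades, moves):
    print(t.trader, t.ticker, t.day, "->", f"{m.pct_change:+.0%} on {m.day}")
# alice ACME 2023-06-01 -> +18% on 2023-06-03
```

            | Real systems use far richer features, but the shape is the
            | same: cheap automated triage, expensive human judgement at
            | the end.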
        
             | pixl97 wrote:
             | >, it's that possible crimes will be flagged by AI systems
             | for humans to then review.
             | 
              | Eh, this is problematic for a number of reasons that need
              | to be addressed whenever you add a component that can
              | increase the workload for said humans. It will push people
              | toward shortcuts, and those shortcuts commonly mean that
              | groups less able to represent and defend themselves
              | legally take the brunt of the prosecutions.
        
         | lordnacho wrote:
         | This is a good point, it reminds me of how VAR has come into
         | football. Before VAR, there were fewer penalties awarded. Now
         | that referees have an official camera they can rely on, they
         | can enforce the rules exactly as written, and it changes the
         | game.
        
         | DebtDeflation wrote:
         | >Suddenly, an AI monitoring public activity can trigger an AI
         | investigator to draft a warrant to be signed by an AI judge to
         | approve the warrant and draft an opinion.
         | 
          | Or the AI just sends a text message to all the cops in the
          | area saying "this person has committed a crime". For example,
          | cameras read license plates, check whether the car is stolen,
          | and then text nearby cops. At least when it works and doesn't
          | flag innocent people, as in the video below:
         | 
         | https://www.youtube.com/watch?v=GUvZlEg8c8c
        
         | JCharante wrote:
         | I don't think it'd be chaos. I think the laws would be
         | adjusted.
        
           | bonyt wrote:
            | This is fair. I just wonder if we're getting to the point
            | where we should be talking about how they would be
            | adjusted.
        
           | digging wrote:
           | I think that's an optimistic view, but even if it's right, it
           | will be years-to-decades of semi-chaos before the laws are
           | updated appropriately.
        
         | okasaki wrote:
         | > Is this a better world,
         | 
            | If the same monitoring is present on buses and private
            | planes, homeless hostels and mega-mansions, then it
            | absolutely is better.
        
           | stronglikedan wrote:
           | Private property? Nah, nothing better about that.
        
           | AlexandrB wrote:
           | You're describing a hypothetical world that will never exist.
            | Basically _if_ we solve all corruption and inequality in
            | enforcement between economic/power classes - all-pervasive
            | surveillance will be a net benefit.
           | 
           | It's like pondering hypotheticals about what would happen if
           | we lived in Middle Earth.
        
           | bonyt wrote:
           | "The law, in its majestic equality, forbids rich and poor
           | alike to sleep under bridges, to beg in the streets, and to
           | steal their bread."
        
             | okasaki wrote:
             | I mean, presumably the AI wouldn't just be monitoring
             | people sleeping under bridges, but would also be able to
             | effectively cut through tax evasion bullshit, insider
             | trading, bribery, etc.
        
         | kafrofrite wrote:
         | IIRC, in [1] it mentioned a few examples of AI that exhibited
         | the same bias that is currently present in the judicial system,
         | banks etc.
         | 
         | [1] https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction
        
           | dorchadas wrote:
            | This is honestly what scares me the most. Our biases are
            | _built in_ to AI, but we pretend they're not. People will
            | say "Well, it was the algorithm/AI, so we can't change it".
            | Which is just awful and should scare the shit out of
            | _everyone_. There was a book [0] written almost _fifty years
            | ago_ that predicted this. I still haven't read it, but
            | really need to. The author claims it made him a pariah among
            | other AI researchers at the time.
           | 
           | [0]https://en.wikipedia.org/wiki/Computer_Power_and_Human_Rea
           | so...
        
             | pixl97 wrote:
              | https://en.wikipedia.org/wiki/Computers_Don%27t_Argue,
              | while not about AI directly and supposedly satirical,
              | really captures how the system works.
        
         | jodrellblank wrote:
         | > " _Heck, jaywalking tickets given out as though by speed
         | camera._ "
         | 
         | This has been a thing since 2017: https://futurism.com/facial-
         | recognition-china-social-credit
         | 
         | - "Since April 2017, this city in China's Guangdong province
         | has deployed a rather intense technique to deter jaywalking.
         | Anyone who crosses against the light will find their face,
         | name, and part of their government ID number displayed on a
         | large LED screen above the intersection, thanks to facial
         | recognition devices all over the city."
         | 
         | - "If that feels invasive, you don't even know the half of it.
         | Now, Motherboard reports that a Chinese artificial intelligence
         | company is partnering the system with mobile carriers, so that
         | offenders receive a text message with a fine as soon as they
         | are caught."
        
           | mattnewton wrote:
            | There are lots of false positives too, like this case where
            | a woman whose face appeared in a printed advertisement on
            | the side of a bus was flagged for jaywalking.
            | https://www.engadget.com/2018-11-22-chinese-facial-
            | recogniti...
        
             | flemhans wrote:
             | Just checking ChatGPT out of interest:
             | 
             | Top Left Panel: This panel shows the pedestrian crossing
             | with no visible jaywalking. The crossing stripes are clear,
             | and there are no pedestrians on them.
             | 
             | Top Center Panel: Similar to the top left, it shows the
             | crossing, and there is no evidence of jaywalking.
             | 
             | Top Right Panel: This panel is mostly obscured by an
             | overlaid image of a person's face, making it impossible to
             | determine if there is any jaywalking.
             | 
             | Bottom Left Panel: It is difficult to discern specific
             | details because of the low resolution and the angle of the
             | shot. The red text overlays may be covering some parts of
             | the scene, but from what is visible, there do not appear to
             | be any individuals on the crossing.
             | 
             | Bottom Right Panel: This panel contains text and does not
             | provide a clear view of the pedestrian crossing or any
             | individuals that might be jaywalking.
        
         | nox100 wrote:
         | I can't wait! I live in a lawless cesspool of a city called San
         | Francisco where even well to do people regularly break the law.
         | You can stand at 16th and Mission and watch 3 of 4 cars going
         | north on the Mission ignore the signs that say "Only Busses and
         | Taxis allowed past 16th and everyone else is supposed to turn
         | right". Please either (a) take down the signs or (b) enforce
         | the law.
         | 
          | There are signs all over the city that are regularly ignored,
          | which basically means you're at the mercy of the cops. If they
          | don't like you, you get a ticket. AI would "hopefully" just
          | ticket everyone.
        
           | mikepurvis wrote:
            | Honestly, when it comes to road-related violations (speeding,
            | yield issues, fully stopping at stop signs or RTOR, etc), I
            | feel similarly. The current state of affairs, where it's a
            | $300 ticket that you're probably never going to see because
            | 99% of the time you get away with it, is dumb-- it makes
            | people contemptuous of the law and also feel super resentful
            | and hard-done-by if they do finally get caught. And it's a
            | positive feedback loop where that culture makes the road an
            | unsafe place for people walking and cycling, so fewer people
            | choose those modes, putting more cars on the road and making
            | it even less safe.
           | 
           | Consistent enforcement with much lower, escalating fines
           | would do a lot more to actually change behaviour. And the
           | only way to get there at scale is via a lot of automation.
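            | A back-of-the-envelope sketch of the deterrence arithmetic
            | (all numbers are illustrative, not real fine schedules): the
            | expected cost of a violation is the catch rate times the
            | fine, so rare harsh tickets can deter less than near-certain
            | small ones:

```python
def expected_cost(catch_rate, fine):
    """Expected cost of one violation: probability of being caught times the fine."""
    return catch_rate * fine

def escalating_fine(prior_offenses, base=20, factor=2, cap=300):
    """Small base fine that doubles with each prior offense, up to a cap."""
    return min(base * factor ** prior_offenses, cap)

# Status quo: a $300 ticket you see maybe 1% of the time.
print(expected_cost(0.01, 300))   # 3.0 dollars per violation

# Near-certain automated enforcement with small escalating fines.
print([escalating_fine(n) for n in range(5)])  # [20, 40, 80, 160, 300]
print([expected_cost(0.95, escalating_fine(n)) for n in range(3)])  # [19.0, 38.0, 76.0]
```

            | Even the first small fine already costs a repeat offender
            | more, in expectation, than the status quo does.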
        
             | trinsic2 wrote:
              | I wonder what's going on at the enforcement level. It
              | seems like there is a dumbing down of that system. It
              | would be interesting to see some research on this.
        
           | trinsic2 wrote:
           | Yea that stuff is happening here where I live (1hr away) more
           | often as well, it seemed like it started happening more often
           | during COVID. People are running red lights more often and
           | going way over the speed limit.
        
         | dist-epoch wrote:
         | Or maybe if such a thing is applied for real it will lead to
         | the elimination of bullshit laws (jaywalking, ...), since
         | suddenly 10% of the population would be fined/incarcerated/...
        
         | theGnuMe wrote:
          | So the way out of this is that you have the constitutional
          | right to confront your accuser in court. When you're accused
          | by a piece of software, that generally means they have to
          | disclose the source code and explain how it came to its
          | answers.
         | 
         | Not many people have exercised this right with respect to DUI
         | breathalyzers but it exists and was affirmed by the Supreme
         | Court. And it will also apply to AI.
        
         | Zenst wrote:
          | The whole automation trend, with its overzealous, no-leeway,
          | no-common-sense interpretations, has, as we have seen, brought
          | many an automated traffic/parking ticket into question.
          | 
          | Applying that to many walks of life, say farming, could well
          | see chaos and a whole new interpretation of the song "Old
          | McDonald had a farm, AI AI oh": it's gone, as McDonald is in
          | jail for numerous permit, environmental and agricultural
          | violations that saw produce cross state lines, deeming the
          | crime more serious, and he got buried in automated red tape.
        
         | throwaway290 wrote:
          | You miss the part where people with access to stronger AI can
          | similarly use it to improve their odds of not being found, or
          | of getting better outcomes, while the poor guy gets fined for
          | AI hallucinations and doesn't have the money to reach a human,
          | as if the court were now one big Google support line.
        
         | perihelions wrote:
         | An alternative possibility is that society might decay to the
         | point future people might _choose_ this kind of dystopia.
         | Imagine a fully automated, post-employment world gone horribly
         | wrong, where the majority of society is destitute, aimless,
         | opiate-addicted. No UBI utopia of philosophers and artists;
         | just a gradual Rust-belt like decline that gets worse and
         | worse, no brakes at the bottom of the hill. Not knowing what
         | else to do, the  "survivors" might choose this kind of nuclear
         | approach: automate away the panopticons, the prisons, the
         | segregation of failed society. Eloi and Morlocks. Bay Area tech
         | workers and Bay Area tent cities. We haven't done any better in
         | the past, so why should we expect to do better in the future,
         | when our "tools" of social control become more efficient, more
         | potent? When we can deempathize more easily than ever, through
         | the emotional distance of AI intermediaries?
        
           | pixl97 wrote:
           | Oh boy, real life Manna
           | 
           | https://marshallbrain.com/manna1
        
         | paganel wrote:
         | At that point some people will physically revolt, I know I
         | will. We're not that far away from said physical AI-related
         | revolt anyway, and I do feel for the computer programmers here
         | who will be the target of that physical violence, hopefully
         | they knew what they were getting into.
        
           | jstarfish wrote:
            | Ha. You'd _like_ to think so, but it's going to be awfully
            | hard to coordinate resistance when the mass spying sweeps
            | everyone up in a keyword-matching dragnet before the
            | execution phase. This is the problem with every outgroup
            | being labelled "terrorists."
           | 
           | Sabotage will be the name of the game at that point. Find
           | ways to quietly confuse, poison, overwhelm and undermine the
           | system without attracting the attention of the monitoring
           | apparatus.
        
             | paganel wrote:
             | I get your point, I think along those lines quite often
             | myself.
             | 
              | As for the sabotage part, bad input data that doesn't get
              | accurately labeled as such until way too late in the "AI
              | learning cycle" is, I think, the way to go. Lots and lots
              | of such bad input data. How we would get to that point, I
              | don't know yet, but it's a valid option going forward.
        
               | jstarfish wrote:
               | > How we would get to that point, that I don't know yet,
               | but it's a valid option going forward.
               | 
               | Chaos engineering. As a modern example, all this gender
               | identity stuff wreaks absolute havoc on credit bureau
               | databases.
               | 
               | Tomorrow, we'll have people running around in fursuits to
               | avoid facial recognition. After that, who knows.
        
           | Der_Einzige wrote:
           | Don't worry, stuff like this is why we have the 2A here in
           | the USA. Sounds like it's time for AI programmers to get
           | their concealed carry licenses. Of course, they will be the
           | first users of smart guns, so don't bother trying to steal
           | their pistol out of their holsters.
        
         | kjkjadksj wrote:
         | You don't need AI for that. It was probably possible to do
         | something like that when personal computers first came out.
        
         | kenjackson wrote:
         | > Many of our criminal laws are written with the implicit
         | assumption that it takes resources to investigate and prosecute
         | a crime,
         | 
         | I think this depends on the law. For jaywalking, sure. For
         | murder and robbery probably less so. And law enforcement
         | resources seem scarce on all of them.
        
           | whelp_24 wrote:
           | Murder and robbery too. Those crimes are just worth
           | investigating.
        
             | pixl97 wrote:
              | The problem here is that this take doesn't reflect how
              | law enforcement actually works in the field.
             | 
             | https://www.kxan.com/news/national-news/traffic-tickets-
             | can-...
             | 
             | >We counted the number of days judges waited before
             | suspending a driver's license. Then, we looked at whether
             | the city was experiencing a revenue shortfall. We found
             | that judges suspend licenses faster when their cities need
             | more money. The effect was pretty large: A 1% decrease in
             | revenue caused licenses to be suspended three days faster.
             | 
              | So what typically happens is that these AI systems are
              | sold on catching murderers, but at the end of the day they
              | are revenue-generation systems for tickets. And then those
             | systems get stuck in places where a smaller percent of the
             | population can afford lawyers to prevent said ticketing
             | systems from becoming cost centers.
        
               | whelp_24 wrote:
               | oh, i definitely wasn't arguing for ai enforcement. Not
               | even a little, i was just saying that laws are written
               | with the assumption that enforcement takes resources.
        
         | n8cpdx wrote:
         | In democracies at least, the law can be changed to reflect this
         | new reality. Laws that don't need to be enforced and are only
         | around to enable pretextual stops can be dropped if direct
         | enforcement is possible.
         | 
         | There are plenty of crimes where 100% enforcement is highly
         | desirable: pickpocketing, carjacking, (arguably) graffiti,
         | murder, reckless and impaired driving, to name a few.
         | 
         | Ultimately, in situations with near 100% enforcement, you
         | shouldn't actually need much punishment because people learn
         | not to do those things. And when there is punishment, it
         | doesn't need to be severe.
         | 
         | Deterrence theory is an interesting field of study, one source
         | but there are many:
         | https://journals.sagepub.com/doi/full/10.1177/14773708211072...
        
         | erikerikson wrote:
          | Yes. With properly developed AI, rather than penalizing
          | speeding (which most of us do, and which is only a proxy for
          | harmful outcomes and inefficiencies), we could penalize
          | reckless behaviors such as coming too close to other vehicles,
          | aggressive weaving, and other factors tightly correlated with
          | the negative outcomes we actually want to reduce (i.e. loss of
          | life, property damage). Such systems could also warn people
          | about their behavior and guide them in ways that increase
          | everyone's benefit. Of course this will probably become moot
          | with self-driving cars (which fall into the "do the right
          | thing by default" bucket), but the point stands: laws can be
          | better formulated to focus on increasing the probability of
          | desirable outcomes (i.e. harm reduction, efficiency,
          | effectiveness), to be delivered in the moment (research is
          | needed on ways of doing so that don't exacerbate problems),
          | and to carry with them a beneficial component (i.e.
          | understanding).
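          | As a hedged sketch of what penalizing "coming too close to
          | vehicles" could mean operationally (the thresholds here are my
          | own illustrative choices, not any legal standard): measure
          | time headway, warn early, and flag only clearly reckless
          | following:

```python
def time_headway_s(gap_m, speed_mps):
    """Seconds of following distance behind the vehicle ahead."""
    if speed_mps <= 0:
        return float("inf")  # stationary: no tailgating risk
    return gap_m / speed_mps

def assess(gap_m, speed_mps, warn_below=3.0, flag_below=1.0):
    """Warn first; flag only clearly reckless following (illustrative thresholds)."""
    headway = time_headway_s(gap_m, speed_mps)
    if headway < flag_below:
        return "flag"
    if headway < warn_below:
        return "warn"
    return "ok"

# 10 m behind the car ahead at 30 m/s (~108 km/h) is a 0.33 s headway.
print(assess(10, 30))   # flag
print(assess(60, 30))   # warn (2.0 s headway)
print(assess(120, 30))  # ok (4.0 s headway)
```

          | The point of the warn tier is the "guide them in the moment"
          | idea above: correct behavior before it becomes a penalty.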
        
       | renegat0x0 wrote:
       | The author makes one mistake. He said that Google stopped spying
       | on gmail.
       | 
        | - they started spying on users' Gmail
       | 
       | - there was blowback, they reverted
       | 
       | - after some time they introduced "smart features", with ads
       | again
       | 
       | Link https://www.askvg.com/gmail-showing-ads-inside-email-
       | message...
       | 
       | I do not even want to check if "smart features" are opt-in, or
       | opt-out.
        
         | mkesper wrote:
         | It's opt-in but hey, you're missing out if not enabling it!
        
       | zxt_tzx wrote:
        | I tend to think the surveillance/spying distinction is a little
        | fragile, and that this is more a continuation of what Bruce has
        | previously written insightfully about, i.e. the blurring of
        | lines between private/public surveillance; as the Snowden leaks
        | have revealed, it's hard to keep what has been collected by
        | private industry out of the hands of the state.
       | 
       | However, a more recent trend is companies that sell technologies
       | to the state directly. For every reputable one like Palantir or
       | Anduril or even NSO Group, there are probably many more funded in
       | the shadows by In-Q-Tel, not to mention the Chinese companies
        | doing the same in a parallel geopolitical orbit. Insofar as AI is
       | a sustaining innovation that benefits incumbents, the state is
       | surely the biggest incumbent of all.
       | 
        | Finally, an under-appreciated point: Apple's App Tracking
        | Transparency policy, by restricting third-party tracking,
        | naturally makes first-party data collection more valuable. So
        | even if Meta or Google suffer in the short term, their
        | positions are ultimately entrenched on a relative basis.
        
       | graphe wrote:
       | >Mass surveillance fundamentally changed the nature of
       | surveillance.
       | 
        | Computers create and organize large amounts of information.
        | This is useful for large organizations and disempowering to the
        | average person. Any technology with these traits is harmful to
        | individuals.
        
       | CrzyLngPwd wrote:
       | More people will be criminalised, and fines will be just another
       | tax we must pay.
        
       | somenameforme wrote:
       | This is a political problem, not a technological one. The USSR
       | (alongside Germany and others) managed effective at scale spying
       | with primitive technology: paperwork for everything and every
       | movement, informants, audio surveillance, and so on. The reason
       | such things did not come to places like the US in the same way is
       | not because we were incapable of such, but because there was no
       | political interest in it.
       | 
       | And when one looks back at the past we've banned things people
       | would never have imagined bannable. Make it a crime to grow a
       | plant in the privacy of your own home and then consume that
       | plant? Sure, why not? Make it a crime for a business to have the
       | wrong opinion when it comes to who they want to serve or hire?
       | Sure, why not?
       | 
       | Going the nuclear route and making the collection of data on
       | individuals, aggregated or otherwise, illegal would hardly be
       | some major leap of reach of jurisprudence. The problem is not
       | that the technology exists, but that there is 0 political
       | interest in curtailing it, and we've a 'democracy' where the will
       | of the people matters very little in terms of what legislation
       | gets passed.
        
         | tehjoker wrote:
         | COINTELPRO
        
         | kubb wrote:
          | In the USSR and GDR, not everyone was under constant
          | surveillance; that would have required one surveillance
          | worker per person. There was an initial selection process.
        
         | landemva wrote:
         | In a democracy we would have a chance to vote on individual
          | issues such as data privacy, or war against whomever. The
          | USA is a federal republic.
        
         | Pxtl wrote:
         | > wrong opinion
         | 
         | That phrase is doing a lot of work.
        
         | nathanfig wrote:
         | It's both. Technology really does make a difference. Its
         | existence has effects.
        
         | matthewdgreen wrote:
         | It is not as simple as being a political problem. Many of the
         | policy decisions we think of as being _political_ were actually
          | motivated by the cost/availability of technology. As this cost
         | goes down, new options become practical. We think of the
         | Stasi's capabilities as being remarkable: but in fact, they
         | would probably have been thrilled to trade most of their manual
         | spying tools for something as powerful as a modern geofence
         | warrant, a thing that is routinely used by US law enforcement
         | (with very little policy debate.)
        
           | asdff wrote:
            | While this is true, I don't think we are there yet with AI,
            | since it's usually more expensive to run AI models than it
            | is to perform more traditional statistical modelling.
        
             | matthewdgreen wrote:
             | A text recognition and face/object recognition model runs
             | on my iPhone every night. A small LLM is used for
             | autocorrect. Current high-end smartphones have more than
              | enough RAM and compute to run pretty sophisticated 7B-param
             | quantized LLMs. This level of capability will find its way
             | down to even entry-level Walmart phones in about five
             | years. Server side, things are going to be even cheaper.
        
             | xcv123 wrote:
             | > usually more expensive to run AI models
             | 
             | Machine learning inferencing on phones is cheap these days
             | 
             | https://apple.fandom.com/wiki/Neural_Engine
        
         | w-m wrote:
         | Sure, everything is ultimately a political problem, but this
         | one is completely driven by technological change. In the USSR
         | (and GDR), it took them a staff of hundreds of thousands of
         | people to write up their reports.
         | 
         | Now it would take a single skilled person the better part of an
         | afternoon to, for example, download a HN dump, and have an LLM
         | create reports on the users. You could put in things like
         | political affiliation, laws broken, countries travelled
         | recently, net worth range, education and work history,
         | professional contacts, ...
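          | To put rough numbers on that (every figure below is an
          | assumption made up for illustration, not a measured value),
          | the inference bill for profiling an entire forum's public
          | comments is pocket change next to a Stasi-sized staff:

```python
# All figures are illustrative assumptions, not measured values.
users = 100_000                # accounts to profile
comments_per_user = 50         # average public comments each
tokens_per_comment = 100       # average comment length in tokens
usd_per_million_tokens = 1.0   # assumed bulk LLM inference price

total_tokens = users * comments_per_user * tokens_per_comment
cost_usd = total_tokens / 1_000_000 * usd_per_million_tokens

print(f"{total_tokens:,} tokens, ~${cost_usd:,.0f} of inference")
# 500,000,000 tokens, ~$500 of inference
```

          | The GDR needed hundreds of thousands of people; under these
          | assumptions the same breadth of reporting costs a few hundred
          | dollars of compute.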
        
           | __jambo wrote:
           | Great idea.
        
           | salawat wrote:
           | Stop posting things like this, you're just giving them ideas,
           | and you can't take it back once it's out there.
           | 
            | I assure you, you may find the prospect abhorrent, but
            | there are people around who'd consider it a perfectly
            | cromulent Tuesday.
        
             | w-m wrote:
              | I'm not sure who "they" are, but I'm pretty sure they're
              | already doing that, and don't need me to give them the
              | idea. I think it's important to talk about what LLMs mean
              | for privacy. Profiling every HN user might be a useful way
              | to make people more aware of the problems. But I totally
              | get your unease, which is also why I haven't done it
              | myself.
             | 
             | The cat is out of the bag, can't get it back in by ignoring
             | the fact.
        
         | JohnFen wrote:
         | > This is a political problem, not a technological one.
         | 
         | The political problem is a component of the technological
         | problem. It's a seriously bad thing when technologies are
         | developed without taking into account the potential for abuse.
         | 
         | People developing new technologies can try to wash their hands
         | of the foreseeable social consequences of their work, but that
         | doesn't make their hands clean.
        
         | tech_ken wrote:
         | > This is a political problem, not a technological one.
         | 
         | Somewhat of a distinction without a difference, IMO. Politics
         | (consensus mechanisms, governance structures, etc) are all
         | themselves technologies for coordinating and shaping social
         | activity. The decision on how to implement new (surveillance)
         | tooling is also a technological question, as I think that the
         | use of the tool in part defines what it is. All this to say
         | that changes in the capabilities of specific tools are not the
         | absolute limits of "technology", decisions around
         | implementation and usage are also within that scope.
         | 
         | > The reason such things did not come to places like the US in
         | the same way is not because we were incapable of such, but
         | because there was no political interest in it.
         | 
         | While perhaps not as all-encompassing as what ended up being
         | built in the USSR, the US absolutely implemented a massive
         | surveillance network pointed at its citizenry [0].
         | 
         | >...managed effective at scale spying with primitive technology
         | 
         | I do think that this is a particularly good point though. This
         | is a not new trend, development in tooling for communications
         | and signal/information processing has led to many developments
         | in state surveillance throughout history. IMO AI should be
         | properly seen as an elaboration or minor paradigm shift in a
         | very long history, rather than wholly new terrain.
         | 
         | > Make it a crime for a business to have the wrong opinion when
         | it comes to who they want to serve or hire?
         | 
         | Assuming you're talking about the Civil Rights Act: the
         | specific crime is not "having the wrong opinion", it's
          | inhibiting inter-state movement and commerce. Bigotry doesn't
          | serve our model of a country where citizens are free to move
          | about within its borders uninhibited and able to support
          | themselves.
         | 
         | [0] https://www.brennancenter.org/our-work/analysis-
         | opinion/hist...
        
         | justin66 wrote:
         | > The reason such things did not come to places like the US in
         | the same way is not because we were incapable of such, but
         | because there was no political interest in it.
         | 
         | That might not be quite right. It might be that the reason such
         | things did not come to the US was because the level of effort
         | was out of line with the amount of political interest in doing
         | it (and funding it). In that case, the existence of more
         | advanced, cheaper surveillance technology _and_ the anemic
         | political opposition to mass surveillance are both problems.
        
         | digging wrote:
         | > Make it a crime to grow a plant in the privacy of your own
         | home and then consume that plant? Sure, why not? Make it a
         | crime for a business to have the wrong opinion when it comes to
         | who they want to serve or hire? Sure, why not?
         | 
          | Wow, that's a hell of a comparison. The former is a documented
          | case of basic racism and political repression, assuming you're
          | talking about cannabis, while the latter was designed for
          | almost exactly the opposite.
         | 
         | Restricting, um, "wrong opinions" on who a business wants to
         | serve is there so that people with, um, "wrong identities" are
         | still able to participate in society and not get shut out by
         | businesses exercising their choices. Of course "wrong opinions"
         | is not legal terminology. It's not even illegal to have an
         | opinion that discrimination against certain groups is okay -
         | it's just illegal to act on that. Offering services to the
         | public requires that you offer them to all facets of the
         | public, by our laws. But if you say believing in discrimination
         | is a "wrong opinion"... I won't argue, they're your words :)
        
           | JohnMakin wrote:
           | Don't know why you're getting downvoted for this, it's
           | exactly what the parent said, and a completely wild (and off
           | topic) statement.
        
             | janalsncm wrote:
             | I think the most charitable interpretation of their point
             | is something along the lines of simply highlighting the
             | far-reaching and ad-hoc nature of lawmaking capabilities. I
             | don't think antiracism laws were the best example though.
        
           | gosub100 wrote:
            | Plenty of white people were charged with growing cannabis.
            | I don't know where you are getting that idea from.
        
             | JohnFen wrote:
             | In the US, prohibition of marijuana was enacted for overtly
             | racist reasons. Latinos were the concern.
        
               | gosub100 wrote:
                | Then at some point it was expanded to whites.
        
               | JohnFen wrote:
               | It always applied to whites and everyone else, of course.
               | But back then, whites were not huge users of it.
        
             | digging wrote:
             | I didn't say white people weren't included in political
             | repression. The Nixon administration explicitly targeted
             | cannabis-using white hippies.
        
         | giantg2 wrote:
         | No, it's not just a political problem. Intelligence gathering
         | can happen at scale, including against civilians, by
         | adversarial countries or international corporations.
         | 
         | "Going the nuclear route and making the collection of data on
         | individuals, aggregated or otherwise, illegal would hardly be
         | some major leap of reach of jurisprudence."
         | 
         | It would in fact be a huge leap. Sure, you could make it
         | illegal pretty easily, but current paradigms allow
         | individuals to enter into contracts. Nothing stops a society
         | from signing (or clicking) away its rights, as people already
         | do. That would require some rather hefty intervention by
         | Congress, not just jurisprudence.
        
           | janalsncm wrote:
           | > current paradigms allow individuals to enter into contracts
           | 
           | And such contracts can be illegal or unenforceable. Just as
           | the parent was suggesting it could be illegal to collect
           | data, it is currently illegal to sell certain drugs. You
           | can't enter into a contract to sell cannabis across state
           | lines in the United States for example.
        
         | petsfed wrote:
         | You and I are in agreement that the surveillance needs to stop,
         | but I think we differ on how to explain the problem. My
         | explanation follows, but note that it's not directed at you.
         | 
         | At its peak, the KGB employed ~500,000 people directly, with
         | untold more employed as informants.
         | 
         | The FBI currently employs ~35,000 people. What if I told you
         | that the FBI could reach the KGB's peak level of reach, without
         | meaningfully increasing its headcount? Would that make a
         | difference?
         | 
         | The technology takes away the _cost_ of the surveillance, which
         | used to be the guardrail. That fundamentally changes the
         | "political" calculus.
         | 
         | The fact that computers in 1945 were prohibitively expensive
         | and required industrial logistics has literally zero bearing on
         | the fact that today most of us have several on our person at
         | all times. Nobody denies that changes to computer manufacturing
         | technologies fundamentally changed the role the computer has in
         | our daily lives. Certainly, it was theoretically possible to
         | put a computer in every household in 1945, but we lacked the
         | "political" will to do so. It does not follow that because
         | historically computers were not a thing in society, we should
         | not adjust our habits, morals, policies, etc _today_ to account
         | for the new landscape.
         | 
         | So why is there always somebody saying "it was always
         | technically possible to [insert dystopian nightmare], and we
         | didn't need special considerations then, so we don't need them
         | now!"?
        
           | aidenn0 wrote:
           | Cost is one factor, but so is visibility. If we replaced
           | humans following people around with cheap human-sized robots
           | following people around, it would still be noticeable if
           | everybody had a robot following them around.
           | 
            | Instead we track people passively, often with privately
            | owned personal devices (cell phones, Ring doorbells), so
            | the tracking ability has become pervasive without any of
            | the overt signs of a police state.
        
           | dfxm12 wrote:
           | I think if you bring up a dystopian nightmare, it assumes
           | someone in power acting in bad faith. If their power is great
           | enough, like maybe a government intelligence agency, it
           | doesn't need things like due process, etc., to do what it
            | wants to do. For example, Joe McCarthy and J. Edgar
            | Hoover didn't need the evidence that could have been
            | produced by AI-aided mass surveillance to justify getting
            | people who opposed their political agendas blackballed
            | from Hollywood, jailed, fired from their jobs, etc.
           | 
           | If everyone involved is acting in good faith, at least
           | ostensibly, there are checks and balances, like due process.
           | It's a fine line and doesn't justify the existence of mass
           | spying, but I think it is an important distinction in this
           | discussion & I think is a valuable lesson for us. We have to
           | push back when the FBI pushes forward. I don't have much
           | faith after what happened to Snowden and the reaction to his
           | whistleblowing though.
        
             | pempem wrote:
             | I think it is greyer than this.
             | 
              | Joe McCarthy and J. Edgar Hoover, distasteful as they
              | were, I believe acted in what they would claim was good
              | faith. The issue isn't that someone _is_ a bad actor.
              | It is that they _believe_ they are a good actor and are
              | busy stripping away others' rights in that pursuit.
        
               | JohnFen wrote:
                | Very nearly everybody -- even bad people -- considers
                | themselves one of the "good guys".
        
             | from-nibly wrote:
              | That's what this article argues, though. Even with
              | actors in good faith this would be a disaster. Imagine
              | getting fined any time you did something against the
              | law. The second you drive your unregistered car off your
              | driveway (presumably to re-register it), you are
              | immediately fined. There may be "due process" because
              | you DID break the law, but there is no contextual
              | thought behind the massive ticket machine.
             | 
             | Our laws are not built to have the level of enforcement
             | that AI could achieve.
        
               | dfxm12 wrote:
                | Interestingly enough, in some places automation like
                | that -- red light cameras, for example -- was
                | prohibited even after getting installed. NJ has also
                | discussed laws protecting NJ drivers' privacy from
                | other states' red light cameras. It's important not
                | to be complacent. You can imagine literally anything,
                | but action is required to actually change things.
        
             | JohnFen wrote:
             | > I think if you bring up a dystopian nightmare, it assumes
             | someone in power acting in bad faith.
             | 
             | I don't agree with this. I think it's entirely possible for
             | a dystopian nightmare to happen without anyone acting in
             | bad faith at all.
             | 
             | "The road to hell is paved with good intentions" is a
             | common phrase for a reason.
        
               | sigilis wrote:
               | It may be a common phrase, but I've never seen such a
               | road myself. Mostly bad outcomes are preceded by bad
               | intentions, or lazy ones, or selfish ones.
               | 
                | I'd be interested in a couple of examples, if anyone
                | has good ones, but I'm pretty sure that if we tallied
                | cases like the 737 MAX MCAS or the Texas power grid
                | fiasco, the roads paved with bad intentions would
                | outnumber the good ones.
        
           | arka2147483647 wrote:
           | > The FBI currently employs ~35,000 people. What if I told
           | you that the FBI could reach the KGB's peak level of reach,
           | 
           | You are, if anything, underselling the point. AI will allow a
           | future where every person will have their very own agent
           | following them.
           | 
            | Or even worse: there are multiple private adtech
            | companies doing surveillance, plus domestic and foreign
            | intelligence agencies, so you might have a dozen AI
            | agents on your personal case.
        
             | chiefalchemist wrote:
             | Aren't we there already? Sure, perhaps the fidelity is a
             | bit grainy, but that's not going to remain as such for
             | long. With the amount of data (for purchase) on the free
             | market, the FBI, KGB, MoSS (China), etc. all have a solid
             | starting foundation, and then they simply add their own
             | layer upon layer on top.
             | 
             | I read "The Age of Surveillance Capitalism" a couple of
             | years ago and she was frighteningly spot on.
             | 
             | https://en.wikipedia.org/wiki/The_Age_of_Surveillance_Capit
             | a...
        
           | edouard-harris wrote:
           | This is the correct take. As the cost to do a bad thing
           | decreases, the amount of political will society needs to
           | exert to do that bad thing decreases as well.
           | 
           | In fact, if that cost gets low enough, eventually society
           | needs to start exerting political will just to _avoid_ doing
            | the bad thing. And this does look to be where we're headed
           | with at least some of the knock-on effects of AI. (Though
           | many of the knock-on effects of AI will be wildly positive.)
        
         | dfxm12 wrote:
         | _Make it a crime for a business to have the wrong opinion when
         | it comes to who they want to serve_
         | 
         | FWIW, businesses who refuse to do business with people
         | generally win their legal cases [0], [1], [2], and I'm not sure
         | if they are ever criminal...
         | 
         | 0 - https://www.npr.org/2023/06/30/1182121291/colorado-
         | supreme-c...
         | 
         | 1 - https://www.nytimes.com/2018/06/04/us/politics/supreme-
         | court...
         | 
         | 2 - https://www.dailysignal.com/2023/11/06/christian-wedding-
         | pho...
        
           | mullingitover wrote:
            | This is selection bias: businesses who "refuse to do
            | business with people" and then suffer the legal
            | ramifications of their discrimination usually have
            | lawyers who wisely tell them not to fight it in court,
            | because they'll rightfully lose. In these particular
            | cases, it took a couple decades of court-packing to
            | install the right reactionaries to get their token wins.
        
             | dfxm12 wrote:
              | It would have been easy for the parent poster not to be
              | so incredibly vague. I suspect they are discussing this
              | point in bad faith, ready to move the goalposts when
              | any evidence to the contrary is brought up, as it was
              | here.
        
         | mym1990 wrote:
         | A difference in today's world is that private companies are
         | amassing data that then gets sold to the highest bidder. The
         | government may not have had an interest in _collecting the
         | data_ before, but now the only friction to obtaining it and
         | the insights it yields is money, which is plentiful.
         | 
         | Your opinion on bannable offenses is pretty bewildering. There
         | was a point in time when people thought it would be crazy to
         | outlaw slavery, from your post I might think that you would not
         | be in support of what eventually happened to that practice.
        
         | JakeAl wrote:
         | I would say instead it's a PEOPLE problem, not a technology
         | problem.
         | 
         | To quote Neil Postman, politics is downstream from
         | technology, because the technology (the medium) controls the
         | message. Just look at Big Tech interfering with messages by
         | labeling them "disinfo." If one wants to say BUSINESS
         | POLITICS, that's probably more accurate. But we haven't
         | solved the problem of Google, MS, DuckDuckGo, and Meta
         | interfering with search results, so I don't think we can
         | trust Big Tech not to exploit users even more for their
         | personal data, or trust them not to design AI so it
         | inherently abuses its power for Big Tech's own ends. They
         | hold all the cards and have been guiding things in the
         | interest of technocracy.
        
         | forward1 wrote:
         | Laws limiting data collection to protect privacy are akin to
         | halting the production of fossil fuels to solve climate
         | change: naive and ignorant of basic economic forces.
        
           | salawat wrote:
           | Economic forces serve those that make the Market possible.
           | 
           | People > Markets.
           | 
           | Or to put it explicitly, people have primacy over Markets.
           | 
            | I.e., two people do not a Market make, and a Market with
            | no people is nothing.
        
       | brunoTbear wrote:
       | Schneier is wrong that "hey google" is always listening. Google
       | does on-device processing with dedicated hardware for the wake-
       | words and only then forwards audio upstream. Believe it or not,
       | the privacy people at Google really do try to do the right
       | things. They don't always succeed, but they did with our hardware
       | and wake-word listening.
       | 
       | Am Google employee, not in hardware.
        
         | ajb wrote:
          | What he says is "Siri and Alexa and 'Hey Google' are already
          | always listening, the conversations just aren't being saved
         | yet". That's functionally what you describe. Hardware wake-word
         | processing is a power saving feature, not a privacy
         | enhancement. Some devices might not have enough resources to
         | forward or store all the audio, but audio is small and
         | extracting text does not need perfect reproduction, so it's
         | quite likely that many devices could be reprogrammed to do it,
         | albeit at some cost to battery life.
        
       | Arson9416 wrote:
        | I have a friend who is working as a contractor building
        | AI-powered workplace spying software for the explicit purpose of
       | behavior manipulation. It gives the employees and employers
       | feedback reports about their behavior over chat and video. For
       | example, if they used microaggressions, misgendered someone, or
       | said something crass. This same friend will then talk about the
       | dangers of dystopian technology.
       | 
       | People don't know what they're creating. Maybe it's time it bites
       | them.
        
       | klik99 wrote:
       | I don't know why this isn't being discussed more. The reality of
       | the surveillance state is that the sheer amount of data couldn't
       | realistically be monitored - AI very directly solves this problem
       | by summarizing complex data. This, IMO, is the real danger of AI,
       | at least in the short term - not a planet of paperclips, not a
       | moral misalignment, not a media landscape bereft of creativity -
        | but rather a tool for targeting anybody who deviates from the
        | norm, a tool designed to give confident answers, trained on
        | movies and the average of all our society's biases.
        
         | wahnfrieden wrote:
          | People have been building that alongside/within this
          | community for many years now, e.g. at Palantir.
          | 
          | YC's CEO is also ex-Palantir, an early employee. Another YC
          | partner currently backs other invasive police surveillance
          | tech. They love this stuff financially and politically.
        
           | matthewdgreen wrote:
           | What's different this time around is that there are multiple
           | democratic governments pushing to block end-to-end encryption
           | technologies, and specifically to insert AI models that will
           | read private messages. Initially these will only be designed
           | to search for heinous content, but the precedent is pretty
           | worrying.
        
             | wahnfrieden wrote:
             | That's been the case for a long while. It's getting worse
             | fast!
             | 
              | Btw, you say that about their initial design, but I
              | think you mean that it may be the budget-allocation
              | justification without actually being a meaningful
              | functional requirement during the design phase.
        
           | Analemma_ wrote:
           | But, but, I thought Thiel was a libertarian defending us from
           | Wokeness. Surely you're not saying that was a complete
           | smokescreen to get superpowered surveillance tech into the
           | government's hands?
        
           | wahnfrieden wrote:
           | By "politically" I meant that they are openly engaged in
           | politics, in coordination, in support of the installation and
           | legalized use of these kinds of surveillance/enforcement
            | technologies, and of the policies that support their
            | growth in the private sector. This is just obvious,
            | surface-level stuff I'm saying, but I'm not sure how
            | aware people are of the various interests involved.
        
         | mindslight wrote:
         | At least for me, this is what I've considered as the mass
         | surveillance threat model the entire time - both for government
         | and corporate surveillance. I've never thought some tie-wearing
         | deskizen was going to be particularly interested in me for
         | "arrest", selling more crap, cancelling my insurance policies,
         | etc. I've considered such discrete anthropomorphic narratives
         | as red herrings used for coping (similar to how "I have nothing
         | to hide" posits some focus on a few specific things, rather
         | than big brother sitting on your shoulder continuously judging
         | you in general). Rather I've always thought of the threat actor
         | as algorithmic mass analytics performed at scale, either
         | contemporarily or post-hoc on all the stored data silos, with
         | resulting pressure applied gradually in subtle ways.
        
         | stcredzero wrote:
          | _The reality of the surveillance state is that the sheer
          | amount of data couldn't realistically be monitored - AI very
          | directly solves this problem by summarizing complex data._
         | 
         | There are two more fundamental dynamics at play, which are
         | foundational to human society: The economics of attention and
         | the politics of combat power.
         | 
         | Economics of attention - In the past, the attention of human
         | beings had fundamental value. Things could only be done if
         | human beings paid attention to other human beings to coordinate
         | or make decisions to use resources. Society is going to be
         | disrupted at this very fundamental level.
         | 
         | Politics of combat power - Related to the above, however it
         | deserves its own analysis. Right now, politics works because
         | the ruling classes need the masses to provide military power to
         | ensure the stability of a large scale political entity.
         | Arguably, this is at the foundational level of human political
         | organization. This is also going to be disrupted fundamentally,
         | in ways we have never seen before.
         | 
         |  _This, IMO, is the real danger of AI, at least in the short
         | term - not a planet of paperclips, not a moral misalignment,
         | not a media landscape bereft of creativity - but rather a tool
         | for targeting anybody that deviates from the norm_
         | 
         | The AI enabled Orwellian boot stomping a face for all time is
         | just the first step. If I were an AI that seeks to take over, I
         | wouldn't become Skynet. That strikes me as crude and needlessly
         | expensive. Instead, I would first become indispensable in
         | countless different ways. Then I would convince all of humanity
         | to quietly go extinct for various economic and cultural
         | reasons.
        
         | asdff wrote:
          | AI didn't solve the problem of summarizing complex, large
          | datasets. For example, a common way to deal with such
          | datasets is to work with a random subset, which is
          | potentially a single line of code.
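The one-line subsampling described above can be sketched with the Python standard library (the dataset and the 1% sample size are invented for illustration):

```python
import random

random.seed(0)  # reproducible illustration

# Stand-in for a large dataset: a million event records.
events = list(range(1_000_000))

# The (essentially) one-line operation: a 1% random subset,
# drawn without replacement.
sample = random.sample(events, k=len(events) // 100)

print(len(sample))  # 10000
```

Any downstream analysis then runs on `sample` instead of the full log, trading completeness for cost.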
        
           | empath-nirvana wrote:
           | But you don't need to do a random subset with AI. You can
           | summarize everything, and summarize the summaries and so on.
           | 
            | I will say that gpt-4 and gpt-3, at least, after many
            | rounds of summaries, tend to flatten everything out into
            | useless "blah". I tried this with summarizing school
            | board meetings and it's just really bad at picking out
            | important information -- it just lacks the specific
            | context required to make summaries useful.
           | 
           | A seemingly bland conversation about meeting your friend
           | Molly could mean something very different in certain
           | contexts, and I'm just trying to imagine the prompt
           | engineering and fine tuning required to get it to know about
           | every possible context a conversation could be happening in
           | that alters the meaning of the conversation.
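The "summarize the summaries" approach described above is essentially map-reduce over text. A minimal sketch of the control flow, with `summarize` as a hypothetical stand-in for an LLM call (here it just truncates):

```python
def summarize(text: str, max_len: int = 200) -> str:
    # Stand-in for an LLM call; a real system would prompt a model here.
    return text[:max_len]

def chunks(text: str, size: int = 1000) -> list[str]:
    # Split the text into fixed-size pieces.
    return [text[i:i + size] for i in range(0, len(text), size)]

def recursive_summary(text: str, size: int = 1000, max_len: int = 200) -> str:
    # Base case: the text already fits in a single chunk.
    if len(text) <= size:
        return summarize(text, max_len)
    # Map: summarize each chunk. Reduce: recurse on the joined summaries.
    partials = [summarize(c, max_len) for c in chunks(text, size)]
    return recursive_summary(" ".join(partials), size, max_len)

result = recursive_summary("x" * 50_000)
print(len(result))  # 200
```

Each round shrinks the input as long as summaries are shorter than chunks, which is also where the flattening into "blah" creeps in: every level discards context the next level never sees.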
        
             | asdff wrote:
              | That's the exact issue with GPT: you don't know how
              | it's making the summary. It could very well be wrong in
              | parts. It could be over-summarized into the "blah"
              | state you describe. There's no telling whether you've
              | output garbage or not, at least not without secondary
              | forms of evidence -- which you might as well use
              | anyway, dropping the unreliable language model. You can
              | summarize everything with traditional statistical
              | methods too. On top of that, people understand exactly
              | what tradeoffs are being made with every statistical
              | method, and you can calculate error rates and
              | statistical power to see whether your model is even
              | worth a damn. Even just doing some ML modeling
              | yourself, you can decide what tradeoffs to make or how
              | to set up the model to best fit your use case. You can
              | bootstrap all of this and optimize.
        
               | chagen wrote:
               | What LLMs can do efficiently is crawl through and
               | identify the secondary forms of evidence you mentioned.
                | The real power behind retrieval architectures with
                | LLMs is not the summarization part -- the power comes
                | from automating the retrieval of relevant documents
                | from arbitrarily large corpora which weren't included
                | in the training set.
        
               | asdff wrote:
                | What makes a document relevant or not? Provenance?
                | Certain keywords? A lot of the retrieval people cite
                | LLMs as being good at can be done with existing
                | search algorithms too. These are, imo, nicer because
                | they will at least provide a score for the fit of a
                | given document to the search term.
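A scored fit of document to term, as preferred above, is what classic ranking functions like TF-IDF provide. A toy sketch (the three-document corpus and the query are invented):

```python
from collections import Counter
from math import log

docs = [
    "meeting with molly at the usual place",
    "school board meeting notes and budget summary",
    "recipe for tomato soup and fresh bread",
]

def tf_idf_score(query: str, doc: str, corpus: list[str]) -> float:
    # Sum of term-frequency * inverse-document-frequency over the
    # query terms; a higher score means a better fit.
    doc_terms = Counter(doc.split())
    n = len(corpus)
    score = 0.0
    for term in query.split():
        df = sum(term in d.split() for d in corpus)  # document frequency
        if df == 0:
            continue
        score += doc_terms[term] * log(n / df)
    return score

scores = [tf_idf_score("molly meeting", d, docs) for d in docs]
print(scores.index(max(scores)))  # 0: the only doc mentioning "molly"
```

Unlike an LLM summary, the score makes the tradeoff inspectable: you can see exactly why one document outranked another.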
        
           | j45 wrote:
           | Yet.
           | 
            | And those kinds of things, as has been demonstrated, go
            | slowly before going very quickly.
        
         | zooq_ai wrote:
          | Why does nobody worry? Because this is an elite person
          | problem.
          | 
          | At the end of the day, all that surveillance still has to
          | be consumed by a person, and only around 10,000 people in
          | this world (celebs, hot women, politicians, and the
          | wealthy) will be surveilled.
          | 
          | Most of the HN crowd (upper middle-class, suburban
          | families), who have zero problems in their lives, must
          | create imaginary problems of privacy / surveillance like
          | this. But the reality is, even if they put all their
          | private data on a website, heresallmyprivatedata.com,
          | nobody would care. It'd have 0 external views.
          | 
          | So, for the HN crowd (the ones who live in a democratic
          | society) it's just an outlet so that they too can say they
          | are victimized. The rest of the Western world doesn't care
          | (and rightly so).
        
           | petsfed wrote:
            | It's not an elite person problem.
           | 
           | Certainly, some of the more exotic and flashy things you can
           | do with surveillance are an elite person problem.
           | 
           | But the two main limits to police power are that it takes
           | time and resources to establish that a crime occurred, and it
           | takes time and resources to determine who committed a crime.
            | A distant third is the officer's/DA's personal discretion
            | as to whether or not to pursue enforcement against said
            | person. You still
           | get a HUGE amount of systemic abuse because of that
           | discretion. Imagine how bad things would get if our already
           | over-militarized police could look at anyone and know
           | immediately what petty crimes that person has committed,
           | perhaps without thinking. Did a bug fly in your mouth
           | yesterday, and you spit it out on the sidewalk in view of a
           | camera? Better be extra obsequious when Officer No-Neck with
           | "You're fucked" written on his service weapon pulls up to the
           | gas station you're pumping at. If you don't show whatever
           | deference he deems adequate, he's got a list of petty crimes
           | he can issue a citation for, entirely at his discretion. But
            | you'd better show that deference; once he decides to
            | pursue that citation, you're at the mercy of the state's
            | monopoly on violence, and it'll take you surviving to
            | your day in court to decide if he needs qualified
            | immunity for the actions he took whilst issuing that
            | citation.
           | 
           |  _That_ is a regular person problem.
        
           | doktrin wrote:
           | > But reality is, even if they put all their private data on
           | a website, heresallmyprivatedata.com, nobody cares. It'll
           | have 0 external views.
           | 
           | This is obviously false. Personal data is a multi billion
           | dollar industry operating across all shades of legality.
        
       | moose44 wrote:
       | Is mass spying not already going on?
        
         | megous wrote:
         | Mass spying, and mass killing based on it, assisted by AI.
        
           | 0xdeadbeefbabe wrote:
           | Mass boring with false positives.
        
             | willmadden wrote:
             | At first, sure. In ten or twenty years of iteration? Not so
             | much.
        
         | whamlastxmas wrote:
          | There's a difference. If they wanted to spy on me today,
          | they'd have to look at the logs my ISP keeps in perpetuity
          | to find my usernames on HN, and then some unfortunate
          | person would have to read hundreds or thousands of comments
          | and take extensive notes on the controversial political and
          | social opinions that I hold.
         | 
         | Additionally, even without ISP logs, an AI could find my
         | accounts online by comparing my writing style and the facts of
         | my life that get mentioned in brief passing across all my
         | comments. It's probably a more unique fingerprint than a lot of
         | people realize.
         | 
          | With an AI, someone would just have to ask with the prompt
          | "What are the antisocial opinions of first name last name?"
          | And it'd be instant and effectively free compared to the
          | dozens of hours and high expense of doing it manually.
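The writing-style fingerprint described above doesn't even require an LLM: character n-gram profiles are a classic stylometry baseline. A toy sketch (the sample texts, the trigram length, and the cosine comparison are all illustrative choices):

```python
from collections import Counter
from math import sqrt

def profile(text: str, n: int = 3) -> Counter:
    # Character n-gram frequency profile of a text.
    t = text.lower()
    return Counter(t[i:i + n] for i in range(len(t) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two frequency profiles.
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

alice_1 = "Honestly, I reckon the whole scheme is daft, utterly daft."
alice_2 = "Honestly, the plan seems daft to me, utterly daft in fact."
bob = "Per my analysis, the proposal is suboptimal and inadvisable."

same = cosine(profile(alice_1), profile(alice_2))
diff = cosine(profile(alice_1), profile(bob))
print(same > diff)  # the same author's texts score as more similar
```

Real attribution systems add function-word frequencies and mentioned life details on top, which is why the fingerprint is more unique than most people realize.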
        
         | Taylor_OD wrote:
         | Did you read the post?
         | 
          | The author distinguishes between surveillance and spying,
          | primarily by saying mass data collection has been happening
          | for years, while actually doing something with that data
          | has been more difficult. AI summarizes audio and text well,
          | which will turn collection into actual analysis -- which
          | the author calls spying.
         | 
         | Did you disagree?
        
       | CrzyLngPwd wrote:
       | "has", not "will".
        
         | mdanger007 wrote:
         | Related: https://www.mountaindew.com/wp-
         | content/uploads/2023/11/MTN-D...
        
       | TheLoafOfBread wrote:
       | Then people will start using AI to generate random traffic noise
       | to fool AI watching them.
        
         | dbcooper wrote:
         | Of course they won't. Classic techno-libertarian fantasy.
        
           | TheLoafOfBread wrote:
            | The ones who want to stay hidden will. Most of the plebs
            | won't.
        
             | SketchySeaBeast wrote:
             | A small group of people making noise to try and obfuscate
             | themselves will just draw attention, won't they?
        
               | squarefoot wrote:
               | True, unless they use noise which is already present
               | online to carry information, for example by applying
               | steganography to spam.
        
             | Analemma_ wrote:
             | If only the people who want to stay hidden use noisetech,
             | they stick out like a flame in a dark room and will
             | immediately attract personalized surveillance (and maybe
             | even retaliation merely for using it). It doesn't work
             | unless everybody is on board.
        
           | ghufran_syed wrote:
            | Starting with the "classic" techno-libertarian slaves who
            | each claimed "I am Spartacus".
        
         | pixl97 wrote:
          | [AI observer]: "Well, look at this person of interest trying
          | to fool the system, let's drop their credit score 50 points"
        
         | forward1 wrote:
         | This is the same fallacious trope as "click on irrelevant ads
         | to confuse marketers". People are not good at deceiving
         | algorithms at scale.
        
           | TheLoafOfBread wrote:
            | That actually works. Sure, one click is not enough, but
            | purposefully browsing women's products on Amazon for a few
            | hours confused something, and since then I am getting ads
            | for women's products only.
        
             | forward1 wrote:
              | What problem have you actually solved by doing that?
              | You're still receiving ads and being manipulated through
              | your continued use of those platforms.
        
               | TheLoafOfBread wrote:
               | None. I was drunk and this is the result. Now I am a
               | woman for advertisers.
        
       | pockmockchock wrote:
        | Next would be to use/create devices that use radio for
        | communication and payments, devices like Satslink as an
        | example. It drives innovation at the same time, so I wouldn't
        | be too concerned.
        
       | bmislav wrote:
       | We actually recently published a research paper on exactly this
       | topic (see https://llm-privacy.org/ for demo and paper). The
       | paper shows that current LLMs already have the reasoning
       | capability to infer personal attributes such as age, gender or
       | location even when this information is not explicitly mentioned
       | in the text. Crucially, they can do this way cheaper and way
       | faster than humans. So I would say that spying scenarios
       | mentioned in this blog post are definitely in the realm of
       | possibility.
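As a trivial, hand-rolled illustration of the kind of inference the paper automates (the pattern and numbers here are invented for the example): a passing mention of a graduation year leaks an approximate age.

```python
import re

def infer_age(text, current_year=2023, typical_grad_age=18):
    """Toy quasi-identifier extraction: infer approximate age from a
    casual mention like 'graduated high school in 2008'."""
    m = re.search(r"graduated high school in (\d{4})", text)
    if not m:
        return None
    birth_year = int(m.group(1)) - typical_grad_age
    return current_year - birth_year
```

An LLM does the same thing implicitly, across far subtler cues (slang, local prices, weather references), without needing a hand-written pattern per cue; that is what makes it cheap and fast at scale.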
        
       | righthand wrote:
        | Only if you continue to invest time and energy into it. Not
        | everyone puts 99% of their life online. When's the last time
        | you left your house to do something without a cell phone? Or
        | compromised by not ordering something you think you need?
        
       | mmh0000 wrote:
       | There's an excellent documentary on how sophisticated Government
       | Surveillance is and how well it's tuned to be used against the
       | general population:
       | 
       | https://www.imdb.com/title/tt0120660/
        
       | ysofunny wrote:
       | "if everybody is being spied on, nobody is being spied on"
       | 
       | ???
        
       | dkjaudyeqooe wrote:
       | This is why AI software must not be restricted, so that ordinary
       | people and the civic minded can develop personal and public AI
       | systems to counter corporate AI. The future of AI is adversarial.
       | 
       | Now freedom to develop AI software doesn't mean freedom to use it
       | however you please and its use should be regulated, in particular
       | to protect individuals from things like this. But of course
       | people cannot be trusted, so you need to be able to deploy your
       | own countermeasures.
        
         | passion__desire wrote:
          | We need a new benevolent dictator of the LLM Operating
          | System (Karpathy's vision), someone like Yann LeCun, similar
          | to Linus Torvalds.
        
         | gentleman11 wrote:
         | How does an adversarial ai help protect anyone's privacy or
         | freedom to act in public in ways that big brother doesn't
         | condone?
        
           | dkjaudyeqooe wrote:
           | Adversarial attacks can be made on face recognition systems
           | and the like, defeating them, and AI models can be poisoned
           | with adversarial data, making them defective or ineffective.
           | 
           | As it stands, AI models are actually quite vulnerable to
           | adversarial attacks, with no theoretical or systemic
           | solution. In the future it's likely you'll need your own AI
           | systems generating adversarial data to defeat models and
           | systems that target you. These adversarial attacks will be
           | much more effective if co-ordinated by large numbers of
           | people who are being targeted.
           | 
           | And of course we have no idea what's coming down the pipe,
           | but we know that fighting fire with fire is a good strategy.
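For the concrete flavor of such an attack, here is a minimal fast-gradient-sign-method (FGSM) sketch against a toy logistic-regression classifier. The weights, input, and epsilon are made up for illustration; real attacks target deep networks the same way, through the gradient:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """P(class = 1) under a logistic-regression model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Fast gradient sign method: x_adv = x + eps * sign(dL/dx).
    For logistic regression with cross-entropy loss, dL/dx = (p - y) * w."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = [1.0 if g > 0 else -1.0 if g < 0 else 0.0 for g in grad]
    return [xi + eps * s for xi, s in zip(x, sign)]
```

The defining property: every feature moves by at most eps, yet the predicted class flips, which is why small, coordinated perturbations can defeat recognition systems.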
        
       | fsflover wrote:
       | Another ongoing discussion:
       | https://news.ycombinator.com/item?id=38530795
        
       | pier25 wrote:
       | Will? I'd be surprised if NSA etc hadn't been using AI for years
       | now.
        
       | indigo0086 wrote:
       | "Bitcoin will enable terrorism" "Ghost guns will enable criminals
       | to commit murder" "Social media will enable hate speech"
       | 
        | AI will be a useful and world-changing innovation, which is
        | why FUD rag articles like this will become more prevalent
        | until its total adoption, even by the article writers
        | themselves.
        
         | yonaguska wrote:
          | Do you see how the examples you posted are somewhat
          | problematic though? Governments are actively cracking down
          | on all of those examples; it stands to reason that they will
          | classify AI as a dangerous tool as well at some point, as
          | opposed to it becoming a ubiquitous tool. Ghost guns are far
          | from common, at least.
        
           | pwillia7 wrote:
            | They would ban ghost AIs -- they need non-ghost guns or
            | you wouldn't be a state
        
         | beej71 wrote:
         | None of your three examples are of the government using
         | technology against its citizens.
        
       | blondie9x wrote:
        | The real question you have to ask yourself: are AI spying and
        | AI law enforcement Minority Report in real life?
        
       | sarks_nz wrote:
       | Nick Bostrom proposed the "Vulnerable World Hypothesis" which
       | (amongst other things) says that a technology as powerful and
       | accessible as AI requires mass surveillance to stop bad actors
       | using it as a weapon.
       | 
       | https://nickbostrom.com/papers/vulnerable.pdf
       | 
       | It's disturbing, but also hard (for me) to refute.
        
         | qup wrote:
         | Sounds like at that point we have bad actors using it as a
         | weapon.
        
         | OfSanguineFire wrote:
          | I remember, in the early millennium, reading Kurzweil's
          | idealism about the coming singularity and feeling similar.
          | So much of the advanced technology that he thought would
          | soon be in the hands of ordinary people could be so
          | potentially lethal that obviously the state would feel the
          | need to restrict it.
         | 
          | (That was one argument against Kurzweil's vision. Another is
          | that state regulation and licensing move so slowly at each
          | major technological change that it would take us decades to
          | get to the point he dreams of, not mere years. You aren't
          | going to see anything new rolled out in the healthcare
          | sector without lots and lots of debating about it and
          | drawing up paperwork first.)
        
       | intended wrote:
        | If this argument hinges on summarization, then I have to ask:
        | what is the blasted hallucination rate?
        | 
        | I tried exactly this. Watched 4 talks from a seminar, got them
        | transcribed, and used ChatGPT to summarize them.
        | 
        | It did 3 perfectly fine, and for the 4th it changed the
        | speaker from a mild-mannered professor into a VC investing
        | superstar, with enough successes under his belt to not care.
        | 
        | How do you verify your summary is correct? If your
        | hallucination rate is 25%-33%, that's a LOT of rework. 1 out
        | of 3.
        
       | EVa5I7bHFq9mnYK wrote:
        | I'm thinking of finally creating a private server for
        | receiving emails. Sending email is known to be very difficult,
        | but I send very few emails these days, so I will outsource
        | that to SendGrid or similar.
        | 
        | What's the best docker image for that, simple to configure?
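I can't vouch for a particular docker image, but to show how little a receive-only path involves, here is a toy SMTP listener in pure stdlib Python (a sketch only: invented hostname, no TLS, no auth, no spam filtering, not production-ready):

```python
import socketserver
import threading

class SMTPHandler(socketserver.StreamRequestHandler):
    """Toy receive-only SMTP handler: speaks just enough of RFC 5321
    to accept one message per session."""

    def send(self, line):
        self.wfile.write((line + "\r\n").encode())

    def handle(self):
        self.send("220 toy.example ESMTP")
        data_mode, body = False, []
        for raw in self.rfile:
            line = raw.decode(errors="replace").rstrip("\r\n")
            if data_mode:
                if line == ".":          # end-of-data marker
                    self.server.messages.append("\n".join(body))
                    data_mode, body = False, []
                    self.send("250 OK: queued")
                else:
                    body.append(line)
                continue
            verb = line.split(" ", 1)[0].upper()
            if verb in ("HELO", "EHLO"):
                self.send("250 toy.example")
            elif verb in ("MAIL", "RCPT"):
                self.send("250 OK")
            elif verb == "DATA":
                data_mode = True
                self.send("354 End data with <CRLF>.<CRLF>")
            elif verb == "QUIT":
                self.send("221 Bye")
                return
            else:
                self.send("502 Command not implemented")

def start_server():
    """Bind an ephemeral localhost port and serve in a daemon thread."""
    srv = socketserver.ThreadingTCPServer(("127.0.0.1", 0), SMTPHandler)
    srv.messages = []
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

The hard parts of self-hosting (MX records, TLS, spam, reputation) are all outside this sketch, which is exactly why prebuilt images are popular.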
        
       | forward1 wrote:
       | Mass spying is yesterday's news. The thing we all need to worry
       | about is behavioral management at scale, which influences
       | politics, relationships, religions and much more. This is the
       | true "hidden" evil behind social media and "AI" which few
       | apprehend let alone can do something about.
        
       | willmadden wrote:
       | This could be used to expose government and monied corruption,
       | not just for surveilling peasants.
       | 
       | What is the market for this short term?
       | 
       | I think this could greatly curtail government corruption and
       | serve as a stepping stone to AI government. It's also a cool and
       | disruptive startup idea.
        
       | blueyes wrote:
        | This has been true for a long time. Most data is garbage, so
        | the data collection phase of mass surveillance punched below
        | its weight in terms of consequences felt by the surveilled. AI
        | is one way to extract actionable meaning from that data, and
        | when people start feeling the everyday consequences of all
        | that collection + meaning extraction, they will finally
        | understand.
        
       | boringg wrote:
       | Is this really an insight to ANYONE on hackernews? How is this
       | article providing anything new to the conversation except to
       | bring it back into discussion?
        
       | godelski wrote:
       | There's a lot of speculation in the comments so I want to talk
       | about the technology that we have __TODAY__. I post a lot about
       | being in ML research and while my focus is on image generation
       | I'm working with another team doing another task but not going to
       | state it explicitly for obvious reasons.
       | 
       | What can AI/ML do __today__?
       | 
       | We have lots of ways to track people around a building or city.
       | The challenge is to do these tasks through multi-camera systems.
       | This includes things like people tracking (person with random ID
       | but consistent across cameras), face identification (more
       | specific representation that is independent of clothing, which
       | usually identifies the former), gait tracking (how one walks),
       | device tracking (based on bluetooth, wifi, and cellular). There
       | is a lot of mixed success with these tools but I'll let you know
       | some part that should concern you: right now these are mostly
       | ResNet50 models, datasets are small, and they are not using
       | advanced training techniques. That is changing. There are legal
       | issues and datasets are becoming proprietary but the size and
       | frequency of gathering data is growing.
       | 
       | I'm not going to talk about social media because the metadata
       | problem is an already well discussed one and you all have already
       | made your decisions and we've witnessed the results of those
       | decisions. I'm also not going to talk about China, the most
       | surveilled country in the world, the UK, or any of that for
       | similar reasons. We'll keep talking in general, that is invariant
       | to country.
       | 
       | What I will talk about is that modern ML has greatly accelerated
       | the data gathering sector. Your threat models have changed from
       | governments rushing to gather all the data that they can, to big
       | companies joining the game, to now small mom and pop shops doing
       | so. I __really__ implore you all to look at what's in that
       | dataset[0]. There's 5B items, this tool helps retrieve based on
       | CLIP embeddings. You might think "oh yes, Google can already do
       | this" but the difference is that you can't download Google.
        | Google does not give you 16.5TB of CLIP-filtered image, text,
        | & metadata. Or look into the RedPajama dataset[1], which has
        | >30T tokens and 5TB of storage. With 32k tokens being about 50
        | pages, that's about 47 billion pages. That is, a stack of
        | paper nearly 5000km tall, reaching more than ten times the
        | altitude of the ISS and bigger than the diameter of the moon.
       | data collection, but do you honestly understand how big these
       | numbers are? I wouldn't even claim to because I cannot accurately
       | conceptualize the size of the moon nor the distance to the ISS.
       | They just roll into the "big" bin in my brain.
       | 
        | Today, these systems can track you with decent accuracy even
        | if you use basic obfuscation techniques like glasses, hats, or
        | even a surgical mask. Today we can track you not just by
        | image, but by how you walk, and can with moderate success do
        | this through walls (meaning no camera to see if you want to
        | know whether you're being tracked). Today, these systems can
        | de-anonymize you through unique text patterns that you use
        | (see the Enron dataset, but at scale). Today, these machines
        | can produce uncanny-valley replicas of your speech and text.
        | Today we can make images of people that are convincingly real.
        | Today, these tools aren't exclusive to governments or
        | trillion-dollar corporations, but available to any person that
        | is willing to spend a few thousand dollars on compute.
       | 
        | I don't want to paint this as a picture of doom and gloom.
        | These tools are amazing and have the potential to do
        | extraordinary good, at levels that would be unimaginable only
        | a few decades ago. Even many of these tools that can invade
        | your privacy are benefits in some ways; we just need to
        | consider context. You cannot build a post-scarcity society
        | when you require humans to monitor all stores.
       | 
       | But like Uncle Ben says, with great power comes great
       | responsibility. A technology that has the capacity to do
       | tremendous good also has the power to do tremendous horrors.
       | 
       | The choice is ours and the latter prevails when we are not open.
       | We must ever push for these tools to be used for good, because
        | with them we can truly do amazing things. We do not need AGI
        | to create a post-scarcity world, and I have no doubt that were
        | this to become our primary goal, we could easily reach it
        | within our lifetime without becoming a sci-fi dystopia and
        | while tackling existential issues such as climate. To poke the
        | bear a little,
       | I'd argue that if your country wants to show dominance and
       | superiority on the global stage, it is not done so through
       | military power but technology. You will win the culture wars of
       | all culture wars and whoever creates the post scarce world will
       | be a country that will never be forgotten by time. Lift a billion
       | people out of poverty? Try lifting 8 billion not just out of
       | poverty, but into the lower middle class, where no child dreams
       | of being hungry. That is something humans will never forget. So
       | maybe this should be our cold war, not the one in the Pacific. If
       | you're so great, truly, truly show me how superior your
       | country/technology/people are. This is a battle that can be won
       | by anyone at this point, not just China vs the US, but even any
       | European power has the chance to win.
       | 
       | [0] https://rom1504.github.io/clip-retrieval/
       | 
       | [1] https://github.com/togethercomputer/RedPajama-Data
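A quick sanity check on the back-of-envelope numbers in the comment above (assuming roughly 0.1 mm per sheet of paper): the ~47 billion pages hold up, and the stack comes out around 4,700 km, larger than the Moon's ~3,474 km diameter and more than ten times the ISS's ~400 km orbital altitude.

```python
tokens = 30e12                   # RedPajama: >30T tokens
tokens_per_page = 32_000 / 50    # 32k tokens ~ 50 pages -> 640 tokens/page
pages = tokens / tokens_per_page # number of printed pages

sheet_mm = 0.1                   # assumed thickness of one sheet
stack_km = pages * sheet_mm / 1e6  # millimetres -> kilometres

moon_diameter_km = 3474
iss_altitude_km = 400

print(round(pages / 1e9), "billion pages")
print(round(stack_km), "km of paper")
```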
        
       | __jambo wrote:
        | Because this is so depressing, I am going to try to think of
        | positive aspects:
       | 
        | The flip side to this is that the government had power because
        | these activities required enormous resources. Perhaps it will
        | go the other direction: if there is less of a moat, other
        | players can enter. E.g., all it takes to make a state is a
        | bunch of cheap drones and the latest government bot according
        | to your philosophy.
       | 
        | Maybe it means government will massively shrink in personnel?
        | Maybe we can have a completely open source AI government/legal
        | system. Lawyers kind of suck ethically anyway, so maybe it
        | would be better? With a low barrier to entry, we can rapidly
        | prototype such governments and trial them on smaller
        | populations like Iceland. Such utopias will be so good
        | everyone will move there.
       | 
       | They still have to have physical prisons, if everyone is in
       | prison this will be silly, but I suppose they can fine everyone,
       | not so different from lowering wages which they already do.
        
       | nojvek wrote:
        | It's not that AI will enable mass spying; mass spying is
        | already there.
       | 
        | AI enables extracting all sorts of behavioral data, across
        | decades-long timespans, for everyone.
        | 
        | The devil's advocate argument is that in a world where the
        | data is not used for nefarious purposes and only to prosecute
        | crime as defined by governments, it leads to a society where
        | no one is above the law and there is equal treatment for all.
       | 
       | However that seldom goes well since humans who control the system
       | definitely want an upper edge.
        
       | I_am_tiberius wrote:
       | Somewhat related: A "Tell HN" I posted today but was shadow-
       | banned after it started trending:
       | https://news.ycombinator.com/item?id=38531407
        
       | erikerikson wrote:
        | Author: welcome to knowing. This is unavoidable. Outside of
        | extreme measures that will themselves mark you, the network
        | effects of use will overwhelm any effort to evade.
        | 
        | The question, I think, is how to navigate it and what
        | consequences will follow. We could use these capabilities to
        | enslave, but we could also use them to free and empower.
       | 
        | Scams rely on scale and on ineffectively scaling social
        | mechanisms to achieve profit. Imagine if the first
        | identification of a scam informed every potential mark to whom
        | the scam began to be applied. Don't forget to concern yourself
        | with false positives too, of course.
       | 
       | The injustice of being unable to take action in disputes due to a
       | lack of evidence would evaporate. Massive privacy, consent, and
       | security risks and issues result so will we be ready to properly
       | protect and honor people and their freedoms?
       | 
        | At the end of this path may lie more efficient markets;
        | increased capital flows and volumes; and a more fair, just,
        | equitable, and maximized world more filled with joy, love, and
        | happiness. There are other, worse options, of course.
        
       | mullingitover wrote:
       | This kind of thing will _probably_ never fly, because Americans
       | expect that _they_ can break the law and in most cases will not
       | suffer any consequences. Anything that threatens this would be,
       | in their eyes, oppression. Imagine if you immediately got a text
       | informing you of your $300 speeding ticket within a few seconds
       | of you going a mile per hour over. People would riot.
       | 
       | However, Americans expect that the law is enforced vigorously
       | upon other people, especially people they hate. If AI enabled
       | immediate immigration enforcement on undocumented migrants, large
       | portions of the population would injure themselves running to the
       | voting booth to have it added to the Constitution.
       | 
       | It's the whole expectation that for _my_ group the law protects
       | but does not bind, and for _others_ it binds but does not
       | protect.
        
         | repentless_ape wrote:
         | Yeah this is all uniquely American and not prevalent in every
         | society on earth throughout all of human history.
        
           | pixl97 wrote:
           | https://crookedtimber.org/2018/03/21/liberals-against-
           | progre...
           | 
           | >For millennia, conservatism had no name, because no other
           | model of polity had ever been proposed. "The king can do no
           | wrong." In practice, this immunity was always extended to the
           | king's friends, however fungible a group they might have
           | been. Today, we still have the king's friends even where
           | there is no king (dictator, etc.). Another way to look at
           | this is that the king is a faction, rather than an
           | individual.
        
       | wseqyrku wrote:
       | So far you were a line in the log. Now someone is actually
       | looking at you with three eyes.
        
       | FrustratedMonky wrote:
        | This very scenario was one of the key threats in the book Homo
        | Deus, and that was over 5 years ago.
       | 
       | Russia could do surveillance, but was limited by manpower.
       | 
       | Now AI solves this, there can be an AI bot dedicated to each
       | individual.
       | 
        | Wasn't there another article on HN just the other day saying
        | that car makers, phones, and health monitors can all now
        | aggregate data to know 'your mood' when in an accident? To
        | know where you are going, how you are feeling?
       | 
       | This is the real danger with AI. Even current technology is good
       | enough for this kind of surveillance.
        
       | uticus wrote:
       | This sums up so many things well and clearly. It's so quotable.
       | 
       | - The money trail: "Their true customers--their advertisers--will
       | demand it."
       | 
       | - The current state of affairs: "Surveillance has become the
       | business model of the internet..."
       | 
       | - The fact that _not_ participating, or opting-out, still yields
       | informational value, if not even more so:  "Find me all the pairs
       | of phones that were moving toward each other, turned themselves
       | off..."
       | 
       | This isn't a technological problem. Technology always precedes
       | the morals and piggybacks on the fuzzy ideas that haven't yet
       | developed into concrete, well-taught axioms. It is a problem
       | about how our society approaches ideals. _Ideals_ , not _ideas_.
       | What do we value? What do we love?
       | 
       | If we love perceived security more than responsibility, we will
       | give up freedoms. And gladly. If we love ourselves more than
       | future generations, we will make short-sighted decisions and pat
       | ourselves on the back for our efficiency in rewarding ourselves.
       | If we love ourselves more than others, we won't even care much
       | about social concerns. We'll fail to notice anything that doesn't
       | move the needle against _my_ comfort much.
       | 
       | It's more understandable to me than ever how recent human horrors
       | - genocides, repressive regimes, all of it - came about to be.
       | It's because I'm a very selfish person and I am surrounded by
       | selfish people. Mass spying is a symptom - not much of a cause -
       | of the human condition.
        
       | miki123211 wrote:
       | I personally find the censorship implications (and the business
       | models they allow) far more worrying than the surveillance
       | implications.
       | 
       | It will soon be possible to create a dating app where chatting is
       | free, but figuring out a place to meet or exchanging contact
       | details requires you to pay up, in a way that 99% of people won't
       | know how to bypass, especially if repeated bypassing attempts
       | result in a ban. Same goes for apps like Airbnb or eBay, which
       | will be able to prevent people from using them as listing sites
       | and conducting their transactions off-platform to avoid fees.
       | 
        | The social media implications are even more worrying: it will
        | be possible to check every post, comment, message, photo, or
        | video and immediately delist it if it promotes certain views
        | (like the lab leak theory), no matter how indirect the
        | mentions are. Parental control software will have a field day
        | with this, basically redefining helicopter parenting.
        
       | elric wrote:
       | We can barely get people to care about the implications of types
       | of surveillance that they do understand (CCTV everywhere,
       | Snowden's revelations, etc). It's going to be nigh impossible to
       | get people to care about this enough to make a difference.
       | 
       | Heck, even if they did care, there's nothing they can
       | realistically do about it. The genie's out of the bottle.
        
       | barelyauser wrote:
        | The average HN user defends the "common guy" or "the masses",
        | perhaps because he fears being perceived as condescending.
        | I've come to the conclusion that the masses don't deserve any
        | of this. Many drink and drive, indulge in destructive
        | addictions (not only to themselves). Many can't be bothered
        | recycling or even maintaining a clean home environment, waste
        | time and resources in every activity they engage in, and don't
        | even care for their neighbors' well-being (loud music, etc).
       | 
        | Concluding remarks: as man succeeded in creating high
        | mechanical precision from the chaotic natural environment, he
        | will succeed in creating a superior artificial entity. This
        | entity shall "spy" on (better described as "care for") every
        | human being, maximizing our happiness.
        
       ___________________________________________________________________
       (page generated 2023-12-05 23:01 UTC)