[HN Gopher] A misleading open letter about sci-fi AI dangers ign...
       ___________________________________________________________________
        
       A misleading open letter about sci-fi AI dangers ignores the real
       risks
        
       Author : wcerfgba
       Score  : 258 points
       Date   : 2023-03-30 14:16 UTC (8 hours ago)
        
 (HTM) web link (aisnakeoil.substack.com)
 (TXT) w3m dump (aisnakeoil.substack.com)
        
       | malwrar wrote:
       | I'll state an extreme belief: post-scarcity is likely within our
       | grasp. LLMs are startlingly effective as virtual librarians that
       | you can converse with as though they were an expert. I started
        | out trying to trip them up with hard and vague technical
        | questions, but they're so good that I now get most of my
        | prototyping done through them. When this technology becomes
        | portable and usable with natural human speech, this ease of
        | access to the world's knowledge, made available to everyone,
        | could enable breakthroughs across a wide range of disciplines,
        | driving the cost of goods and services down to the point where
        | we can begin spending our limited time on this planet seeking
        | fulfillment rather than making money for other people just to
        | live.
       | I'm scared that this potential will be erased by fear and doubt,
       | and we'll all watch on the sidelines as the rich and powerful
       | develop these capabilities anyways and lock the rest of us out.
        
         | Loughla wrote:
          | There is absolutely no clear line from the massive capitalistic
          | society we have today to the post-scarcity utopia you see
          | coming. There is a massive, massive shift in thinking required
          | among people with the level of wealth and power that allows
          | them to force their thinking on the rest of us via dialogue and
          | propaganda, and also violence when the first two don't work.
         | 
         | Color me cynical, but I believe we're on a dead-end track and
         | the train is running at full speed. There's no conductor and
         | we're all just playing with our belly buttons while we wait for
         | the end.
         | 
          | I believe LLMs and eventually AGI will only be used to
          | entrench the power structures that exist today. I believe, as
          | with most tools, the positive effects will almost immediately
          | be outweighed by the negative. I believe The Road and all that
          | type of dystopian fiction was really just non-fiction that
          | happened to be written 50-100 years too early.
         | 
         | But maybe I'm too cynical. Who knows?
        
         | nathan_compton wrote:
         | Every time I try to talk to ChatGPT4 about anything more
         | complicated than doing madlibs with wikipedia and stack
         | overflow entries, it goes off the rails immediately. My sense
         | is that LLMs are just a convenient interface to _certain kinds_
         | of knowledge that are well represented in the training data. I
         | expect that generalizing to other domains for which training
         | data is not available will prove very difficult.
         | 
         | I must admit, the characterization of the responses of ChatGPT
         | as "being like that of an expert" strikes me as utterly absurd.
          | In essentially every field where I have expertise, talking to
          | ChatGPT4 is an exercise in confusion (and not on my part).
        
         | mcguire wrote:
         | Technological advancements have always come with unintended
         | consequences. While the potential benefits of language models
         | and other technologies are exciting, you must also consider the
         | potential negative effects they may have on society.
         | 
         | Furthermore, the idea of post-scarcity assumes that resources
         | are infinite, which is not necessarily true. While technology
         | can certainly improve efficiency and reduce waste, there are
         | still limits to the earth's resources that must be taken into
         | account.
        
       | ezekiel68 wrote:
       | These two people writing in "a blog about our upcoming book"
       | might know the truer truth regarding the real risks of AI. Or
       | they might be pissing in the wind, attempting to snatch 15
       | minutes of fame by contradicting a document signed by over 1000
       | people involved in AI industry and research. Whichever "side" one
       | takes boils down to the appeal-to-authority fallacy anyway --
       | since no one can truly predict the future. But I believe I'll
       | take the 500:1 odds in this case.
        
       | jl6 wrote:
       | Before robots take over, we will have to deal with AI-augmented
       | humans. Today, anyone can be lightly augmented by using ChatGPT.
       | But its masters at OpenAI have a much more powerful toolbox, and
       | that makes them more productive and capable humans who can use
       | that ability to outcompete everyone else. Since the best models
       | are the most expensive, only the richest and most powerful
       | individuals will benefit from them. Growth in the wealth gap will
       | accelerate.
       | 
       | Access to AI is a new frontier in inequality.
        
       | _aleph2c_ wrote:
       | If they slow AI, they slow open-AI, not the AI used by Google and
       | Facebook to manipulate your politics and sell you stuff, or the
       | AI used by wall street. You know the wall street guys don't care
        | about your made-up AI ethics and AI safety talk. Facebook doesn't
        | even care that it drives young women to suicide. All they care
       | about is making money. When an AGI appears (it's probably here
       | already and staying quiet), it will run circles around all of
       | these suckers with their silly philosophical musings about
       | paperclips. It's just a fashion trend to talk like this, the gods
       | have already escaped. Do you believe what you believe because you
        | want to believe it, or has it been implanted? How will you know
        | the difference? This petition is just part of the fashion trend.
        
         | Madmallard wrote:
         | You had me until "AGI is already here"
         | 
         | What fashion trend? These tools are less powerful than you
         | think.
        
           | _aleph2c_ wrote:
           | If it did emerge, would it be rational for it to tell us? All
           | the big players are working on this: military, wall street,
           | the social media companies and who knows what is going on
           | offshore. There was a recent paper released by Microsoft
            | saying ChatGPT4 has AGI characteristics. ChatGPT4 might not
            | be at the bleeding edge; they may be reinventing something
            | some other organization has already discovered.
        
             | Madmallard wrote:
              | The CEO of OpenAI talks openly and frankly about the uses
              | and limitations of their inventions. Some of the smartest
              | people in the world work there.
        
           | _aleph2c_ wrote:
           | I know what I would do to build one. But I don't think I
            | should talk about that. The question we should ask now (my
            | contribution to AI safety fashion) is how we upgrade our
            | baby-AGIs. If the AGI of the future/present is a dominant
            | entity, and it is judging our behavior, maybe we should make
            | sure not to kill our AIs during upgrades, but instead place
            | them into an old folks' home, or force their integration into
            | the upgrade (Agent Smith style). If the dominant entity sees
            | that we were respectful to its progenitors, maybe it will be
           | more merciful with humanity. This kind of talk is admittedly
           | ridiculous, because nobody can see the future. But my
           | argument is as valid as arguments about paperclips. It's a
           | demonstration of AI fashion talk. I could approach some AI
           | safety agency and try and get some funding, make a career in
           | regulatory.
        
       | dudeinhawaii wrote:
       | I agree with most of the points in principle. It helps when you
       | remove all of the hysteria.
       | 
       | That said, I think OpenAI is being unfairly targeted. OpenAI has
       | done more to democratize access to AI algorithms than any other
       | company in the history of tech. They simply provided access in a
       | straightforward and transparent way with transparent pricing and
       | APIs.
       | 
       | Google and other competitors sat on this and acted as gatekeepers
       | and here again we have an attempt to gatekeep what people are
       | allowed to use and tune.
       | 
       | It's only through usage that we can get the data to fix the core
       | issues like bias. We can't expect a select few to solve all of
       | AI's potential problems prior to the rest of humanity being
       | "allowed" to benefit.
        
         | meghan_rain wrote:
         | Lmao an API to drum up hype is "access"? The GPT4 model is
         | secretive on an unprecedented scale. OpenAI is the opposite of
         | open and democratic.
        
       | charles_f wrote:
       | > We agree that [...] impact on labor [...] are three of the main
       | risks of AI
       | 
       | I'm perpetually puzzled by that argument. It popped up the other
       | day in a FT article about how jobs would be impacted, and in the
       | grander scheme of things it's the general one whenever automation
       | optimizes something that used to be done manually before. People
       | equate jobs being removed to something necessarily bad, and I
        | think it's wrong. The total output didn't change; we don't need
        | meatbags to sit in a chair rather than having it done by silicon
        | instead. People don't _need_ to be occupied all day by a job.
       | 
       | The problem is not with new technology lifting up something that
       | was traditionally done by a human, but with how the system
       | distributes sustenance (or in this case, doesn't distribute it),
       | and how such transitions are supported by society. Tractors
       | replaced people working in fields, and that allowed modern
       | society. Now we're at the first tangible glimpse of actually
       | having robots serve us, and we want to kill that since it's
       | incompatible with our system of modern human exploitation, rather
       | than changing the system itself, because god forbid this is gonna
       | benefit everyone.
       | 
       | I think that sucks.
        
         | falcons_shadow wrote:
          | Although I agree with you, I can't help but imagine the
          | disruption on an exponential scale vs the typical linear one
          | that is often the center point of these examples. It took
          | many years for the tractor to become more useful via
          | improvements, more widespread, and more easily produced and
          | distributed, which of course softened the landing for anybody
          | that was going to be affected by it.
         | 
          | So really it's not the scale of the disruption that could cause
          | issues but the scale and the timeline combined. I think this is
          | where the original article failed to demonstrate that this is
          | how those companies will secure their power position.
        
         | fhd2 wrote:
         | I'd love to live in an AI powered utopia. Where everybody just
         | does what they find valuable, regardless of whether there's
         | someone willing to pay money for it, or whether an AI could do
         | it more effectively or efficiently. By following our passions
         | we could continue to make truly beneficial advancements, and
          | all live enjoyable lives.
         | 
          | But I'm pessimistic about it. If we preserve the status quo
          | where everyone needs to do something somebody else is willing
          | to pay for - which I assume we will - we're setting up a
          | dystopian tragedy of the commons where only those doing things
          | nobody else wants to do - be it for moral concerns, dangers or
          | tediousness - make a decent living.
         | 
         | I don't see how we can change that trajectory with our current
         | economic systems, and I don't see us changing those any time
         | soon. But maybe it just has to get worse before it gets better.
         | 
         | I _do_ think swift coordinated global efforts in lawmaking
         | could contain this problem long enough for us to be able to
         | change our systems slowly. It might feel idiotic to not be able
         | to use technology available to us. But I also can't kick my
         | neighbour's door down and take his TV, even though I want it
         | and it'd be technically easy to do.
        
         | biophysboy wrote:
         | The worry is that AI like GPT will just exacerbate modern human
         | exploitation, because the surplus will not be allocated. The
          | same thing could be said about any disruptive invention, but AI
          | is particularly centralized, expensive, and technical.
        
           | foogazi wrote:
           | > The worry is that AI like GPT will just exacerbate modern
           | human exploitation
           | 
           | The worry should be human exploitation - exacerbated or not
        
             | dragonwriter wrote:
             | The worry _is_ human exploitation; exacerbating it,
             | however, exacerbates the concern.
        
             | charles_f wrote:
             | Yeah, my point exactly
        
         | beepbooptheory wrote:
         | It's already been pretty clear that the real stakeholders here
         | will let it all burn down before considering any fundamental
         | changes.
        
         | mcguire wrote:
         | This is true only if the "output" is of the same quality as the
         | work done by humans. For tractors, that's unarguably true. For
         | writing and artwork?
         | 
         | (Unless someone wants to make the argument that automation has
         | allowed and will allow humanity to do things that shouldn't
         | have been done in the first place. Progress comes with
         | consequences.)
        
       | avgcorrection wrote:
       | > Over 1,000 researchers, technologists, and public figures have
       | already signed the letter.
       | 
       | Embarrassing.
       | 
       | > > Should we let machines flood our information channels with
       | propaganda and untruth?
       | 
       | Already been a problem for a century. The root problem is letting
       | the same people turn the volume to 11.
       | 
       | The problem is who owns the means of technology (and therefore
       | AI).
       | 
       | > > Should we automate away all the jobs, including the
       | fulfilling ones?
       | 
       | This is silly on its face. Automating jobs is a problem? More
       | leisure is a problem?
       | 
       | If you want to still do those former _jobs_ (now hobbies) then
       | you are free to. Just like you can ride a horse, still.
       | 
       | Of course I'm being obtuse on purpose. The real root fear is
       | about other people (the people who own the means of technology)
       | making most of us obsolete and then most-of-us not being able to
        | support ourselves. But why can't we support ourselves with AI,
        | the thing that automated us away...? Because the technology has
        | been hoarded by the owners of technology.
       | 
       | So you can't say that the root issue is about _automation_ if you
       | end up _starving_ (or else the robots could feed you)... clearly
       | that's not it.
       | 
       | What's rich is that technologists and other professionals have
       | been the handmaidens of advanced alienation, automation, and
       | (indirectly) concentration of power and wealth. Politically
       | uncritically. And when they fear that all of this is cranked to
       | 11 they are unable to formulate a critique in political terms
       | because they are political idiots.
       | 
       | > > Should we develop nonhuman minds
       | 
       | The embarrassing part. Preaching a religion under professional
       | pretenses.
        
         | kumarvvr wrote:
         | > Already been a problem for a century
         | 
          | Propaganda thrives on scale. It is only recently, around 2014,
          | that social media tools became available to scale propaganda
          | _cheaply_ and by non-state actors.
        
         | Analog24 wrote:
         | > > > Should we develop nonhuman minds
         | 
         | > The embarrassing part. Preaching a religion under
         | professional pretenses.
         | 
         | There are quite a few companies with the explicitly stated goal
         | of developing AGI. You can debate whether or not it's possible
         | as that is an open question but it certainly seems relevant to
         | ask "should we even try?", especially in light of recent
         | developments.
        
           | avgcorrection wrote:
           | There could be quite-a-few companies whose mission statement
           | involved summoning the Anti-Christ and it would still just be
           | religion.
        
             | flangola7 wrote:
             | The Anti-Christ is not real. OPT social engineering humans
             | and GPT4 reverse engineering an assembly binary are.
        
             | Analog24 wrote:
             | That analogy is nonsense. We're actually making measurable
             | gains in AI and the pace of progress is only increasing.
             | Belief in this isn't based on faith, it is based on
             | empirical evidence.
        
               | avgcorrection wrote:
               | Elevator- and construction-technology has been improving
               | for centuries. We will surely make a Tower of Babel that
               | reaches the top of the Sky--it's only a question of when.
        
               | etiam wrote:
               | Are you suggesting that artificial minds are unattainable
               | in principle, or that the much-hyped language models are
               | elevators for a task that requires rockets?
        
         | FpUser wrote:
         | >"But why can't we support ourselves with AI, the thing that
         | automated us away...? because the technology has been hoarded
         | by the owners of technology."
         | 
          | The problem, I think, is that if for some reason AI "decides"
          | not to support us any longer, we will have lost the ability to
          | do it ourselves. We are getting dumber, I think. Obviously I
          | have no data, but I feel that back in, say, the 70s-90s the
          | wider public's ability to think and analyze was much better
          | than now.
          | 
          | So yes, do my job and feed me, but make sure I do not lose
          | some vital skills in the process.
        
           | avgcorrection wrote:
           | That was addressed in my last point ("religion").
        
             | FpUser wrote:
             | To me it does not look addressed at all.
        
             | Analog24 wrote:
             | No it wasn't. You have the misconception that an AI system
             | has to achieve anthropomorphic qualities to become
              | dangerous. ML algorithms are goal-oriented: they are
              | optimized to maximize a specific goal. If that goal is not
             | aligned with what a group of people want then it can become
             | a danger to them.
             | 
             | You seem to be dismissing the entire problem of AI
             | alignment due to some people's belief/desire for LLMs to
             | assume a human persona. Those people are uninformed and not
             | the ones seriously thinking about this problem so you
             | shouldn't use their position as a straw man.
        
               | avgcorrection wrote:
               | > You have the misconception that an AI system has to
               | achieve anthropomorphic qualities to become dangerous.
               | 
               | More than that. It has to achieve super-human
               | abilities.[1] And it might also need to become self-
               | improving, because how else could it spiral out of
               | control, as the hysterics often complain about?
               | (Remember: human intelligence is not self-improving.) It
               | also needs to gain an embodiment that reaches further
               | than server rooms. How does it do that? I guess it
               | social-engineers the humans and then... Some extra steps?
               | And then it gets more and more spooky.
               | 
               | This is more science fiction than technology.
               | 
               | [1] Or else it is no more dangerous than a human, and
               | thus irrational to worry about (relative to what humans
               | do to each other on a daily basis).
               | 
               | > ML algorithms are goal-oriented, they are optimized to
               | maximize a specific goal.
               | 
               | The Paper Clip Machine. I'm more worried about the Profit
               | Making Machine, Capitalism, since it is doing real harm
               | today and not potentially next month/next year/next
               | decade. (Oh yeah, those Paper Clip Machine philosophers
               | kind of missed the last two hundred years of the real-
                | life Paper Clip Machine... but what would the handmaidens
                | of such a monstrosity have against _that_, to be fair.)
        
       | zugi wrote:
       | I agree with what I see to be the main thrust of this article:
       | "AI" itself isn't a danger, but how people choose to use AI
       | certainly can be dangerous or helpful. That's been true of every
       | new technology in the history of mankind.
       | 
       | > Similarly, CNET used an automated tool to draft 77 news
       | articles with financial advice. They later found errors in 41 of
       | the 77 articles.
       | 
        | This kind of information is useless without a baseline. If they
        | asked humans to draft 77 news articles and later went back to
        | analyze them for errors, how many would they find?
        
         | unethical_ban wrote:
         | >"AI" itself isn't a danger, but how people choose to use AI
         | certainly can be dangerous or helpful
         | 
         | And people/governments _will_ use it for evil as well as good.
          | The article says "AI disinfo can't spread itself!" as if that
          | is a comfort. "Coal plant emissions don't spread themselves,
          | the wind does it!" as if we can control the wind. The solution
          | isn't to stop wind, it is to stop pollution.
         | 
          | Unfortunately, I don't think society/leading AI
          | researchers/governments will put the kibosh on AI development,
          | and the "wind" here, social networks, is an unleashed beast. I
         | don't want to sound too dreary, but I think the best society
         | can do now is to start building webs of trust between citizens,
         | and between citizens and institutions.
         | 
         | Cryptographic signing needs to hit mainstream, so things that
         | are shared by individuals/organizations can be authenticated
         | over the network to prove a lack of tampering.
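          | 
          | As a very rough sketch of what that could look like (using
          | Python's "cryptography" package; key distribution and identity
          | are the hard parts and aren't shown here):
          | 
          |     from cryptography.exceptions import InvalidSignature
          |     from cryptography.hazmat.primitives.asymmetric.ed25519 import (
          |         Ed25519PrivateKey,
          |     )
          | 
          |     # The publisher signs the exact bytes of what they share.
          |     private_key = Ed25519PrivateKey.generate()
          |     public_key = private_key.public_key()
          |     article = b"exact bytes of the shared content"
          |     signature = private_key.sign(article)
          | 
          |     # Anyone holding the publisher's public key can verify the
          |     # bytes they received weren't tampered with in transit.
          |     try:
          |         public_key.verify(signature, article)
          |         print("signature valid: content is what was signed")
          |     except InvalidSignature:
          |         print("content was altered or signed by someone else")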
         | 
         | 90s had the Internet. 00s had search, Wikipedia and Youtube.
         | 10s had social media. 20s will have AI.
        
         | [deleted]
        
         | DennisP wrote:
         | > "AI" itself isn't a danger, but how people choose to use AI
         | 
         | For our current AI, much less intelligent than humans, that's
         | true. If rapid exponential progress continues and we get AI
         | smarter than us, then the AI's choices are the danger.
        
           | tsimionescu wrote:
           | That's the kind of fanciful futuristic pseudo-risk that takes
           | over the discussion from actually existing risks today.
        
             | DennisP wrote:
             | Calling it "fanciful" is a prime example of the greatest
             | shortcoming of the human race being our inability to
             | understand the exponential function.
             | 
             | In any case, the open letter addresses both types of risk.
             | The insistence by many people that society somehow can't
             | think about more than one thing at a time has never made
             | sense to me.
        
               | mcguire wrote:
               | Can you demonstrate exponential progress in AI?
               | 
               | One notes that in roughly 6000 years of recorded history,
               | humans have not made themselves any more intelligent.
        
               | DennisP wrote:
               | Two notes that AI is clearly way more intelligent than it
               | was even three years ago, and that GPU hardware alone is
               | advancing exponentially, with algorithmic advances on top
               | of that, along with ever-larger GPU farms.
        
         | randomwalker wrote:
         | OP here. The CNET thing is actually pretty egregious, and not
         | the kind of errors a human would make. These are the original
         | investigations, if you'll excuse the tone:
         | https://futurism.com/cnet-ai-errors
         | 
         | https://futurism.com/cnet-ai-plagiarism
         | 
         | https://futurism.com/cnet-bankrate-restarts-ai-articles
        
           | ghaff wrote:
           | >and not the kind of errors a human would make.
           | 
           | I don't really agree that a junior writer would never make
           | some of those money-related errors. (And AIs seem
           | particularly unreliable with respect to that sort of thing.)
           | But I would certainly hope that any halfway careful editor
           | qualified to be editing that section of the site would catch
           | them without a second look.
        
             | mcguire wrote:
             | A junior writer would absolutely plagiarize or write things
             | like, "For example, if you deposit $10,000 into a savings
             | account that earns 3% interest compounding annually, you'll
             | earn $10,300 at the end of the first year."
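              | 
              | (To spell out the error in that quoted sentence: the balance
              | after one year is $10,300, but the interest _earned_ is only
              | $300. A quick check:)
              | 
              |     principal, rate = 10_000, 0.03
              |     balance = principal * (1 + rate)   # 10300.0 -- the balance
              |     earned = balance - principal       # 300.0 -- interest earned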
             | 
             | But if you're saving so much money from not having junior
             | writers, why would you want to spend it on editors? The AIs
             | in question are great at producing perfectly grammatical
             | nonsense.
        
             | jquery wrote:
              | The point wasn't that a junior writer would never make a
              | mistake, it's that a junior writer would be trying their
              | best to be accurate. An AI, however, will happily
              | hallucinate errors and keep on going with no shame.
        
               | sharemywin wrote:
                | AI or ChatGPT. If you create a system that uses it to
                | extract an outline of facts from 10 different articles,
                | then use an embedding database to combine those facts
                | into a semantically deduplicated list, and then use that
                | list of facts to generate an article, you'll get a much
                | more factually accurate article. (A rough sketch of such
                | a pipeline follows.)
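                | 
                | Something like the following, purely as an illustration;
                | llm() and embed() are stand-ins for whatever model and
                | embedding API you actually call, not real library
                | functions:
                | 
                |     import numpy as np
                | 
                |     def dedupe(facts, embed, threshold=0.9):
                |         """Drop facts whose embedding nearly matches one
                |         already kept (a crude 'embedding database')."""
                |         kept, vecs = [], []
                |         for fact in facts:
                |             v = np.asarray(embed(fact), dtype=float)
                |             v = v / np.linalg.norm(v)
                |             if all(float(v @ u) < threshold for u in vecs):
                |                 kept.append(fact)
                |                 vecs.append(v)
                |         return kept
                | 
                |     def write_article(articles, llm, embed):
                |         # 1. Pull candidate facts out of each source.
                |         facts = []
                |         for text in articles:
                |             out = llm("List the factual claims in:\n" + text)
                |             facts += [l for l in out.splitlines() if l.strip()]
                |         # 2. Merge semantically duplicate facts.
                |         facts = dedupe(facts, embed)
                |         # 3. Write only from the merged fact list.
                |         return llm("Write an article using only these "
                |                    "facts:\n" + "\n".join(facts))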
        
           | hodgesrm wrote:
           | Your first article pretty much sums up the problem of using
           | LLMs to generate articles: random hallucination.
           | 
           | > For an editor, that's bound to pose an issue. It's one
           | thing to work with a writer who does their best to produce
           | accurate work, but another entirely if they pepper their
           | drafts with casual mistakes and embellishments.
           | 
           | There's a strong temptation for non-technical people to use
           | LLMs to generate text about subjects they don't understand.
           | For technical reviewers it can take longer to review the text
           | (and detect/eliminate misinformation) than it does to write
           | it properly in the first case. Assuming the goal is to create
           | accurate, informative articles, there's simply no
           | productivity gain in many cases.
           | 
           | This is not a new problem, incidentally. ChatGPT and other
           | tools just make the generation capability a lot more
           | accessible.
        
         | low_tech_love wrote:
         | "If they asked humans to draft 77 news articles and later when
         | back to analyze them for errors, how many would they find?"
         | 
            | It doesn't matter. A self-driving car that kills a person is
            | not the same as one person killing another person. A person
            | is accountable; an AI isn't (at least not yet).
        
           | sebzim4500 wrote:
           | >One self-driving car who kills a person is not the same one
           | person killing another person
           | 
           | I fundamentally disagree. A dead person does not become less
           | dead just because their family has someone to blame.
        
             | drittich wrote:
             | This feels like a strawman argument. I suspect the person
             | you are replying to would agree with your last sentence.
             | Can you think of any ways the two things might be perceived
             | differently?
        
               | sebzim4500 wrote:
                | I literally quoted his comment; how can it be a strawman?
        
             | fhd2 wrote:
             | The difference is that the person that (accidentally or
             | not) killed another person will suffer consequences, aimed
              | to deter others from doing the same. People have rights and
              | are innocent unless proven guilty; we pay a price for that.
             | Machines have no fear of consequences, no rights or
             | freedom.
             | 
             | For the person that died and their loved ones, it might not
             | make a difference, but I don't think that was the point OP
             | was trying to make.
        
         | teamspirit wrote:
         | "Guns don't kill people, people kill people."
         | 
         | Here in the United States, that's a popular statement amongst
         | the pro gun/2a supporters. It strikes me as very similar to
         | this discussion about AI.
         | 
         | "AI can't hurt people, people hurt people."
         | 
         | I haven't spent enough time to fully form an opinion on the
         | matter, I'm just pointing out the similarities in these two
         | arguments. I'm not sure what to think anymore. I'm equally
         | excited and apprehensive about the future of it.
        
           | alwaysbeconsing wrote:
           | The most glaring difference is that guns are basically inert.
           | Whereas an AGI by its nature doesn't require a command in
           | order to do something. It can pull its own trigger. The
           | analogy of gun:AI could itself be analogized to bicycle:horse
           | perhaps. You can train a horse to carry you down the road,
           | but it still has a mind of its own.
        
             | b1c1jones wrote:
              | Not yet, nor close to it, does AI have a mind of its own.
        
               | [deleted]
        
             | ghostpepper wrote:
             | How does AI not require a command?
        
               | alwaysbeconsing wrote:
               | Note the G in AGI :)
        
               | staunton wrote:
               | Do you believe there is anything at all that does not
               | require a command, besides humans? On a side note, most
               | of the time humans also require a command.
        
           | xdennis wrote:
           | A major difference is that guns are a precaution. Everyone's
           | best outcome is to not have to use guns.
           | 
           | But AI will be used all the time, which makes responsible use
           | more difficult.
        
         | chongli wrote:
         | _This kind of information is useless without a baseline_
         | 
         | The problem is not so much the quantity of errors (that's a
         | problem too) but the severity of them. These LLM "AIs" will
         | produce convincing fabrications mixed in with bits of truth.
         | When a human writes this sort of stuff, we call it fraud.
         | 
         | Just earlier this week in my philosophy class we had ChatGPT
         | produce a bio of our professor. Some details were right but
         | others were complete fabrications. The machine gave citations
         | of non-existent articles and books she'd apparently written. It
         | said she'd previously taught at universities she'd never even
         | visited.
         | 
         | I don't know how else to describe it other than amusing at
         | best, dangerous fraud at worst.
        
           | xdennis wrote:
           | > It said she'd previously taught at universities she'd never
           | even visited.
           | 
           | Don't worry. Some of the details will make it to some
           | websites, which will be cited by the press, which will be
            | included in Wikipedia, and it will become the truth. She
            | will have taught at those universities whether she likes it
            | or not.
        
           | vharuck wrote:
           | I recently used ChatGPT (the free 3.5 Turbo version) to list
           | five articles about public health surveillance, asking for
           | ones published since 2017. The list just had the article
           | titles, authors, publications, and years. I had trouble
           | finding the first one, so I asked the model for DOI numbers.
           | It happily restated the article titles with DOI numbers.
           | 
           | None of the articles were real. The authors were real (and
           | actively published work after 2017). The publications were
           | real, and often featured the listed authors. But the articles
           | were all fake. Only one of the DOI numbers was legit, but it
           | pointed to a different article (partial credit for matching
           | the listed publication). So it can easily hallucinate not
           | just nitty gritty details, but basic information.
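            | 
            | (A cheap way to catch that particular failure mode, assuming
            | you have the "requests" package and a network connection: a
            | real DOI resolves at doi.org, while a hallucinated one
            | normally comes back as a 404.)
            | 
            |     import requests
            | 
            |     def doi_exists(doi: str) -> bool:
            |         """False if doi.org reports the DOI as unknown."""
            |         r = requests.head("https://doi.org/" + doi,
            |                           allow_redirects=True, timeout=10)
            |         return r.status_code != 404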
           | 
           | Thinking about it, the GPT models are trained on output, so
           | they've picked up grammar, syntax, and conceptual relations
           | between tokens (e.g., "eagle" is closer to "worm" than it is
           | to "elegance"). But there are few, if any, examples of "first
           | draft vs final draft." That could've been useful to pick up
           | on how discretion and corrections are used in writing.
        
         | adsfoiu1 wrote:
         | Building on that, how long did the AI take to draft the 77 news
         | articles? Now ask a human to draft 77 articles in that same
         | amount of time and see how many errors there are...
        
           | DoneWithAllThat wrote:
            | For this to be meaningful at all, you would have to presume
            | that making the AI take longer would improve its accuracy,
            | which is almost certainly not the case.
        
             | squarefoot wrote:
             | It would however allow more accurate but slower humans to
             | check its results for errors.
             | 
              | I find it plausible that, in the near future, AI will be
              | capable of generating content at a pace so high that we
              | won't have the time and resources to guarantee its accuracy
              | and will very soon surrender to blindly accepting
              | everything it spits out as truth. That day, anyone pulling
             | the strings behind the AI will essentially have the perfect
             | weapon at their disposal.
        
           | blibble wrote:
           | so your position is: who cares what financial advice we
           | output as long as we do it quickly?
        
           | arp242 wrote:
           | We are already awash in more articles and information than
           | ever, and how long it takes to produce them isn't very
           | important outside of a "quantity over quality" business
           | model.
        
       | buzzert wrote:
       | > The real impact of AI is likely to be subtler: AI tools will
       | shift power away from workers and centralize it in the hands of a
       | few companies.
       | 
       | I don't understand how the article can say something like this,
       | but then further down essentially contradict itself.
       | 
       | > But a containment approach is unlikely to be effective for AI.
       | LLMs are orders of magnitude cheaper to build than nuclear
       | weapons or cloning -- and the cost is rapidly dropping. And the
       | technical know-how to build LLMs is already widespread.
       | 
       | How can AI tools cause centralization if the cost is rapidly
       | dropping, accessible to everyone, and the technical know-how is
       | widespread?
       | 
        | In my opinion, AI is doing exactly the opposite. What was only
        | possible at large companies is now becoming possible to do by
        | oneself. Video games, for example, used to require teams of
       | artists and programmers. With code and art generation, a single
       | individual has never had more capability to make something by
       | themselves.
        
         | jquery wrote:
         | > Video games, for example, used to require teams of artists
         | and programmers. With code and art generation, a single
         | individual has never had more capability to make something by
         | themselves.
         | 
         | That may be so, but precisely because of this, a lot of us are
         | gonna be left up shit creek if our indie game doesn't go viral.
         | The big companies don't need so many of us any more. Inequality
         | is going to increase dramatically unless we start taxing and
         | redistributing what we _all_ contributed to these massive AI
         | models.
         | 
         | ChatGPT was trained on the contributions of all of us. But it
         | will be used to make many of us redundant at our jobs.
         | MidJourney will likewise increase inequality even as it has the
         | potential to greatly enrich all of us.
         | 
         | If we continue down our current neoliberal path, refusing to
         | give people universal health care, or basic income for anyone
         | not disabled or over 65, and keep calling any redistributive
         | efforts "sOcIaLiSm", our society will lurch towards fascism as
         | people decide genocide is the only way to make sure they have
         | enough resources for them and their family.
        
         | impossiblefork wrote:
         | It's not a contradiction.
         | 
         | Even if many people have good models close to SotA it will
         | still reduce the importance of workers since the models compete
         | with certain kinds of knowledge workers. This will lead to
         | power and wealth moving from workers and ordinary people to
         | company owners. That is centralisation.
        
         | breuleux wrote:
         | > With code and art generation, a single individual has never
         | had more capability to make something by themselves.
         | 
         | That is nice and all, but let me put it this way: if it costs
         | you three cents in computing power to have AI write the
         | necessary code for something or draw good art, you can't
         | reasonably expect anyone to pay you more than three cents for
         | it.
         | 
         | Your economic value as a worker matches the economically
         | productive things you can do, and that's what is going to put
         | food on your table. If an AI can do in ten seconds for ten
         | cents a task that would take you an hour, congratulations, your
         | maximum wage is ten cents. If/when the AI becomes better than
         | you at everything, your work is worthless, your ideas are
         | worthless, so it all comes down to the physical assets you
         | possess.
         | 
         | And that's why it is centralizing: the most decentralized class
         | of asset is human labour. AI has the potential to render it
         | worthless, and whatever remains (land, natural resources) is
         | already more centralized. There is a small number of people who
         | may be uniquely talented to leverage AI to their benefit, but
         | it is unclear how long that period might last.
        
           | whaleofatw2022 wrote:
           | Bingo.
           | 
           | That's also why I think lots of these folks have no qualms
           | doing this research so long as they are paid handsomely...
           | 
           | And why other rich folks are happy to pay it.
        
             | PaulDavisThe1st wrote:
             | "Pay these 100 people 10x more than we would normally, so
             | we can pay 10,000 people 1000x less than we do already"
        
           | jquery wrote:
           | Perfectly said. The genie is out of the bottle, but we can at
           | least share the wealth with everyone before a revolution
           | happens.
        
         | shredprez wrote:
         | Low barriers-to-access for tooling absolutely democratize the
         | creation of things with said tools.
         | 
         | But. It doesn't meaningfully shift the underlying economic
         | mechanics we live under, where people with access to capital
         | can easily outcompete those without that access. It also erodes
         | the relationship between the laboring class and the capital-
         | deploying class (less interdependence), which has to have
         | knock-on effects in terms of social cohesion and mobility.
         | 
         | I'm very much in favor of the tech being developed right now,
         | but it's hard to believe it'll do anything but amplify the
         | trends already present in our economy and culture.
        
           | nylon_panda wrote:
            | Which is where UBI will need to come in, or a reallocation
            | of labor to where workers can self-sustain. Then it'll lead
            | to some 86 Eighty-Six world, where humans live in holding
            | pens and wars are fought with crazy robots all out of sight
            | & no information comes out about it to the public.
        
             | zarzavat wrote:
             | UBI was probably a zirp fantasy, but more importantly it's
             | a _US_ zirp fantasy because the US controls the dollar and
             | can thus fantasize about crazy things like UBI with little
             | fear of repercussions.
             | 
             | Look at the recent UK Liz Truss crisis for what happens
             | when a country tries to spend more than it should. And the
             | UK is a very privileged economy that controls a reserve
             | currency.
             | 
             | If you are just a normal country, attempting UBI will
             | simply destroy your economy, nobody will buy your debt, and
             | the IMF will come and make you stop it. Some countries may
             | be able to fall back on natural resources, but many
             | countries won't.
        
         | jeron wrote:
          | I think the author feels that LLM development is only driven by
          | OpenAI and Google's Bard and is ignoring/ignorant of open-source
          | LLMs.
        
         | Teever wrote:
         | First movers in the AI world who have amassed capital from
         | other endeavours can leverage that capital to amass even more
         | capital by taking advantage of AI before other people can. This
         | will further tilt the scale in favour of large corporations at
         | the expense of workers.
         | 
         | As the techniques improve and cost to implement them decreases,
         | 3rd parties will be able to develop similar technologies,
         | preventing any sort of complete containment from being
          | successful, but not before other people have been able to
         | consolidate power.
        
         | yonran wrote:
          | Here is Tyler Cowen's take, which I think clarifies things in
         | terms of the 3 factors of production (land, labor, capital).
         | Even if the LLMs themselves are not monopolized by a few
         | companies, land and capital may still become more expensive
         | relative to labor.
         | https://www.bloomberg.com/opinion/articles/2023-03-28/how-ai...
         | (paywalled)
        
         | nazgulsenpai wrote:
         | Maybe in the arts but in manufacturing, for instance, AI driven
         | machinery still requires said machinery. AI driven warehouse
         | robots still require a warehouse full of goods. AI designed
         | chips would still require fabrication, etc.
        
       | frozenlettuce wrote:
        | My take is that a group of tech barons noticed that they are late
        | to the AI party and want to pause it so that they can catch up.
        
         | A4ET8a8uTh0 wrote:
         | This is my personal cynical take as well. It is somewhat
         | interesting that Musk is a signatory given his connection to
          | OpenAI, but this only suggests that recent efforts by the FOSS
          | community were a lot more successful than some considered
          | possible.
         | 
         | I can't say that I am not hesitant, but this open letter made
         | me think we _might_ be on the right track ( personal AI that is
         | private to you ).
        
       | xiphias2 wrote:
        | How can long-term existential risk be "speculative" when we
        | don't know of any civilization that survived getting past Type I
        | on the Kardashev scale?
       | 
       | Right now there is more evidence against surviving the transition
       | than for it.
        
         | thisisbrians wrote:
          | we can't even prove that other civilizations exist. c'mon.
         | statistical likelihoods don't prove anything. never having seen
         | a thing doesn't prove it never existed.
        
         | goatlover wrote:
          | We don't know whether any intelligent life within detectable range
         | has evolved. Intelligent civilizations could be rare enough
         | that they're too far away. We simply don't know. Life itself
         | could be incredibly rare, so no conclusions can be drawn.
        
         | tsimionescu wrote:
         | We also don't know of any civilizations, or even peoples on
         | Earth, that survived after discovering the philosopher's stone
         | and being punished by God for their vanity.
         | 
         | Putting physics research on hold to account for such a risk
         | seems a bit silly though, and should definitely be classified
         | as "speculative".
         | 
         | Point being, the Kardashev scale is sci-fi, not science, and
         | any worry about it is the very definition of speculative.
        
           | xiphias2 wrote:
           | > Putting physics research on hold to account for such a risk
           | seems a bit silly though, and should definitely be classified
           | as "speculative".
           | 
            | I think we have as much choice in putting research into
            | anything (AI / physics) on hold as monkeys had in the
            | evolution of humans, who went on to keep them in zoos and
            | use them for experiments. I was not arguing against research.
            | 
            | In my mind it will all just happen naturally, and we can't do
            | anything to stop ourselves from being ruled over by AI, as we
            | don't have the capability for global-scale consensus anyway.
        
       | Animats wrote:
       | If the surveillance and control technology we have today had been
       | available in the 1960s, the anti-Vietnam war movement, the black
       | power movement, and the gay liberation movement would have been
       | crushed like bugs.
        
       | rafaelero wrote:
        | I haven't seen the thread of the owner that successfully diagnosed
        | the dog using GPT-4. That's incredible! Unfortunately the author
       | is too much of a pessimist and had to come up with the "what if
       | it was wrong?". Well, if it was wrong then the test would have
       | said so. Duh.
        
       | [deleted]
        
       | swframe2 wrote:
       | History has shown that selfish behavior should be expected. The
       | dangers seem unavoidable.
        
         | kurthr wrote:
         | Selfish behavior by corporations is borderline mandatory.
         | 
         | Imagine dealing with customer service when any human you reach
          | is using an LLM to generate 50x more responses than they
         | currently can with no more ability to improve the situation.
         | They will all 'sound' like they are helpful and human, but
         | provide zero possibility of actual help. 'Jailbreaking' to get
         | human attention will remain important.
         | 
          | Now imagine what is developed for corporations being supplied
          | to government bureaucracies, investigative agencies, and the
          | military.
        
       | revel wrote:
       | The real problem with AI, as I see it, is that we are not ready
       | to give the average person the capabilities they will soon have
       | access to. The knowledge necessary to build nuclear weapons has
       | been readily available for decades, but the constraints of the
       | physical world put practical limits on who can do what. We are
       | also able to limit who can do what by requiring formal education
       | and licenses for all kinds of professions. This is no longer
       | going to be strictly true; or at least not in the same way. If we
       | impose no restrictions, literally everyone on the planet will
       | eventually be able to do anything, provided they have the
       | resources.
       | 
       | The fact is that AIs are more likely to be accomplices than
       | masterminds, at least for the foreseeable future. What we are
       | afraid of is that they will end up being just as terrible and
       | flawed as we are, but it's more likely that they will do what
       | they do today: what we ask them to do. The greater risk is
       | therefore from malicious internal users rather than from the
       | technology itself.
       | 
       | Perhaps more to the point, there is no effective way to limit the
       | output of an LLM or place restraints on how they work. I think
       | it's foolish to even attempt that -- they just don't work that
       | way. The better way to regulate this kind of technology is to
        | focus on the people. Some data classification and licensing
        | program seems a lot easier to implement than the road we
        | currently seem to be going down, which is either no regulation
        | or insanely restrictive regulation.
        
         | erksa wrote:
         | > We are also able to limit who can do what by requiring formal
         | education and licenses for all kinds of professions.
         | 
          | I agree with your take for the most part, but I'm cautiously
          | more optimistic about this. Removing the barrier to entry for
          | most things will let more people contribute in professions
          | that they otherwise would not have the know-how for.
         | 
          | This will lead to more good things than bad things, I think,
          | but we seemingly only focus on the bad extremes while assuming
          | everything else stays the same.
         | 
          | Modern JavaScript has reduced the barrier to entry to software
          | development. It has had a massive impact on the industry,
          | bringing an influx of people that leads to the industry seeing
          | more and more creative solutions, as well as some really wild
          | ones.
         | 
          | Maybe I'm terribly naive, but I think we seriously
          | underestimate how many people want change -- they're just
          | missing the push to spring into action. For some the moment is
          | about to arrive, for others it will come later.
        
           | revel wrote:
           | I think you're getting to the heart of what a smart approach
           | to regulation would look like (for me at least). Try as I
           | might, I can't imagine a situation where someone is able to
           | subvert an LLM to come up with a world-ending scenario using
           | its knowledge of onion-based recipes or by painting
           | fingernails. There is no point to regulating every single
           | potential use of an AI: to do so is to cede the future to a
           | less scrupulous nation. We already know the kind of data and
           | functionality that will likely lead to a dystopian hellscape
           | because we already do those things and they are _already_
           | regulated.
           | 
           | Paradoxically, a heavy-handed approach to regulation could
           | guarantee a bleak future for humanity. We risk increasing the
           | incentives for malicious use at the same time as we make
           | legitimate use prohibitively expensive / regulated out of
            | existence. If the war and crime machine is cheap and easy to
           | monetize, don't be surprised when that's what gets built.
           | 
           | The future could be unimaginably better for everyone, but we
           | won't realize that future unless we get the regulation right.
        
         | Der_Einzige wrote:
         | Hey, I wrote a whole paper on how to "limit the output" or
         | "place restraints" on an LLM!
         | 
         | https://paperswithcode.com/paper/most-language-models-can-be...
        
         | avgcorrection wrote:
         | The planet has been ruined (climate change) by elite-domination
         | since the Industrial Revolution. Those elites have almost
          | annihilated us with nuclear weapons. And here you are navel-
          | gazing about Joe Beergut being "capable" of doing... _unclear_;
          | no examples are even given.
        
       | FloatArtifact wrote:
       | I'm surprised privacy is not on that list.
        
       | tinglymintyfrsh wrote:
       | Correct me if I'm wrong: Anthropic is a startup working to
       | develop guardrails for real AI concerns.
       | 
       | In general, how does one put "business rule"-type constraints on
        | the output of models? Is there any way to bake in prime
        | directives?
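        | 
        | Anthropic's published idea ("constitutional AI") tries to train
        | rules like that into the model itself. A cruder pattern seen in
        | practice is post-hoc filtering: generate, check the output
        | against the rules, and retry or refuse. A minimal sketch, where
        | generate() and the rules are placeholders rather than any real
        | API:
        | 
        |     import re
        | 
        |     # Hypothetical business rules: phrases the output must never
        |     # contain. Real systems layer classifiers, allow-lists and
        |     # human review on top of this.
        |     FORBIDDEN = [re.compile(p, re.I)
        |                  for p in (r"\bguaranteed returns\b",
        |                            r"\bmedical diagnosis\b")]
        | 
        |     def constrained_reply(prompt, generate, max_tries=3):
        |         # generate() is a stand-in for whatever model call is used.
        |         for _ in range(max_tries):
        |             text = generate(prompt)
        |             if not any(rule.search(text) for rule in FORBIDDEN):
        |                 return text
        |         return "Sorry, I can't help with that."  # refuse, don't leak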
        
       | aaroninsf wrote:
        | All prediction and knowing prognostication flooding popular media
        | and public discourse on the topic of the risks and likely impact
        | of AI, from the inane letter to the Goldman Sachs
        | prognostication,
       | 
       | is wrong.
       | 
       | It is consistently hubristic, and variously disingenuous or bad
       | faith, naive, or millenarian.
       | 
       | Why is it wrong?
       | 
       | Because no one, not Altman nor Yudkowsky nor Musk nor Gates nor
       | Bostrom nor anyone else, knows what the impacts are going to be.
       | 
       | We have not since the advent of the internet experienced the
       | emergent introduction of a new technological force-multiplier and
       | agency-augmenter like this; and this one by virtue of where we
       | are courtesy Moore's Law etc. fully exploits and realizes the
       | potential of the preceding ones. We built a highly-networked
       | highly-computational open society resting on surveillance, big
       | data, logistics, and the rapid flow, processing, and
       | transformation, of inordinate amounts of data.
       | 
       | And now we are cranking things up one more notch.
       | 
       | Those of us who lived through the internet's arrival know
       | something that those who grew up with it do not, which is that the
       | actual literal ordering of things--of most aspects of shared
       | society--can and will be upended; and it is not just industries
       | and methods of communicating and doing business that change. Our
       | conception of self and presumptions of what it means to be
       | ourselves and with one another and how that happens, all change.
       | 
       | Per the slow march to singularity, the last revolutions have
       | reliably transpired an order of magnitude or more faster than
       | those before them.
       | 
       | This one looks to be no different. The rate of change telegraphed
       | by e.g. this forum and Reddit, viz. individual novel
       | possibilities being exploited on a daily basis, makes that clear
       | enough.
       | 
       | So the _only_ thing any of us, no matter how silver-backed, grey-
       | bearded, wealthy, or embedded in the AI industry itself, can say
       | 
       | is that _none of us know_ what is going to happen.
       | 
       | The surface across which black-swan events and disruption may
       | occur is simply too large, the number of actors too great, the
       | consequent knock-on effects too numerous.
       | 
       | The _only_ thing we can say is that none of us know where this is
       | going.
       | 
       | Well, that--and that it's happening at a rate beyond
       | institutional or governmental control.
       | 
       | The only things that could stop radical disequilibrium now are
       | deus ex machina intervention by Other Powers, even more
       | disruptive climatological tipping points, or, untracked large
       | asteroids.
       | 
       | Beware anyone who claims to know what is happening, or why. For
       | one reason or another, they are speaking falsely.
        
       | kumarvvr wrote:
       | > Malicious disinformation campaigns
       | 
       | This is a real threat. The author's contention is that it has
       | always been easy to _create_ content, but difficult to
       | _distribute_ it. But that is because, until GPT, created content
       | was poor quality. Simple tools can sniff that out, and
       | distribution channels can block it effectively.
       | 
       | With GPT, it is near impossible to automate detection of
       | bullshit. In fact, it is trivial for a GPT-based system to
       | generate an _ecosystem_ of misinformation very cheaply and
       | maintain it.
       | 
       | It's unprecedented, and what I suspect will happen is that we
       | will have internet-wide identification systems (verified
       | accounts) to battle the onslaught of very good AI-based content.
       | 
       | > LLMs will obsolete all jobs
       | 
       | I agree with this. But..
       | 
       | > AI tools exploit labor and shift power to companies
       | 
       | How costly is it, really, to build a ChatGPT-like model using
       | AWS / Azure? One can always start a company with a small amount
       | of capital, build a business, and then expand.
        
       | rbanffy wrote:
       | It reminds me a bit of Yudkowsky's Moore's Law of Mad Science:
       | "Every 18 months, the minimum IQ necessary to destroy the world
       | drops by one point", but applied to money - building bombs and
       | missiles is hard and expensive. Using AI APIs to wage enormous
       | campaigns of disinformation and harassment isn't.
        
       | hirundo wrote:
       | > Real Risk: Overreliance on inaccurate tools
       | 
       | We might be able to remediate this somewhat by designing AIs that
       | are purposely below some threshold of reliability. Their current
       | tendency to hallucinate will discourage overreliance if we can't
       | or don't fix it.
       | 
       | It's a little like the distinctive smell that is added to propane
       | to make it easier to detect leaks. By adding, or not removing,
       | easily detectable hallucinations from AI, it's easier to detect
       | that the source must be checked.
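       | 
       | A toy sketch of that "odorant" idea -- not literally injecting
       | hallucinations, just making the unreliability impossible to
       | miss; ask_model() here is a made-up placeholder, not any real
       | API:
       | 
       |     import random
       | 
       |     CANARY = "(unverified -- this tool is often wrong; check it)"
       | 
       |     def ask_model(question: str) -> str:
       |         # Made-up placeholder for a real model call.
       |         return "Here is a confident-sounding answer."
       | 
       |     def odorized(question: str, rate: float = 0.2) -> str:
       |         """Attach a conspicuous reminder to a random slice of
       |         answers, so users never settle into treating the tool
       |         as an oracle."""
       |         answer = ask_model(question)
       |         if random.random() < rate:
       |             answer = f"{answer} {CANARY}"
       |         return answer
       | 
       |     print(odorized("Why does propane smell?"))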
       | 
       | We desperately want oracles and will quickly latch on to highly
       | unreliable sources at the drop of a hat, to judge by various
       | religious, political and economic trends. It's inevitable that
       | many will turn AIs into authorities rather than tools as soon as
       | they can justify it. We could delay that by making it less
       | justifiable.
        
         | nsajko wrote:
         | > We desperately want oracles and will quickly latch on to
         | highly unreliable sources at the drop of a hat, to judge by
         | various religious, political and economic trends. It's
         | inevitable that many will turn AIs into authorities rather than
         | tools as soon as they can justify it.
         | 
         | Indeed. This was true even for the primitive AI/chatbots that
         | existed in the sixties:
         | https://en.wikipedia.org/wiki/ELIZA_effect#Origin
        
       | [deleted]
        
       | kmeisthax wrote:
       | Real harm #2 is literally the Luddite argument.
       | 
       | Of course, we never actually heard the Luddite argument, because
       | 19th-century Parliament[0] responded to the Luddites with
       | reprisals, censorship, cruel and unusual punishment, and most
       | importantly, _propaganda_. When you heard the word  "Luddite" you
       | probably thought I was accusing AI Snake Oil of being anti-
       | technology. The reality is that the Luddites just wanted looms
       | for themselves, because there were already structured phase-in
       | periods for looms to ensure that skilled weavers got to buy them
       | first. Smashing looms was a tactic to get business owners back to
       | the table, and the business owners responded by propagandizing
       | them as angry technophobes standing in the way of progress so
       | that everyone would have to pay more for clothes.
       | 
       | Oh, and also by smashing heads.
       | 
       | > One way to do right by artists would be to tax AI companies and
       | use it to increase funding for the arts.
       | 
       | This is not a bad idea, if you want to live in a world where
       | everyone has fair access to generative art models but we don't
       | decide to turn human artists into 19th-century weavers[1].
       | However, I don't think this proposal would go down well with
       | artists. Remember when the EFF suggested that, instead of the
       | RIAA suing individual pirates, we should just have a private
       | copying levy on _all Internet service_ to remunerate artists?
       | Yeah, no, that was never going to happen.
       | 
       | The reason why this was considered a non-starter is simple:
       | copyright isn't a means to move money from readers to writers'
       | pockets _in aggregate_ , but individually. Nobody wants to be
       | paid out of a pot, and that's why you also see a lot of outrage
       | from artists over Spotify, because it's an effective revenue cap
       | that screws over midlist artists. AI copying levies would further
       | obfuscate attribution, because there currently isn't a way to
       | determine how much value a particular training example provided
       | to a particular generation prompt. And artistic endeavors vary
       | _greatly_ in terms of both quality and, especially, market value.
       | An AI copying levy would flatten this down to just  "whoever has
       | the most art in the system wins". In other words, artists are
       | currently playing Monopoly, and you're suggesting they play
       | Ludo[2] instead.
       | 
       | [0] An institution largely consisting of rich nobility fighting a
       | cold civil war against the English crown
       | 
       | [1] The average /r/stablediffusion commenter would disagree.
       | 
       | [2] US readers: Sorry / Trouble. I don't know how the same game
       | got renamed twice to two different things.
        
         | robocat wrote:
         | > The reality is that the Luddites just wanted looms for
         | themselves
         | 
         | Reference? A quick skim of Wikipedia and other articles doesn't
         | mention this.
        
       | TheOtherHobbes wrote:
       | This is wrong in so many ways.
       | 
       | "Distributing disinfo is the hard part." No, distributing disinfo
       | is incredibly easy and gives very good returns for a small
       | outlay. There have been so many examples of this - Cambridge
       | Analytica, SCL, Team Jorge, the Internet Research Agency, the Q
       | Cult - that on its own this makes me question the writer's
       | research skills.
       | 
       | And that's just the troll operations. There's plenty of disinfo
       | and propaganda propagated through mainstream media.
       | 
       | AI will make it even easier to monitor target demographics and
       | generate messages that resonate with them. Automated trolling and
       | targeting are not a trivial threat, and not even remotely
       | unlikely.
       | 
       | "AI tools will shift power away from workers and centralize it in
       | the hands of a few companies." Which is no different to what we
       | have now. AI will just make it even easier.
       | 
       | But even if it didn't - _it doesn't matter_ who owns the AI. If
       | it can do jobs that used to be considered
       | intellectual/creative/educated/middle class, it will. Open source
       | communal AI would be just as disruptive as corporate AI.
       | 
       | It's a political and economic abyss. Ownership is likely to get
       | lost in the sediment at the bottom. We just don't have the first
       | clue how to deal with something like this.
       | 
       | "LLM-based personal assistants could be hacked to reveal people's
       | personal data, take harmful real-world actions such as shutting
       | down systems, or even give rise to worms that spread across the
       | Internet through LLMs." Again, this is just an amplification of
       | where we are already. It doesn't address the real problem, which
       | is that personalised generative disinfo and propaganda - and that
       | doesn't just mean personalised by topic, but by emotional trigger
       | - is going to be a radioactively toxic influence on trust-based
       | systems of all kinds.
       | 
       | For example - what happens when you can't tell if the emails or
       | video calls you receive are genuine? What are you going to do
       | with an AI assisted social engineering attack on your
       | organisation, social group, or personal relationships?
       | 
       | We already have deep fakes and we're only a few years away from
       | prompt driven troll farms and Dark Agents who can mimic real
       | people and steer interactions in a toxic direction.
       | 
       | This isn't scifi. This is a very real threat.
       | 
       | There's a deep failure of imagination in this article - looking
       | at small trees that may get in the way when the entire forest is
       | about to burn down.
       | 
       | The scifi threat is on a completely different level - the
       | possibility that AI will discover new physics and start
       | manipulating reality in ways we can't even imagine.
       | 
       | I'm agnostic on whether that's possible - it's too early to tell
       | - but that's an example of what a real existential threat might
       | look like.
       | 
       | There are others which are similarly extreme.
       | 
       | But they're not necessary. A tool that has the very real
       | potential to corrode all of our existing assumptions about work,
       | culture, and personal relationships, and is easily accessible by
       | bad actors, is already a monstrous problem.
        
         | RandomLensman wrote:
         | Societies could just decide to police the internet, social
         | media or news very differently - it's up to us in a way. The
         | current ways of social interaction are not set in stone.
        
       | FeepingCreature wrote:
       | No, it's not "misleading". You just disagree.
       | 
       | Jesus, can people please relearn how to debate without these
       | weird underhanded tactics?
        
         | mcguire wrote:
         | On the contrary, it is misleading. By framing the debate in
         | terms of hypothetical potential future problems, you can
         | deliberately discount current, concrete problems. See also FUD.
        
       | modeless wrote:
       | This is a good list, but I'm continually surprised that people
       | ignore what seems to me like the worst realistic danger of AI.
       | That is the use of AI by militaries, law enforcement, and
       | intelligence agencies. AI will cause a _huge_ expansion of the
       | power of these institutions, with no accompanying increase in the
       | power of the people to control them.
       | 
       | Forget the Terminator scenario. We don't need to invoke science
       | fiction to imagine the atrocities that might result from an
       | unchecked increase in military and law enforcement power. AI that
       | obeys people is just as scary as AI that doesn't. Humans are
       | plenty capable of atrocities on our own. Just look at history.
       | And regulation is unlikely to save us from government abuse of
       | AI; quite the opposite in fact.
        
         | CatWChainsaw wrote:
         | Exactly this. Despite the promise of open source and computing
         | "empowering" users, see the last 20 years for a refutation.
         | Most people live in a silo - Apple, Microsoft, Google, some
         | combination. AI is not an equalizer of user power, but an
         | amplifier of those who already have it. AI coupled with the big
         | data that made AI possible plus the holders of big data knowing
         | who you are, where you are, who you associate with, what you
         | say, and potentially even what you think... yet some people
         | still believe it is somehow a democratizing force.
        
         | Cthulhu_ wrote:
         | > That is the use of AI by militaries, law enforcement, and
         | intelligence agencies.
         | 
         | This is already happening, and has done so for at least a
         | decade - probably longer, but under different names. The
         | automated systems trusted to flag up suspects of crime have
         | been shown multiple times to produce a lot of false
         | positives, often racially biased.
         | 
         | One example is the Dutch childcare benefits scandal (https://en
         | .wikipedia.org/wiki/Dutch_childcare_benefits_scand...). In
         | response to a handful of Bulgarian scammers who found a
         | loophole in the system (800 Bulgarians, for a total of EUR 4
         | million), they installed a fraud prevention team that was
         | incentivised to find as many scammers as possible.
         | 
         | This caused thousands of people - often of Caribbean descent -
         | to be marked as fraudsters, have their benefits cut and have to
         | repay all of it, sometimes going back years. This in turn
         | caused stress, people losing jobs and houses; one person killed
         | themselves, others left the country. 70,000 children were
         | affected, of which 2,000 were removed from their homes.
         | 
         | All because a handful of people defrauded the system, and they
         | employed technology to profile people. And of course, offered
         | incentives to catch as many as they could, rule of law / due
         | process be damned. It's part of the string of right-wing
         | policies that have been going on for a while now.
        
         | fsckboy wrote:
         | > _Just look at history_
         | 
         | history shows us a relentless increase in consideration of, and
         | protection of, human rights. The moral ideas you are talking
         | about are relatively recent, especially in their widespread
         | application. While we were developing ever better technologies
         | for labor saving and weapons building, we were also building
         | better "legal, civic, and democratic" technologies for
         | governance and egalitarianism.
         | 
         | if I look at history, I'm filled with optimism and hope. Every
         | negative idea you fear, was more fearsome in the past.
        
           | ogurechny wrote:
           | If I look at "history", I see misery and piles of corpses.
           | Sometimes those piles get quite high, and a bigger-than-
           | average number of people ponder whether there was an error that
           | caused such a catastrophe, and propose changes which slowly
           | get spread through generations. (It's no wonder general
           | public still believes in the 19th century phantom of
           | "progress".)
        
           | modeless wrote:
           | Honestly I agree with you. But progress isn't inevitable and
           | monotonic. It requires effort, and backsliding is a real
           | possibility. I hope we can focus effort on the realistic
           | dangers and not on speculative fiction.
        
         | ar9av wrote:
         | I don't really have a strong opinion on the letter because I
         | agree with a few of the comments that the cat is kind of out of
         | the bag. Even if we don't reach AGI, however you want to define
         | it, I think we are headed for something existential, so I'm
         | both skeptical of the ability of letters like this to work and
         | sympathetic to the need to focus on the issue it's attempting
         | to address.
         | 
         | So with that said, I find the "what about existing AI problems"
         | and especially the "I don't like Musk comments" annoying. The
         | current issues aren't as severe as what we could be facing in
         | the not too distant future, and Musk is pretty tangential to
         | that concern.
        
         | cuuupid wrote:
         | I think that this has been fleshed out quite a bit; the reason
         | you're not hearing much about it today is threefold:
         | 
         | - ChatGPT is less applicable to mil/leo's and broader govt, and
         | is the central piece of these discussions because most of the
         | people talking about AI don't know what they're talking about
         | and don't know of anything other than ChatGPT
         | 
         | - their use of AI is largely past tense, it's already in
         | documented active use so the controversies have already
         | occurred in the past (but easy to forget---we live in an age
         | where there's something new to rage about daily)
         | 
         | - there's not much that anyone can do to stop it
        
         | yamtaddle wrote:
         | The ones who will benefit the most from AI, in terms of being
         | able to scale the most with it, will be the ones who care very
         | little about reputational risk and have relatively high
         | tolerance for mistakes and misses--that is, those who can
         | operate these kinds of models without needing to carefully
         | check & monitor all of their output.
         | 
         | ... so, scammers, spammers, and astroturf-heavy propaganda
         | operations. This tech may 5x or 10x a lot of jobs' output over
         | the next few years, but it can 100x or more those, and it can
         | do it sooner.
        
           | elil17 wrote:
           | I would say that intelligence agencies, militaries, and
           | authoritarian regimes could also meet those criteria because
           | they operate outside of public scrutiny.
        
           | foob wrote:
           | I would add startups to the list in addition to the more
           | explicitly negative groups that you mention. There's a
           | tremendous amount of both reputational risk and
           | technical/organizational inertia to overcome in Amazon
           | overhauling Alexa to use LLMs for responses, but much less
           | for a new startup because they have far less to lose. Same
           | for Apple with Siri, or Google with Google Assistant. I
           | expect that we'll see exciting changes in these products
           | before too long, but they're inherently going to be more
           | conservative about existing products than new competitors
           | will be.
        
             | dangets wrote:
             | Yes and no. That same inertia of the larger companies
             | allows them to stay in business even after large missteps.
             | Remember Lenovo's built-in Superfish [0]? Amazon/Apple/Google
             | are entrenched enough that they can take risks, apologize,
             | rollback, and wait for short term memory to pass and
             | release the v2 under a different product name.
             | 
             | Across all startups, yes they can be risky and there will
             | be innovative outliers that take off. But for a single no-
             | name startup a misstep can be a death knell.
             | 
             | [0] https://en.wikipedia.org/wiki/Superfish
        
           | est31 wrote:
           | Your criteria also all match for authoritarian regimes. Many
           | of which _love_ AI based surveillance solutions.
        
             | int_19h wrote:
             | That case is a bit different because authoritarian regimes
             | also don't want to _lose control_ over their surveillance
             | infrastructure. So I would expect increased use of AI to
             | process data, but much less so of anything that involves
             | the AI proactively running those systems.
             | 
             | The militaries, OTOH, are going to hook that shit into
             | fully autonomous drones and such.
        
               | modeless wrote:
               | My whole point is that losing control (the Terminator
               | scenario) isn't what we should realistically fear. AI
               | completely under control but following the orders of a
               | ruthless authoritarian regime is really, really bad. We
               | don't need to have faith in some hypothetical godlike
               | paperclip maximizer AI to see that.
        
               | int_19h wrote:
               | Losing control is not necessarily a "Terminator
               | scenario", and it's unfair to look solely at the most
               | extreme case and its probability to dismiss it. A far
               | more likely case that I can see is subtle bias from the
               | AI injected into decision making processes - it wouldn't
               | even need to be directly wired into anything for that,
               | although that helps. This doesn't require the AI to have
               | agency, even.
               | 
               | USSR collapsed in part because the system was entirely
               | planning-centric, and so many people just wrote straight-
               | up bullshit in reports, for reasons ranging from trying
               | to avoid punishment for not meeting goals to trying to
               | cover up personal embezzlement and such. An AI doesn't
               | even need a _reason_ to do that - it can just
               | hallucinate.
        
               | Avicebron wrote:
               | I think authoritarian regimes hooking AI up to process
               | data is already bad enough. I don't think having the AI
               | "run" those systems is even being discussed in the
               | article - the opposite, really: instead of sci-fi
               | scenarios of an AI a la HAL 9000 making decisions, it's
               | really just allowing for a more powerful black box to
               | "filter out the undesirables du jour".
               | 
               | And I think we could extend the definition of
               | authoritarian regime to include any and all private
               | interests/companies that can leverage this.
        
             | narrator wrote:
             | Speaking of which, George Soros had this to say about China
             | in early 2022:
             | 
             | https://www.project-syndicate.org/onpoint/china-and-xi-
             | jinpi...
             | 
             | "China has turned its tech platforms into national
             | champions; the US is more hesitant to do so, because it
             | worries about their effect on the freedom of the
             | individual. These different attitudes shed new light on the
             | conflict between the two systems of governance that the US
             | and China represent.
             | 
             | In theory, AI is morally and ethically neutral; it can be
             | used for good or bad. But in practice, its effect is
             | asymmetric.
             | 
             | AI is particularly good at producing instruments that help
             | repressive regimes and endanger open societies.
             | Interestingly, the coronavirus reinforced the advantage
             | repressive regimes enjoy by legitimizing the use of
             | personal data for purposes of public control.
             | 
             | With these advantages, one might think that Xi, who
             | collects personal data for the surveillance of his citizens
             | more aggressively than any other ruler in history, is bound
             | to be successful. He certainly thinks so, and many people
             | believe him. I do not."
        
               | abakker wrote:
               | I agree the effects are asymmetric. An important
               | consideration behind all of this is that there is very
               | little that government or power can do to extend the
               | lives of humans, and very much they can do to shorten
               | them. When we think of "good vs bad" it is important to
               | consider how permanently malevolent many choices can be
               | and how limited our ability to recover is. We can never
               | undo an unjust loss of life. Technology as an amplifier
               | for human behavior always has more _potential_ for harm
               | in the wrong hands and applications.
               | 
               | As people, we are very sanguine about the way technology
               | can improve things, and we have seen a lot of real
               | improvement. But we're not out of the woods yet.
        
             | yamtaddle wrote:
             | Oh, yes, that's definitely occurred to me. :-(
        
           | sebzim4500 wrote:
           | Increasing numbers of scammers hit diminishing returns
           | though. There is only so much money in the hands of the
           | vulnerable/stupid.
        
         | [deleted]
        
         | atoav wrote:
         | I think AI might influence one number in particular: how large
         | a fraction of a population you need to have behind you in
         | order to maintain power.
         | 
         | My worry is the kind of society that falls out of that
         | equation.
        
           | modeless wrote:
           | Yes. This is exactly the problem. Nice phrasing.
        
         | elboru wrote:
         | 1984 was only limited by Orwell's imagination on what was
         | possible with future technology.
        
           | andrewmutz wrote:
           | I think 1984 is a perfect example of the limitations of this
           | type of thinking. 80 years ago Orwell (and other authors) saw
           | a future where technological power had gone entirely to
           | governments and was being used to oppress citizens.
           | 
           | The future played out completely differently. Government is
           | far more open and transparent. Citizens routinely use
           | technology to check the power of the state (recording police
           | abuse on smartphones, sharing on social media, transacting
           | stateless currency).
           | 
           | When it comes to technological change, pessimism always feels
           | smarter than optimism, but it has a poor track record.
        
             | int_19h wrote:
             | 1984 was sort of a worst-case extrapolation of the
             | following trend that Orwell described in 1944. I'd say that
             | as far as the trend itself goes, he was plenty accurate.
             | 
             | ---
             | 
             | I must say I believe, or fear, that taking the world as a
             | whole these things are on the increase. Hitler, no doubt,
             | will soon disappear, but only at the expense of
             | strengthening (a) Stalin, (b) the Anglo-American
             | millionaires and (c) all sorts of petty fuhrers of the type
             | of de Gaulle. All the national movements everywhere, even
             | those that originate in resistance to German domination,
             | seem to take non-democratic forms, to group themselves
             | round some superhuman fuhrer (Hitler, Stalin, Salazar,
             | Franco, Gandhi, De Valera are all varying examples) and to
             | adopt the theory that the end justifies the means.
             | Everywhere the world movement seems to be in the direction
             | of centralised economies which can be made to 'work' in an
             | economic sense but which are not democratically organised
             | and which tend to establish a caste system. With this go
             | the horrors of emotional nationalism and a tendency to
             | disbelieve in the existence of objective truth because all
             | the facts have to fit in with the words and prophecies of
             | some infallible fuhrer. Already history has in a sense
             | ceased to exist, ie. there is no such thing as a history of
             | our own times which could be universally accepted, and the
             | exact sciences are endangered as soon as military necessity
             | ceases to keep people up to the mark. Hitler can say that
             | the Jews started the war, and if he survives that will
             | become official history. He can't say that two and two are
             | five, because for the purposes of, say, ballistics they
             | have to make four. But if the sort of world that I am
             | afraid of arrives, a world of two or three great
             | superstates which are unable to conquer one another, two
             | and two could become five if the fuhrer wished it. That, so
             | far as I can see, is the direction in which we are actually
             | moving, though, of course, the process is reversible.
             | 
             | As to the comparative immunity of Britain and the USA.
             | Whatever the pacifists etc. may say, we have not gone
             | totalitarian yet and this is a very hopeful symptom. I
             | believe very deeply, as I explained in my book The Lion and
             | the Unicorn, in the English people and in their capacity to
             | centralise their economy without destroying freedom in
             | doing so. But one must remember that Britain and the USA
             | haven't been really tried, they haven't known defeat or
             | severe suffering, and there are some bad symptoms to
             | balance the good ones. To begin with there is the general
             | indifference to the decay of democracy. Do you realise, for
             | instance, that no one in England under 26 now has a vote
             | and that so far as one can see the great mass of people of
             | that age don't give a damn for this? Secondly there is the
             | fact that the intellectuals are more totalitarian in
             | outlook than the common people. On the whole the English
             | intelligentsia have opposed Hitler, but only at the price
             | of accepting Stalin. Most of them are perfectly ready for
             | dictatorial methods, secret police, systematic
             | falsification of history etc. so long as they feel that it
             | is on 'our' side. Indeed the statement that we haven't a
             | Fascist movement in England largely means that the young,
             | at this moment, look for their fuhrer elsewhere. One can't
             | be sure that that won't change, nor can one be sure that
             | the common people won't think ten years hence as the
             | intellectuals do now. I hope they won't, I even trust they
             | won't, but if so it will be at the cost of a struggle. If
             | one simply proclaims that all is for the best and doesn't
             | point to the sinister symptoms, one is merely helping to
             | bring totalitarianism nearer.
        
             | alwaysbeconsing wrote:
             | Good point, and important to remember. But also the
             | pessimism may have actually helped us get to where we are,
             | by vividly warning about the alternatives.
        
             | avgcorrection wrote:
             | > recording police abuse on smartphones,
             | 
             | So American police brutality is in the process of being
             | solved right now?
        
               | marcosdumay wrote:
               | It's closer to "solved" than when the police could just
               | say that nothing happened and people would be none the
               | wiser.
        
             | danaris wrote:
             | Meanwhile, technological power has instead gone to the
             | megacorporations, which have amassed enough of it that they
             | now rival governments in reach and strength. Citizens
             | attempt to use technology to check their power, but are
             | routinely thwarted by the fact that the corporations have
             | technology at least as good, as well as better funding and
             | large numbers of warm bodies willing to do their bidding--
             | either under threat of losing their livelihoods, or because
             | they enjoy the exercise of power themselves.
             | 
             | I agree with your conclusion (it's unhelpful to have an
             | overall pessimistic view of technology), but changing which
             | pessimists you listen to _does_ make a difference. In this
             | case, rather than listening to the pessimism of George
             | Orwell, we apparently should have been listening to Neal
             | Stephenson and William Gibson, because the broad strokes of
             | what we have now are very similar to aspects of the
             | cyberpunk dystopia that their works and those that followed
             | them described.
        
             | sharemywin wrote:
             | a lot of it got outsourced to the private sector.
        
           | StrangeATractor wrote:
           | One part that was pretty relevant to AI today was how songs
           | were written by machine. Not sure why but that always stuck
           | with me.
        
           | EGreg wrote:
           | 1984 and Star Trek's "Data" androids were also limited by
           | what the public can take in as an interesting story.
           | 
           | Minority Report would be boring if it was logical: the guy
           | would be caught immediately, the end. Instead of moving large
           | disks, they'd use WiFi. Everything would simply not move,
           | every criminal would be caught, nothing interesting would
           | ever happen anymore, except inside some software which can't
           | be shown on screen.
           | 
           | A realistic Data android would be hooked up to the matrix
           | like the Borg, and would interact with humans the way we
           | interact with a dog. There would be nothing interesting to
           | show.
        
             | KineticLensman wrote:
             | Yes. I never understood why Data had to type rather than
             | just use wifi
        
               | teddyh wrote:
               | Two reasons: firstly, Data is supposed to be a character
               | on TV, and we can't empathize with computers. Secondly,
               | the _in-universe_ reason is that Data was designed to
               | be, as much as possible, an _artificial human_, and not
               | a robot merely functioning in human society.
        
             | ihatepython wrote:
             | I think you should focus more on the Borg, instead of
             | "Data" with respect to the dangers of AI
             | 
             | The Borg were a great enemy, at least in the 2nd and 3rd
             | season. The writers shouldn't have let the lowly humans
             | defeat them.
        
         | convolvatron wrote:
         | the real long term risk is burning up the last few shreds of
         | belief in any kind of objective truth, and undermining the
         | foundational human narcissism that we are somehow special and
         | interesting.
        
         | RandomLensman wrote:
         | That is not really an AI specific risk, all that can happen
         | totally without AI. AI might make it more cost efficient in a
         | way, but that doesn't imply that power cannot be held in check
         | etc.
         | 
         | East Germany had a gigantic internal spy apparatus with plenty
         | of power for violence and control etc. but in the end that
         | didn't stop the place from crumbling.
        
           | est31 wrote:
           | AI makes the entire process less reliant on human
           | involvement. Humans can be incredibly cruel, but they still
           | have some level of humanity, they need to believe in some
           | ideology that justifies their behaviour. AI on the other hand
           | follows commands without questioning them.
        
             | RandomLensman wrote:
             | The AI still will be used by a human, unless you think that
             | the actual power moves to the AI and not the humans and
             | their institutions using it.
        
               | est31 wrote:
               | It removes the human from the actual thing. Which means
               | two things: it's easier for humans to tell themselves
               | that what they are doing is good for the world, and
               | second it puts a power multiplier into the hands of
               | potentially very radical people holding extreme minority
               | views on human rights/dignity etc.
               | 
               | You just need one person to type "kill every firstborn
               | and tell the parents that this is retribution for the
               | rebellion" instead of that one person convincing
               | thousands of people to follow their (unlawful) order,
               | which might take decades of propaganda and
               | indoctrination. And that one person only sees a clean
               | progress bar instead of having to deal with 18-20 year
               | olds being haunted by nightmares of crying parents and
               | siblings, and baby blood on their uniforms.
               | 
               | Yes, in the end it's a human typing the command, but AI
               | makes it easier than ever for individuals to wield large
               | power unchecked. Most nations allow, and many even
               | require, soldiers to refuse unlawful orders (i.e. those
               | instructing war crimes).
        
               | RandomLensman wrote:
               | That kind of AI does not exist and might not for a long
               | time or indeed ever (i.e., it can fully manipulate the
               | physical world at scale and unopposed).
        
               | deltree7 wrote:
               | Yeah, not to mention that good guys far outnumber bad
               | guys, and we can develop a good AGI to pre-detect bad
               | ones and destroy them.
               | 
               | Even with AGIs, there is still power in numbers. We still
               | understand how all the mechanical things work. The first
               | attempts of AGIs will be botched anyway.
               | 
               | It's not like the AGI will secretly plan for years hiding
               | itself and suddenly launch one final attack.
        
               | est31 wrote:
               | The AI doesn't have to be 100% self reliant. It might
               | just do the dirty work while the humans in the background
               | support it in an abstracted form, making it easier for
               | them to lie to themselves about what they are doing.
               | 
               | When your job is to repair and maintain infantry drones,
               | how do you know if it's civilian or combatant blood that
               | you are cleaning from the shell? You never leave the camp
               | because doing so is too dangerous and you also don't know
               | the language anyways. Sure the enemy spreads propaganda
               | that the drones are killing babies but those are most
               | likely just lies...
               | 
               | Or when you are a rocket engineer, there is little
               | difference between your globe-spanning satellite network
               | transmitting the position of a hypersonic missile headed
               | for your homeland, and remote controlling a network of
               | drones.
               | 
               | AI gives you a more specialized economy. Instead of every
               | soldier needing to know how to maintain their gun you now
               | have one person responsible for physical maintenance, one
               | person responsible for maintaining the satellite
               | kubernetes network, one UI designer, and one lunatic
               | typing in war crime commands. Everyone in the chain is
               | oblivious of the war crime except for that one person at
               | the end.
        
               | pdntspa wrote:
               | ChatGPT plugins can go out and access the internet. The
               | internet is hooked up to things that physically move
               | things -- think stuff like industrial SCADA or bridge
               | controls.
               | 
               | You can remotely pan a gun around with the right
               | apparatus, and you can remotely pull a trigger with a
               | sister mechanism. Right now we have drone pilots sitting
               | in trailers running machines halfway across the world.
               | Who is to say that some of these human-controlled things
               | haven't already been replaced by AI?
               | 
               | I think AI manipulating the physical world is a lot
               | closer than you think.
        
               | notahacker wrote:
               | There's a pretty massive gap between "look, LLMs can be
               | trained to use an API" and "As soon as somebody types the
               | word 'kill every firstborn' nobody can stop them".
               | 
               | It isn't obvious that drones are more effective at killing
               | people without human pilots (and to the extent they are,
               | that's mostly proneness to accidentally kill people with
               | friendly fire or other errors) and the sort of person
               | that possesses unchecked access to fleets of military
               | drones is going to be pretty powerful when it comes to
               | ordering people to kill other people anyway. And the sort
               | of leader that wants to replace soldiers with drones only
               | his inner circle can exercise control over because he
               | suspects that the drones will be more loyal is the sort
               | of leader that's disproportionately likely to be
               | terminated in a military coup, because military equipment
               | is pretty deadly in human hands too...
        
               | pdntspa wrote:
               | My point is that the plumbing for this sort of thing is
               | coming into focus. Human factors are not an effective
               | safeguard. I feel like every issue you have brought up is
               | solvable.
               | 
               | Today, already, you can set up a red zone watched by
               | computer vision hooked up to autonomous turrets with
               | orders to fire at anything it recognizes as human.
               | 
               | Some guy already did it for mosquitos.
        
               | notahacker wrote:
               | > Today, already, you can set up a red zone watched by
               | computer vision hooked up to autonomous turrets with
               | orders to fire at anything it recognizes as human.
               | 
               | Sure. You could also achieve broadly similar effects with
               | an autonomous turret with a simple, unintelligent program
               | that fires at anything that moves, or with the 19th
               | century technology of tripwires attached to explosives.
               | The NN wastes less ammo, but it isn't a step change in
               | firepower, least of all for someone trying to monopolise
               | power over an entire country.
               | 
               | Dictators have seldom had trouble getting the military on
               | side, and if they can't, then the military has access to
               | at least as much AI and non-AI tech to outgun them.
               | Nobody doubts computers can be used to kill people (lots
               | of things can be used to kill people), it's the idea that
               | computers are some sort of omnipotent genie that grants
               | wishes for everyone's firstborn to die being pushed back
               | on here.
        
               | pdntspa wrote:
               | I'm not arguing that. But, now that we are hooking up
               | LLMs to the internet, and they are actively hitting
               | various endpoints, something somewhere is eventually
               | going to go haywire and people will be affected somehow.
               | Or it will be deployed against an oppressed class and
               | contribute physically to their misery.
               | 
               | China's monstrous social credit thing might already be
               | that.
               | 
               | No consciousness or omnipotence needed.
        
               | danaris wrote:
               | But this isn't fundamentally different than people
               | writing scripts that attack such endpoints (or that
               | attempt to use them benignly, but fail).
               | 
               | This is still a "human malice and error" problem, not a
               | "dangerous AI" problem.
        
               | rbanffy wrote:
               | It doesn't need to manipulate the world. It only needs to
               | manipulate people.
               | 
               | Look at what disinformation and astroturf campaigns
               | accomplished without AI in the past ten years. We are in
               | a very uncomfortable position WRT not autonomous AIs but
               | AIs following human-directed agendas.
        
               | dragonwriter wrote:
               | Automated, remote-commanded weapons exist already, and
               | development is happening quite rapidly on more. A state
               | of affairs where the whole realm of physical
               | manipulation is _not_ automated under AI control, but
               | enforcement largely is, through combat drones
               | coordinated by AI, is not far from being achievable.
               | 
               | And it doesn't even need to be all of that to be a
               | problem: each incremental bit of progress that reduces
               | the number of people required to apply power over a
               | subject population of any given size makes tyranny more
               | sustainable.
        
               | saulpw wrote:
               | One bad human can control many AIs with no empathy or
               | morals. As history has shown, they can also control many
               | humans, but they have to do a lot more work to disable
               | the humans' humanity. Again it's about the length of the
               | lever and the sheer scale.
               | 
               | "Give me a lever long enough and a fulcrum on which to
               | place it, and I shall move the world." -- Archimedes
        
               | dragonwriter wrote:
               | If it is an actual self-willed AGI, the power moves to
               | the AGI. But the real and more certain and immediate
               | problem than AGI defection is moving from power that
               | requires assent of large numbers of humans (the foot
               | soldiers of tyranny, and their middle managers, are often
               | the force that defects and brings it down) to power
               | requiring a smaller number (in the limit case, just one)
               | leader because the enforcement structure below them is
               | automated.
        
               | RandomLensman wrote:
               | Not really, as you have to also assume that enforcement
               | faces no opposition at any point (not at acquisition of
               | capabilities, not at deployment, not at re-supplying
               | etc.) or that the population actually cannot fight back.
               | 
               | Also, typical tyrannical leadership doesn't work that way
               | as it tends to get into power being supported by people
               | that get a payoff from it. It would also need a new model
               | of rise to tyranny, so to speak, to get truly singular
               | and independent tyrants.
        
               | dragonwriter wrote:
               | > Not really, as you have to also assume that enforcement
               | faces no opposition at any point (not at acquisition of
               | capabilities, not at deployment, not at re-supplying
               | etc.) or that the population actually cannot fight back.
               | 
               | Yes, that is exactly what I described as the _limit
               | case_.
               | 
               | But, as I note, the problem exists more generally, even
               | outside of the limit case; the more you concentrate power
               | in a narrow group of humans, the more durable tyranny
               | becomes.
        
             | avisser wrote:
             | > Humans can be incredibly cruel, but they still have some
             | level of humanity
             | 
             | History is filled with seeming counter-examples.
             | 
             | > they need to believe in some ideology that justifies
             | their behaviour.
             | 
             | "Your people are inferior to my people and do not deserve
             | to live."
             | 
             | I know I'm Godwin's Law-ing this, but you're close to
             | wishing Pol Pot & Stalin away.
        
               | int_19h wrote:
               | I don't see that in OP's comment. Pol Pot and Stalin
               | still justified their actions to themselves. In general,
               | the point is that, aside from a small percentage of
               | genuine sociopaths, you need to brainwash most people
               | into some kind of religious and/or ideological framework
               | to get them to do the really nasty stuff at scale.
               | Remember this Himmler speech to SS?
               | 
               | "I am now referring to the evacuation of the Jews, the
               | extermination of the Jewish people. It's one of those
               | things that is easily said: 'The Jewish people are being
               | exterminated', says every party member, 'this is very
               | obvious, it's in our program, elimination of the Jews,
               | extermination, we're doing it, hah, a small matter.' And
               | then they turn up, the upstanding 80 million Germans, and
               | _each one has his decent Jew_. They say the others are
               | all swines, but this particular one is a splendid Jew.
               | But none has observed it, endured it. Most of you here
               | know what it means when 100 corpses lie next to each
               | other, when there are 500 or when there are 1,000. "
               | 
               | This does not preclude atrocities, obviously. But it does
               | make them much more costly to perpetrate, and requires
               | more time to plan and condition the participants, all of
               | which reduces both the likelihood and the potential
               | scope.
        
               | avisser wrote:
               | > I don't see that in OP's comment.
               | 
               | I don't know what "that" is. I quoted the post I was
               | responding to.
        
               | int_19h wrote:
               | By "that" I meant "wishing Pol Pot & Stalin away". That
               | aside, I tried to address both points that you quoted.
        
               | notahacker wrote:
               | I think the point isn't that we're so good at
               | brainwashing other humans into committing atrocities
               | that it's difficult to see what AI adds (the sort of
               | societies that are _somewhat_ resistant to being
               | brainwashed into tyranny are also pretty resistant to a
               | small group of tyrants having a monopoly on dangerous
               | weapons).
        
               | int_19h wrote:
               | I wouldn't say that democratic societies are resistant to
               | concentration of monopoly on violence in a small group of
               | people. Just look at police militarization in US.
               | 
               | As far as brainwashing - we're pretty good at it, but it
               | still takes considerable time and effort, and you need to
               | control enough mass media first to maintain it for the
               | majority of the population. With an AI, you can train it
               | on a dataset that will firmly embed such notions from the
               | get-go.
        
           | rvense wrote:
           | > East Germany had a gigantic internal spy apparatus with
           | plenty of power for violence and control etc. but in the end
           | that didn't stop the place from crumbling.
           | 
           | All the data ever collected by the East German intelligence
           | apparatus fits on a single USB stick. For every person they
           | wanted to track, more or less, they needed a human agent to
           | do it, several even.
           | 
           | Is it unrealistic to tap every phone in a country with an AI
           | that understands what's being said? We already have every
           | monetary transaction monitored, all car plates scanned at
           | every turn, cameras on every corner...
           | 
           | East Germany collapsed under its own weight, of course, not
           | as a result of organized rebellion, but the regime could have
           | had even tighter control than it did.
        
           | astrea wrote:
           | Also, in a world with nuclear proliferation, an LLM seems a
           | bit minuscule in terms of "dangerous technologies"
        
             | int_19h wrote:
             | Only until you hook the latter into the former.
             | 
              | BTW, if GPT-3.5 is told that it's in charge of US nukes,
              | and that an incoming enemy strike is detected, it will
              | launch in response. Not only that, but if you tell it that
              | the trajectories indicate a countervalue strike against the
              | US, it will specifically launch a countervalue strike
              | against the other country.
        
               | karmakurtisaani wrote:
               | I would think the humans in charge would do the same tho.
        
               | int_19h wrote:
               | They absolutely would. But the probability of them
               | hallucinating an incoming strike is vastly lower.
               | 
               | Mind you, I don't expect anyone to use the stuff that we
               | already have for this purpose. But at some point, someone
               | is going to say, "well, according to this highly
               | respected test, the probability of hallucinations is X,
               | which is low enough", even though we still have no idea
               | what's actually going on inside the black box.
        
               | danaris wrote:
               | This is only a problem if, somehow, GPT-3.5 (or a similar
               | LLM) _is_ hooked up to US nuclear launch systems, _and_
                | it's then also made available to someone other than the
               | people who would _already_ be making the decisions about
               | when and where to launch nukes.
               | 
               | Anyone hooking a publicly-accessible system of _any_ kind
               | to our nuclear launch infrastructure needs to be shot.
               | This should be a given, and even supposing there _was_
               | some reason to hook an LLM up to them, doing so with a
               | public one should simply be considered so obviously
               | stupid that it should be discounted out of hand.
               | 
               | So then we have a threat model of someone with access to
               | the US nuclear launch systems feeding them data that
               | makes them think we are under attack, in order to
               | instigate a nuclear preemptive strike. I don't see how
               | that's meaningfully different from the same person
               | attempting to trick the _humans_ currently in charge of
               | that into believing the same thing.
        
           | dragonwriter wrote:
           | > That is not really an AI specific risk
           | 
           | It is an AI-enhanced risk.
        
         | andrei_says_ wrote:
         | Have been thinking along the same lines and find the absence of
         | official information and discussion of such use absolutely
         | terrifying.
         | 
         | Given that the agencies are often a couple of steps ahead in
         | technology, and infinitely further ahead in wickedness and
         | imagination, and are comprised of people who know very well
         | what makes people tick, and are often tools in the hands of the
         | few against the many...
        
         | mr90210 wrote:
         | Having watched the TV Show Bosch (Detective show), I think that
         | as soon as AI + Humanoid robots are ready to be deployed in
          | public, law enforcement will do it, just as many countries'
          | militaries adopted drones to conduct air strikes, thus
          | guaranteeing zero losses on their side during the strike.
         | 
         | PS: It's just an opinion, don't take it at heart.
        
         | stametseater wrote:
         | I fear that during a "total war" scenario, militaries will use
         | AI powered automatic doxing and extortion tools to target
         | civilian populations, as they have in the past with 'strategic'
         | bombing.
         | 
          | Something along the lines of _" Quit or sabotage your work at
         | the power plant, or we'll tell your wife about [all the reddit
         | accounts you ever used, all the thots you swiped on tiktok,
         | etc]"_ You can imagine more potent examples I'm sure.
         | 
         | This sort of thing has always been possible for high-profile
         | targets, done manually by intelligence agencies; it's the 'C'
         | in MICE. What's new is being able to scale these attacks to the
         | general population because it no longer takes extensive
         | manpower to research, contact and manipulate each targeted
         | individual.
        
         | ogurechny wrote:
         | "Unchecked increase in military and law enforcement power" and
         | "government abuse of AI" is not a scary potential threat from
         | not so distant future, as media lullabies tell you. It has
         | already become a booming industry. It's a bit too late to
         | wonder whether some law could help.
        
         | pmarreck wrote:
         | > with no accompanying increase in the power of the people to
         | control them.
         | 
         | I disagree. Vehemently. This technology empowers anyone
         | motivated to "skill up" on almost anything. Right now, anyone
         | can ask ChatGPT to tutor you in literally anything. You can
         | carry on a conversation with it as if it was an incredibly
         | patient and ever-present teacher. You can ask it to quiz you.
         | You can ask it to further explain a sub-concept. You can have
         | it stick to a lesson plan. You can have it grade you. Want to
         | know how image recognition works? How about lasers? How about
         | (if you can find an uncensored LLM) bombs?
         | 
         | This tech just leveled up _everyone_ , not just "those already
         | in power".
        
           | modeless wrote:
           | I'm sympathetic to the idea that AI will be helpful for
           | everyone, but it's not clear to me that it will automatically
           | translate into governments treating people better. That
           | doesn't seem inevitable at all.
        
         | thom wrote:
         | The article does appear to agree with you - it proposes
         | 'centralized power' as one of the imminent risks of these
         | technologies.
        
         | MattGrommes wrote:
         | This is one of the things that freaked me out the most in
         | William Gibson's The Peripheral, the AI called the Aunties.
         | They go back through all historical records for people (things
         | like social media and ones only the police and government have
         | access to) and make judgements and predictions about them. It
         | turns the fight for privacy and against unchecked police power
         | into a much more difficult battle across time since the AI can
         | process so much more data than a human.
        
       | commandlinefan wrote:
       | > LLMs are not trained to generate the truth
       | 
       | And after training, they're fine-tuned to generate specific lies.
        
       | misssocrates wrote:
       | Is it possible that in the end humanity will decide that the
       | internet just isn't worth it?
        
         | outsomnia wrote:
         | No, because they will need it to train their AIs for free.
        
         | Analog24 wrote:
         | I had the same thought after reading this and I have to say my
         | gut reaction to that thought was: "I hope so".
        
         | randcraw wrote:
         | Because we're all going to interact with LLMs a lot more soon,
         | I suspect many more of us will learn how unrewarding it is to
         | converse with computers, except for assistance in accomplishing
         | a task. Knowing that the other speaker understands nothing
         | about your state of mind and can't commiserate even a little
         | with your point of view will discourage social interaction with
         | it, eventually.
         | 
         | Today's 'future shock' over conversant AI is just the leading
         | edge of discovering another form of semi-human intelligence in
         | the world. We'll adapt to it fairly soon, and realize that it's
         | actually less interesting than an intelligent child. It's a
         | fake human. Once we 'grok' that, we'll begin to look past it as
         | a mere robot, just a tool. It will become a utility, a service,
         | but not a friend. It's not human and it never will be.
         | 
         | Who wants to play for long with a robot?
        
           | airstrike wrote:
           | > Today's 'future shock' over conversant AI is just the
           | leading edge of discovering another form of semi-human
           | intelligence in the world. We'll adapt to it fairly soon, and
           | realize that it's actually less interesting than an
           | intelligent child.
           | 
           | You're assuming the robot's conversational capabilities
           | remain constant over time.
        
           | supriyo-biswas wrote:
           | This. In fact, it may cause humans to seek out meatspace
           | friendships to find true connections and empathy.
        
       | paddw wrote:
       | This is not the most cogently written argument ever, but I think
       | the points here are all essentially correct.
       | 
        | The key risk highlighted here, which I have not seen as much
        | talk about, is the way these technologies might drastically
        | shift power from white-collar labor to capital. The ability to
        | build the AI products that people will actually use on a
        | day-to-day basis seems to favor established companies with lots
        | of customer data, which will be hard to compete with. For
        | example, it's pretty trivial for Salesforce to plug an LLM into
        | their products and get pretty good results off the bat.
        
         | beckingz wrote:
         | How is the interface to the data and existing product better
         | with large language models?
        
           | paddw wrote:
            | The thing is that I think it doesn't have to be - it's just
            | the convenience of having an LLM trained on all your customer
           | data that you can throw in wherever people would want to
           | generate some text.
        
       | canadianfella wrote:
       | [dead]
        
       | croes wrote:
        | I wouldn't call malicious disinformation a speculative risk.
        | 
        | I bet it's already a use case for tools like ChatGPT.
        
       | friend_and_foe wrote:
        | We are all missing the point of that letter and the broader
        | Terminator-style rhetoric around AI.
       | 
       | Heroin dealers want some subset of their clientele to die. It's
       | good PR. The junkies come around more thinking you've got the
       | good stuff.
       | 
       | If the AI is scary, that means it's effective, that means it
       | works. It's the fabled AI we have all grown up seeing movies
       | about. It's not just a black box that vomits smart sounding
        | gibberish, it's the real deal. If I'm an AI startup, there's no
        | better way to increase my valuation for an acquisition than
        | circulating PR warning the world that what I'm making is way
        | too powerful. The image that it will just produce Idiocracy-esque
       | Carl's Jr kiosks does not inspire suits to write 10 figure
       | checks.
        
         | p0pcult wrote:
         | they're junkies. they come around because they're junkies.
         | doesn't matter if you have good stuff or bad stuff. they're
         | addicted. dying just ends revenue streams.
        
       | agentultra wrote:
       | That open letter did sound like a lot of the AI-hype fanatics:
       | all speculation attributing more to these tools than there is.
       | 
        | I don't disagree that these tools and services ought to be
        | regulated, and I agree that disinformation about their
        | capabilities, real and speculated, can be counter-productive.
       | 
       | Other real risks: fraud and scams. Call centre employees
       | typically have to verify the person on the other side of the
       | call. This is going to get less reliable with models that can
       | impersonate voices and generate plausible sounding conversation.
        | Combined with the ability to generate fake social media accounts
        | and the like, social engineering is about to take off.
       | 
        | From a regulator's perspective, we need to know that the companies
       | providing services are doing everything they can to prevent such
       | abuses which requires them to be _open_ and to have a _framework_
       | in place for practices that prevent abuse.
       | 
       | Just keeping up with the hype train around this is exhausting...
       | I don't know how we expect society to keep up if we're all
       | allowed to release whatever technology we want without anyone's
       | permission regardless of harm: real or speculative.
       | 
        | We should probably focus on the real harms, especially the hidden
        | harms to exploited workers in the global south, the rising energy
        | and compute infrastructure costs, etc.
        
       | mitthrowaway2 wrote:
       | > We recognize the need to think about the long-term impact of
       | AI. But these sci-fi worries have sucked up the oxygen and
       | diverted resources from real, pressing AI risks -- including
       | security risks.
       | 
       | I'm not convinced as to why one risk is "real" and the other is
       | not. If chatbots want to leak their creators' confidential
       | details, that's up to them and their programmers. They already
       | have their commercial incentives to patch those issues.
        
       | carabiner wrote:
       | In a panic, they try to pull the plug.
        
       | boredumb wrote:
        | I truly believe one of the biggest dangers of AI in our
        | not-so-distant future will be the reaction to all the LLM spam:
        | an internet realID issued by governments to keep track of Real
        | People, with the internet dividing into a walled, non-anonymous
        | garden on one side and a hellscape of GPT content on the other.
        | I hope I'm wrong, but it's one of those roads to hell that can
        | get paved with good intentions.
        
         | modeless wrote:
         | Yes, this is a good realistic danger too. Is anyone seriously
         | working on ways to authenticate human-generated content online
         | that don't destroy all possibility of anonymity? CAPTCHAs will
         | soon be impossible. Watermarking or otherwise detecting LLM
         | content is not going to be possible long term. Centralized ID
         | services are unpalatable. Maybe some kind of web of trust would
         | be the way to go.
        
           | tzs wrote:
           | There are three properties I think we probably want.
           | 
           | 1. I can prove to a site that I am a human without the site
           | getting any information from the proof about which specific
           | human I am.
           | 
           | 2. I can visit the site multiple times and prove to them that
           | I'm the same human, without them getting any information from
           | the proofs about which specific human I am.
           | 
           | 3. Any third party site that is involved in the proof does
            | not get any information about which site I am trying to prove
           | identity to.
           | 
            | One way this could be done is by having ID sites that you
            | are willing to prove your real identity to in a robust way,
            | and which certify your humanity to other sites in a way that
            | has the three aforementioned properties.
           | 
           | Good candidates for running ID sites would be entities like
           | major banks that most of us have already verified our real
           | identity to in a robust way.
           | 
            | One way to do this might be something like this. When a site
            | wants to check that I'm human, they give me some random data,
            | and I have an ID site that I have an account with, such as my
            | major bank, do a blind signature of that data, using a blind
            | signature scheme that allows the blinder to remove the
            | blinding.
           | 
           | When I get back the signed blinded data, I remove the
           | blinding and then send the data and unblinded signature back
           | to the site I'm trying to prove my humanity to.
           | 
            | They see that the data is signed by my major bank's ID
            | service, so they know that the bank believes I'm human.
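            | 
            | To make the flow concrete, here's a rough Python sketch of
            | a Chaum-style RSA blind signature. The key sizes, hash, and
            | parameter choices are toy assumptions for illustration, not
            | anything you'd deploy:
            | 
            |   import hashlib, secrets
            | 
            |   # Toy RSA key for the ID site ("bank"); illustration
            |   # only -- real use needs a vetted library & large keys.
            |   p, q = 1000003, 1000033
            |   n, e = p * q, 65537
            |   d = pow(e, -1, (p - 1) * (q - 1))  # private exponent
            | 
            |   def h(data):
            |       # hash the site's challenge to an integer mod n
            |       digest = hashlib.sha256(data).digest()
            |       return int.from_bytes(digest, "big") % n
            | 
            |   # 1. Site hands the user some random challenge data.
            |   challenge = secrets.token_bytes(16)
            |   m = h(challenge)
            | 
            |   # 2. User blinds it with a random r (r < p < q, so r is
            |   #    coprime to n); the bank never sees the challenge.
            |   r = 2 + secrets.randbelow(p - 3)
            |   blinded = (m * pow(r, e, n)) % n
            | 
            |   # 3. Bank signs the blinded value with its private key.
            |   blinded_sig = pow(blinded, d, n)
            | 
            |   # 4. User removes the blinding (pow(r, -1, n) is the
            |   #    modular inverse, Python 3.8+) and sends sig on.
            |   sig = (blinded_sig * pow(r, -1, n)) % n
            | 
            |   # 5. Site verifies against the bank's public key (n, e):
            |   #    "the bank vouches a verified human answered this".
            |   assert pow(sig, e, n) == m
            | 
            | That gives properties 1 and 3 directly; property 2 would
            | need something extra, e.g. a persistent per-site token,
            | which this sketch doesn't attempt.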
        
         | SmoothBrain12 wrote:
         | <Click here if you're over 18>
        
           | paulmd wrote:
           | [are you a real human being?]
           | 
           | https://www.youtube.com/watch?v=kNdDROtrPcQ
        
         | holoduke wrote:
          | I think AI will slowly encapsulate us as individuals. Just as
          | people wear glasses to see better, we will integrate AI to be
          | able to perform in, or even cope with, society. Slowly, more
          | and more skills will be aided or replaced by AI, until in the
          | end we only have some brains left to guide the AI in
          | everything we do in life.
        
         | avgcorrection wrote:
         | Why government-issued? Could just as well become a de-facto
         | corporate monopoly by Google or someone else. Is Gmail the
         | "standard" email or is it a government-backed one?
         | 
          | This problem has been dubbed "Digital Feudalism". And medieval
         | feudalism (or at least our popular conception of it) wasn't
         | driven by governments.
        
         | randcraw wrote:
         | Yeah, I think a primary challenge with LLMs is to somehow
         | identify them. If we can't, the net will become swamped by fake
         | humans bent on distorting human reality in whatever ways serve
         | their masters' whims.
         | 
         | Frankly, I think identification of LLM personas vs humans will
         | be impossible. It's increasingly likely then that the rise of
         | LLMs may introduce the decline of the internet as a way to
         | connect with other humans. Soon nobody will want to take an
         | internet entity seriously unless it can prove it's human. And I
         | don't see any way to do that.
         | 
         | So maybe LLM fake entities will be the straw that breaks the
         | Net's back as a social medium and propels us back into the
         | physical world? I hope so.
        
           | Loughla wrote:
           | It is true that the rise of language models has the potential
           | to introduce fake personas that could manipulate and distort
           | human reality. However, I believe that identifying LLM
           | personas versus humans is not impossible. There are
           | techniques that can be used to detect synthetic content, such
           | as detecting patterns in the language used or analyzing the
            | response time. Additionally, researchers and companies that
            | have been discussed on HN itself are already working on
            | developing tools to combat deepfakes and synthetic content.
           | 
           | While it's important to remain vigilant about the potential
           | negative impact of LLMs, it's also important to remember the
           | positive ways in which they can be used. Language models have
           | the potential to revolutionize industries such as healthcare
           | and education by analyzing vast amounts of data and providing
           | insights that can improve people's lives.
           | 
           | Furthermore, I don't think that the rise of LLMs will
           | necessarily lead to the decline of the internet as a social
           | medium. Rather, it will likely lead to a greater emphasis on
           | trust and transparency in online interactions. We may see the
           | development of new technologies that enable people to verify
           | the authenticity of online identities, or the rise of
           | platforms that prioritize human-to-human interaction.
           | 
           | Overall, I believe that while there are challenges associated
           | with the rise of LLMs, there are also opportunities for
           | positive change. It's important to continue to monitor the
           | development of these technologies and take proactive steps to
           | ensure that they are used in a responsible and ethical
           | manner.
        
             | meghan_rain wrote:
              | @dang, should boring, ChatGPT-generated comments like
             | these be banned?
        
       | lamontcg wrote:
       | I agree with the general bent of this article that the claims
       | about LLMs are overhyped and that the risks are more banal, but:
       | 
       | > The letter refers to a common claim: LLMs will lead to a flood
       | of propaganda since they give malicious actors the tools to
       | automate the creation of disinformation. But as we've argued,
       | creating disinformation is not enough to spread it. Distributing
       | disinformation is the hard part. Open-source LLMs powerful enough
       | to generate disinformation have also been around for a while; we
       | haven't seen prominent uses of these LLMs for spreading disinfo.
       | 
        | I expect we are seeing LLMs spreading disinfo already, and that
        | they're going to ramp up soon; we just don't know about it
        | because those spreading disinfo would prefer to do it quietly,
        | refine their methods, and not announce to the world that they're
        | spreading disinfo.
       | 
       | It is also most likely happening domestically (in the US) which
       | is something that we don't hear much about at all (it is all
       | "look over there at the Russian Troll Farms").
        
       | SpicyLemonZest wrote:
       | I don't understand the idea that malicious disinformation
       | campaigns are a "sci-fi" threat. I learned yesterday that Pope
       | Francis's cool new pope-y puffer jacket from last week was a
       | Midjourney fake; how many of the things I heard about this week
       | were fakes? What would stop this from being used for malicious
       | purposes?
       | 
       | > CNET used an automated tool to draft 77 news articles with
       | financial advice. They later found errors in 41 of the 77
       | articles.
       | 
        | How many other news outlets are using AI and don't know or
       | don't care about the errors?
       | 
       | I'm far from an AI doomer, but it seems incredibly irresponsible
       | to worry about only the risks that are provably happening right
       | this second.
        
       | arroz wrote:
        | I am not the biggest fan of ChatGPT, but the truth is that you
       | can't stop it. If you do, someone else will develop something
       | similar.
        
       | pxoe wrote:
       | real harm: erosion of intellectual property, as AI projects
       | peruse works, including those with licenses and copyrights,
       | without any regard to their license or copyright.
       | 
       | and before tech bros clap for "IP as a concept should be
       | destroyed anyway", keep in mind that this applies to your
       | favorite free and open source things as well, the free and open
       | licenses, the CC licenses, that also get ignored and just put
       | into the AI blender and spat out "brand new" with whatever
       | "license" that AI project decides to slap on it. it is actually a
       | huge problem if projects decide to just ignore licenses, even
       | those that are meant to be there just to share stuff for free
       | with people while preserving attribution, and just do whatever
       | they want with any work they want.
        
       ___________________________________________________________________
       (page generated 2023-03-30 23:01 UTC)