[HN Gopher] AI's biggest risk is the corporations that control them
___________________________________________________________________
AI's biggest risk is the corporations that control them
Author : LukeEF
Score : 248 points
Date : 2023-05-06 14:09 UTC (8 hours ago)
(HTM) web link (www.fastcompany.com)
(TXT) w3m dump (www.fastcompany.com)
| mitthrowaway2 wrote:
| Hinton: "The main immediate danger is bad actors. Also, while not
| immediate, there is a concern that AI might eventually become
| smarter than humans".
|
| Whittaker: "Wrong! The main immediate danger is corporations. And
| the concern that AI might become smarter than humans is not
| immediate."
| flangola7 wrote:
| The biggest risk is machines running out of hand and squishing
| all of us like a bug by accident. Once pseudo-intelligent
| algorithms are running every part of industry and engaging in
| global human communications it only takes minor errors to cascade
| and amplify into a real problem, one that will be moving faster
| than we can react to.
|
| Think stock market flash crash, replacing digital numbers that
| can be paused and reset with physical activity in supply chains,
| electrical grids, internet infrastructure, and interactions in
| media and interpersonal communication.
| data_maan wrote:
| All these warnings about AI safety are bullshit.
|
| Humanity is perfectly well capable of ruining itself without help
| from AGI (nuclear proliferation is unsolved and getting worse,
| climate change will bite soon etc).
|
| If anything AGI could save us by giving us some help in solving
| these problems. Or perhaps doing the mercy kill to put us out
| quickly, instead of us suffering a protracted death by a slowly
| deteriorating environment.
| kortilla wrote:
| "There are also other things killing us" is not a justification
| for making more. Why not just give nuclear weapons to
| extremists?
| latency-guy2 wrote:
| Agreed, stop making food, cars, drugs, guns, knives, forks,
| pens, stairs, bathtubs, rugs, and so on. We are actively
| being murdered everyday by these things and more, lets stop
| the extremists from gaining access to these things.
|
| Do not justify, that's illegal from now on.
| kortilla wrote:
| Are you literally incapable of seeing the difference
| between making food and giving nuclear bombs to terrorists?
| data_maan wrote:
| Because nuclear weapons definitely would kill us. AGI may
| also help us.
|
| Since we are not helping ourselves and will soon enough
| suffer climate doom, we really don't have anything to lose
| by going for AGI. It's the only rational choice right now; the
| logic is compelling.
|
| (Climate doom sounds dramatic, I know, but it's a fact if you
| read the latest IPCC report and the surrounding science.)
| shrimp_emoji wrote:
| > _Because nuclear weapons definitely will kill us_
|
| I don't think so. Hurt us, sure.
|
| Kill us?
|
| That's bioweapons. ;) Wait until a Plague, Inc.-tier
| engineered virus inevitably escapes a BSL4 lab and gg. AI
| might count in that it might help someone engineer such a
| virus in their home lab. I hope we have offplanet
| population centers or posthumans by then.
| data_maan wrote:
| Again, no AGI needed for bioweapons; malicious actors can
| do that today already. Perhaps not as easy, perhaps not
| as fast, but they can do it.
|
| What we have shown time and again though is that what we
| can't do is solve climate change. For that only AGI may
| help.
| siliconc0w wrote:
| I think my biggest concerns are:
|
| 0) civil unrest from economic impacts and changes in how the
| world works
|
| 1) increasing the leverage of bad actors - almost certainly this
| will increase frauds and thefts, but on the far end you get
| things like, "You are GPT bomb maker. Build me the most
| destructive weapon possible with what I can order online."
|
| 2) swarms of kill bots, maybe homemade as above
|
| 3) AI relationships replacing human ones. I think this one cuts
| both ways since loneliness kills but seems like it'll have
| dangerous side-effects like further demolishing the birth rate.
|
| Somewhat down on the list is the fear corporations or government
| gatekeeping the most powerful AIs and using them to enrich
| themselves, making it impossible to compete or just get really
| good at manipulating the public. There does seem to be a
| counterbalance here with open-source models and people figuring
| out how to make them more optimized so better models are more
| widely available.
|
| In some sense this will force us to get better at communicating
| with each other - stamping out bots and filtering noise from
| authentic human communication. Things seem bad now but it seems
| inevitable that every possible communication channel is going to
| get absolutely decimated with very convincing laser-targeted spam
| which will be very difficult to stop without some sort of
| large-scale societal proof-of-human/work system (which
| ironically Altman is also building).
| [deleted]
| EVa5I7bHFq9mnYK wrote:
| Now that everyone and their mother in law has chimed in about the
| perils of AI, folks are arguing whose mother in law gave the
| better talk.
| agentultra wrote:
| This is exactly the problem with ML right now. Hinton and other
| billionaires are making sensational headlines predicting all
| sorts of science fiction. The media loves a good story and fear
| is catchy. But it obscures the real danger: humans.
|
| LLMs are merely tools.
|
| Those with the need, will, and desire to use them for their own
| ends pose the real threat. State actors who want better weapons,
| billionaires who want an infallible police force to protect their
| estates, scammers who want to pull off bigger frauds without
| detection, etc.
|
| It is already causing undue harm to people around the world. As
| always it's those less fortunate that are disproportionately
| affected.
| [deleted]
| cced wrote:
| > It is already causing undue harm to people around the world.
| As always it's those less fortunate that are disproportionately
| affected.
|
| Source?..
| 13years wrote:
| _It is already causing undue harm to people around the world._
|
| Between nefarious uses (scams, weaponized tech, propaganda)
| and the sheer magnitude of noise generated by meaningless
| content, we are about to have a burden of undesirable effects
| to deal with from technology that is both powerful and easily
| available to all.
|
| There is so much focus on the problems of future AGI, but
| little on the AI we have now, working as designed yet still
| very problematic in its impact on societal order.
|
| I've elaborated much on the societal and social implications in
| the following reference. I expect AI will lead all concerns in
| the area of unexpected consequences in due time.
|
| https://dakara.substack.com/p/ai-and-the-end-to-all-things
| flatline wrote:
| That post came off as a bit hyperbolic to me, but I
| fundamentally agree with the premise that this will have an
| impact much like social media, with all its unforeseen
| consequences. It's not about AGI taking over the world in
| some mechanistic fashion, it's about all the trouble that we
| as humans get into interacting with these systems.
| 13years wrote:
| > That post came off as a bit hyperbolic to me
|
| I would say it is as hyperbolic as the promised
| capabilities of AI. Meaning that if it truly has the
| capability claimed, then the potential damage is
| equivalent. Nonetheless, I expect we will see a significant
| hype bubble implosion at some point.
|
| > it's about all the trouble that we as humans get into
| interacting with these systems
|
| Yes, it will always be us. Even if AGI "takes over the
| world", it would have been us that foolishly built the
| machine that does so.
| mitthrowaway2 wrote:
| Certainly LLMs are not AGI, and AGI has not yet been built. But
| what's your knock-down argument for AGI being "science
| fiction"?
| agentultra wrote:
| Intelligence isn't going to be defined, understood, and
| created by statisticians and computer scientists.
|
| There's a great deal of science on figuring out what
| intelligence is, what sentience is, how learning works. A
| good deal of it is inconclusive and not fully understood.
|
| AI is a misnomer and AGI is based on a false premise. These
| are algorithms and systems in the family of Machine Learning.
| Impressive stuff but they're still programs that run on fancy
| calculators and no amount of reductive analogies are going to
| change that.
| NumberWangMan wrote:
| "Fancy calculators" is kind of a reductive analogy, isn't
| it?
|
| I assert that machine learning is learning and machine
| intelligence is intelligence. We don't say that airplanes
| don't really fly because they don't have feathers or flap
| their wings. We don't say that mRNA vaccines aren't real
| vaccines because we synthesized them in a lab instead of
| isolating dead or weakened viruses.
|
| What matters, I believe, is what LLMs can do, and they're
| scarily close to being able to do as much, or more, than
| any human can do in terms of reasoning, _despite_ the
| limitations of not having much working memory, and being
| based on a very simple architecture that is only designed
| to predict tokens. Imagine what other models might be
| capable of if we stumbled onto a more efficient
| architecture, one that doesn't spend most of its parameter
| weights memorizing the internet, and instead ends up
| putting them to use representing concepts. A model that
| forgets more easily, but generalizes better.
| mitthrowaway2 wrote:
| > Intelligence isn't going to be defined, understood, and
| created by statisticians and computer scientists.
|
| What is an example of a task that demonstrates either
| intelligence, sentience, or learning, and which you don't
| think computer scientists will be able to get a computer to
| do within, say... the next 10 years?
| c1ccccc1 wrote:
| > Hinton and other billionaires are making sensational
| headlines predicting all sorts of science fiction.
|
| Geoff Hinton is not a billionaire! And the field of AI is much
| wider than LLMs, despite what it may seem like from news
| headlines. E.g., the sub-field of reinforcement learning focuses
| on building agents, which are capable of acting autonomously.
| irrational wrote:
| I thought the biggest risk was Sarah Connor and Thomas Anderson.
| mrshadowgoose wrote:
| I fully agree that malicious corporations and governments are the
| largest risk here. However, I think it's incredibly important to
| reject the reframing of "AI safety" as anything other than the
| existential risk AGI poses to most of humanity.
|
| What will the world look like when AGI is finally achieved, and
| the corporations and governments that control them rapidly have
| millions of useless mouths to feed? We might end up living in a
| utopian post-scarcity society where literally every basic need is
| furnished by a fully automated industrial base. But there are no
| guarantees that the entities in control will take things in that
| direction.
|
| AI safety is not about whether "tech bros are going to be mean to
| women". AI safety is about whether my government is concerned
| with my continued comfortable existence once my economic value as
| a general intelligence is reduced to zero.
| somenameforme wrote:
| The risk in your scenario there is not really coming from AGI,
| but how selective access to such might enable people to harm
| others. If you have access to AGI capable of enabling you to
| realistically build out your utopian vision, then it matters
| not what other groups are doing - short of coming to try to
| actively destroy you. You'd have millions that would join you,
| and could turn that vision into a reality regardless of
| whatever currently powerful entities think about it. So the
| real danger does not really seem to be AGI, but the restricted
| access to such.
|
| The focus on "safety" all but guarantees that there are going
| to be two sets of "AGI", if such is ever to be achieved. There
| will be a lobotomized, censored, and politically obedient version
| that the public has access to. And then there will be the
| "real" system that militaries, governments, and
| influential/powerful entities will be utilizing. You can
| already see this happening today. There is an effectively 100%
| chance that OpenAI is providing "AI" systems to the military
| and government, and a 0% chance that responses from it ever
| begin with, "As an AI language model..."
| raincole wrote:
| I feel you have a fairy-tale definition of AGI. AGI is not
| literally magic. It's not "genie in a bottle" in the literal
| sense.
|
| > If you have access to AGI capable of enabling you to
| realistically build out your utopian vision, then it matters
| not what other groups are doing
|
| That's a genie. AGI is a computer program, and it doesn't
| create an alternative universe for your personal comfort. Or,
| more specifically, even if that kind of AGI is possible, there
| will be a weaker AGI before it: an AGI that is not strong
| enough to ignore physical constraints, but strong enough to
| fuck everyone up if under the control of a malicious entity.
| And that is what AI safety is about.
| hesayyou2049 wrote:
| The guys with the medals on their chests using unrestricted
| AI while the public gets the toothless one. Not too different
| from the enduring idea that common folk will be eating bugs
| while they eat steaks.
| incone123 wrote:
| Hardly surprising that it's an enduring idea. The private
| jet crowd already tell everyone else to be mindful of our
| carbon footprints.
| nullsense wrote:
| All these views on AGI are so self-serving. Which group will
| have access and control it etc.
|
| It will control itself. We're talking general intelligence.
| They won't be tools to be used however we see fit. They will
| be Rosa Parks.
|
| The more I think about "AI alignment" and "the control
| problem", the more I feel like most of it is Ph.D math-nerd
| nonsense.
| ip26 wrote:
| _If you have access to AGI capable of enabling you to
| realistically build out your utopian vision, then it matters
| not what other groups are doing - short of coming to try to
| actively destroy you_
|
| I guarantee you there will be someone else with access to
| comparable AGI with an imperialistic vision who will enlist
| said AGI to help subjugate you and build their vision.
| pixl97 wrote:
| The problem with humanity is we look to solve our inability
| to create unlimited power before we solve the problem of
| unlimited greed.
| [deleted]
| somenameforme wrote:
| Think about current times, and imagine there was some group
| with an imperialistic vision set out to subjugate the rest
| of society to help build their vision. Do you think this
| group would be more, or less, successful in a time where
| both society and the imperialists had access to the same
| AGI? In other words, if we give both groups access to the
| exact same tool, would the capacity/knowledge gap widen or
| narrow?
| yyyk wrote:
| >AI safety is about whether my government is concerned with my
| continued comfortable existence once my economic value as a
| general intelligence is reduced to zero.
|
| We already have real life examples close to this, in resource
| export based economies where most citizens are useless to the
| main economic activity. The result hasn't been pretty so far...
| kortilla wrote:
| Like Norway?
| yyyk wrote:
| That's the one reasonable exception. Given the sample size,
| it's not too encouraging.
| paganel wrote:
| Can't see any potential AGI doing any waste disposal work or
| nurse-like caring, or at least not as (relatively) cheaply as
| us humans are willing to do it, so those jobs will still be safe.
| mrshadowgoose wrote:
| AGI, by definition, would be as capable as a typical human
| intelligence. This implicitly includes being able to
| perceive, and interact with the physical world.
|
| Why wouldn't an AGI be capable of performing a physical task,
| if given the suitable means to interact physically?
| paganel wrote:
| It's much cheaper to feed a human brain and a human body
| compared to "feeding" an AGI; I'm talking about menial (and
| maybe not so menial) tasks like garbage collecting. Under
| capitalism the cheaper option is generally preferred.
| mitthrowaway2 wrote:
| Do you think it's plausible that computers might someday
| have the potential to come down in cost-for-performance?
|
| Cars eventually became cheaper than horses...
| paganel wrote:
| > Do you think it's plausible that computers might
| someday have the potential to come down in cost-for-
| performance?
|
| I have no idea. I do think, though, that it's a matter of
| energy, and that we humans are way better at creating
| it and putting it to use than potential future
| AGI-capable machines. Lungs + the blood system are just
| such an efficient thing, especially if you also look at
| the volume/space they occupy compared to whatever it is
| that would power that future AGI-capable machine.
|
| > Cars eventually became cheaper than horses...
|
| In large parts of the world donkeys, cows/oxen and horses
| are still cheaper and more efficient [1] compared to
| tractors, just look at many parts of India and most of
| Africa. Of course, us living in the West tend to not
| think about those parts of the world all that often, as
| we also tend to mostly think about the activities that we
| usually carry out (like having to travel between two
| distant cities, a relatively recent phenomenon).
|
| [1] "More efficient" in the sense that if you're an
| African peasant and your tractor breaks down in the
| middle of nowhere then you're out of luck, as the next
| tractor-repair shop might be hundreds of kms away. That
| means you won't get to plow your land, and that means
| famine for you and your family. Compared to that,
| horses/oxen (to give just an example) are more resilient.
| ben_w wrote:
| Green500 top supercomputer, gets 65Gflops/W.
|
| 65Gflops/W = 6.5e10 operations per joule = 2.34x10^17 per
| kWh
|
| Assume $0.05/kWh electricity: (2.34x10^17 operations/kWh)
| / ($0.05/kWh) = 4.68x10^18 operations per US dollar
|
| Human brain computational estimates are all over the
| place, but one from ages ago is 36.8x10^15 flops ≈
| 3.7e16 operations/second ≈ 1.3e20 operations/hour:
| https://hplusmagazine.com/2009/04/07/brain-chip/
|
| Given previously calculated cost, this is equivalent to a
| human that costs $28.31/hour.
|
| Of course, as we haven't actually done this yet, we don't
| know if that computational estimate is correct, nor if we
| do or don't need to give it off-hours and holidays.
|
| Still, the general explanation is that there's a lot of
| room for improvement when it comes to energy efficiency
| in computation; calling this Moore's Law may be
| inaccurate, but reality happens to have rhymed thus far.
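|
| For anyone who wants to sanity-check the arithmetic, here
| is a minimal sketch in Python, using the same assumed
| inputs as above (Green500 efficiency, $0.05/kWh
| electricity, and the disputed brain-compute estimate):
|
|     # All inputs are the assumptions quoted above, not facts.
|     ops_per_joule = 65e9        # 65 Gflops/W = 6.5e10 ops/J
|     joules_per_kwh = 3.6e6      # 1 kWh = 3.6e6 J
|     usd_per_kwh = 0.05          # assumed electricity price
|     brain_ops_per_sec = 36.8e15 # ages-old brain estimate
|
|     ops_per_kwh = ops_per_joule * joules_per_kwh  # ~2.34e17
|     ops_per_usd = ops_per_kwh / usd_per_kwh       # ~4.68e18
|     brain_ops_per_hr = brain_ops_per_sec * 3600   # ~1.32e20
|
|     usd_per_brain_hour = brain_ops_per_hr / ops_per_usd
|     print(f"${usd_per_brain_hour:.2f}/hour")      # $28.31/hour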
| mrshadowgoose wrote:
| Do you have literally any evidence of this extremely bold
| claim? Especially considering we don't even have AGI yet?
|
| In your non-existent calculations, have you taken into
| account the 20-30 years of energy and resources it
| typically costs to train a typical human intelligence for
| a specific task?
|
| Have you considered that general intelligence uses on the
| order of 10 watts? Even if AGI ends up using 10x this,
| have you considered that 100 watts is a rounding error in
| comparison to the power use involved in all the
| industrial processes that humans currently coordinate?
| m4nu3l wrote:
| I doubt that AI will lead to a post-scarcity society. It
| depends on what you mean by "post-scarcity". The amount of goods
| and services will always be finite regardless of how they are
| produced.
|
| > and the corporations and governments that control them
| rapidly have millions of useless mouths to feed?
|
| I always struggle to understand this. Maybe I'm missing
| something. Who's buying what AIs produce if nobody has an
| income? You can imagine a scenario where corporations only
| trade among themselves (so only shareholders benefit from the
| production). However, in such a scenario, who prevents other
| people from spawning their own AI systems?
|
| I also doubt shareholders can actually consume all the GDP on
| their own. If production is so high that they can't and other
| people are poorer, then prices must come down. This combined
| with the fact that you can use your AI to produce services,
| makes me skeptical of these claims.
| pixl97 wrote:
| You're making a lot of odd assumptions here that can break
| when the underlying ideas of how things work change...
|
| People work to create things... at this point there is a
| shared duopoly between humans and machines on creating things
| (in the past animals used to be heavily involved in this
| labor, and no longer are). Now think what happens if humans
| are not needed, especially en masse, to create things.
|
| Right now if you're rich you need other humans to dig up coal
| or make solar panels so you can produce things and sell them
| to make the yacht you want. But what would happen if you no
| longer needed the middle part and all those humans that want
| rights and homes and such in the middle? They would no
| longer be a means, but a liability. Price is no longer a
| consideration, human capital is no longer a consideration;
| control of energy, resources, and compute now is.
| danaris wrote:
| > The amount of good and services will always be finite
| regardless of how they are produced.
|
| So will the number of people.
|
| The point of "post-scarcity" isn't that there are _infinite_
| resources; it's that there is _more than people need_.
| elijahbenizzy wrote:
| Wanna bet that the 2020s would be called "post-scarcity" by
| cavemen? You can buy food without sabertooth tigers
| assaulting you! If you get a cut, you probably _won't_ die!
| Fire is the press of a button! We _make_ shelter, not buy it!
| (and all of this was true like 150 years ago -- not sure what
| they would make of the internet...).
|
| Project this forward another several thousand years and
| people will be laughing at us:
|
| - You had to call up people and were limited by the speed of
| light?
|
| - You didn't have teleportation?
|
| - You lived <100 years and died due to cancer?
|
| - You were still asking "WTF is gravity"?
|
| - You hadn't had the +2 spatial and +1 time dimensional
| implants in you yet?
|
| - You hadn't adopted the metric system yet?
|
| And so on...
| tester457 wrote:
| The last bullet point is a highlight.
| jimmaswell wrote:
| As an aside (not that you endorsed it) is anyone else sick of
| hearing "tech bro"? It feels like a slur pretty much. I can't
| take anyone seriously who uses it. As someone who makes art
| occasionally and commissions artists regularly, when an artist
| whines about "ai tech bros" it makes me want to use ai even
| more out of spite.
| brigadier132 wrote:
| "Tech bro" is just an insult created purely from sour grapes.
| The people that use the term "tech bro" are the same people
| that describe things as "gross". These are the people that
| will be automated out of jobs first.
| rurp wrote:
| > These are the people that will be automated out of jobs
| first.
|
| This attitude is _exactly_ why "tech bro" is a
| pejorative. There is a prominent group of people that
| shares your disdain for folks who are upset that their
| lives are being ruined by technological changes, all so
| that they can have some new shiny toys and become even
| wealthier.
|
| On top of being, yes, gross, being so vocal about that
| attitude is stupid. It would be much better to at least
| pretend to have some empathy or at least keep your glee to
| yourself.
| jimmaswell wrote:
| I don't fault anyone for being upset they have to find a
| new job, but I have justified disdain for what amount to
| horse farriers who want to ban the automobile.
| teucris wrote:
| It's definitely used as a generic slur, but there is a need
| to call out the problematic parts of tech culture that have
| led to some of our recent problems with social media,
| privacy, bias, etc. I don't know of any terminology that
| hasn't been weaponized, so I resort to using "bro culture".
| The reality is that terminology is a treadmill - terms get
| "used up" as they're laden with connotations and baggage,
| forcing us to find new terms, ad infinitum.
| MichaelZuo wrote:
| > The reality is that terminology is a treadmill - terms
| get "used up" as they're laden with connotations and
| baggage, forcing us to find new terms, ad infinitum.
|
| Perhaps for those lacking courage.
|
| There are plenty of real world examples that demonstrate
| people, including sizeable organized groups, are capable of
| doing otherwise, at least for a few hundred years.
|
| e.g. Vatican hardliners sticking to their canon.
| dragonwriter wrote:
| I dunno, the Vatican seems a perfect example of people
| needing to come up with new terms as old ones get "used
| up", even when the _ideas_ don't change.
|
| I mean, that's pretty much the reason why we have the
| "Dicastery for the Doctrine of the Faith" rather than the
| "Congregation of the Holy Inquisition" and "Dicastery for
| Evangelization" rather than "Sacred Congregation for the
| Propagation of the Faith" (or, and this perhaps indicates
| how the name had worn out better, in Latin short form
| "Propaganda Fide".)
| jimmaswell wrote:
| Is it really accurate to imply there are no women willingly
| complicit in or benefiting from evil corporate deeds?
| agalunar wrote:
| For many younger speakers, "you guys" is legitimately a
| second person plural pronoun (like "y'all") and implies
| nothing about the gender^1 of the referents, even if they
| consider singular "guy" to be a synonym for man.
|
| Some older speakers use "guy" as a term of address, as in
| "Hey, guy", similar to how one might say "hey, bud" or
| even "hey you".
|
| I don't think it will ever happen, but it's funny to
| imagine something similar happening and "bro(s)" coming
| to be a nongendered term.
|
| Anyway, it's never crossed my mind before that "tech
| bros" singles out men; for me it evokes a stereotype of,
| yes, men, but it's really an attitude, value system,
| world view, or collection of behaviors that are being
| alluded to. (Of course, it's also only implication in the
| sense of "hinting at", because it's not contradictory to
| say "tech bros are the worst, and tech women are too").
|
| [1] The... non-grammatical gender. English no longer has
| grammatical gender in any case, so it's unambiguous, but
| it feels weird to use "gender" in a linguistic context
| and not mean grammatical gender.
| rurp wrote:
| It's a broad generalization that isn't meant to be
| precisely accurate in all cases. I'm not claiming it's a
| great term, but it does succinctly describe a notable
| attitude and culture. If there's a better term to use
| that conveys the same message I'm sure many folks would
| be happy to adopt it.
| mikrl wrote:
| <<Tech bro>> is so passe, I have my own (more offensive)
| names for the archetype to which the term applies.
| mrshadowgoose wrote:
| I feel the same way. At least in my social circles, "tech
| bro" tends to be used by the loudest and least-informed
| individuals when they try to marginalize something they don't
| understand, but vaguely don't like (or have read that they're
| not supposed to like).
| cwkoss wrote:
| In my social circle, the only people bothered by the term
| are tech bros.
| __MatrixMan__ wrote:
| Why bother with corporations and government if you have AGI?
| Wouldn't it be a better coordinator than they would? (and if
| it's not, we can always go back to having governments and
| corporations)
| staunton wrote:
| > and if it's not, we can always go back to having
| governments and corporations
|
| I wouldn't be so sure about that...
| elijahbenizzy wrote:
| The entire idea that we will have "useless mouths to feed" is
| making a big assumption. "Post-scarcity" is absurd -- the more
| we get, the more problems we will create, it's just human
| nature.
|
| - Sustain everybody on earth? Focus everything on moving off
| the planet and colonizing the universe.
|
| - Infinite energy? Don't have infinite vessels to travel.
|
| - Space travel easy? Limited by the speed of light.
|
| And so on... Sure, you may dream that AI will be solving it all
| and we'll be sitting on our lazy butts, but a society that
| doesn't have challenge dies _very_ quickly (AI or not), so
| we've learned to make challenge and grow.
|
| The optimist in me knows that we can't even comprehend the
| challenges of the future, but the idea that we won't play _the_
| pivotal role is laughable.
|
| This is the thing with actual exponential growth -- the curve
| is so steep that all our minds can do is take the current view
| of the world and project our fears/preconceived notions into
| the future.
| iinnPP wrote:
| It's not a large stretch to imagine a scenario where AGI or
| even ChatGPT is used to justify a nuclear war where a select
| few are secured and humanity is reset under their control.
|
| There's a reason for the plethora of fiction around it.
| incone123 wrote:
| Nuclear war leads to extremely long term and widespread
| environmental damage. Forcing technological regression on
| society by other means is much cleaner. Of course an AGI
| won't care since it won't be limited to a human lifespan
| nor much inconvenienced by radiological pollution.
| elijahbenizzy wrote:
| It's also not a real stretch to imagine a scenario where
| human decision making by a select few who have control over
| this leads to _exactly_ the same thing. Do you trust Putin
| or ChatGPT more? (actual question, I don't know the
| answer).
| staunton wrote:
| I think we can "trust" Putin to keep doing what he's
| doing. Who knows what GPT-X might do?
| rrgok wrote:
| Forgive me if I rant a little bit under your comment. The
| phrase "the more we get, the more problems we will create,
| it is just human nature" struck a chord that I cannot stop
| myself ranting about.
|
| I'm gonna ask again, as I've done in some other posts. Why
| do we consider ourselves the most intelligent species if we
| don't stop and ask ourselves this: for how long are we gonna
| face challenges? What is the supposed end goal that will
| terminate this never-ending chase? Do humans really want to
| solve all problems?
|
| I don't really understand, and I'm 32 years old. I've been
| asking this question for a long time. What is the point of
| AI, raising consciousness, curing cancer, hell, beating death,
| if we don't have a clear picture of where we are going? Is it
| to always have problems and solve them incrementally, or to
| solve all problems once and for all? If it is the latter,
| there already is a great solution to it. If it is the former,
| then I'm afraid I have to break it to you (not specifically
| the parent poster, but you as in the reader): you have a sick
| mind.
| ericmcer wrote:
| Philosophy is boring so it doesn't really play well in
| political discussions. I agree though, when we argue about
| something like AI without any kind of philosophical
| underpinning the argument is hollow.
|
| AI is "good" in the sense of what goal? Becoming a space-
| faring civilization? Advancing all technology as fast as
| possible? Building a stable utopia for life on earth?
| elijahbenizzy wrote:
| The end goal? Does life have a purpose? Is it possible to
| "solve all problems"? To even have a picture of where we
| are going? We move forward because there is nowhere else to
| go.
|
| Perhaps there is a general state of societal enlightenment,
| but I've read too many sci-fi books to be anything but a
| skeptic.
| eep_social wrote:
| I think the way you've framed "problems" is off the mark.
| I'll try to explain my view but it's not straightforward
| and I am struggling a bit as I write below.
|
| The way I see it, what the GP is getting at is the idea
| that human societies require challenges or else they
| stagnate, collapse, and vanish. We can observe this on an
| individual level; GP is generalizing it to societies,
| which I agree with, but I doubt this is "settled".
|
| On a personal level, if you have achieved some form of
| post-scarcity, you will still complain -- about the
| weather, the local sports team, your idiot cofounder,
| whatever. A less fortunate person might be complaining that
| they can't afford time with their kids because they're at
| their second job. The point is that everyone will find
| problems in their life. And those problems are a form of
| challenge. And a life truly without challenge is
| unbelievably boring. Like kill-yourself boring. If there is
| no struggle, there is no point. The struggle gives humans
| purpose and meaning. Without struggle there can be no
| achievement in the same way that shadows require light.
|
| So, with all of that in mind, I think the point is that
| even with AGI, humans will require new challenges. And if
| AGI enables post-scarcity for ~everyone, that just means
| ~everyone will be inventing challenges for themselves. So
| there is no end game where the challenges taper off and we
| enter some kind of stability. I, and I think GP, think that
| stability would actually be an extinction level event for
| our society.
|
| Person by person, I think the kind of challenge varies.
| What do you dream of doing if you had no constraints on
| your time? How many years would you spend smoking weed and
| playing video games before you got bored of that? Video
| games that hold your attention do so by being challenging
| (btw). It was about a year, for me.
|
| > Do humans really want to solve all problems?
|
| No, we want challenges that provide a sense of
| accomplishment when they have been overcome.
|
| Thank you for reading my ramble, hope it helps.
| mitthrowaway2 wrote:
| A human only has a comparative advantage over an AI when they
| can solve a problem at a lower cost than the AI can do the
| same. It's hard to imagine that would be the case in a
| scenario where AGI is decently smarter than humans are.
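|
| As a toy illustration of that cost comparison (every number
| here is a made-up assumption, not an estimate):
|
|     # Hypothetical per-task costs; all figures are assumed.
|     human_usd_per_task = 30.0   # one hour at an assumed wage
|     agi_kwh_per_task = 0.5      # assumed energy an AGI burns
|     usd_per_kwh = 0.05          # assumed electricity price
|
|     agi_usd_per_task = agi_kwh_per_task * usd_per_kwh  # $0.025
|     # The human only stays employable where this is True:
|     print(human_usd_per_task < agi_usd_per_task)       # False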
| vasco wrote:
| Opposable thumbs are an advantage for a few years.
| mitthrowaway2 wrote:
| Agreed, we do have a lot of nimble and low-cost actuators
| in our body. That will probably provide gainful
| employment for a while. I just don't see it being a very
| long-lasting advantage.
| tehjoker wrote:
| We have the capacity to take care of everyone right now but
| we don't, because private wealth realized that comfortable
| people create an equal society, and equality means there's no
| way to be a mogul whom everyone has to listen to. This
| is in part why they destroyed the middle class: things got
| too crazy for them in the 1960s and they counterattacked.
| There are documents from both liberal and conservative
| establishment figures at that time describing the situation.
|
| c.f. The Crisis of Democracy:
| https://en.wikipedia.org/wiki/The_Crisis_of_Democracy
|
| c.f. The Powell Memorandum:
| https://scholarlycommons.law.wlu.edu/powellmemo/
| m4nu3l wrote:
| > We have the capacity to take care of everyone right now
| but we don't because private wealth.
|
| This is simply not the case. Wealth is created; it doesn't
| just exist somewhere waiting to be redistributed. Making the
| assumption that you will have the same GDP if you
| redistribute massively is unrealistic.
| tehjoker wrote:
| This is not a GDP focused argument. We had the capacity
| to take care of everyone in the 1950s or 1960s.
|
| However, while not an ecological argument, it would even
| be beneficial within capitalism to rebalance workers and
| the wealthy as the system is demand oriented. If you give
| everyone more money and stuff, they will be able to buy
| more, and one person's spending is another's income. It
| could be argued that GDP would increase faster under a
| more equal system, but I don't think the planet could
| take it (hence, like under our current system, planning
| will be needed to mitigate the environmental cost).
| bostik wrote:
| I've seen a remarkable take on this. In the form of
| micro-scifi, no less:
|
| > _The robot revolution was inevitable from the moment we
| programmed their first command: "Never harm a human, or
| by inaction allow a human to come to harm." We had all
| been taught the outcast and the poor were a natural price
| for society. The robots hadn't._
|
| 'Nuff said.
| m4nu3l wrote:
| >This is not a GDP focused argument.
|
| The argument was about wealth, the production of which is
| measured by GDP. It's definitely a GDP argument.
|
| > If you give everyone more money and stuff, they will be
| able to buy more, and one person's spending is another's
| income.
|
| I disagree. I don't think the system is either demand- or
| supply-oriented; it is clearly both. If you just take
| money from rich people, forcing them to divest, and give it
| to poor people, you won't get immediate growth, but
| inflation. If you just produce and consume there won't be
| any growth.
|
| > It could be argued that GDP would increase faster under
| a more equal system.
|
| You would need to provide me with good evidence for this,
| given that all economic systems in history that championed
| equality ended up with very low growth. It's the reason
| why China, Russia and many other countries are lagging
| behind.
|
| > but I don't think the planet could take it (hence, like
| under our current system, planning will be needed to
| mitigate the environmental cost).
|
| Growth is not directly related to energy consumption (nor
| unrelated). You can have economic growth by becoming more
| efficient. Also a lot of services produced today are
| intangible (like software) and require much less energy
| per dollar to be produced.
|
| Also, most environmental issues are not just a product of
| the market, but (if for instance you look at climate
| change) are at least in equal part governmental failures.
| We could have had ~100% nuclear energy production by now
| if governments hadn't restricted or entirely banned
| nuclear energy.
| ryandrake wrote:
| But _relative_ wealth is not created. It is a percentage
| and by definition distributed. Relative wealth is what
| drives standard of living and creates the power
| imbalances OP is talking about.
|
| Absolute wealth (the kind that gets created) is kind of
| pointless to measure. If you have $50K and all your
| neighbors have $100K, and then tomorrow you have $100K
| and all your neighbors have $1M, you and your neighbors
| created wealth but you are worse off.
| m4nu3l wrote:
| > Absolute wealth (the kind that gets created) is kind of
| pointless to measure.
|
| It's literally the other way around to me. I want to be
| better off than I am now, not better off in relative terms
| (which might mean I'll be worse off).
|
| > you have $50K and all your neighbors have $100K, and
| then tomorrow you have $100K and all your neighbors have
| $1M, you and your neighbors created wealth but you are
| worse off.
|
| You literally aren't if there is no inflation. You are
| merely worse off than your neighbors. If you want to be as
| wealthy as them, then ask yourself how they are doing it
| and copy them. That's how the system grows. You don't go
| and punish people who are successful in producing wealth.
| danielbln wrote:
| > a society that doesn't have challenge dies very quickly
|
| Do you have an example or a citation for this?
| elijahbenizzy wrote:
| Pure conjecture. But I think there are many examples: look
| at dying empires, corporations, etc... it's all the same.
| They stop seeing challenges, get lazy, and are taken over
| by some hungrier and scrappier entity. When the real
| challenge comes they're unprepared.
| mirekrusin wrote:
| Do you think AGI will get lazy?
| elijahbenizzy wrote:
| A program is neither lazy nor motivated. It does
| precisely what it is programmed to do.
|
| I would push back on the question, as well as a myriad of
| implied premises behind it...
| MacsHeadroom wrote:
| A machine learning model is not a program, is not
| programmed, and research on emergent motivation in ML
| models disagrees with this position.
| whitemary wrote:
| > _AI safety is about whether my government is concerned with
| my continued comfortable existence once my economic value as a
| general intelligence is reduced to zero._
|
| You wanted a "free market" and now you're complaining? Didn't
| you get what you want?
| mrshadowgoose wrote:
| > You wanted a "free market" and now you're complaining?
|
| Where exactly did I claim this?
| bostonsre wrote:
| Do you think open source AI could also pose a risk to humanity
| and if so, how does it compare to the risks of malicious
| corporations or governments? It seems like open source AI has
| been accelerating rapidly and gaining tremendous steam and
| could potentially surpass or maybe just keep parity with
| corporations that constantly ingest open source innovations.
| Whatever open source produces could just be ingested by those
| bad corporations and governments. It seems like it would be
| pretty hard to regulate either private or open source AI at
| this point and it kind of seems like it could be an unstoppable
| runaway train. If AGI is controllable, maybe open source at the
| forefront would allow us to get to a mutually-assured-
| destruction-like state where all governments are at parity.
| joe_the_user wrote:
| _However, I think it 's incredibly important to reject the
| reframing of "AI safety" as anything other than the existential
| risk AGI poses to most of humanity._
|
| I think the folks who lean super-hard on the existential risk
| problem of AGI compared to everything else do themselves a
| disservice. The "everything else is irrelevant" tone serves to
| alienate people who have real concerns about other dangers
| like climate change and who might include AGI safety in
| their existing concerns.
|
| It doesn't help that a lot of the existential risk theorists
| seem to come from market fundamentalist positions that don't
| appear to have a problem with serious excesses of corporate
| behavior.
|
| _AI safety is not about whether "tech bros are going to be
| mean to women"._
|
| Just as an example. Why do you even need to choose between
| these things? Why can't people worried about "X-risk" also
| concern themselves with mundane problems? Why set up a fight
| between them? That won't get people outside the X-risk bubble
| interested; it will reinforce the impression that the group's
| a nutty cult (just as the FTX connection did).
|
| For the sake of your cause, I strongly suggest not framing it
| that way. Do climate people say "individual species extinction
| doesn't matter 'cause climate change is bigger?" No. Get a
| clue.
| bo1024 wrote:
| > it's incredibly important to reject the reframing of "AI
| safety" as anything other than the existential risk AGI poses
| to most of humanity
|
| "AI safety" has been a term of art for many years and it
| doesn't refer to (only) that. Your post is the one reframing
| the term...see https://en.wikipedia.org/wiki/AI_safety
|
| Furthermore, I agree with Whittaker's point in the article,
| which is that arguments like yours have the effect of
| distracting from or papering over the real concrete harms of AI
| and technology, today, particularly on women and minorities and
| those who are both.
| jrm4 wrote:
| I keep trying to figure out ways to explain to people that
| "AGI" is a deeply unlikely danger, literally so small that
| it's not worth worrying about.
|
| Right now, the best I can come up with is "the randomness of
| humans." I.e. if some AGI were able to "come up with some plan
| to take over," at some point in the process it has to use human
| labor to do it -- and it's my very firm belief that we are so
| random as to be unmodelable. I'm incredibly confident that this
| scenario never happens.
| JoshTriplett wrote:
| > Right now, the best I can come up with is "the randomness
| of humans." I.e. if some AGI were able to "come up with some
| plan to take over," at some point in the process it has to
| use human labor to do it -- and it's my very firm belief that
| we are so random as to be unmodelable. I'm incredibly
| confident that this scenario never happens.
|
| This is a huge handwave, and one resting on multiple
| incorrect assumptions.
|
| Dedicated human labor is not inherently required. Humans are
| very modelable, if necessary. AGI has any number of ways to
| cause destruction, and "plan to take over" implies far more
| degree of modeling than is actually necessary; it would
| suffice to execute towards some relatively simplistic goal
| and not have any regard for (or concept of) humans at all.
| est31 wrote:
| > it would suffice to execute towards some relatively
| simplistic goal and not have any regard for (or concept of)
| humans at all.
|
| The classical example here is the paperclip optimizer AI
| put in charge of a factory, made to make as many paperclips
| as possible. Well, it turns out humans have iron in their
| blood so why keep them alive? Gotta make paperclips.
| yyyk wrote:
| Every 'world takeover' plan that an 'unaligned' AGI might
| carry out can just as well be done by an 'aligned' AGI
| commanded by humans to execute said plan, the alignment
| ensuring that the AGI will obey. The latter scenario is far
| more likely than the former.
|
| If your interlocutor thinks there aren't any humans who'll do
| it if they can, just ask them whether they have ever _met_
| humans or read the papers... As one twitter wit put it:
| "Demonstrably unfriendly natural intelligence seeks to create
| provably friendly artificial intelligence".
|
| https://twitter.com/snowstarofriver/status/16365066362976747.
| ..
| JoshTriplett wrote:
| AGI _without_ alignment is near-certain death for everyone.
| Alignment just means "getting AI to have any concept of
| 'the thing we told it to do', let alone actually do it
| without causing problems via side effects". Alignment is a
| prerequisite for non-fatal AGI. There are certainly _other_
| things required as well.
| yyyk wrote:
| We already know how humans will act. Maybe they can be
| deterred with MAD, but I wouldn't count on it if doing
| serious damage is too easy for too many people (we should
| do something about that). On the other hand, we have very
| little knowledge of how AGI will act aside from book-
| based fantasies that some people choose to take as
| reality (these books were based on the symbolic AIs of
| yore).
|
| >Alignment just means "getting AI to have any concept of
| 'the thing we told it to do'.
|
| That's a requirement for AGI anyway, and not what
| Alignment means. Alignment means aligning the AGI's values
| with the values of the trainers.
| ben_w wrote:
| > The latter scenario is far more likely than the former.
|
| Is it?
|
| I think nobody really knows enough at this point to even
| create a good approximation of a probability distribution
| yet.
| yyyk wrote:
| No, but the probability of humans acting the way they
| often do is high. It would take some probability
| distribution to match that.
| mrshadowgoose wrote:
| > I'm incredibly confident that this scenario never happens.
|
| That's great, but you're talking about a branch of scenarios
| that nobody here is discussing. "AGI deciding to take over"
| is not being discussed, rather "shitty
| people/companies/governments using AGI as a tool to exert
| their will" is the concern. And it's a real concern. We have
| thousands of years of human history, and the present day
| state of the world, which clearly demonstrate that people in
| power tend to be shitty to the common person.
| jrm4 wrote:
| Right. I'm agreeing with the op.
| rm169 wrote:
| If that's the best argument you can come up with, I don't see
| how you can be so incredibly confident in this view. So what
| if human labor will be required? Humans won't all band
| together to stop a power seeking AI. And I don't see any
| human randomness matters. I agree that it's naive to think a
| complex takeover plan can be perfectly planned in advance. It
| won't be necessary. We will voluntarily cede more control to
| an AI if it is more efficient to do so.
| leroy-is-here wrote:
| What if AGI just turns out to be exactly like the current human
| mind is now, except more accurate at digital calculation? What
| if we created AGI and then it was just lazy and wanted to read
| about action heroes all day?
| amelius wrote:
| We'd tell it that it would get action hero comics after it
| completes a task.
|
| (I'm feeling like a true prompt engineer now)
| optimalsolver wrote:
| And with no negative outcomes imaginable. Was worried for a
| minute there.
| yyyk wrote:
| Don't worry, they'll 'align' it so that it has to work all
| day.
| kelipso wrote:
| Just RLHF that part out and make it an x maximizer.
| mrshadowgoose wrote:
| > What if AGI just turns out to be exactly like the current
| human mind is now
|
| This is quite literally the point at which things start to
| get scary, and the outcome is highly dependent on who
| controls the technology.
|
| There's the concept of a "collective superintelligence",
| where a large number of mediocre general intelligences
| working towards the same common goal jointly achieve vastly
| superhuman capability. We don't have to look to sci-fi to
| imagine what collective superintelligences are. Large
| corporations and governments today are already an example of
| this.
|
| The achievement of artificial collective superintelligences
| will occur almost immediately after the development of AGI,
| as it's mostly a "copy and run" problem.
| leroy-is-here wrote:
| So you think that AGI is a prerequisite, a requirement, for
| unlocking a general, Earth-wide collective super-
| intelligence of humans?
| satisfice wrote:
| The feminist complains about feeling disrespected for half the
| interview instead of dealing with the substance of the question.
| When she finally gets around to commenting on his point, it's a
| vacuous and insulting dismissal-- exactly the sort of thing she
| seems to think people shouldn't do to her.
|
| Most of what she says is sour grapes. But when you put all that
| aside, there's something else disturbing going on: apparently the
| AI experts who wish to criticize how AI is being developed and
| promoted can't even agree on the most basic concerns.
|
| It seems to me when an eminent researcher says "I'm worried about
| {X}" with respect to the focus of their expertise, no reasonable
| person should merely shrug and call it a fantasy.
| nologic01 wrote:
| The biggest risk I see (in the short term) is people being
| _forced_ to accept outcomes where "AI" plays, in one form or
| another a defining role that materially affects human lives.
|
| Thus people accepting implicitly (without awareness) or
| explicitly (as a precondition for receiving important services
| and without any alternatives on offer) algorithmic regulation of
| human affairs that is controlled by specific economic actors.
| Essentially a bifurcation of society into puppets and puppeteers.
|
| Algorithms encroaching into decision making have been an ongoing
| process for decades and in some sense it is an inescapable
| development. Yet the manner in which this can be done spans a
| vast range of possibilities and there is plenty of precedent:
| various regulatory frameworks and checks and balances are in
| place, e.g. in the sectors of medicine, insurance, finance etc.
| where algorithms are used to _support_ important decision making,
| not replace it.
|
| The novelty of the situation rests on two factors that do not
| merely replicate past circumstances:
|
| * the rapid pace of algorithmic improvement which creates a
| pretext for suppressing societal push-back
|
| * the lack of regulation that rather uniquely characterized the
| tech sector, which allowed the creation of de-facto
| oligopolies, lock-ins and a lack of alternatives
|
| The long term risk from AI depends entirely on how we handle the
| short term risks. I don't really believe we'll see AGI or any
| such thing in the foreseeable future (20 years), entirely on the
| basis of how the current AI mathematics looks and feels. Risks
| from other - existential level - flaws of human society feel far
| greater, with biological warfare maybe the highest risk of them
| all.
|
| But the road to AGI becomes dystopic long before it reaches the
| destination. We are actually already in a dystopia as the social
| media landscape testifies to anybody who wants to see. A society
| that is algorithmically controlled and manipulated at scale is a
| new thing. Pandora's box is open.
| drewcoo wrote:
| > Algorithms encroaching into decision making have been an
| ongoing process for decades
|
| When in recorded history have people not followed algorithms?
|
| This seems as misguided as fears about genetically modified
| crops, something else humans have been doing for as long as we
| know.
|
| AI frightens people, in part, because often the reasoning is
| inscrutable. This is similar to how a century ago,
| electrification was seen. All these fancy electrical doo-dads,
| absent well-understood mechanisms, gave us ample material for
| Rube Goldberg.
|
| https://www.rubegoldberg.org/all-about-rube/cartoon-gallery/
|
| > the lack of regulation
|
| Regulation is an algorithm.
|
| > A society that is algorithmically controlled and manipulated
| at scale is a new thing.
|
| Nope. It's as old as laws, skills, and traditions.
|
| > Pandora's box is open.
|
| Algorithms are rules. The opening of Pandora's box is exactly
| the opposite of unleashing a set of rules.
| nologic01 wrote:
| > Regulation is an algorithm
|
| I am not frightened by AI, I am frightened by people like you
| developing an amoral, inhumane pseudo-ideology to justify
| whatever they do and feeling entitled to act on it "because
| it was always thus"
| turtleyacht wrote:
| In a sense, isn't AI trained by "frequency of the
| majority?"
|
| Then exceptions may need to be even more so, and it may be
| harder to discuss outliers.
|
| Anyway, once they get through, even if the model is
| retrained, maybe there are _not enough exceptions in the
| world_ to convince it otherwise.
|
| AI that does not have a "stop, something is anomalous about
| this" check has no conscience, and thus perhaps has no duty
| in determining moral decisions.
|
| Plus, how does AI evolve without novelty? Everyone will be
| stamped with the same "collection of statistical weights."
|
| Is that how you feel as well?
| tpoacher wrote:
| The two are not mutually exclusive dangers. If anything, they are
| mutually reinforcing.
|
| The Faro Plague in Horizon Zero Dawn was indeed brought on by Ted
| Faro's shortsightedness, but the same shortsightedness would not
| have caused Zero Dawn had Ted Faro been a car salesman instead.
| (forgive my reliance on non-classical literature for the
| example).
|
| The way this is framed makes me think this framing itself is even
| more dangerous than the dangers of AI per se.
| 29athrowaway wrote:
| The biggest risk is giving unlimited amounts of data to those
| corporations.
| tgv wrote:
| While I'm in the more alarmist camp when it comes to AI, these
| arguments surprised me a bit. This time it isn't "will somebody
| think of the children" but rather "won't someone think of the
| women who aren't white". The argumentation then lays the blame at
| corporations (in this case, Google) for not preventing actual harm that
| happens today. While discrimination is undeniable, and it is an
| actual source of harm, the reasoning seems rather generic and can
| be applied to anything corporate and is more politically inspired
| than the other arguments against AI.
| localplume wrote:
| [dead]
| sp527 wrote:
| AI is turning into a kind of Rorschach test for people's
| deepest misgivings about how the world works. Ted Chiang's New
| Yorker piece was executed similarly, though that at least had
| the benefit of being focused on the big picture near term
| economic repercussions. Almost all of us are going to suffer,
| irrespective of our gender or skin color.
| ChatGTP wrote:
| I just read that piece by Ted Chiang and I have to say it's
| one of the more important articles I've read for a while.
| [1]. I'll share this around. Thanks.
|
| All I can say is that I'm quite happy many many people are
| starting to see the same issues as each other.
|
| For me personally, I was told that "tech" and progress would
| make us all better off. That seemed true for a while but it
| has backfired recently. Inflation is up, food prices up,
| unemployment up, energy prices up, salaries stagnated.
|
| We can't blame tech for this, but we're fools as "tech
| people" if we can't see the realities. Tech is the easy part,
| building a better world for all is hard.
|
| [1] https://www.newyorker.com/science/annals-of-artificial-
| intel...
| wpietri wrote:
| Whittaker's point around minoritized groups is twofold. One,
| when non-white, non-men raised the alarm about LLMs
| previously, they got much less media coverage than Hinton, et
| al, are getting. Two, the harms of LLMs may fall broadly, but
| they will fall disproportionately on minoritized groups.
|
| That should be uncontroversial, because in our society that's
| how the harms generally fall. E.g., look at the stats from
| the recent economic troubles, or at Covid death rates.
| (Although there the numbers are more even because of the
| pro-disease political fashion among right-wing whites.)
|
| There's a difference with a Rorschach test. That test is
| about investigating the unconscious through interpretation of
| _random_ stimuli. But what's going on here isn't random at
| all. The preexisting pattern of societal bias not only means
| the harms will again fall on minoritized groups; it also
| means the harms will be larger, because the first people
| sounding the alarm about these technologies weren't listened
| to, owing to their minoritized status, whereas the people
| benefiting from the harms tend to be in more favored groups.
| skybrian wrote:
| Yes, the most vulnerable are vulnerable in many ways, and
| here's another one.
|
| I think that's independent of whether it's corporations or
| not, though? There's a large libertarian contingent that's
| thrilled to see LLM's running on laptops, which is not
| great if you're worried about hate groups or partner abuse
| or fraud or any other misuse by groups that don't need a
| lot of resources.
|
| Egalitarianism and effective regulation are at odds. To
| take an extreme example, if you're worried about nuclear
| weapons you probably don't want ordinary people to have
| access to them, and that's still true even though it
| doesn't do anything about which nations have nuclear
| weapons. (Although, it might be hard to argue for
| international treaties about a technology that's not
| regulated locally.)
|
| Keeping the best AI confined to data centers actually would
| be pretty useful for regulation, assuming society can come
| to a consensus about what sort of regulations are needed.
| But that seems unlikely when everyone has different ideas
| about which dangers are most important.
| jahewson wrote:
| Hinton has a much, much higher profile and much, much
| larger contribution to the field than those axe-grinding
| self-appointed "ethics" researchers, who got more than
| enough media coverage.
| intimidated wrote:
| > One, when non-white, non-men raised the alarm about LLMs
| previously, they got much less media coverage than Hinton,
| et al, are getting.
|
| Mainstream (e.g. CNN, BBC) and mainstream-adjacent (e.g.
| Vice, Vox) journalists have spent years pushing the "AI
| will harm POC" framing. AI companies are endlessly required
| to address this specific topic--both in their products and
| in their interactions with journalists.
|
| Dr. Hinton is getting a lot of coverage right now, but this
| is the exception, not the rule.
| kortilla wrote:
| > That should be uncontroversial, because in our society
| that's how the harms generally fall.
|
| This is just talking past the conclusion though. Have you
| considered that the reason people are freaking out is that
| this is the first technology directly displacing a bunch of
| white-collar work rather than blue-collar work?
|
| ChatGPT is much sooner going to wipe out a paralegal than a
| construction worker.
|
| > E.g., if you look at the stats from the the recent
| economic troubles
|
| Are you referring to the record low unemployment for
| African Americans?
| wpietri wrote:
| > this is the first technology directly displacing a
| bunch of white collar work
|
| That isn't true at all. IT has been displacing white-
| collar work since the 1960s. Perhaps the reason you
| missed it is that a lot of that white-collar work was
| seen as women's work? E.g., secretarial work,
| administrative work.
|
| > Are you referring to the record low unemployment for
| African Americans?
|
| I think we both know I wasn't.
| kortilla wrote:
| >secretarial work, administrative work.
|
| That's not the work I'm referring to. I'm referring to
| higher-paying jobs that required significant training.
| This is the first foray into threatening people with
| undergrad and graduate degrees.
|
| ChatGPT will mean the admin assistant stays and types
| stuff into ChatGPT and the paralegal goes.
|
| The very threat of this, realistic or not, is what all of
| the handwringing is about.
|
| >think we both know I wasn't.
|
| Then what are you referring to? Because the economic
| trouble TODAY is persistent inflation and the high
| interest rates to combat it, which do not
| disproportionately hit minorities.
| mindslight wrote:
| Maybe "managerial" is a better criteria than "white
| collar" ? US culture has long preached that you are a
| stupid sucker if you perform direct work. The common
| sense recommendation for being successful is to seek out
| a meta position - directing poorly paid less
| skilled/educated/powerful people under yourself, and
| taking some of their production for yourself.
|
| With information technology as the substrate, the meta-
| managerial class has continued to _grow_ in size as ever
| more meta-managerial layers have been created (real world
| software bloat), allowing this type of success to be seen
| as a viable path for all.*
|
| The meta-managerial positions and the upper class had a
| symbiotic relationship, with the upper class needing the
| meta-bureaucracy to keep everyone in line - some human-
| simulating-a-computer has to answer the phone to deny you
| health care. But LLMs simulating humans-simulating-
| computers would seem to be a direct replacement for many
| of these meta positions.
|
| * exclusions may apply.
| iinnPP wrote:
| > This is just talking past the conclusion though. Have
| you considered that the reason people are freaking out is
| because this is the first technology directly displacing
| a bunch of white collar work and not blue collar work?
|
| I've been looking for a way to explain this and I think
| you nailed it. Something about this feels different. I'm
| sure the same feeling struck people throughout history, but
| there's also nothing guaranteeing the outcome here will
| be the same.
|
| There's also scale.
|
| A very simplistic comparison would be Netflix DVD and
| Netflix streaming.
| int_19h wrote:
| The big difference between earlier alarms and current ones
| is that the media and the general public hadn't seen
| ChatGPT before, so the earlier warnings were much more
| hypothetical and abstract for most of the audience.
| peteradio wrote:
| The risk is already here: it's the data companies that men
| control and the 100-year effort to enhance our ability to mine
| it. If we say AI is the coming risk, we are fools.
| amelius wrote:
| Yes, and China etc. can simply pay those data companies to get
| all the info they want.
|
| If TikTok is a problem, then so are US based data brokers. But
| Congress doesn't seem to understand that.
| gmuslera wrote:
| Guns don't kill people, at least tightly controlled guns. If
| they do, then the killer was whoever controls them. And not just
| corporations: intelligence agencies, non-tech corporations,
| actors with enough money, and so on.
|
| The not-so-tightly controlled ones, at least in the hands of
| individuals not in a position of power or influence, may run
| the risk of becoming illegal in one way or another. The system
| will always try to get into an artificial-scarcity position.
| mistrial9 wrote:
| This is insightful, yes, but the implication is that "control"
| itself is some kind of answer. The history of organized
| warfare, among many topics, speaks otherwise.
| nico wrote:
| The people that control those corporations
|
| It's not AI, it's us
|
| It's humans making the decision
| fredgrott wrote:
| I have a curious question: where did the calculator (tabulator)
| operators go?
|
| Did we suddenly have governments fall when they were replaced by
| computers?
|
| Did we suddenly have massive unemployment when they were
| replaced?
|
| AI is a general-purpose tool, and like other general-purpose
| tools it not only expands humanity's mental reach, it betters
| society and lifts up the world.
|
| We have been through this before, and we will get through it
| quite well, just as we did the last round of "general-purpose
| tool will replace us" rumor-mill noise.
| mitthrowaway2 wrote:
| I agree. If you look at the historical trend with technologies,
| it's very clear: look at the saddle, the stirrup, the chariot,
| the pull-cart, the carriage: all of these inventions increased
| the breadth of tasks that a single horse could do, but each
| time this only _increased_ the overall demand for horses.
| Surely the internal combustion engine will be no different.
| bioemerl wrote:
| And hey guys, there are two big open-source communities that
| focus heavily on running this stuff offline.
|
| KoboldAI
|
| oobabooga
|
| Look them up, join their discords, rent a few GPU servers and
| contribute to the stuff they are building. We've got a living
| solution you can contribute to right now if you're super worried
| about this.
|
| This stuff is actually a very valid way to move towards finding
| a use for LLMs at your workplace. They offer pretty easy tools
| for doing things like fine-tuning, so if you have a commercially
| licensed model you could throw a problem at it and see if it
| works.
| akkartik wrote:
| I see https://github.com/oobabooga but where's the Discord
| posted?
|
| https://github.com/KoboldAI/KoboldAI-Client does link its
| Discord.
| indigochill wrote:
| Where I'm struggling at the moment is that I know about those
| but my local hardware is a bit limited and I haven't figured
| out how the dots connect between running those local interfaces
| against (affordable) rented GPU servers. The info I can find
| assumes you're running everything locally.
|
| For example, I know HuggingFace provides inference endpoints,
| but I haven't found information for how to connect Oobabooga to
| those endpoints. The information's probably out there. I just
| haven't found it yet.
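|
| For what it's worth, calling a hosted model directly over
| HTTP does work as a stopgap, even without wiring it into
| Oobabooga. A minimal sketch against HuggingFace's hosted
| inference API (the model id and token are placeholders):
|
|     import requests
|
|     # placeholder model id and token -- substitute your own
|     API_URL = "https://api-inference.huggingface.co/models/gpt2"
|     headers = {"Authorization": "Bearer hf_XXXX"}
|
|     def query(prompt):
|         # POST the prompt; text-generation models answer with
|         # a list of {"generated_text": ...} objects
|         r = requests.post(API_URL, headers=headers,
|                           json={"inputs": prompt})
|         r.raise_for_status()
|         return r.json()[0]["generated_text"]
|
|     print(query("The biggest risk of AI is"))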
| amelius wrote:
| Where I'm struggling is how to keep up to date on the latest
| LLMs and their performance.
| bioemerl wrote:
| There is something called RunPod, but I know I've seen a
| couple of these groups give quick, easy links to use. You
| might want to look there.
|
| > I know HuggingFace provides inference endpoints, but I
| haven't found information for how to connect Oobabooga to
| those endpoints
|
| I've never heard of these so I'm guessing there isn't a way.
| 1vuio0pswjnm7 wrote:
| "Because there's a lot of power and being able to withhold your
| labor collectively, and joining together as the people that
| ultimately make these companies function or not, and say, "We're
| not going to do this." Without people doing it, it doesn't
| happen."
|
| The most absurd "excuse" I have seen, many times now online, is,
| "Well, if I didn't do that work for Company X, somebody else
| would have done it."
|
| Imagine trying to argue, "Unions are pointless. If you join a
| union and go on strike, the company will just find replacements."
|
| Meanwhile so-called "tech" companies are going to extraordinary
| lengths to prevent unions not to mention to recruit workers from
| foreign countries who have lower expectations and higher
| desperation (for lack of a better word) than workers in their
| home countries.
|
| The point that people commenting online always seem to omit is
| that not everyone wants to do this work. It's tempting to think
| everyone would want to do it because salaries might be high, "AI"
| people might be media darlings or whatever. It's not perceived as
| "blue collar". The truth is that the number of people who are
| willing to spend all their days fiddling around with computers,
| believing them to be "intelligent", is limited. For avoidance of
| doubt, by "fiddling around", I do not mean sending text messages,
| playing video games, using popular mobile apps and what not. I
| mean grunt work, programming.
|
| This is before one even considers that only a limited number
| of people may actually have the aptitude. Many might spend
| large periods of
| time trying and failing, writing one line of code per day or
| something. Companies could be bloated with thousands of
| "engineers" who can be laid off immediately without any
| noticeable effect on the company's bottom line. That does not
| mean they can replace the small number of people who really are
| essential.
|
| Being willing does not necessarily equate to being able. Still, I
| submit that even the number of willing persons is limited. It's a
| shame they cannot agree to do the right thing. Perhaps they lack
| the innate sense of ethics needed for such agreement. That they
| spend all their days fiddling with computers instead of
| interacting with people is not surprising.
| photochemsyn wrote:
| > "What you said just now--the idea that we fall into a kind of
| trance--what I'm hearing you say is that's distracting us from
| actual threats like climate change or harms to marginalized
| people."
|
| Is the argument here that people are rather passive and go along
| with whatever the system serves up to them, hence they're liable
| to 'fall into a trance'? If so, then the problem is that people
| are passive, and it doesn't really matter if they're passively
| watching television or passively absorbing an AI-engineered
| social media feed optimized for advertiser engagement and
| programmed consumption, is it?
|
| If you want to use LLMs to get information about fossil-fueled
| global warming from a basic scientific perspective, you can do
| that, e.g.:
|
| > "Please provide a breakdown of how the atmospheric
| characteristics of the planets Venus, Earth, and Mars affects
| their surface temperature in the context of the Fourier and
| Manabe models."
|
| If you want to examine the various approaches civilizations have
| used to address the problem of economic and social
| marginalization of groups of people, you could ask:
|
| > "How would [insert person here] address the issue of economic
| and social marginalization of groups of people in the context of
| an industrial society experiencing a steep economic collapse?"
|
| Plug in Ayn Rand, Karl Marx, John Maynard Keynes, etc. for
| contrasting ideas. What sounds best to you?
|
| It's an incredibly useful tool, and people can use it in many
| different ways - if they have the motivation and desire to do so.
| If we've turned into a society of brainwashed apathetic zombies
| passively absorbing whatever garbage is thrown our way by state
| and corporate propagandists, well, that certainly isn't the fault
| of LLMs. Indeed LLMs might help us escape this situation.
| superkuh wrote:
| AIs aren't the AIs. The artificial intelligences with non-human
| motives are the non-human legal persons: corporations themselves.
| They've already done a lot of damage to society. Corporate
| persons should not have the same rights as human persons.
| 998244353 wrote:
| What rights, specifically, do you propose to eliminate?
| brigadier132 wrote:
| AI's biggest risk is governments with militaries controlling
| them. Mass human death and oppression has always been carried out
| by governments.
| wpietri wrote:
| Yes and no. As with the current example in Russia, dangerous
| governments are closely allied with the economic/industrial
| elite. Beckert's _Empire of Cotton_ is a good look at the
| history of what he calls "war capitalism", where there's a
| close alliance and a lot of common effort between the nominally
| separate spheres of government and industry.
| eachro wrote:
| At this point there are quite a lot of companies training these
| massive LLMs. We're seeing startups with models that are not
| quite GPT-4 level but close enough to GPT-3.5 pop up on a near
| daily basis. Moreover, model weights are being released all the
| time, giving individuals the opportunity to tinker with them and
| further release improved models back to the masses. We've seen
| this with the llama/alpaca/alpaca.cpp/alpaca-lora releases not
| too long ago. So I am not at all worried about this risk of
| corporate control.
| mmaunder wrote:
| Much of today's conversation around AI mirrors conversations that
| occurred at the dawn of many other technological breakthroughs.
| The printing press, electricity, radio, the microprocessor, PCs
| and packaged software, the Internet and the Web. Programmers can
| now train functions rather than hand coding them. It's just
| another step up.
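|
| (To make "training a function" concrete, a toy sketch: instead
| of hand-coding f(x) = 2x + 1, recover it from examples with
| plain gradient descent. Illustrative only.)
|
|     # Learn f(x) = 2x + 1 from examples instead of coding it.
|     data = [(x, 2 * x + 1) for x in range(-5, 6)]
|     w, b = 0.0, 0.0   # trainable parameters
|     lr = 0.01         # learning rate
|
|     for _ in range(2000):
|         for x, y in data:
|             err = (w * x + b) - y   # prediction error
|             w -= lr * err * x       # gradient step for w
|             b -= lr * err           # gradient step for b
|
|     print(w, b)   # ends up near 2.0 and 1.0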
| fat-chunk wrote:
| I was at a conference called World Summit AI in 2018, where a
| vice president of Microsoft gave a talk on progress in AI.
|
| I asked a question after his talk about the responsibility of
| corporations in light of the rapidly increasing sophistication of
| AI tech and its potential for malicious use (it's on YouTube if
| you want to watch his full response). In summary: he said that
| it's the responsibility of governments and not corporations to
| figure out these problems and set the regulations.
|
| This answer annoyed me at the time, as I interpreted it as a "not
| my problem" kind of response, an attempt to absolve tech
| companies of any damage caused by the rapid development of
| dangerous technology that regulators cannot keep up with.
|
| Now I'm starting to see the wisdom in his response, even if this
| is not what he fully meant, in that most corporations will just
| follow the money and try to be the first movers when there is an
| opportunity to grab the biggest share of a new market, whether we
| like it or not, regardless of any ethical or moral implications.
|
| We as a society need to draw our boundaries and push our
| governments to wake up and regulate this space before
| corporations (and governments) cause irreversible negative
| societal disruption with this technology.
| HybridCurve wrote:
| >We as a society need to draw our boundaries and push our
| governments to wake up and regulate this space before
| corporations (and governments) cause irreversible negative
| societal disruption with this technology.
|
| This works in functioning democracies, but not so much in
| flawed ones.
|
| >he said that it's the responsibility of governments and not
| corporations to figure out these problems and set the
| regulations.
|
| In the US, they will say things like this while simultaneously
| donating to PACs, leveraging the benefits of Citizens United,
| and lobbying for deregulation. It's been really tough to get
| either side of the political spectrum to hold tech accountable
| for anything. Social media companies especially, since they not
| only have access to so much sentiment data, but also are
| capable of altering how information propagates between social
| groups.
| morkalork wrote:
| Too bad America's view on society is so hollow. The very idea
| of _building_ a society that serves its people is seemingly
| dead on arrival.
| wahnfrieden wrote:
| It's because the state is also an oppressive force. I wonder
| why you come across lots of libertarians and lots of
| socialists but not so much the combination of the two (toward
| realities alternative to both state and capital).
| giraffe_lady wrote:
| This is like wondering why there are a lot of floods and a
| lot of forest fires but not really both at the same time.
| morkalork wrote:
| Hey giraffe lady, have you ever owned a pair of giraffe
| patterned heels?
| worik wrote:
| There are a lot of libertarian socialists. You need to get
| out more!
|
| Seriously, libertarian socialism is another word for
| anarchism.
| turtleyacht wrote:
| I dunno. Are they though?
|
| This link rejects the equivalence, but I don't really
| know. Could you clarify the distinction?
|
| > _socialist economics but have an aversion to vanguard
| parties. Anarchy is a whole lot more than economics._
|
| > _To identify as an anarchist is to take a strong stance
| against all authority, while... other such milquetoast
| labels take no such stance, leaving the door open to all
| kinds of authority, with the only real concern being
| democracy in the workplace._
|
| https://theanarchistlibrary.org/library/ziq-are-
| libertarian-...
| int_19h wrote:
| The combination of "libertarian" and "socialist" is
| "anarchist", at least if you use the word in its original
| meaning.
| turtleyacht wrote:
| History is not quite like computing, at least in terms of
| having a compiler, where syntax/semantics matter (and are
| machine-verified).
|
| Other than digesting a whole ton of history at once--or
| debating ChatGPT--how do you establish your axis or
| "quadrants" of political lean?
|
| I wish there were a way to systematically track voting
| records. We're never in the room where it happens, so it
| can be difficult to tell if a political compromise is for
| a future favor, or part of a consistent policy position.
| morkalork wrote:
| I don't know why, but my spouse is a health care worker in
| long-term care for the elderly. She tells me how nearly
| everyone in their care is either in mental decline or in
| physical decline, never both. And those that are both don't
| live long.
|
| Anyways, since the state is a tool of oppression and the
| state should reflect the will of the people, it'd be nice
| if people chose negative things to oppress like extreme
| inequality, rampant exploitation, and extortion (looking at
| you, healthcare system, aka "your money or your life" highway
| robbers).
| majormajor wrote:
| And yet if non-government-level American society wasn't so
| constantly self-focused at the expense of others, the state
| would be far less needed!
|
| Are other countries as dysfunctional in terms of voting
| themselves policies that aren't consistent with our
| internal behaviors? E.g., "someone" should do something
| about homelessness, but I don't want to see it?
| morkalork wrote:
| Someone should do something about it but _I_ don't want
| to see it, pay for it, or be responsible for it. A modest
| proposal, if you will.
| ssklash wrote:
| This I think is a result of the mythology of "rugged
| individualism" so prevalent in the US.
| ftxbro wrote:
| Just a heads up, when the moderator 'dang' sees this he's going
| to put it into his highlights collection that tracks people who
| share identifying stories about themselves. I hope that's OK
| with you. https://news.ycombinator.com/highlights
| turtleyacht wrote:
| I think /highlights just shows the top upvoted, parent-level
| comment per thread. Do you observe that too?
|
| It may be coincidence that PII just happens to be in there.
| Folks love a good yarn, and establishing context helps.
| erikerikson wrote:
| Do you have a corroborating source for that rule?
| generalizations wrote:
| What kind of axe are you grinding? That's totally not what
| the highlights are about, and it's obvious from reading
| through them.
| quickthrower2 wrote:
| You were right to be annoyed. It is a very sad answer. Almost an
| "if I didn't peddle on this street corner someone else would".
| The answer is a cop out.
|
| Individual citizens have much less power than big tech because
| they don't have the lobbying warchest, the implied credibility,
| the connections, or even the intelligence (as in the sheer
| number of academics/researchers). Companies are run by people
| with a conscience or not, and those people should lead these
| pushes for the right thing. They are in the ideal spot to do
| so.
| gcheong wrote:
| It sounds great until you realize that, in the US at least, the
| corporations spend a lot of money lobbying Washington to have
| the rules set in their favor if not eliminated. Fix that first
| and then I will believe we can have a government that would
| actually try to place appropriate ethical boundaries on
| corporations.
| turtleyacht wrote:
| If more people were directly invested in laws favoring their
| means and ends, would they take the time to lobby too?
|
| Folks certainly outnumber corporations (?), and they could
| create representatives for their interests.
|
| Maybe the end-to-end process--from idea to law--is less
| familiar to most. Try explaining how a feature gets into
| production to a layperson, for example :)
|
| Maybe we need more "skeletal deployments" in action, many dry
| runs, accreted over time, to enough folks. This could be done
| virtually and repeated many times before even going there.
|
| Just seems like a lot of work, too.
| satisfice wrote:
| Exactly.
|
| I attended a public meeting of lawyers on the revision of the
| Uniform Commercial Code to make it easier for companies to ship
| bad software without getting sued by users. When I objected to
| some of the mischaracterizations about quality and testing that
| were being bandied around, the lawyer in charge said "well that
| doesn't matter, because a testing expert would never be allowed
| to sit on a jury in a software quality case."
|
| I was, of course, pissed off about that. But he was right. Laws
| about software are going to be made and administered by people
| who don't know much about software. I was trying to talk to
| lawyers who represent companies, but that was the wrong group.
| I needed to talk to lawmakers, themselves, and lawyers who
| represent users.
|
| Nothing about corporations governs them except the rule of law.
| The people within them are complicit, reluctantly or not.
| pvillano wrote:
| The paperclip maximizer is a thought experiment described by
| Swedish philosopher Nick Bostrom in 2003.
|
| > Suppose we have an AI whose only goal is to make as many
| paper clips as possible. The AI will realize quickly that it
| would be much better if there were no humans because humans
| might decide to switch it off. Because if humans do so, there
| would be fewer paper clips. Also, human bodies contain a lot of
| atoms that could be made into paper clips. The future that the
| AI would be trying to gear towards would be one in which there
| were a lot of paper clips but no humans.
|
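| (A toy sketch of the same failure mode: an objective that
| scores only clips treats every other resource, "humans"
| included, as feedstock. Numbers are made up.)
|
|     # Toy maximizer: the reward counts only clips, so
|     # everything else is just raw material to convert.
|     world = {"iron": 100, "humans": 10}
|     clips = 0
|
|     while any(world.values()):
|         # greedily consume whatever resource remains
|         resource = max(world, key=world.get)
|         world[resource] -= 1
|         clips += 1   # objective rises; nothing else is valued
|
|     print(clips, world)   # 110 clips, an empty world
|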
| Corporations are soulless money maximizers, even without the
| assistance of AI. Today, corporations perpetuate mass
| shootings, destroy the environment, rewire our brains for
| loneliness and addiction, all in the endless pursuit of money.
| Matrixik wrote:
| Universal Paperclips the game:
| https://www.decisionproblem.com/paperclips/index2.html
| TurkishPoptart wrote:
| Warning, this will steal 15+ hours of your life, and it's
| not even fun.
| GolfPopper wrote:
| Yep. We've had AI for years - it's just slow, and uses human
| brains as part of its computing substrate.
|
| Or, to look at it from another angle, modern corporations are
| awfully similar to H.P. Lovecraft's Great Old Ones.
| jacooper wrote:
| It's not artificial though, it's just intelligence.
| cmarschner wrote:
| I have found that companies that are owned by foundations are
| the better citizens, as they think more long-term and are
| more open to goals that, while still focusing on
| profit, might also take other considerations into account.
| tomrod wrote:
| I like that. How do I set one up?
| bostik wrote:
| > _Corporations are soulless money maximizers, even without
| the assistance of AI._
|
| Funny you should say that. Charlie Stross gave a talk on that
| subject - or more accurately, read one out loud - at CCC a
| few years back. It goes by the name "Dude, you broke the
| future". Video here:
| https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future
|
| His thesis is that corporations are _already_ a form of AI.
| While they are made up of humans, they are in fact all
| optimising for their respective maximiser goals, and the
| humans employed by them are merely agents working towards
| that aim.
|
| (Full disclosure: I submitted that link at the time and it
| eventually sparked quite an interesting discussion.)
| slg wrote:
| > While they are made up of humans
|
| I don't know why we always gloss over this bit.
| Corporations don't have minds of their own. People are
| making these decisions. We need to get rid of this notion
| that a person making an amoral or even immoral decision on
| behalf of their employer clears them of all culpability in
| that decision. People need to stop using "I was just doing
| my job" as a defense of their inhumane actions. That logic
| is called the Nuremberg Defense because it was the excuse
| literal Nazis used in the Nuremberg trials.
| int_19h wrote:
| The way large organizations are structured, there's
| rarely any particular person making a hugely
| consequential decision all by themselves. It's split into
| much smaller decisions that are made all across the org,
| each of which is small enough that arguments like "it's
| my job to do this" and "I'm just following the rules"
| consistently win because the decision by itself is not
| important enough from an ethical perspective. It's only
| when you look at the system in aggregate that it becomes
| evident.
|
| (I should also note that this applies to _all_
| organizations - e.g. governments are as much affected by
| it as private companies.)
| slg wrote:
| > I should also note that this applies to all
| organizations
|
| Yes, including the Nazi party. Like I said, this is the
| exact defense used in Nuremberg. People don't get to
| absolve themselves of guilt just because they weren't the
| ones metaphorically or literally pulling the trigger when
| they were still knowingly a cog in a machine of genocide.
| mindslight wrote:
| You're not really engaging with the problem. Sure, one
| can take your condemnation to heart, and reject working
| for most corporations, just like an individual back in
| Nazi Germany _should_ have avoided helping the Nazis. But
| the fact is that most people won't.
|
| Since assigning blame harder won't actually prevent this
| "nobody's fault" emergent behavior from happening, the
| interesting/productive thing to do is forgo focusing on
| collective blame and analyze the workings of these
| systems regardless.
| NumberWangMan wrote:
| And this is why I'm really scared of AGI. Because we can
| see that corporations, _even though_ they are composed of
| humans who _do_ care about things that humans care about,
| _still_ do things that end up harming people.
| Corporations need humanity to exist, and still fall into
| multi-polar traps like producing energy using fossil fuels,
| where we require an external source of coordination.
|
| AGI is going to turbo-charge these problems. People have to
| sleep, and eat, and lots of them aren't terribly efficient
| at their jobs. You can't start a corporation and then make
| a thousand copies of it. A corporation doesn't act faster
| than the humans inside it, with some exceptions like
| algorithmic trading, which even then is limited to an
| extremely narrow sphere of influence. We can, for the most
| part, understand why corporations make the decisions they
| make. And corporations are not that much smarter than
| individual humans, in fact, often they're a lot dumber (in
| the sense of strategic planning).
|
| And this is just if you imagine AGI as being obedient, not
| having a will of its own, and doing exactly what we ask it
| to, in the way we intended, not going further, being
| creative only with very strict limits. Not improving sales
| of potato chips by synthesizing a new flavor that also
| turns out to be a new form of narcotic ("oops! my bad").
| Not improving sales of umbrellas by secretly deploying a
| fleet of cloud-seeding drones. Not improving sales of anti-
| depressants using a botnet to spam bad news targeting
| marginally unhappy people, or by publishing papers about
| new forms of news feed algorithms with subtle bugs in an
| attempt to have Google and Facebook do it for them. Not
| gradually taking over the company by recommending hiring
| strategy that turns out to subtly bias hiring toward people
| who think less for themselves and trust the AI more, or by
| obfuscating corporate policy to the point where humans
| can't understand it so it can hide rules that allow it to
| fire any troublemakers, or any other number of clever
| things that a smart, amoral machine might do in order to
| get the slow, dim-witted meat-bags out of the way so it
| could actually get the job done.
| turtleyacht wrote:
| Well, if it means anything I think there may be
| legislation to "bring my own AI to work," so to speak,
| recognizing the importance of having a diversity of ideas
| --if only because it would disadvantage labor to be
| discriminated against.
|
| "I didn't understand what was signed" being the watchword
| of AI-generated content.
|
| Someday, perhaps. Sooner than later.
| erikerikson wrote:
| > Corporations are soulless money maximizers
|
| This seems stated as fact. That's common. I believe it is
| actually a statement of blind faith. I suspect we can at
| least agree that it is a simplification of underlying
| reality.
|
| Financial solvency is eventually a survival precondition.
| However, survival is necessary but not sufficient for
| flourishing.
| cwkoss wrote:
| Many corporations choose corporate survival over the
| survival of their workers and customers.
|
| Humans shouldn't be OK with that.
| erikerikson wrote:
| So far as I can tell, most aren't. I think you're right
| that we get a better as well as more productive and
| profitable world if no humans are okay with that.
| robocat wrote:
| > all in the endless pursuit of money
|
| Money is not the goal. Optimisation is the goal. Anything
| with different internal actors (e.g. a corporation with
| executives) has multiple conflicting goals and different
| objectives apart from just money (e.g. status, individual
| gains, political games, etcetera). Laws are constraints on
| the objective functions seeking to gain the most.
|
| We use capitalism as an optimisation function - creating a
| systematic proxy of objectives.
|
| Money is merely a symptom of creating a system of seeking
| objective gain for everyone. Money is an emergent property of
| a system of independent actors all seeking to improve their
| lot.
|
| To remove the problems caused by corporations seeking money,
| you would need to make it so that corporations did not try to
| optimise their gains. Remove optimisation, and you also
| remove the improvement in private gains we individually get
| from their products and services. Next thing you write a
| Unabomber manifesto, or throw clogs into weaving machines.
|
| The answer that seems to be working at present is to restrict
| corporations and their executives by using laws to put
| constraints on their objective functions.
|
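| (In optimisation terms a law is literally a constraint. A toy
| sketch with made-up numbers: maximise a profit objective with
| and without a pollution cap.)
|
|     # Toy objective: profit is concave in output q; the "law"
|     # caps output because pollution grows with q.
|     def profit(q):
|         return 10 * q - q * q / 20
|
|     outputs = range(0, 201)
|     unconstrained = max(outputs, key=profit)
|     capped = max((q for q in outputs if q <= 50), key=profit)
|
|     print(unconstrained, capped)   # 100 uncapped, 50 capped
|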
| Our legal systems tend to be reactive, and some countries
| have sclerotic systems, but the suggested alternatives I have
| heard[1] are fairly grim.
|
| It is fine to complain about corporate greed (the simple
| result of _our_ economic system of incentives). I would like
| to know your suggested alternative, since hopefully that
| shows you have thought through some of the implications of
| why our systems are just as they currently are (Chesterton's
| fence), plus a suggested alternative allows us all to chime
| in with hopefully intelligent discourse - perhaps gratifying
| our intellectual curiosity.
|
| [1] Edit: metaphor #0: imagine our systems as a massively
| complex codebase and the person suggesting the fix is a
| plumber that wants to delete all the @'s because they look
| pregnant. That is about the level of most public economic
| discourse. Few people put the effort in to understand the
| fundamental science of complex systems - even the "simple"
| fundamental topics of game theory, optimisation, evolutionary
| stable strategies. Not saying I know much, but I do attempt
| to understand the underlying reasons for our systems, since I
| believe changing them can easily cause deadly side effects.
| FrustratedMonky wrote:
| This is all correct, and the standard capitalist party
| line. What it misses is that it conflates Money and
| Optimization. Money is absolutely the complete and only
| goal, and yes, corporations Optimize to make more money.
| guard rails on the optimization. It was only a few decades
| ago that rivers were catching fire because it was cheaper
| to just dump waste. There will always be some mid-level
| manager that needs to hit a budget and will cut corners, to
| dump waste or cut baby formula with poison, or skip
| cleaning cycles and kill a bunch of kids with tainted
| peanut butter (yes, that happened).
|
| But, you are correct, there really isn't an answer.
| Government is supposed to be the will of the people to put
| structure, through laws/regulation, on how they want to
| live in a society, to constrain the Corporation.
| Corporations will always maximize profit and we as a
| society have chosen that the goal of Money is actually the
| most important thing to us. So guess we get what we get.
| m4nu3l wrote:
| > Money is absolutely the complete and only goal
|
| If that were the case it would be easy to optimize. Just
| divert all resources to print more money.
| cookingrobot wrote:
| You're sort of reinforcing the point. Only laws prevent
| companies from running printing presses to print money.
| worik wrote:
| Money is not currency
| FrustratedMonky wrote:
| Ah, the old "let's play at being a stickler on vocabulary
| to divert attention from the point". So let's grant the
| point that we could be using seashells for currency, and
| that printed money is a theoretical stand-in for something
| like trust, or a promise, or a million other things that
| theoreticians can dream up. It doesn't change the argument
| at all.
| FrustratedMonky wrote:
| This did use to happen. In the '20s, companies could just
| print more shares and sell them, with no notification to
| anybody that they had diluted them. Until there were laws
| created to stop it.
| FrustratedMonky wrote:
| To complete my thought. Yes Money is used as an
| optimization function, its just that we have chosen Money
| as the Goal of our Money Optimization function. We aren't
| trying to Optimize 'resources' as believed, that is just
| a byproduct that sometimes occurs, but not necessarily.
| robocat wrote:
| That seems backwards. There is an optimisation system of
| independent actors, and money is emergent from that. You
| could get rid of money, but you just end up with another
| measure.
|
| > we as a society have chosen that the goal of Money is
| actually the most important thing to us
|
| I disagree. We enact laws as constraints because our
| society says that many other things are more important
| than money. Often legal constraints cost corporations
| money.
|
| Here are a few solutions I have heard proposed:
|
| 1: stop progress. Opinion: infeasible.
|
| 2: revert progress back to a point in the past. Opinion:
| infeasible.
|
| 3: kill a large population. Opinion: evil and probably
| self-destructive.
|
| 4: revolution - completely replace our systems with
| different systems. Opinion: seen this option fail plenty
| and hard to find modern examples of success. Getting rid
| of money would definitely be wholesale revolution.
|
| 5: progress - hope that through gradual improvements we
| can fix our mistakes and change our systems to achieve
| better outcomes and (on topic) hopefully avoid
| catastrophic failures. Opinion: this is the default
| action of our current systems.
|
| 6: political change - modify political systems to make
| them effective. Opinion: seen failures in other
| countries, but in New Zealand we have had some so-far
| successful political reforms. I would like the US to
| change its voting system (maybe STV) because the current
| bipartisan system seems to be preventing necessary
| legislation - we all need better checks and balances
| against the excesses of capitalism. I don't even get a
| vote in the USA, so my options to effect change in the
| USA are more limited. In New Zealand we have an MMP
| voting system: that helped to somewhat fix the bipartisan
| problem, but unfortunately MMP gave us unelected (list)
| politicians which is arse. The biggest strength of
| democracy is voting those we don't like out (every
| powerful leader or group wants to stay in power).
|
| 7: world war - one group vying for power to enlighten the
| other group. Opinion: if it happens I hope me and those I
| love are okay, but I would expect us all to be fucked
| badly even in the comparatively safe and out-of-the-way
| New Zealand.
| FrustratedMonky wrote:
| Think you are missing the last 10 years. The US has
| backtracked on regulations. From Bush to Trump,
| Republicans have made it a party plank issue to de-
| regulate anything they can. Corporations need to be 'un-
| fettered' and given free rein to profit. Why won't the
| dirty liberals stop trying to make the world better; the
| free market will do it. So regulations are removed, and
| people die. So through elections, we have voted to kill
| ourselves, to make the profit motive 'the most important
| thing to us'.
| schiffern wrote:
| >Corporations are [intelligent agents _non-aligned_ with
| human wellbeing], even without the assistance of AI.
|
| Just to put a fine point on it!
| usrusr wrote:
| And it's going almost unchallenged because so many of those
| who like talking about not all being rosy in capitalism are
| blinded by their focus on the robber baron model of
| capitalism turning sour.
|
| But the destructively greedy corporation is completely
| orthogonal to that. It could even be completely held by
| working-class retirement funds and the like while still being
| the most ruthless implementation of the soulless money-
| maximiser algorithm. Running on its staff, not on chips. All
| it takes is a modest number of ownership indirections and
| everything is possible.
| generalizations wrote:
| > before corporations (and governments) cause irreversible
| negative societal disruption
|
| I think the cat's out of the bag. These tools have already been
| democratized (e.g. llama) and any legislation will be as futile
| as trying to ban movie piracy.
| dragonwriter wrote:
| IMO, the regulation that is necessary is largely (1) about
| government and government-adjacent use, (2) technology-
| neutral regulation of corporate, government, etc., behavior
| that is informed by the availability of, but not specific to
| the use of, AI models.
|
| Democratization of the technology, IMV, just means that more
| people will be informed enough to participate in the
| discussion of policy; it doesn't impair its effectiveness.
| benreesman wrote:
| I'm just completely at a loss for how so many people ostensibly
| so highly qualified even start with absurd, meaningless terms
| like "Artificial General Intelligence", and then go on to
| conclude that there's some kind of Moore's Law going on around an
| exponent, an exponent that fucking Sam Altman has publicly
| disclaimed. The same showboat opportunist that has everyone
| changing their drawers over the same 10-20% better that these
| things have been getting every year since 2017 is managing
| investor expectations down, and everyone is losing their shit.
|
| GPT-4 is a wildly impressive language model that represents an
| unprecedented engineering achievement as concerns any kind of
| trained model.
|
| It's still regarded. It makes mistakes so fundamental that I
| think any serious expert has long since decided that forcing
| language arbitrarily hard is clearly not the path to arbitrary
| reasoning. It's at best a kind of accessible on-ramp into the
| latent space where better objective functions will someday not
| fuck up so much.
|
| Is this a gold rush thing at the last desperate end of how to get
| noticed cashing in on hype? Is it legitimate fear based on too
| much bad science fiction? Is it pandering to Sam?
|
| What the fuck is going on here?
| drewcoo wrote:
| > so many people ostensibly so highly qualified
|
| > Is this a gold rush thing at the last desperate end of how to
| get noticed cashing in on hype?
|
| https://www.themarginalian.org/2016/01/12/the-confidence-gam...
|
| That article has a wonderful quote from Mark Twain. In part,
| this:
|
| "The con is the oldest game there is. But it's also one that is
| remarkably well suited to the modern age. If anything, the
| whirlwind advance of technology heralds a new golden age of the
| grift. Cons thrive in times of transition and fast change, when
| new things are happening and old ways of looking at the world
| no longer suffice."
| nico wrote:
| No corporation controls AI
|
| AI is open
|
| AI is the new Linux
|
| And it's people in control, not corporations
| 13years wrote:
| I wouldn't constrain it to only corporations, but all entities.
|
| Ultimately, most of the dangers, at least those close enough to
| reason about, are all risks that come about from how we will use
| AI on ourselves.
|
| I've described those and much more in the following:
|
| "Yet, despite all the concerns of runaway technology, the
| greatest concern is more likely the one we are all too familiar
| with already. That is the capture of a technology by state
| governments and powerful institutions for the purpose of social
| engineering under the guise of protecting humanity while in
| reality protecting power and corruption of these institutions."
|
| https://dakara.substack.com/p/ai-and-the-end-to-all-things
| krono wrote:
| Relevant recent announcement by Mozilla regarding their
| acquisition of an e-commerce product/review scoring "AI" service,
| with the intent to integrate it into the core Firefox browser:
| https://blog.mozilla.org/en/mozilla/fakespot-joins-mozilla-f...
|
| Mozilla will be algorithmically profiling you and your actions on
| covered platforms, and if it ever decides you are a fraud or
| invalid for some reason, it will very conveniently advertise this
| accusation to all its users by default. Whether you will be able
| to sell your stuff or have your expressed opinion of a product be
| appreciated and heard by Firefox users will be in Mozilla's
| hands.
|
| A fun fact that serves to show what these companies are willing
| to throw overboard just to gain the smallest of edges, or perhaps
| simply to display relevance by participating in the latest
| trends: the original company's business strategy was essentially
| Mozilla's Manifesto in reverse, and included such things as
| selling all collected data to all third parties (at least their
| policies openly admitted to this). The person behind all that is
| now employed by Mozilla, the privacy proponent.
___________________________________________________________________
(page generated 2023-05-06 23:01 UTC)