[HN Gopher] We need a Butlerian Jihad against AI
___________________________________________________________________
We need a Butlerian Jihad against AI
Author : erikhoel
Score : 68 points
Date : 2021-07-07 21:41 UTC (1 hour ago)
(HTM) web link (erikhoel.substack.com)
(TXT) w3m dump (erikhoel.substack.com)
| onethought wrote:
| Wasn't it Andrew Ng that said something like:
|
| Worrying about superintelligent AI is like worrying about
| overcrowding/overpopulation on Mars.
| echelon wrote:
| Andrew Ng is a luminary, but this isn't a great analogy.
|
| Humans aren't adapted to a Mars environment, and there's little
| incentive for us to move there.
|
| AI has so many practical uses that everyone is getting in the
| game and pushing the envelope. A huge amount of money is being
| spent on ML, and a wealth of expertise is being developed.
| onethought wrote:
| I can teach a human to drive in < 1 day. An AI... so far 10+
| years. I agree there is crazy progress and investment, but
| it's also not entirely certain how far we can push AI.
| sammalloy wrote:
| I think it would be easier to emphasize and teach the
| philosophy and ethics of computer science. The problem is, they
| can't even do this for business students, so focusing on the
| existential risk of AI alone is a drop in the ocean. If you
| follow the topic of, let's say, sustainability, then you know
| this is a huge blind spot in economics and business. Look at
| palm oil production as just one example. The threat to
| Indonesia's environment has been recognized for decades, yet
| the world refuses to legislate against the multinational
| companies that are destroying the forests for palm oil. Again,
| this is only a small part of the problem, and it's deeply
| connected to many other problems, such as the profitable
| harvesting of Indonesian wood from this destruction, which
| recently showed up in Japan as a source for the Olympic Games
| infrastructure.
| ampdepolymerase wrote:
| Because ethics in engineering is mostly about codified rule
| compliance rather than the deep navel-gazing undertaken by
| actual philosophy students. The latter also rarely yields
| useful answers. Crack open one of the "professional"
| engineering ethics guidebooks and 90% of it is _thou shalt not
| build a bad bridge/engine/circuit because it is very bad, and
| you should report your boss to the authorities if they do._ I
| never understood the moral uppitiness and delusion those
| engineers have. If a bridge falls it is bad, unless it is
| explicitly designed to kill enemy soldiers, in which case it is
| good. You can extend this to drones and enemy schoolbuses and
| the entire defence industry if you want. The so-called
| "engineering ethics" field should rebrand and follow the
| financial industry. You follow guidelines because the
| _compliance department_ demands it, to cover the company's
| derriere. If you don't, your company will get fined. Skip the
| self-righteous morality, because its only purpose is to reduce
| the principal-agent problem for the managerial and asset-owning
| class.
| sammalloy wrote:
| I understand your POV, but in practice, there is far more
| nuance to the "self-righteous morality" you describe. Take
| the discussion about the ethics of gene editing, for example --
| specifically, the editing of the human germline.
| dwighttk wrote:
| > The so-called "engineering ethics" field should rebrand and
| follow the financial industry
|
| Wait, what? You mean the finance industry's ethics are a
| desirable direction?
| devindotcom wrote:
| Regardless of the merit of this particular piece, the portion of
| Butler's _Erewhon_ comprising "The Book of the Machines" makes
| for very interesting and forward thinking reading.
|
| You can skip directly to it here:
|
| https://www.gutenberg.org/files/1906/1906-h/1906-h.htm#chap2...
|
| But the context in the story is fairly important - it takes place
| in a society that has essentially already carried out its own
| Butlerian Jihad and taken it too far. Wonderful book by the way.
|
| I'm starting to think that the religious sects in the U.S. that
| laboriously evaluate a technology before incorporating it into
| their communities have a pretty good thing going. Sadly it's not
| really practical at a larger scale, and the suffering that could
| be avoided by adopting something early rather than late is
| difficult to estimate. Ah well!
| dr_dshiv wrote:
| Aren't we inappropriately reifying AI? AI doesn't really exist,
| other than as an academic field of research. For instance, here
| is how the European Union is trying to define AI for regulation
| purposes. Note how broad it is!
|
| "artificial intelligence system (AI system) means software that
| is developed with one or more of the techniques and approaches
| listed in Annex I and can, for a given set of human-defined
| objectives, generate outputs such as content, predictions,
| recommendations, or decisions influencing the environments they
| interact with"
|
| "Annex 1: Machine learning approaches, including supervised,
| unsupervised and reinforcement learning, using a wide variety of
| methods including deep learning; (b)Logic- and knowledge-based
| approaches, including knowledge representation, inductive (logic)
| programming, knowledge bases, inference and deductive engines,
| (symbolic) reasoning and expert systems; (c) Statistical
| approaches, Bayesian estimation, search and optimization
| methods."
| pphysch wrote:
| Humans won't become "slaves" to superhuman AI any more than human
| cells are "slaves" to the human body.
|
| Unexamined anthropocentric hogwash.
| stevenalowe wrote:
| Good read. Not buying it, but interesting. I suspect a broader
| definition of evolution includes constructing tools and adapting
| to them (do humans walk upright because doing so leaves hands
| free for rocks and sticks?). Our AI assistants may evolve - with
| help - into robot overlords or some other Luddite/dystopian
| scenario, but I doubt it. Mutual cooperation is beneficial, and
| much easier when not competing for the same resources (food,
| land, water, mates).
| duped wrote:
| I think this is interesting fodder for science fiction authors
| but lacks concrete examples of what exactly it would mean to
| regulate or engage in a "Butlerian Jihad against AI."
|
| I know things that I would like to see. Like humans "in the loop"
| (as opposed to "on the loop" or "out of the loop") for certain
| classes of decision making -- for example, target selection for
| military strikes or law enforcement. Or what kinds of information
| we use to train decision-making models: feed ML a racist data
| set and you get a racist algorithm, and use that algorithm to
| decide who gets mortgages and you'll get systematic depression
| of generational wealth along racial lines (a toy sketch follows).
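|
| A toy sketch of that mechanism (hypothetical numbers, not a
| real lending model) -- the model is never told to discriminate,
| it just reproduces the bias baked into the historical labels:
|
|     # Hypothetical: train on biased historical approvals and the
|     # model reproduces the bias, no racist intent required.
|     import numpy as np
|     from sklearn.linear_model import LogisticRegression
|
|     rng = np.random.default_rng(0)
|     n = 10_000
|     group = rng.integers(0, 2, n)    # protected attribute
|     income = rng.normal(50, 10, n)   # legitimate feature
|
|     # Historical labels: same income cutoff for everyone, but
|     # group 1 was arbitrarily denied half the time -- the bias.
|     approved = (income > 45) & ~((group == 1) & (rng.random(n) < 0.5))
|
|     X = np.column_stack([income, group])
|     model = LogisticRegression().fit(X, approved)
|
|     for g in (0, 1):
|         rate = model.predict(X[group == g]).mean()
|         print(f"group {g}: predicted approval rate {rate:.2f}")
|     # group 1's predicted approval rate comes out far lower.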
|
| But this isn't some crusade on AI because it's AI; it has to be
| based in reality - what AI or ML is being used for, what
| information it operates on, what decisions it is used to make,
| and ultimately the human beings that are responsible for those
| decisions. The reason it is so hard to convince people as to how
| we should legislate (or otherwise regulate) AI is that every
| conversation drifts into science fiction and not concrete
| examples of the ethical issues _today_ and what can be done
| _today_. Otherwise it comes off as Luddite fearmongering.
| jonstaab wrote:
| The crux of his argument, and its downfall, at least in the
| short term, is:
|
| > All to say: discussions about controlling or stopping AI
| research should be deontological--an actual moral theory or
| stance is needed
|
| I don't see this happening in the near future, at least in the
| West. We're in the middle of total epistemological meltdown, and
| only capable of reasoning from utility, or some insane framework
| like critical theory. If we get to Strong AI in our lifetimes,
| we're just going to spawn a bunch of reductive, racist robots.
| BitwiseFool wrote:
| I like to think of AI-Genesis through the lens of what humanity
| has already done through domestication. We take something
| primitive and progressively adapt it to serve a greater utility.
| I think working dogs are the most interesting example of this.
| We've taken a species, the wolf, and made it smarter while also
| making it _want_ to do work, learn tricks, and follow orders. Of
| course, you still need to train the animal for optimal results
| but even breeds like collies know how to herd instinctively.
|
| Anyways.
|
| Let's assume the best and brightest dog breeders endeavor to make
| German Shepherds as intelligent as they possibly can. Would the
| same ethical debates about what constitutes a 'mind' come into
| play? What would happen if the dogs became smart enough to make
| their own mating decisions? Would we be worried about them
| turning on us once they get close to human level intellect? Would
| it be immoral to make these dogs work? Or, would _not_ letting
| them work be considered immoral?
|
| This is just food for thought. But I suspect AI's capabilities
| will grow much in the same way other domesticated species have
| grown into the specialized roles we've crafted for them.
| thewakalix wrote:
| I don't know about everyone mentioned, but Yudkowsky in
| particular rejects Pascal's wager
| (https://www.lesswrong.com/posts/ebiCeBHr7At8Yyq9R/being-half...)
| and argues (IIRC) that AGI poses a _large_ risk of killing us
| all, rather than an infinitesimal risk.
| [deleted]
| bitwize wrote:
| AI as we know it today is not "in the likeness of a human mind".
| It's statistics with sexy marketing. It's being used to screw
| people over, but so was regular statistics before we got the sexy
| kind. Or haven't you seen the history of the insurance industry?
| dry_soup wrote:
| It says a lot about what injustices we have learned to accept
| that AI alarmists focus almost exclusively on the scifi-level
| hypothetical dangers of AI, rather than the very real problems it
| already causes today.
|
| Those problems largely fall into three categories that I can
| think of off the top of my head at 1am:
|
| 1. AI is a convenient way to justify potentially uncomfortable
| decisions you would have made otherwise (idlewords said it best:
| "AI is money laundering for bias")
|
| 2. AI is being used in situations where it can be a threat to
| life and limb, like the current crop of self-driving(ish) cars
|
| 3. Essentially all of the gains from automating work go to
| people who already have capital
| bpodgursky wrote:
| "AI alarmists" are worried because the worst-case outcomes of
| AGI are mistakes you cannot ever fix.
|
| All the rest of these are bad, but they are problems we can fix
| given time and thought, because we will still exist.
| Extinction-level events decrease all future human utility to
| zero, and so should be treated with extraordinary care.
| est31 wrote:
| Since you're talking about extinction-level events: I'm not
| very confident that, as long as humans have the ultimate say
| over nuclear weapons, we will keep on not ending our species
| with them.
|
| It might in fact be a good idea to establish an AGI overlord
| which watches over humans and enforces nuclear non-
| proliferation policies. If you look through history, it's
| full of war, genocide, and the like. Human societies are
| bound to change, and while the few decades in which we've had
| the nuclear button have been peaceful, it's basically assured
| that we'll press it sometime in the next ten thousand years.
| How will a technological civilization get to be a million
| years old, if not with the help of an AGI that enforces basic
| rules like "don't nuke each other"?
| toolz wrote:
| > And some things are abominations, by the way. That's a
| legitimate and utterly necessary category. It's not just
| religious language, nor is it alarmism or fundamentalism. The
| international community agrees that human/animal hybrids are
| abominations--we shouldn't make them to preserve the dignity of
| the human, despite their creation being well within our
| scientific capability.
|
| This is nothing more than an appeal to authority, no? Even
| within this proposed axiom there's plenty of room to disagree
| (even if the author rejects that there is).
| Manuel_D wrote:
| I really doubt we will have the capability of building "a machine
| in the likeness of a human mind" in my lifetime. Present AI
| systems are essentially just function fitting. Building big
| probabilistic systems that we optimize with loads of training
| data. This is a far, _far_ cry from the "strong AI" that people
| are so afraid of. I really think that people writing these sorts
| of pieces have an understanding of AI that's more rooted in
| fiction than engineering.
|
| It's interesting to ponder how we should go about building and
| interacting with "strong AI", and questioning whether we should
| even build it in the first place. But I really don't think any
| detailed moral frameworks can be built when we have no real idea
| of what a "strong AI" would look like.
|
| Also, it's worth reminding people that in the Dune universe the
| Butlerian Jihad led to millennia of stagnation and control of
| society by a narrow elite: the Spacing Guild, the Bene Gesserit,
| and the Landsraad.
| marcinzm wrote:
| >His point was that there are no odds that would rationally allow
| a parent to bet the life of their child for a quarter. Human
| nature just doesn't work that way, and it shouldn't work that
| way.
|
| People have done this for most of history. Working a farm, for
| example, is non-trivially dangerous and fairly low profit.
| Children often helped on the farm in rural communities from a
| young age. So every time you had your child work the farm you
| were rolling some dice. Over and over. But eventually those
| quarters added up to enough to put food on the table, so it was
| rational to roll them.
|
| This seems like the sort of philosophical argument only someone
| who has grown up in a very privileged life and hasn't
| experienced much else would make. To them it is inherently
| wrong, but to others it is simply part of life. Which makes it
| no longer a universal axiom but a matter of cultural upbringing.
| bopbeepboop wrote:
| Millions of people drive fast daily to get their children to
| school a few minutes sooner.
|
| That's betting their child's life (at low probability) by
| increasing the risk of a serious collision... because they're
| in a mild hurry.
|
| Parents bet their children's lives _all the time_.
|
| The privilege is being taken seriously while saying something
| so afactual.
| 29athrowaway wrote:
| The paradox is:
|
| - Our problems are getting more complex. We need better AI.
|
| - Better AI is a threat.
| z5h wrote:
| I read "A Thousand Brains: A New Theory of Intelligence" by Jeff
| Hawkins, and it is now clear to me that our neocortex (like
| computers, or AI) is just a lot of general-purpose computing
| infrastructure with ZERO aims. Our emotions (which drive
| everything in the interest of gene propagation -- as there is
| no purely logical reason to do anything) would need to be
| intentionally duplicated to give AI a reason to desire anything
| beyond what we instruct it to do.
| brightball wrote:
| Don't come across many Dune references these days.
| EamonnMR wrote:
| During the early days of the pandemic I put up a poster someone
| made which was hand washing instructions but with the text
| replaced with the litany against fear.
| jordemort wrote:
| The art of kanly is still alive in the universe.
| sgt101 wrote:
| well - prepare for a lot of them after the movie comes out...
| toast0 wrote:
| We'll see how much of the referenceable content is kept. And
| how much of an impact the film ends up having.
|
| I'm looking forward to it, but I reserve the right to cling
| to the 1984 release.
| bsanr2 wrote:
| I guarantee the Grim Adventures intro is spammed.
| dang wrote:
| Small related thread from a few days ago:
|
| _We need a Butlerian Jihad against AI_ -
| https://news.ycombinator.com/item?id=27698233 - July 2021 (3
| comments)
| rektide wrote:
| I'd love help finding better ways to say this that don't get
| immediately downvoted off the map, but I'd extend it to a wide
| class of software in general.
|
| > Far more important than the process: strong AI is immoral in
| and of itself. For example, if you have strong AI, what are you
| going to do with it besides effectively have robotic slaves? And
| even if, by some miracle, you create strong AI in a mostly
| ethical way, and you also deploy it in a mostly ethical way,
| strong AI is immoral just in its existence. I mean that it is an
| abomination. It's not an evolved being.
|
| My fear is that most software, even when useful, locks us into
| certain paths. Our situations or needs change, evolve, but we
| will remain subject to inflexible software, to systems we cannot
| make change with us, in the vast majority of cases. Only a very
| few programs strive for better: spreadsheets being one noted
| example.
|
| Ursula Franklin categorized technology as holistic or
| prescriptive[1]: something wielded, or something that directs
| us. Even a social media app which lets us create content -- a
| seemingly holistic act -- still has narrow prescriptive
| channels we cannot escape. We will never be able to understand
| or enhance this tool. We will never understand it, never see
| its nature. This, to me, is the definition of what Erik talks
| about: an abomination, a thing beyond comprehension, a horror
| outside of reality -- the form of existence which is shared.
|
| I feel like we're reaching a crisis where we are creating an
| unknowable, unexplorable world. We're building an anti-
| Enlightenment prison. That, to me, constitutes a deontological
| hazard, and demands that we assess the very act of creating
| unexplorable software.
|
| [Edit: I misread the quoted line as, "what are you going to do
| with it besides effectively be robotic slaves": that, uhh,
| changes the pertinence of our two discussions here notably. I
| think the risk is that strong AI would be used to try to
| architect policies/systems that steer people, which is a
| different concern than Erik's.]
|
| [1]
| https://en.wikipedia.org/wiki/Ursula_Franklin#Holistic_and_p...
| bpodgursky wrote:
| Yes. The inevitable defeatism that will show up in these comments
| is
|
| "Oh, but China will do it anyway, so there's no point."
|
| Which is pretty easily counterable:
|
| 1) We don't know if China would cooperate with a ban. We haven't
| tried. China is very complicated, and if you think you can
| predict what Xi will do, you are wrong.
|
| 2) If AI is truly a global existential and moral crisis, the US
| _could_ absolutely shut down China's AI research capabilities.
| There are a few avenues here, some less pleasant than others.
| Think outside the box.
| onethought wrote:
| True, China will be really receptive to a ban! Maybe we could
| try and get Xi hooked on opiates and then blackmail him to
| stop!
|
| It worked before! What could go wrong!?
| smt88 wrote:
| > _We don't know if China would cooperate with a ban. We
| haven't tried. China is very complicated, and if you think you
| can predict what Xi will do, you are wrong._
|
| This is astonishingly optimistic (or naive). I'm sure China
| would happily agree to cease AI development in public and then
| continue in private.
|
| Getting any country to _actually_ stop AI development is as
| likely as getting them to give up all their nuclear weapons. AI
| is a weapon, both in military and economic contexts.
| erikhoel wrote:
| Agreed, these are both good replies to that (which is probably
| the most common response for some reason)
| killingtime74 wrote:
| It's naive in the extreme. Also way too late
| erikhoel wrote:
| It's really not, since we don't have anything close to a
| strong (or "general") AI. GPT-3 is the closest, but even it
| is just in that direction. So it is quite early. Good time
| to have the conversation.
| killingtime74 wrote:
| lol never heard of nukes or mutually assured destruction? Can't
| even stop Iran or North Korea and you think China can be
| stopped?
| bpodgursky wrote:
| That's exactly my point. I did, explicitly, say it was
| unpleasant, and something you should only do in extremis --
| when you genuinely believe that extinction is a likely
| alternative.
|
| But the US could absolutely glass every semiconductor fab
| (and datacenter, and research facility) in mainland China
| using a combination of conventional + atomic kinetic options.
| killingtime74 wrote:
| I guess we agree then. That would lead to the end of the
| human race
| hprotagonist wrote:
| > "Thou shalt not make a machine in the likeness of a human
| mind."
|
| Great. We aren't!
| guscost wrote:
| Similarly:
|
| "Thou shalt not make a flying machine that gathers its own fuel
| from nature."
|
| Or maybe:
|
| "Thou shalt not make a vehicle that travels on legs."
| [deleted]
| subroutine wrote:
| The problem is, we don't really know how consciousness works (I
| assume consciousness is the part the author takes issue with;
| most of our cognitive faculties in isolation are not that
| special). We don't even have a great definition of consciousness,
| or good tests for it, or know whether it is a linear spectrum, or
| if it emerges abruptly with the evolution of certain reasoning
| and attention faculties, and we don't know which animals have it
| and to what degree.
|
| So when people say we shouldn't develop AI to think like _that_,
| it's basically saying we shouldn't try to understand how
| consciousness works. Because as soon as we do, I guarantee
| someone out there will attempt to make conscious AI.
| tudorw wrote:
| Also, if we model a brain effectively, then study that model,
| would our brain not develop new skills from reflecting on such
| a model and therefore develop again beyond the model?
| subroutine wrote:
| Perhaps, but debatable whether knowledge itself adds
| something to cognitive capacity. Two thoughts: (1) Some
| _theory of mind_ researchers regard having strong ToM
| abilities as roughly equivalent to having consciousness. I
| think knowing precisely how consciousness emerges may help
| develop our ToM. It would also be a tough sell to claim a
| mind with zero knowledge has consciousness. On the other hand
| (2) Do feral or isolated tribes of humans have the same level
| of consciousness as those in developed societies? I suspect
| they do.
| sillysaurusx wrote:
| _The international community agrees that human/animal hybrids
| are abominations--we shouldn't make them to preserve the dignity
| of the human, despite their creation being well within our
| scientific capability._
|
| Theoretically, how would we create human-animal hybrids? That's a
| strong claim to say it's within our power.
| ALittleLight wrote:
| There's an interesting thought experiment referenced in this
| article, but I'm not sure it holds.
|
| "The philosopher John Searle made precisely this argument about
| the standard conception of rationality. His point was that there
| are no odds that would rationally allow a parent to bet the life
| of their child for a quarter. Human nature just doesn't work that
| way, and it shouldn't work that way."
|
| I agree that it sounds morally repugnant to risk your child's
| life for a quarter, but in practice people do do this all the
| time.
|
| Imagine your child wants ice cream. There is some utility in
| taking your child to the nearby ice cream parlor. Your child will
| be made happy and you will be made happy by making your child
| happy. However, this is not infinite utility. In other words,
| there is probably some amount of money I could offer you to _not_
| take your child to get ice cream today. If I offered you 10,000
| dollars to not get your child ice cream today, I bet the vast
| majority of people would take the deal. That sets the upper bound
| of the utility of taking your child to the ice cream parlor at
| 10,000 dollars.
|
| Suppose the ice cream parlor is 3 miles away (I just checked the
| distance to my favorite ice cream parlor and it is 3 miles away).
| In the United States this website[1] says there is approximately
| 1 death per 100 million vehicle miles traveled. We could rephrase
| that as a 1 * 10^-8 chance of dying per vehicle mile traveled.
| This estimate may be high for you -- presumably you aren't
| drunk or impaired, maybe you're a better driver than average or
| have a safer or better-maintained car, or live in a safer place
| -- but the risk of death isn't zero.
|
| If you are willing to drive your child 3 miles to go get ice
| cream then it seems like you are willing to expose your child to
| the risk of death from car accidents for utility that is less
| than 10,000 dollars. Putting those ideas together we could
| calculate the odds where a parent would, in practice, risk the
| life of their child for a quarter.
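|
| A rough back-of-the-envelope version of that calculation in
| Python (my assumptions: a 6-mile round trip, and the figures
| above taken at face value):
|
|     # Sketch of the implied "quarter odds", using the rough
|     # figures from this comment; assumptions, not data.
|     deaths_per_mile = 1e-8    # ~1 death per 100M vehicle miles
|     trip_miles = 6            # 3 miles each way
|     trip_risk = deaths_per_mile * trip_miles       # ~6e-8
|     max_utility_usd = 10_000  # upper bound on the trip's value
|
|     risk_per_dollar = trip_risk / max_utility_usd  # ~6e-12
|     quarter_risk = risk_per_dollar * 0.25          # ~1.5e-12
|     print(f"Implied acceptable risk for a quarter: "
|           f"about 1 in {1 / quarter_risk:,.0f}")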
|
| I don't quite know what to make of this. I tend to think that
| people would regard doing odds calculations like this for real
| life decisions as somewhat sociopathic and would just prefer to
| live as if significantly unlikely bad things were impossible or
| just refuse to think about the moral implications of
| probabilities. That seems similar to what the article is saying,
| people just prefer to live as if bad supercollider or AI
| experiments won't happen rather than reason about them.
|
| I tend to think that being too "reasonable" on a local scale is
| bad. That is, I will still take my children to go get ice cream
| even though I know driving is a risk. At higher levels though I
| want people to be making decisions that are increasingly based on
| reason and probabilities. I _do_ want the traffic engineers to be
| reasoning about vehicle deaths per mile and the like when they
| are setting speed limits and traffic signs. For things like
| AI and, I suppose, supercolliders, our decision makers
| should absolutely be considering things rationally.
|
| 1 - https://www.iihs.org/topics/fatality-
| statistics/detail/state...
| ampdepolymerase wrote:
| > _To slim results. Elon Musk explained his new nihilism about
| the possibility of stopping AI advancement on the Joe Rogan
| podcast, when he said:
|
| "I tried to convince people to slow down. Slow down AI. To
| regulate AI. This was futile. I tried for years. Nobody listened.
| Nobody listened."_
|
| Rather rich coming from one of the self driving car market
| leaders. It certainly makes business sense to mislead academia
| and the policy sector into wasting resources on figuring out the
| best philosophical and ethical regime while large corporations
| benefit from regulatory capture. If he has his way, ML would
| become like the pharmaceutical industry, with multiple barriers
| to entry if you are not well-funded, well-connected, or
| established.
| bpodgursky wrote:
| Elon wasn't calling for a hard halt like this article. He's
| talking about the dangers of general AI (AGI). I don't think
| most people would consider a self-driving car a likely path to
| accidental superintelligence; it's a highly-targeted
| application like a chess engine.
|
| OpenAI's GPT-X engines OTOH, IMO, have a lot more potential
| danger because it's very unclear what they'll be used for.
| ampdepolymerase wrote:
| If you do actual ML research, the technologies are two sides
| of the same coin. Three days ago there was a paper posted
| here on HN on using the GPT-like transformer architecture for
| reinforcement learning problems (of which self-driving cars
| are a partial subset).
|
| https://news.ycombinator.com/item?id=27721037
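|
| The core trick in that line of work, as I understand it, is to
| pack RL trajectories into GPT-style token sequences. A toy
| sketch of the packing -- my illustration, not the paper's code:
|
|     # Decision-Transformer-style packing (assumed layout):
|     # (return-to-go, state, action) per timestep, fed to a
|     # causal transformer that predicts the next action.
|     import numpy as np
|
|     rewards = np.array([1.0, 0.0, 2.0])
|     states = [(0.1,), (0.2,), (0.3,)]
|     actions = [0, 1, 0]
|
|     # Return-to-go: total future reward from each timestep.
|     rtg = np.cumsum(rewards[::-1])[::-1]  # [3.0, 2.0, 2.0]
|
|     sequence = []
|     for g, s, a in zip(rtg, states, actions):
|         sequence += [("rtg", g), ("state", s), ("action", a)]
|     print(sequence)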
| youeseh wrote:
| Right now, if a vehicle on autopilot gets into an accident,
| the driver is scrutinized.
|
| That is reasonable. The driver is expected to be alert in
| case intervention is needed.
|
| If we take the manual override away, then we'll be squarely in
| the world that Mr. Musk is concerned about.
| Dylan16807 wrote:
| > Rather rich coming from one of the self driving car market
| leaders.
|
| You don't need anything close to strong AI to do a reasonable
| job of driving a car. It seems like 90% of the problem is
| object recognition, and even in terms of brain-equivalent logic
| that's a really low bar.
| perl4ever wrote:
| The remaining 10%, which is 90% of the effort, is interrupting
| what is being done and changing context.
|
| Without this capacity, "AI" is just a tiny shard of a
| complete mind.
|
| I don't think anyone's really started grappling with this
| yet.
| Dylan16807 wrote:
| That's really not the 10% I was talking about. We don't
| need that part to follow some lanes.
|
| Or the other way to put it is that all the other code is
| the first 90%, and then the "remaining 10% that's 90% of
| the effort" is the object recognition that was supposedly
| easy.
| pshc wrote:
| It's the 10% that gets you. To safely operate a car in all
| reasonable situations might very well require Strong AI. The
| car needs to be able to problem solve and make inferences
| about road conditions up ahead.
| Dylan16807 wrote:
| What kind of inferences need strong AI? How often does a
| driver need to figure out something with logic that
| couldn't be handled by current technology?
|
| Level 4 self driving is fine and I really don't think it
| needs strong AI.
| guscost wrote:
| This is so obvious. Incumbents would like nothing better than a
| new law that makes it harder to operate (within "reason", of
| course) in their industry. It is amazing that more people here
| don't see right through the manipulation.
| perl4ever wrote:
| Obvious self-interest isn't proof that something is false.
|
| For instance, it is obviously self-interested of me to not
| want the world to be blown up with nuclear weapons.
| elurg wrote:
| High entry barriers for self-driving are extremely good, for
| safety reasons.
|
| Regulating research into general AI would not make those
| barriers considerably higher.
|
| Tesla critics and competitors are also strongly in favor of
| self-driving regulations.
| elurg wrote:
| * AI research already has high barriers to entry because the
| required computing resources are expensive.
|
| * Self-driving AI will be heavily regulated for reasons not
| related to other AI regulation.
|
| I don't see how more AI regulation would financially help Tesla
| or Elon Musk. In fact "more regulation of self-driving" is
| something that many Tesla competitors and critics support.
| vngzs wrote:
| I do believe it's reasonable to draw a distinction between what
| we're doing now (which is essentially just "statistics") and
| what Musk warns about.
|
| It's a mistake to believe strong AI will just be a more
| powerful iteration of today's weak AI. He is arguing to slow
| developments toward general intelligence, not developments in
| any narrow field.
| bookofsand wrote:
| With high probability, 'just statistics' is an essential
| component of strong AI. Another essential component is
| embodiment, of which self-driving cars, and also military
| drones, are canonical examples. Researchers are taking the
| correct essential steps towards strong AI, it's a matter of
| (short) time until they succeed.
| mandelbrotwurst wrote:
| What are the requisite "correct essential steps" and how
| did you determine that they will necessarily lead to an
| AGI?
| qdiencdxqd wrote:
| Ted Kaczynski calls for something like this, but against
| industrial technology generally. Even though his manifesto is a
| rational argument aimed at intellectuals, he has said in his more
| recent writings that to actually carry out his "stop
| technological advancement" plan you'd need to persuade people on
| an emotional level.
| Rzor wrote:
| I think the cat is out of the bag when it comes to AI and its
| potential, and you simply can't regulate and trust foreign
| nations to play ball. The possible gains are too big to expect
| everyone to get together and consider the downside carefully,
| i.e. no coordination on the morality or risks; winner-takes-all will
| be the prevalent mindset when the first player hits major
| strides. It's going to be an arms race sort of scenario aimed at
| automation and productivity until it reaches the military
| industry, then we'll see.
|
| You know, as I am reading the Culture series, I can't help
| imagining how much fun it would be to have a Mind taking care
| of a few things for us.
| AndrewKemendo wrote:
| We should be _actively_ building this new "AI Species" because
| we are going to go extinct eventually and should think about
| making a better successor for the human species. The morality
| argument is nonsense.
|
| How about this: "The primary objective of humanity should be to
| build an intelligent system with far more precise perception,
| reasoning and physical manipulation capabilities than humans"
|
| That's my starting point.
| erikhoel wrote:
| There are all sorts of ways to build intelligences. Humans are
| unique in that they are mammals (defined by having mothers).
| Mothers raise us with love, and teach us, for our helpless
| first years. We also have to act in communities. So there is a
| sense in which we are very lucky -- in humans, our intelligence
| correlates with our altruism. In the grand space of possible
| minds, it is very unlikely that altruism and morality are
| correlated with intelligence. So whatever machine race we
| birth, it won't have any of the things we value if we're
| just building for "precise perception, reasoning, etc."
| anigbrowl wrote:
| Tiny nitpick, but quite a lot of birds are nurturing despite
| not being mammals. You can find altruistic (or at least
| mutualist) behaviors in many other taxa.
| tick_tock_tick wrote:
| > extinct eventually and should think about making a better
| successor for the human species
|
| Citation needed. As it currently stands it seems incredibly
| unlikely that we won't expand to most of our local group,
| making extinction incredibly unlikely.
| anigbrowl wrote:
| imho precision is a chimera which often leads to an excess of
| certitude; acceptance and awareness of uncertainty often leads
| to better decision-making.
|
| Put another way, a laser pointer is not a very good tool with
| which to explore a cave, unless you can systematically measure
| it over the whole cave, an expensive and time-consuming
| process. If you're exploring a new cave, you might be better
| off with weak omnidirectional illumination like a lamp.
| at_a_remove wrote:
| One of the problems the Butlerian Jihad ran up against, aside
| from the inevitable skirting of the lines from Richese and Ix
| (many machines on Ix, _new_ machines), is that it runs directly
| counter to "Thou shalt not disfigure the soul."
|
| Replacement of AI with Mentats (as well as other narrow
| specialities) has done nothing _but_ disfigure the soul. We see
| few Mentats -- aside from Paul and eventually another -- who are
| not constricted. Similarly, if you practice medicine, well, you
| get the Imperial Conditioning. Certainly, a sign of trust ... but
| also a sign that the person's actions are no longer completely
| free.
|
| Now, I am not touting the Heinlein "A human being should be able
| to change a diaper, plan an invasion, butcher a hog, conn a ship
| ..." line, exactly, but the alternative to AI is the kind of
| stagnation we see in _Dune_, millennia of locked-down ritual,
| honed again and again, with some people becoming ... utilities.
|
| Before we begin this jihad, we must examine the alternative
| futures.
| jrochkind1 wrote:
| If there is literally no possible future with dignity for all
| consciousnesses... that'd be pretty depressing.
| johnvaluk wrote:
| Well put. But isn't the concern here with some utilities
| becoming ... people? Either way could result in disfigured
| souls. Is AI simply a pursuit of slavery without guilt?
| at_a_remove wrote:
| That is a whole 'nother ball of wax.
|
| Consider someone wanting an AI. What exactly do they want?
| Well, is it a mind? Because we have seven billion of those
| and we can make more on demand. Takes a bit but they're
| pretty flexible.
|
| Once you start asking questions about what _kind_ of mind you
| would like, aside from the pathological types who want a
| trapped and helpless mind to torture (and don't think that
| there won't be people who would get their jollies that way),
| most people seem to have a kind of subconscious archetype of
| an old-fashioned butler (I assure you I did not pick the
| profession based on irony).
|
| Your butler -- knows your business but rarely contradicts,
| perhaps corrects. Slides into the background when not needed,
| simply ... minding things. Perhaps not _watched over_ by
| machines of loving grace, as Brautigan would have it, but
| tended to, looked out for. Without needs or drives or goals
| of their own to interfere with _our_ individual or collective
| desires.
|
| Yes, the idea of AI does seem to converge on a fantastically
| intelligent p-zed in a nice suit, a less bloodthirsty form of
| some of the minds encountered in Watts' _Blindsight_,
| unencumbered by interior experience, desires and attachment,
| or what arises from thwarted desire and attachment,
| suffering.
| [deleted]
| SuoDuanDao wrote:
| I think at its best, it's a pursuit of an alien intelligence
| compatible with but different from our own.
| trhway wrote:
| >Is AI simply a pursuit of slavery without guilt?
|
| No, ultimately AI is the pursuit of conscious existence
| without associated burden of bodily suffering. Breaking out
| of the karma wheel so to speak.
| munificent wrote:
| _> Is AI simply a pursuit of slavery without guilt?_
|
| Or simply a pursuit of labor without pay?
| javajosh wrote:
| So, a point of nerdity: mentats were not portrayed as
| disfigured in Dune. They had personalities and foibles and
| loyalties and so on. In fact, there was no limit on who could
| be a mentat, or what other position of power they could hold
| (some of Paul's friends note how formidable a mentat-duke would
| be -- not something they would say if it were a disfigurement).
|
| Another point of nerdity that no-one has mentioned yet,
| including the OP: Herbert sketched out an extended story that
| portrays humanity and the machines it had fought against so
| long _merging_ in the long run. In part this is why Leto II
| never destroyed Ix even though it was constantly (quietly)
| breaking the Butlerian Jihad rules.
|
| None of this invalidates the OP's core point, of course. I
| think it's a good and valuable discussion to consider
| technology from fundamentally moral grounds, and I wish we'd do
| it more.
| at_a_remove wrote:
| Not _physically_ disfigured, no. But ... constrained.
| Narrowed. Awaiting a chance to provide answers, but not
| questions.
| thrower123 wrote:
| Every mentat depicted was addicted to nootropics, like sapho
| juice or melange, long-term abuse of which caused
| physiological changes. Not as extreme as the Navigators, but
| it's well beyond physical dependence.
___________________________________________________________________
(page generated 2021-07-07 23:01 UTC)