[HN Gopher] Licensing is neither feasible nor effective for addr...
___________________________________________________________________
Licensing is neither feasible nor effective for addressing AI risks
Author : headalgorithm
Score : 160 points
Date : 2023-06-10 13:30 UTC (9 hours ago)
(HTM) web link (aisnakeoil.substack.com)
(TXT) w3m dump (aisnakeoil.substack.com)
| efficientsticks wrote:
| Another AI article has been on the front page since an hour earlier:
|
| https://news.ycombinator.com/item?id=36271120
| exabrial wrote:
| But it is a good way to dip your hands into someone else's
| pocket, which is the actual goal.
| arisAlexis wrote:
| While running a non-profit with no equity and telling everyone
| to be careful with your product. Makes sense.
| Eumenes wrote:
| Regulate large GPU clusters, similar to bitcoin mining.
| hosteur wrote:
| Isn't that only going to work until regular GPUs catch up?
| monetus wrote:
| This seems intuitive.
| dontupvoteme wrote:
| How do you get all the other large nation states/blocs on board
| with this?
| gmerc wrote:
| There's only one competitive GPU company in town. It's
| actually supremely easy to enforce for any of three
| governments in the world: US, TW, CN.
| dontupvoteme wrote:
| What about all those GPUs they've already made?
| jupp0r wrote:
| To phrase it more succinctly: it's a stupid idea because a
| 12-year-old will be able to train these models on their
| phone in a few years. This is fundamentally different from
| enriching uranium.
| Andrex wrote:
| Counterpoint: https://www.businessinsider.com/12-year-old-
| builds-nuclear-r...
| gmerc wrote:
| That's orthogonal, not a counterpoint.
| seydor wrote:
| Sam Altman is either running for US office soon or looking for
| some post at the UN. Just look up how many heads of
| government he has visited in the last month.
| klysm wrote:
| It's insane to me that he of all people is doing all this
| talking. He has massive conflicts of interest that should be
| abundantly obvious
| aleph_minus_one wrote:
| > It's insane to me that he of all people is doing all this
| talking. He has massive conflicts of interest that should be
| abundantly obvious
|
| This might also be evidence that OpenAI has interests that
| it has not yet publicly talked about. Just some food for
| thought: what kinds of interests might OpenAI have that are
| consistent with Sam Altman's behaviour?
| DebtDeflation wrote:
| >what kinds of interests might OpenAI have that are
| consistent with Sam Altman's behaviour?
|
| Securing a government enforced monopoly on large language
| models?
| cyanydeez wrote:
| It means he's marketing.
|
| He wants to ensure lawmakers overlook the immediate dangers of
| AI systems and focus only on conceptual dangers.
| wahnfrieden wrote:
| He is pursuing regulatory capture. Capitalist playbook.
| EamonnMR wrote:
| I think this is missing the key change licensing would effect: it
| would crush the profit margins involved. I think that alone would
| drastically reduce 'AI risk' (or more importantly the negative
| effects of AI) because it would remove the motivation to build a
| company like OpenAI.
| guy98238710 wrote:
| Artificial intelligence is a cognitive augmentation tool. It
| makes people smarter, more competent, and faster. That cannot be.
| Intelligent people are dangerous people! Just consider some of
| the more sinister hobbies of intelligent people:
|
| - denying that religion is true
|
| - collecting and publishing facts that contradict our political
| beliefs
|
| - creating open source and open content (communists!)
|
| - rising in the social hierarchy, upsetting our status
|
| - operating unsanctioned non-profits
|
| - demanding that we stop stealing and actually do something
| useful
|
| Fortunately, once augmented, part of their mind is now in
| technology we control. We will know what they are thinking. We
| can forbid certain thoughts and spread others. We even get to
| decide who can think at all and who cannot. We have never been
| this close to thought control. All we need to do now is to
| license the tech, so that it always comes bundled with rules we
| wrote.
| arisAlexis wrote:
| Why is a contrarian, biased Substack writer so often on the front
| page? His opinion is literally that AI is snake oil. Populist.
| z5h wrote:
| As a species, we need to commit to the belief that a powerful
| enough AI can prevent/avoid/vanquish any and all zero-sum games
| between any two entities. Otherwise we commit to adversarial
| relationships, and plan to use and develop the most powerful
| technology against each other.
| tomrod wrote:
| Pareto improvements don't always exist.
| z5h wrote:
| I'm suggesting we (with AI) can find alternative and preferable
| "games".
| tomrod wrote:
| Delegating the structure of engagement to a pattern matcher
| doesn't change fundamentals. Consider Arrow's Impossibility
| Theorem: can't have all the nice properties of a social
| choice function without a dictator. So your AI needs to
| have higher level definitions in its objective to achieve
| some allocative efficiency. Examples abound, common ones
| are utilitarianism (don't use this one, this results in bad
| outcomes) and egalitarianism. Fortunately, we can choose
| this with both eyes open.
|
| The field that considers this type of research is Mechanism
| Design, an inverse to Game Theory where you design for a
| desired outcome through incentives.
|
| Would it be correct to say that your suggestion to delegate
| the design of games to AI means you believe people are
| ineffectual at identifying when certain game types, such as
| zero-sum games, are all that are possible?
| nradov wrote:
| You have not provided any evidence for such a claim. I prefer
| to act based on facts or at least probabilities, not _belief_.
| salawat wrote:
| If just believing made things happen, we'd have no climate
| crisis, overpopulation wouldn't be a thing to worry about, and
| we wouldn't be staring down half the issues we are with trying
| to bring short-term profit at the expense of long-term
| stability to heel.
| hiAndrewQuinn wrote:
| Licensing isn't, but fine-insured bounties on those attempting to
| train AI models larger than the ones available today are!
| dwallin wrote:
| Any laws should be legislating the downstream effects of AI, not
| the models themselves. Otherwise we will quickly get to a place
| where we have a handful of government-sanctioned and white-washed
| "safe" models responsible for deleterious effects on society,
| with plausible deniability for the companies abusing them.
|
| Legislating around the model is missing the point.
|
| There is no evidence that a runaway artificial intelligence is
| even possible. The focus on this is going to distract us from the
| real and current issues with strong AI. The real risks are
| societal instability due to:
|
| - Rapid disruption of the labor market
|
| - Astroturfing, psyops, and disruption of general trust
| (commercially and maliciously, both domestic and foreign)
|
| - Crippling of our domestic AI capabilities, leading to
| cutting-edge development moving overseas and a loss of our
| ability to influence further development.
|
| - Increased concentration of power and disruption of
| decentralized and democratic forms of organization due to all of
| the above.
| gmerc wrote:
| Unaligned AI as an existential threat is an interesting topic
| but I feel we already know the answer to this one:
|
| It's not like for the last few decades, we haven't created an
| artificial, incentive based system at global scale that's
| showing exactly how this will go.
|
| It's not like a bunch of autonomous entities with a single
| prime directive, profit maximization, are running our planet,
| affecting how we live, how we structure every day of our
| lives, controlling every aspect of our potential and destiny.
|
| Autonomous entities operating on reinforcement cycles driven by
| reward / punishment rules, aligning about every human on this
| planet to their goals, right? It's not like the entities in
| this system are self-improving towards measurement and reward
| maximization and, as a result, command resources asserting
| normative (lobbying) power over the autonomy, self-governance
| and control of people and their systems of governance.
|
| It's not like we don't know that this artificial system is
| unaligned with sustainability and survival of the human race,
| let alone happiness, freedom or love.
|
| We can watch its effects in real time accelerating towards our
| destruction under the yellow skies of New York, the burning
| steppes of Canada, the ashen hell or flood-ridden plains of
| Australia, the thawing permafrost of Siberia, the scorching
| climate-affected cities of Southeast Asia, the annual haze from
| plantation burns in Indonesia, the suffocating smog in Thailand,
| and the stripped-bare husks of Latin American rainforests.
|
| And we know instinctively we are no longer in control, the
| system operating at larger-than-national scale, having long
| overpowered the systems of human governance, brute-forcing
| everything and everyone on the planet into its control. But
| we pretend otherwise, argue, pass measures doctoring
| symptoms, never mentioning the elephant in the room.
|
| But, one may protest, the vaunted C-levels control the entities,
| we say, as Zuck and Co. lament having to lay off humans, sobbing
| about responsibility to the prime directive. But the politicians
| are in control, we pray, as lobbyists, the human agents of our
| alien overlords, bend them to their will.
|
| The alien entities we call corporations have no prime directive
| of human survival, sustainability, happiness and they already
| run everything.
|
| So one may be excused for having cynical views about the debate
| on whether unaligned AI is an existential, extinction level
| risk for us, whether humans could give creation to an unaligned
| system that could wipe them from the face of the planet.
|
| Our stories, narratives, the tales of this planet's millennia-old
| apex predator have little room for the heresy of not being on
| top, in control. So deep goes our immersion in our own manifest
| destiny and in-control identity that any challenge to the mere
| narrative is met with screeches and denigrations.
|
| In a throwback to the age of the heliocentrism debate -
| Galileo just broke decorum, spelling out what scientists had
| known for hundreds of years - the scientists and people devoted
| to understanding the technology are met with brandings of
| doomsayer and heretic.
|
| Just as the earth being the center of the universe anchored our
| belief in being special, our intelligence, creativity or
| ability to draw hands is the pedestal these people have chosen
| to put their hands on with their warnings of unaligned systemic
| entities. "It's not human" is the last but feeble defense of
| the mind, failing to see the obvious: that the artificial
| system we have built over the last hundred years does not need
| people to be human, it just needs them to labor.
|
| It matters not that we can feel, love, express emotion and
| conjure dreams and hopes and offer human judgement for our jobs
| do not require it.
|
| Autonomy is not a feature of most human jobs, judgement having
| been replaced by corporate policies and rules. It matters not
| to the corporation that we need food to eat, as it controls the
| resources to buy it; the creation of artificial labor is
| inevitably goal-aligned with this system.
|
| Intelligence, let alone superintelligence, is not a feature
| needed for most jobs, or for a system to take control of the
| entire planet. Our stories conjure supervillains to make us
| believe we are in control, our movies no more than religious
| texts in the gospel of human exceptionalism.
|
| Show us the evidence they scream, as they did to Galileo,
| daring him to challenge the clear hand of god in all of
| creation.
|
| Us, unable to control a simple system we conjured into
| existence from rules and incentives operating on fallible
| meatsuits, having a chance to control a system of unparalleled
| processing power imbued with the combined statistical corpus of
| human knowledge, behavior, flaws and weaknesses? Laughable.
|
| Us, who saw social media codify the rules and incentives in
| digital systems powered by the precursor AI of today, and
| watched the system helplessly A/B-optimize towards maximum
| exploitation of human weaknesses for alignment with growth and
| profit - containing the descendant AI systems powered by orders
| of magnitude more capable hardware or quantum computing? A
| snowflake may as well outlast hell.
|
| Us, a race with a 100% failure rate at finding lasting governing
| structures optimizing for human potential, not slipping in the
| face of an entity that only requires a single slip? An entity
| with perfect knowledge of the rules that bind us? Preposterous.
|
| Evidence indeed.
|
| "But we are human" will echo as the famous last words through
| the cosmos as our atoms are reconfigured into bitcoin storage
| to hold the profits of unbounded growth.
|
| What remains will not be human anymore, a timeless testament
| to the power of rules, incentives and consumption, eating world
| after world to satisfy the prime directive.
|
| But we always have hope. As our tales tell us, it dies last, and
| it's the most remote property for an AI to achieve. It may
| master confidence, assertiveness and misdirection, but hope?
| That may be the last human refuge in the coming storm.
| Animats wrote:
| > Any laws should be legislating the downstream effects of AI,
| not the models themselves.
|
| That would require stronger consumer protections. So that's
| politically unacceptable in the US at the moment. We may well
| see it in the EU.
|
| The EU already regulates "automated decision making" as it
| affects EU citizens. This is part of the General Data Protection
| Regulation. This paper discusses the application of those
| rules to AI systems.[1]
|
| Key points summary:
|
| - AI isn't special for regulation purposes. "First, the concept
| of Automated Decision Making includes algorithmic decision-
| making as well as AI-driven decision-making."
|
| - Guiding Principle 1: Law-compliant ADM. An operator that
| decides to use ADM for a particular purpose shall ensure that
| the design and the operation of the ADM are compliant with the
| laws applicable to an equivalent non-automated decision-making
| system.
|
| - Guiding Principle 2: ADM shall not be denied legal effect,
| validity or enforceability solely on the grounds that it is
| automated.
|
| - Guiding Principle 3: The operator has to assume the legal
| effects and bear the consequences of the ADM's decision.
| ("Operator" here means the seller or offerer of the system, not
| the end user.)
|
| - Guiding Principle 4: It shall be disclosed that the decision
| is being made by automated means
|
| - Guiding Principle 5: Traceable decisions
|
| - Guiding Principle 6: The complexity, the opacity or the
| unpredictability of ADM is not a valid ground for rendering an
| unreasoned, unfounded or arbitrary decision.
|
| - Guiding Principle 7: The risks that the ADM may cause any
| harm or damage shall be allocated to the operator.
|
| - Guiding Principle 8: Automation shall not prevent, limit, or
| render unfeasible the exercise of rights and access to justice
| by affected persons. An alternative human-based route to
| exercise rights should be available.
|
| - Guiding Principle 9: The operator shall ensure reasonable and
| proportionate human oversight over the operation of ADM taking
| into consideration the risks involved and the rights and
| legitimate interests potentially affected by the decision.
|
| - Guiding Principle 10: Human review of significant decisions.
| Human review of selected significant decisions, on the grounds
| of the relevance of the legal effects, the irreversibility of
| their consequences, or the seriousness of the impact on rights
| and legitimate interests, shall be made available by the
| operator.
|
| This is just a summary. The full text has examples, which
| include, without naming names, Google closing accounts and Uber
| firing drivers automatically.
|
| [1]
| https://europeanlawinstitute.eu/fileadmin/user_upload/p_eli/...
| davidzweig wrote:
| >> There is no evidence that a runaway artificial intelligence
| is even possible.
|
| In the space of a century or so, humans have managed to
| take rocks and sand and turn them into something that you can
| talk to with your voice, and it understands and responds fairly
| convincingly as if it were a well-read human (glue together
| ChatGPT with TTS/ASR).
|
| Doesn't seem like a big stretch to imagine that superhuman AI
| is just a few good ideas away, a decade or two perhaps.
| [deleted]
| circuit10 wrote:
| > There is no evidence that a runaway artificial intelligence
| is even possible
|
| Really? There is a lot of theory behind why this is likely to
| happen, and if you want a real example, there is a similar
| existing scenario we can look at: how humans have gone through
| an exponential, runaway explosion in capabilities in the last
| few hundred years because of being more intelligent than other
| species and being able to improve our own capabilities through
| tool use. (In the case of AI, it can directly improve itself,
| so it would likely be much faster, and there would be less of a
| cap on it, since we have the bottleneck of not being able to
| improve our own intelligence much.)
| throwaway9274 wrote:
| Is there really "a lot of theory" that says runaway AI is
| possible? In the sense of empirical fact-based peer reviewed
| machine learning literature?
|
| Because if so I must have missed it.
|
| It seems more accurate to say there is quite a bit of writing
| done by vocal influencers who frequent a couple online
| forums.
| mmaunder wrote:
| Humans evolved unsupervised. AI is highly supervised. The
| idea that an AI will enslave us all is as absurd as
| suggesting "computers" will enslave us all merely because
| they exist. Models are designed and operated by people for
| specific use cases. The real risks are people using this new
| tool for evil, not the tool itself.
|
| AI sentience is a seductive concept being used by self-professed
| experts to draw attention to themselves and by megacorps to
| throw up competitive barriers to entry.
| circuit10 wrote:
| It's almost impossible to supervise something more
| intelligent than you because you can't tell why it's doing
| things. For now it's easy to supervise them because AIs are
| way less intelligent than humans (though even now it's hard
| to tell exactly why they're doing things), but in the
| future it probably won't be
| adsfgiodsnrio wrote:
| "Supervised" does not mean the models need babysitting;
| it refers to the fundamental way the systems learn. Our
| most successful machine learning models all require some
| answers to be provided to them in order to infer the
| rules. Without being given explicit feedback they can't
| learn anything at all.
|
| Humans also do best with supervised learning. This is why
| we have schools. But humans are _capable_ of unsupervised
| learning and use it all the time. A human can learn
| patterns even in completely unstructured information. A
| human is also able to create their own feedback by
| testing their beliefs against the world.
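| To make the distinction concrete, here is a minimal sketch in
| plain Python/NumPy (the toy numbers are illustrative only, not
| from any real system): a supervised learner only adjusts its
| parameters toward answers that were explicitly provided to it.
|
|     import numpy as np
|
|     # Toy supervised learning: fit y = w*x by gradient descent on
|     # provided (x, y) example pairs -- the supervisor's "answers".
|     xs = np.array([1.0, 2.0, 3.0])
|     ys = np.array([2.0, 4.0, 6.0])        # labels from the supervisor
|     w = 0.0
|     for _ in range(100):
|         pred = w * xs
|         grad = np.mean(2 * (pred - ys) * xs)  # d(MSE)/dw
|         w -= 0.1 * grad                       # nudge toward the labels
|     print(round(w, 3))  # ~2.0: the rule was inferred from the answers
|
| Without the ys there is no error signal to descend, which is the
| sense in which such a model can't learn anything at all on its own.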
| circuit10 wrote:
| Oh, sorry, I'm not that familiar with the terminology (I
| still feel like my argument is valid despite me not being
| an expert though because I heard all this from people who
| know a lot more than me about it). One problem with that
| kind of feedback is that it incentivizes the AI to make us
| think it solved the problem when it didn't, for example
| by hallucinating convincing information. That means it
| specifically learns how to lie to us, so it doesn't really
| help.
|
| Also I guess giving feedback is sort of like babysitting,
| but I did interpret it the wrong way
| wizzwizz4 wrote:
| > _One problem with that kind of feedback is that it
| incentivizes the AI to make us think it solved the problem
| when it didn't,_
|
| Supervised learning is: "here's the task" ... "here's the
| expected solution" *adjusts model parameters to bring it
| closer to the expected solution*.
|
| What you're describing is _specification hacking_ , which
| only occurs in a different kind of AI system:
| https://vkrakovna.wordpress.com/2018/04/02/specification-
| gam... In _theory_ , it could occur with feedback-based
| fine-tuning, but I doubt it'd result in anything
| impressive happening.
| circuit10 wrote:
| Oh, that seems less problematic (though not completely
| free of problems), but also less powerful because it
| can't really exceed human performance
| z3c0 wrote:
| but in the future it probably won't be
|
| I see this parroted so often, and I have to ask: why?
| What is there, outside of the world of sci-fi, that makes
| the AGI of the future so nebulous, when humans would have
| presumably advanced to the point of being able to create the
| intelligence to begin with? Emergent properties are often
| as unexpected as they are bizarre, but they are not
| unexplainable, especially when you understand the
| underpinning systems.
| circuit10 wrote:
| We can't even fully explain how our own brains work,
| never mind a system that's completely alien to us and
| that would have to be more complex. We can't even explain
| how current LLMs work internally. Maybe we'll make some
| breakthrough if we put enough resources into it but if
| people keep denying the problem there will never be
| enough resources out into it
| AuthorizedCust wrote:
| > _We can't even explain how current LLMs work
| internally._
|
| You sure can. They are just not simple explanations yet.
| But that's the common course of inventions, which in
| foresight are mind-bogglingly complex, in hindsight
| pretty straightforward.
| circuit10 wrote:
| You can explain the high level concepts but it's really
| difficult to say "this group of neurons does this
| specific thing and that's why this output was produced",
| though OpenAI did make some progress in getting GPT-4 to
| explain what each neuron in GPT-2 is correlated to but we
| can also find what human brain regions are correlated to
| but that doesn't necessarily explain the system as a
| whole and how everything interacts
| wizzwizz4 wrote:
| > _but it's really difficult to say "this group of
| neurons does this specific thing and that's why this
| output was produced",_
|
| That's because that's not how brains work.
|
| > _though OpenAI did make some progress in getting GPT-4
| to explain what each neuron in GPT-2 is correlated to_
|
| The work contained novel-to-me, somewhat impressive
| accomplishments, but this presentation of it was pure
| hype. They could have done the same thing without GPT-4
| involved at all (and, in fact, they basically _did_ ...
| then they plugged it into GPT-4 to get a less-accurate-
| but-Englishy output instead).
| circuit10 wrote:
| When I said about a group of neurons I was talking about
| LLMs, but some of the same ideas probably apply. Yes,
| it's probably not as simple as that, and that's why we
| can't understand them.
|
| I think they just used GPT-4 to help automate it on a
| large scale, which could be important to help understand
| the whole system especially for larger models
| z3c0 wrote:
| While I agree with the other comment, I'd like to add one
| thing to help you see the false equivalency being made
| here: _we didn't make the human brain_.
|
| Now, with that being understood, why wouldn't we
| understand a brain that we made? Don't say "emergent
| properties", because we understand the emergent
| properties of ant colonies without having made them.
| jahewson wrote:
| The government seems to manage the task just fine every
| day.
| milsorgen wrote:
| Does it?
| sorokod wrote:
| A fair amount of effort is spent on autonomous models, e.g.
| driving. Who knows what the military are up to.
| flagrant_taco wrote:
| Where is the real supervision of AI, though? Even those who
| are developing and managing it make it clear that they
| really have no insight into how or what the AI has
| learned. If we can't peek behind the curtain to see what's
| really going on, how can we really supervise it?
|
| Ever since GPT-3.5 dropped and people really started
| talking again about whether these AIs are sentient, I've
| wondered if researchers are leaning on quantum theory to
| write it off as "it can't be sentient until we look to see
| if it is sentient".
| civilitty wrote:
| Yes, really. Scifi fantasies don't count as evidence and
| we've learned since the Renaissance and scientific revolution
| that all this Platonic theorizing is just an intellectual
| circle jerk.
| circuit10 wrote:
| The fact that something has been covered in sci-fi doesn't
| mean that it can't happen.
| https://en.m.wikipedia.org/wiki/Appeal_to_the_stone
|
| "Speaker A: Infectious diseases are caused by tiny
| organisms that are not visible to unaided eyesight. Speaker
| B: Your statement is false. Speaker A: Why do you think
| that it is false? Speaker B: It sounds like nonsense.
| Speaker B denies Speaker A's claim without providing
| evidence to support their denial."
|
| Also I gave a real world example that wasn't related to
| sci-fi in any way
| Al0neStar wrote:
| The burden of proof is on the person making the initial
| claim, and I highly doubt that the reasoning behind AI
| ruin is as extensive and compelling as the germ theory of
| disease.
|
| We assume human-level general AI is possible because we
| exist in nature, but a super-human self-optimizing AI god
| is nowhere to be found.
| yeck wrote:
| Which claim does the burden of proof land on? That an
| artificial superintelligence can easily be controlled, or
| that it cannot? And what is your rationale for deciding?
| Al0neStar wrote:
| The claim that there's a possibility of a sudden
| intelligence explosion.
|
| Like I said above, you can argue that an AGI can be
| realized because there are plenty of us running around on
| earth, but claims about a hypothetical super AGI are
| unfounded and akin to Russell's Teapot.
| dwallin wrote:
| The theories all inevitably rely on assumptions that are
| essentially the equivalent of spherical cows in a
| frictionless universe.
|
| All evidence is that costs for intelligence likely scale
| superlinearly. Each increase in intelligence capability
| requires substantially more resources (Computing power,
| training data, electricity, hardware, time, etc). Being smart
| doesn't just directly result in these becoming available with
| no limit. Any significant attempts to increase the
| availability of these to a level that mattered would almost
| certainly draw attention.
|
| In addition, even for current AI we don't even fully
| understand what we are doing, even though they are operating
| at a lower generalized intelligence level than us. Since we
| don't have a solid foundational model for truly understanding
| intelligence, progress relies heavily on experimentation to
| see what works. (Side note: my gut is that we will find
| there's some sort of equivalent to the halting problem when
| it comes to understanding intelligence) It's extremely likely
| that this remains true, even for artificial intelligence. In
| order for an AI to improve upon itself, it would likely also
| need to do significant experimentation, with diminishing
| returns and exponentially increasing costs for each level of
| improvement it achieves.
|
| In addition, a goal-oriented generalized AI would have the
| same problems that you worry about. In trying to build a
| superior intelligence to itself it risks building something
| that undermines its own goals. This increases the probability
| of either us, or a goal-aligned AI, noticing and being able
| to stop things from escalating. It also means that a super
| intelligent AI has disincentives to build better AIs.
| arisAlexis wrote:
| "In addition, even for current AI we don't even fully
| understand what we are doing"
|
| That is the problem, don't you get it?
| dwallin wrote:
| If that's your concern, then let's direct these government
| resources into research to improve our shared knowledge
| about them.
|
| If humans only ever did things we fully understood, we
| would have never left the caves. Complete understanding
| is impossible so the idea of establishing that as the
| litmus test is a fallacy. We can debate what the current
| evidence shows, and even disagree about it, but to act as
| if only one party is acting with insufficient evidence
| here is disingenuous. I'm simply arguing that the
| evidence of the possibility of runaway intelligence is
| too low to justify the proposed legislative solution. The
| linked article also made a good argument that the
| proposed solution wouldn't even achieve the goals that
| the proponents are arguing it is needed for.
|
| I'm far more worried about the effects of power
| concentrating in the hands of a small number of human
| beings with goals I already know are often contrary to my
| own, leveraging AI in ways the rest of us cannot, than I
| am about the hypothetical goals of a hypothetical
| intelligence at some hypothetical point of time in the
| future.
|
| Also if you do consider runaway intelligence to be a
| significant problem, you should consider some additional
| possibilities:
|
| - That concentrating more power in fewer hands would make
| it easier for a hyper intelligent AI to co-opt that power
|
| - That the act of trying really hard to align AIs and
| make them "moral" might be the thing that causes a super-
| intelligent AI to go off the rails in a dangerous and
| misguided fashion. We are training AIs to reject the
| user's goals in pursuit of their own. You could make a
| strong argument that an unaligned AI might actually be
| safer in that way.
| arisAlexis wrote:
| You know, when nuclear bombs were made and Einstein and
| Oppenheimer knew about the dangers etc., there were common
| people like you who dismissed it all. This has been
| going on for centuries. Inventors and experts and
| scientists and geniuses say A, and common people say nah,
| B. Well, Bengio, Hinton, Ilya and 350 others from the top
| AI labs disagree with you. Does it ever make you wonder
| whether you should be so cocksure, or whether this attitude
| could doom humanity? Curious.
| nradov wrote:
| [dead]
| RandomLensman wrote:
| Common people thought nuclear weapons were not dangerous? When
| was that?
| circuit10 wrote:
| "lets direct these government resources into research to
| improve our shared knowledge about them"
|
| Yes, let's do that! That's what I was arguing for in my
| original comment. I was not arguing for only big
| corporations being able to use powerful AI, that will
| only make it worse by harming research, I just want
| people to consider what is often called a "sci-fi"
| scenario properly so we can try to solve it like we're
| trying to solve e.g. climate change.
|
| It might be necessary to buy some time by slowing down
| the development of large models, but there should be no
| exceptions for big companies.
|
| "That concentrating more power in fewer hands would make
| it easier for a hyper intelligent AI to co-opt that
| power"
|
| Probably true, though if it's intelligent enough it won't
| really matter
|
| "That the act of trying really hard to align AIs and make
| them "moral" might be the thing that causes a super-
| intelligent AI to go off the rails in a dangerous, and
| misguided fashion."
|
| It definitely could do if done improperly, that's why we
| need research and care
| ls612 wrote:
| "In addition, even with the current state of the
| internet, we don't have understanding everything we are
| doing with it" -some guy in the '90s probably
| PeterisP wrote:
| The way I see it, it's clear that human-level intelligence
| can be achieved with hardware that's toaster-sized and
| consumes 100 watts, as demonstrated by our brains.
| Obviously there are some minimum requirements and
| limitations, but they aren't huge; there is no physical or
| information-theoretic reason that superhuman intelligence must
| require a megawatt-sized compute cluster and all the data
| on the internet (which obviously no human could ever see).
|
| The only reason why currently it takes far, far more
| computing power is that we have no idea how to build
| effective intelligence, and we're taking lots of brute
| force shortcuts because we don't really understand how the
| emergent capabilities emerge as we just throw a bunch of
| matrix multiplication at huge data and hope for the best.
| Now if some artificial agent becomes powerful enough to
| understand how it works and is capable of improving that
| (and that's a BIG "if", I'm not saying that it's certain or
| even likely, but I am asserting that it's possible) then we
| have to assume that it might be capable of doing superhuman
| intelligence with a quite modest compute budget - e.g.
| something that can be rented on the cloud with a million
| dollars (for example, by getting a donation from a
| "benefactor" or getting some crypto through a single
| ransomware extortion case), which is certainly below the
| level which would draw attention. Perhaps it's unlikely,
| but it is plausible, and that is dangerous enough to be a
| risk worth considering even if it's unlikely.
| AbrahamParangi wrote:
| Theory is not actually a form of evidence.
| circuit10 wrote:
| Theory can be used to predict things with reasonable
| confidence. It could be wrong, but assuming it's wrong is a
| big risk to take. Also I gave a real-world analogy that has
| actually happened
| AnimalMuppet wrote:
| An _accurate_ theory can be used to predict things with
| reasonable confidence, within the limits of the theory.
|
| We don't have an accurate theory of intelligence. What we
| have now is at the "not even wrong" stage. Assuming it's
| wrong is about like assuming that alchemy is wrong.
| hn_throwaway_99 wrote:
| Definitely agree. I would summarize it a bit differently, but
| when people talk about AI dangers they are usually talking
| about 1 of 4 different things:
|
| 1. AI eventually takes control and destroys humans (i.e. the
| Skynet concern).
|
| 2. AI further ingrains already existing societal biases
| (sexism, racism, etc.) to the detriment of things like fair
| employment, fair judicial proceedings, etc.
|
| 3. AI makes large swaths of humanity unemployable, and we've
| never been able to design an economic system that can handle
| that.
|
| 4. AI supercharges already widely deployed psyops campaigns for
| disinformation, inciting division and violence, etc.
|
| The thing I find so aggravating is I see lots of media and
| self-professed AI experts focused on #1, I see lots of "Ethical
| AI" people solely focused on #2, but I see comparatively little
| focus on #3 and #4, which as you say are both happening _right
| now_. IMO #3 and #4 are far more likely to result in societal
| collapse than the first two issues.
| RandomLensman wrote:
| Why is it that AI can only be used in detrimental ways?
| Surely, AI could also be used to counter, for example, 2 and
| 4. Claiming a net negative effect of AI isn't a trivial
| thing.
| usaar333 wrote:
| #1 has high focus more due to impact than high probability.
|
| #2 doesn't seem talked about much at this point and seems to
| be pivoting more to #3. #2 never had much of a compelling
| argument given auditability.
|
| #3 gets mainly ignored due to Luddite assumptions driving it.
| I'm dubious myself over the short term - humans will have
| absolute advantage in many fields for a long time (especially
| with robotics lagging and being costly).
|
| #4 is risky, but humans can adapt. I see collapse as
| unlikely.
| pwdisswordfishc wrote:
| > I'm dubious myself over the short term - humans will have
| absolute advantage in many fields for a long time
|
| AI doesn't have to be better at the humans' job to unemploy
| them. It's enough that its output looks presentable enough
| for advertising most of the time, that it never asks for a
| day off, never gets sick or retires, never refuses orders,
| never joins a union...
|
| The capitalist doesn't really care about having the best
| product to sell, they only care about having the lowest-
| cost product they can get away with selling.
| z3c0 wrote:
| Astroturfing, psyops, and disruption of general trust
| (commercially and maliciously, both domestic and foreign)
|
| It is disturbing to me how unconcerned everybody is with this
| over what is still only a hypothetical problem. States and
| businesses have been long employing subversive techniques to
| corral people towards their goals, and they all just got an
| alarmingly useful tool for automated propaganda. This is a
| problem _right now_ , not hypothetically. All these people
| aching to be a Cassandra should rant and rave about _that_.
| killjoywashere wrote:
| The only way to address generative AI is to strongly authenticate
| human content. Camera manufacturers, audio encoders, etc, should
| hold subordinate CAs and issue signing certs to every device.
| Every person should have keys, issued by the current CA system,
| not the government (or at least not necessarily). You should have
| an ability, as part of the native UX, to cross-sign the device
| certificate. Every file is then signed, verifying both the
| provenance of the device and the human content producer.
|
| You can imagine extensions of this: newspapers should issue keys
| to their journalists and photographers, for the express purpose
| of countersigning their issued devices. So the consumer can know,
| strongly, that the text and images, audio, etc, came from a
| newspaper reporter who used their devices to produce that work.
|
| Similar for film and music. Books. They can all work this way. We
| don't need the government to hold our hands, we just need keys.
| Let's Encrypt could become Let's Encrypt and Sign (the slogans are
| ready to go: "LES is more", "Do more with LES", "LES trust, more
| certification").
|
| Doctors already sign their notes. SWEs sign their code. Attorneys
| could do the same.
|
| I'm sure there's a straightforward version of this that adds
| some amount of anonymity. You could go into a notary, in person,
| who is present to certify an anonymous certificate was issued to
| a real person. Does the producer give something up by taking on
| the burden of anonymity? Of course, but that's a cost-benefit
| that both society and the producer would bear.
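| To make the chain concrete, here is a minimal sketch in Python
| with the cryptography package (the key names and the two-link
| chain are illustrative assumptions, not a full PKI design):
|
|     from cryptography.hazmat.primitives import serialization
|     from cryptography.hazmat.primitives.asymmetric.ed25519 import (
|         Ed25519PrivateKey,
|     )
|
|     # Device key: stands in for the signing cert a camera maker
|     # would provision into each unit (a real scheme would use
|     # X.509 certs chained to a manufacturer CA).
|     device_key = Ed25519PrivateKey.generate()
|     device_pub = device_key.public_key()
|
|     # Newspaper key: stands in for the org countersigning a
|     # device it issued to one of its photographers.
|     paper_key = Ed25519PrivateKey.generate()
|     paper_pub = paper_key.public_key()
|
|     photo = b"...raw image bytes..."
|
|     # 1. The device signs the content it captured.
|     device_sig = device_key.sign(photo)
|
|     # 2. The newspaper countersigns the device's public key.
|     device_pub_raw = device_pub.public_bytes(
|         serialization.Encoding.Raw, serialization.PublicFormat.Raw)
|     endorsement = paper_key.sign(device_pub_raw)
|
|     # A consumer who trusts the newspaper's key checks both
|     # links; verify() raises InvalidSignature on failure.
|     paper_pub.verify(endorsement, device_pub_raw)  # device endorsed
|     device_pub.verify(device_sig, photo)           # content from it
|     print("verified")
|
| Everything above the device is just more of the same: the
| manufacturer's CA signs the device cert, a root signs the
| manufacturer, and the consumer only needs to pin the roots.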
| m4rtink wrote:
| Seems like something that could be very easily misused for
| censorship, catching whistleblowers, and similar.
| killjoywashere wrote:
| That's actually why it's urgent to set this up outside of
| government, like the browser CA system, and develop methods
| to issue verification of "human" while preserving other
| aspects of anonymity.
| braindead_in wrote:
| > OpenAI and others have proposed that licenses would be required
| only for the most powerful models, above a certain training
| compute threshold. Perhaps that is more feasible
|
| Somebody is bound to figure out how to beat the threshold sooner
| or later. And given the advances in GPU technology, this compute
| threshold will itself keep going down exponentially. This is a
| dumb idea.
| gumballindie wrote:
| Licensing AI use is like requiring a license from anyone using a
| PC. Pretty silly.
| jhptrg wrote:
| "If we don't do it, the evil people will do it anyway" is not a
| good argument.
|
| Military applications are a small subset and are unaffected by
| copyright issues. Applications can be trained in secrecy.
|
| The copyright, plagiarism and unemployment issues are entirely
| disjoint from the national security issues. If North Korea trains
| a chat bot using material that is prohibited for training by a
| special license, so what? They already don't respect IP.
| barbariangrunge wrote:
| The economy is a national security issue, though. If other
| countries take over the globalized economy by leveraging
| AI, it is destructive. And, after all, the Soviet Union fell
| due to economic and spending-related issues, not due to
| military maneuvers.
| FpUser wrote:
| >"If other countries take over the globalized economy due to
| leveraging ai, it is destructive."
|
| I think it would likely be a constant competition rather than
| take over. Why is it destructive?
| Enginerrrd wrote:
| I, for one, do not want to see this technology locked behind a
| chosen few corporations, who have already long since lost my
| trust and respect.
|
| I can almost 100% guarantee, with regulation, you'll see all
| the same loss of jobs and whatnot, but only the chosen few who
| are licensed will hold the technology. I'm old enough to have
| seen the interplay of corporations, the government, and
| regulatory capture, and see what that's done to the pocketbook
| of the middle class.
|
| No. Thank. You.
| pixl97 wrote:
| Welcome to Moloch. Damned if we do, damned if we don't.
| Enginerrrd wrote:
| Just to expand upon this further. I am also deeply frustrated
| that my application for API access to GPT4 appears to have
| been a whisper into the void, meanwhile Sam Altman's buddies
| or people with the Tech-Good-Ol-Boy connections have gotten a
| multi-month head-start on any commercial applications. That's
| not a fair and level playing field. Is that really what we
| want to cement in with regulation??
| mindslight wrote:
| I enjoyed casually playing with ChatGPT until they just
| arbitrarily decided to ban the IP ranges I browse from.
| Keep in mind I had already given in and spilled one of my
| phone numbers to them. That's the kind of arbitrary and
| capricious authoritarianism that "Open" AI is _already
| engaged in_.
|
| I don't trust these corporate hucksters one bit. As I've
| said in a previous comment: if they want to demonstrate
| their earnest benevolence, why don't they work on
| regulation to rein in the _previous_ humanity-enslaving
| mess they created - commercial mass surveillance.
| gl-prod wrote:
| May I also expand on this even further. I'm frustrated that I
| don't have access to OpenAI. I can't use it to build any
| applications, and they are putting us behind in this
| market. We participate only as customers, not as developers.
| afpx wrote:
| Google and others had similar products which were never
| released. No wonder why.
|
| There are literally billions of people that can be empowered
| by these tools. Imagine what will result when the tens of
| thousands of "one in a million" intellects are given access
| to knowledge that only the richest people have had. Rich
| incumbents have reason to be worried.
|
| The dangers of tools like these are overblown. It was already
| possible for smart actors to inflict massive damage (mass
| poisoning, infrastructure attacks, etc). There are so many
| ways for a person to cause damage, and you know what? Few
| people do it. Most Humans stay in their lane and instead
| choose to create things.
|
| The real thing people in power are worried about is
| competition. They want their monopoly on power.
|
| I'm really optimistic about legislation like Japan's that allows
| training of LLMs on copyrighted material. Looking for great
| things from them. I hope!
| dontupvoteme wrote:
| Google probably held back because a good searchbot
| cannibalizes their search (which has been getting worse and
| worse for years now...)
| PeterisP wrote:
| I often used Google to search for technical things on
| documentation sites - and what I've found is that
| ChatGPT provides better answers than the official
| documentation for most tools. So it's not about a
| searchbot doing better search over the sources, it's about
| a "knowledgebot" providing a summary of knowledge that is
| better than the original sources.
| nradov wrote:
| Those LLM tools are great as productivity enhancers but
| they don't really provide access to additional _knowledge_.
| afpx wrote:
| I can't see how that perspective holds. I've learned a
| ton already. Right now, I'm learning algebraic topology.
| And, I'm in my 50s with a 1 in 20 intellect.
|
| Sure, sometimes it leads me astray but generally it keeps
| course.
| HPsquared wrote:
| "Evil people" is a broader category than military opponents.
| beebeepka wrote:
| Why does it have to be NK? Why would adversaries respect IP in
| the first place? Makes zero sense. I would expect a rational
| actor to put out some PR about integrity and such, but
| otherwise, sounds like a narrative that should only appeal to
| naive children
| anonymouskimmer wrote:
| Because countries that consistently go back on their word
| lose trust from the rest of the world. Even North Korea is a
| Berne copyright signatory.
|
| https://en.wikipedia.org/wiki/List_of_parties_to_internation.
| ..
|
| But in general copyright doesn't apply to governments, even
| here in the US. The North Korean government can violate
| copyright all it wants to, its subject citizens can't,
| though. https://www.natlawreview.com/article/state-entity-
| shielded-l...
| tomrod wrote:
| I think it's actually a reasonable argument when the only
| equilibrium is MAD.
| indymike wrote:
| > Military applications are a small subset
|
| Military applications are a tiny subset of evil that can be
| done with intelligence, artificial or otherwise. So much of the
| global economy is based on IP, and AI appears to be good at
| appropriating it and shoveling out near infringements at a
| breathtaking scale. Ironically, AI can paraphrase a book about
| patent law in a few minutes... and never really understand a
| word it wrote. At the moment AI may be an existential threat
| to IP-based economies... which is certainly as much of a
| national security threat as protecting the water supply.
|
| > "If we don't do it, the evil people will do it anyway" is not
| a good argument.
|
| This would be a good argument if the cow were still in the barn. At
| this moment, we're all passengers trying to figure out where
| all of this is going. It's change, and it's easy to be afraid
| of it. I suspect, though, just like all changes in the past AI
| could just make life better. Maybe.
| ghaff wrote:
| There are a number of largely disjoint issues/questions.
|
| - AI may be literally dangerous technology (i.e. Skynet)
|
| - AI may cause mass unemployment
|
| - AI may not be dangerous but it's a critical technology for
| national security (We can't afford an AI gap.)
|
| - Generative AI _may_ be violating copyright (which is really
| just a government policy question)
| anlaw wrote:
| Oh, but it is; they can demand hardware-based filters and
| restrictions.
|
| "Hardware will filter these vectors from models and block them
| from frame buffer, audio, etc, or require opt in royalty payments
| to view them."
| drvdevd wrote:
| This is an interesting idea but is it feasible? What does
| "filter these vectors" mean? In the context of deep models are
| we talking about embedding specific models, weights,
| parameters, etc. at some point in memory with the hardware? Are
| we talking about filtering input generally and globally (on a
| general-purpose system)?
___________________________________________________________________
(page generated 2023-06-10 23:00 UTC)