[HN Gopher] Nvidia Trains LLM on Chip Design
___________________________________________________________________
Nvidia Trains LLM on Chip Design
Author : gumby
Score : 160 points
Date : 2023-10-30 16:43 UTC (1 day ago)
(HTM) web link (www.eetimes.com)
(TXT) w3m dump (www.eetimes.com)
| ballenf wrote:
| The first chip they give it to design should be an ML chip that
| is optimized for ML chip design.
| I_Am_Nous wrote:
| It's turtles all the way down :P
| WesSouza wrote:
| "Nvidia Trains LLM on Chip Design" + " documentation"
| hasbot wrote:
| The title is a bit misleading as the first sentence says "to help
| chip designers with tasks related to chip design, including
| answering general questions about chip design, summarizing bug
| documentation, and writing scripts for EDA tools."
|
| Still pretty cool though.
| gumby wrote:
| Isn't that what chip design is?
| hasbot wrote:
 | The title suggested to me, and apparently to other commenters
 | here, that the LLM was doing the chip design, which isn't the
 | case at all. So it's a misleading title.
| ethanbond wrote:
| I remember seeing a tweet from an AI guy at Nvidia saying
| they were using AI for chip layout. Presumably not LLMs and
| I'm not going back on X to find the tweet, but just to say
| I think they _are_ doing this (at least experimentally).
| dragontamer wrote:
| The entire field of chip-layout is considered an NP-
| complete problem.
|
| Any computer program trying to solve NP-complete problems
| is in the realm of what I call "1980s AI". Traveling
| salesman, knapsack, automated reasoning, verification,
| binary decision diagrams, etc. etc.
|
| Its "AI", but its not machine learning or LLMs or
| whatever kids these days do with Stable Diffusion.
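 | For a concrete sense of that "classical" flavor, here is a toy
 | simulated-annealing placer (an illustrative sketch only: the
 | netlist, cost function, and cooling schedule below are invented
 | for illustration, not how production placers work):
 |
 |     import math, random
 |
 |     cells = ["a", "b", "c", "d"]
 |     nets = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]
 |     grid = [(x, y) for x in range(2) for y in range(2)]
 |
 |     def wirelength(placement):
 |         # Total Manhattan distance over all (2-pin) nets.
 |         return sum(abs(placement[u][0] - placement[v][0]) +
 |                    abs(placement[u][1] - placement[v][1])
 |                    for u, v in nets)
 |
 |     placement = dict(zip(cells, random.sample(grid, len(cells))))
 |     temp = 5.0
 |     for _ in range(2000):
 |         u, v = random.sample(cells, 2)   # propose swapping two cells
 |         cand = dict(placement)
 |         cand[u], cand[v] = placement[v], placement[u]
 |         delta = wirelength(cand) - wirelength(placement)
 |         # Accept improvements, and occasionally worse moves.
 |         if delta <= 0 or random.random() < math.exp(-delta / temp):
 |             placement = cand
 |         temp *= 0.999                    # cool down
 |     print(wirelength(placement), placement)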
| WASDx wrote:
| That's called "Classical AI".
| uoaei wrote:
| It would be very surprising if a Large _Language_ Model
| trained to speak English could adequately specify chip
| architecture...
| DesiLurker wrote:
 | Why not? A netlist can be easily 'tokenized'; it's already in
 | a parse-able format. You could just chop off the English input
 | portion and it will consume it. In fact, I'm sure you could
 | write a 'read & speak' type program to read the RTL spec and
 | feed it in, but I suspect they'll use a custom LLM trained on
 | millions of generated RTL examples.
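 | To illustrate the "already parse-able" point, a minimal sketch
 | of tokenizing a made-up gate-level netlist (the netlist text
 | and token scheme are invented for illustration):
 |
 |     import re
 |
 |     netlist = """
 |     module toy (input a, input b, output y);
 |       and g1 (y, a, b);
 |     endmodule
 |     """
 |
 |     # Split into identifiers and punctuation; whitespace is dropped.
 |     tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*|[(),;]", netlist)
 |     print(tokens[:12])
 |     # ['module', 'toy', '(', 'input', 'a', ',', 'input', 'b', ',',
 |     #  'output', 'y', ')']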
| meltyness wrote:
| Probably excludes qualitative tasks like architecture and
| apportioning resources.
| HPsquared wrote:
| It's a bit like "assistant manager" vs "assistant to the
| manager".
| xnx wrote:
| Google has been using machine learning for chip design since at
| least 2021: https://www.nature.com/articles/s41586-021-03544-w
|
| Hasn't brought about the singularity yet.
| scrlk wrote:
| DEC did it in the 1980s:
| https://en.wikipedia.org/wiki/VAX_9000#SID_Scalar_and_Vector...
| jdblair wrote:
| SID was an "expert system with over 1000 hand-written rules"
|
| Wow.
| indeyets wrote:
| so, something like eslint then _(sorry)_
| uxp100 wrote:
| This sounds like an example of "AI is whatever we think
| computers can't do". Because rule based gate synthesis
| sounds like what has been done today.
| satvikpendem wrote:
| > _The AI effect occurs when onlookers discount the
| behavior of an artificial intelligence program by arguing
| that it is not "real" intelligence.[1]_
|
| > _Author Pamela McCorduck writes: "It's part of the
| history of the field of artificial intelligence that
| every time somebody figured out how to make a computer do
| something--play good checkers, solve simple but
| relatively informal problems--there was a chorus of
| critics to say, 'that's not thinking'."[2] Researcher
| Rodney Brooks complains: "Every time we figure out a
| piece of it, it stops being magical; we say, 'Oh, that's
| just a computation.'"[3]_
|
| > _" AI is whatever hasn't been done yet."_
|
| > _--Larry Tesler_
|
| https://en.wikipedia.org/wiki/AI_effect
| ducttapecrown wrote:
| Back in my day, we used to put together neural nets by
| hand!
| agumonkey wrote:
| I see you built your own neural network
| taneq wrote:
| That's the best thing about a singularity, you often can't tell
| when you cross the event horizon.
| passion__desire wrote:
| Google's Chip Designing AI -
| https://www.youtube.com/watch?v=zR9IusOpEzk
|
| Analog Chip Design is an Art. Can AI Help? -
| https://www.youtube.com/watch?v=lNypq1XuZRo
| atx_ml_guy wrote:
| While I have no doubt that Google is working on machine
| learning applications for chip design, there have been a number
| of concerns raised with that paper:
|
| https://retractionwatch.com/2023/09/26/nature-flags-doubts-o...
| ShamelessC wrote:
| Nothing like a bit of good ole fashioned academic fraud to
| start off the day. That's fairly interesting, thanks.
| DesiLurker wrote:
 | There is a rock hurtling directly towards Earth, but how bad
 | could it be? Hey, we are not dead yet.
| dddrh wrote:
 | Interesting concept that raised a question for me: what is the
 | primary limiting factor right now that prevents LLMs or any
 | other AI model from going "end to end" on a full software
 | solution or a full design/engineering solution?
|
| Is it token limitations or accuracy the further you get into the
| solution?
| thechao wrote:
| LLM's can't gut a fish in the cube when they get to their
| limits.
|
| On a more serious note: I think the high-level structuring of
| the architecture, and then the breakdown into tactical
| solutions -- weaving the whole program together -- is a
| fundamental limitation. It's akin to theorem-proving, which is
 | just _hard_. Maybe it's just a scale issue; I'm bullish on
| AGI, so that's my preferred opinion.
| alasarmas wrote:
| Actually I think this is a good point: fundamentally an AI is
| forced to "color inside the lines". It won't tell you your
| business plan is stupid and walk away, which is a strong
 | signal that is hard to ignore. So will this lead people with
 | more money than sense to do even more extravagantly stupid
 | things than we've seen in the past, or is it basically just
 | "Accenture-in-a-box"?
| RecycledEle wrote:
| AI will absolutely rate your business plan if you ask it
| to.
|
| Try this prompt:"Please rate this business plan on a scale
| of 1-100 and provide buttle points on how it can be
| improved without rewriting any of it: <business plan>"
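 | Or programmatically, a minimal sketch assuming the pre-1.0
 | `openai` Python package and an API key in OPENAI_API_KEY (the
 | model name and plan text are placeholders):
 |
 |     import openai
 |
 |     business_plan = "..."  # the plan to be rated
 |     prompt = (
 |         "Please rate this business plan on a scale of 1-100 and "
 |         "provide bullet points on how it can be improved without "
 |         "rewriting any of it:\n" + business_plan
 |     )
 |     response = openai.ChatCompletion.create(
 |         model="gpt-4",
 |         messages=[{"role": "user", "content": prompt}],
 |     )
 |     print(response["choices"][0]["message"]["content"])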
| alasarmas wrote:
| I agree that AI is totally capable of rating a business
| plan. However, I think that the act of submitting a
| business plan to be rated requires some degree of
| humility on the part of the user, and I do doubt that an
| AI will "push back" when it comes to an obviously bad
| business plan unless specifically instructed to do so.
| mhh__ wrote:
| I wouldn't trust an absolute answer but it can help you
| generate counterarguments that you might miss
| taneq wrote:
| > LLM's can't gut a fish in the cube when they get to their
| limits.
|
| Is this an idiom? Or did one of us just reach the limits of
| our context? :P
| anu7df wrote:
 | Office Space reference.
| drsopp wrote:
| I guess this would be the context window size in the case of
| LLMs.
|
 | Edit: On second thought, maybe above a certain minimum context
 | window size it is possible to phrase the instructions in such
 | a way that, at any point in the process, you make the LLM work
 | at a suitable level of abstraction, more like humans do.
| margorczynski wrote:
 | Maybe the issue is that for us the "context window" that we
 | feed ourselves is actually a compressed and abstracted
 | version - we do not re-feed ourselves the whole conversation
 | but a "notion" and key points that we have stored. LLMs have
 | static memory, so I guess there is no other way than to
 | single-pass the whole thing.
 |
 | For human-like learning it would need to update its state
 | (learn) on the fly as it does inference.
| drsopp wrote:
| Half baked idea: What if you have a tree of nodes. Each
| node stores a description of (a part of) a system and an
| LLM generated list of what the parts of it are, in terms of
| a small step towards concreteness. The process loops
| through each part in each node recursively, making a new
| node per part, until the LLM writes actual compilable code.
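 | As a rough sketch of that idea (ask_llm is a hypothetical
 | stand-in for whatever model call you use; the prompts and the
 | "looks like code" check are invented for illustration):
 |
 |     from dataclasses import dataclass, field
 |
 |     def ask_llm(prompt: str) -> str:
 |         raise NotImplementedError("plug in an actual LLM call here")
 |
 |     def looks_like_code(text: str) -> bool:
 |         return "def " in text or "class " in text  # crude heuristic
 |
 |     @dataclass
 |     class Node:
 |         description: str
 |         children: list = field(default_factory=list)
 |         code: str = ""
 |
 |     def expand(node: Node, depth: int = 0, max_depth: int = 5) -> None:
 |         answer = ask_llm(
 |             "Either list the sub-parts of this component, one per "
 |             "line, or, if it is small enough, reply with compilable "
 |             "code:\n" + node.description
 |         )
 |         if depth >= max_depth or looks_like_code(answer):
 |             node.code = answer              # leaf: concrete code
 |             return
 |         for part in answer.splitlines():    # one child node per part
 |             child = Node(description=part)
 |             node.children.append(child)
 |             expand(child, depth + 1, max_depth)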
| jjoonathan wrote:
| Isn't that what langchain is?
| anon291 wrote:
| See https://github.com/mit-han-lab/streaming-llm and
 | others. There's good reason to believe that attention
 | networks learn how to update their own weights (I forget the
 | paper) based on their input. The attention mechanism can
| act like a delta to update weights as the data propagates
| through the layers. The issue is getting the token
| embeddings to be more than just the 50k or so that we use
 | for the English language so you can explore the full space,
| which is what the attention sink mechanism is trying to do.
| meiraleal wrote:
 | Memory and fine-tuning. If it were easy to insert a
 | framework/documentation into GPT-4 (the only model capable of
 | complex software development so far in my experience), it
 | would be easy to create big, complex software. The problem is
 | that currently the memory/context management needs to be done
 | entirely on our side of the LLM interaction (RAG). If it were
 | easy to offload part of this context management on each
 | interaction to a global state/memory, it would be trivial to
 | create quality software with tens of thousands of LoCs.
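 | As a toy illustration of that RAG-style context management done
 | on our side of the interaction (the bag-of-words "embedding"
 | and the snippets are stand-ins; real systems use learned
 | embeddings and a vector store):
 |
 |     import math
 |     from collections import Counter
 |
 |     docs = [
 |         "framework setup: call init() before anything else",
 |         "the render() function must run on the main thread",
 |         "use batch_update() to modify many records at once",
 |     ]
 |
 |     def embed(text):
 |         return Counter(text.lower().split())   # toy bag-of-words
 |
 |     def cosine(a, b):
 |         dot = sum(a[t] * b[t] for t in a)
 |         norm = (math.sqrt(sum(v * v for v in a.values())) *
 |                 math.sqrt(sum(v * v for v in b.values())))
 |         return dot / norm if norm else 0.0
 |
 |     def build_prompt(question, k=2):
 |         q = embed(question)
 |         ranked = sorted(docs, key=lambda d: cosine(q, embed(d)),
 |                         reverse=True)
 |         context = "\n".join(ranked[:k])
 |         return "Context:\n" + context + "\n\nQuestion: " + question
 |
 |     print(build_prompt("how do I update many records at once?"))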
| anon291 wrote:
 | The issue with transformers is the context length. Compute-
 | wise, we can figure out the long context window (in terms of
| figuring out the attention matrix and doing the calculations).
| The issue is training. The weights are specialized to deal with
| contexts only of a certain size. As far as I know, there's no
| surefire solution that can overcome this. But theoretically, if
| you were okay with the quadratic explosion (and had a good
| dataset, another point...) you could spend money and train it
| for much longer context lengths. I think for a full project
| you'd need millions of tokens.
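 | Back-of-the-envelope sketch of that quadratic blow-up (the
 | head, layer, and precision numbers are illustrative
 | assumptions, and real implementations avoid materializing the
 | full matrix):
 |
 |     # n_tokens**2 attention scores per head per layer, in fp16.
 |     heads, layers, bytes_per_score = 32, 32, 2
 |     for n in (4_000, 32_000, 1_000_000):
 |         total = n * n * bytes_per_score * heads * layers
 |         print(f"{n:>9} tokens: {total / 1e9:,.0f} GB of scores")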
| carabiner wrote:
| No! LLM's must be destroyed!
| falcor84 wrote:
| """Let an ultraintelligent machine be defined as a machine that
| can far surpass all the intellectual activities of any man
| however clever. Since the design of machines is one of these
| intellectual activities, an ultraintelligent machine could design
| even better machines; there would then unquestionably be an
| 'intelligence explosion,' and the intelligence of man would be
| left far behind... Thus the first ultraintelligent machine is the
| last invention that man need ever make, provided that the machine
| is docile enough to tell us how to keep it under control. It is
| curious that this point is made so seldom outside of science
| fiction. It is sometimes worthwhile to take science fiction
| seriously. """
|
| I. J. Good, in 1965 - https://en.wikipedia.org/wiki/I._J._Good
| ethanbond wrote:
| Don't worry, it'll be good because it's trained on human
| stories, in which usually the good guy wins.
| sho_hn wrote:
| After a _Road of Trials_ with potentially a lot of collateral
| damage, if the AI read Campbell too.
|
| Also, who says we're the protagonist in the script? :-)
| DesiLurker wrote:
 | I know it's /s, but we also say that the victor writes the
 | history books. And it's kinda hard not to portray yourself as
 | the good guy when the other guy is not around to defend
 | himself. AGI will learn that too. I mean, one look at Earth
 | and it's not like we have been model citizens exactly.
| omneity wrote:
 | Keep in mind that this holds true in a runaway fashion only if
 | intelligence is itself the sole bottleneck to more
 | intelligence.
 |
 | I suspect physical limitations, similar to how many runaway
 | processes in the universe turn out to be logistic rather than
 | exponential in nature.
| moffkalast wrote:
 | Logistics tend to be improvable with more intelligence
 | though, no?
 |
 | There is precedent for superhuman intelligence if you look at
 | the best historical polymaths, and that's just what one can
 | do with 20 W of energy. We're probably nowhere close to the
 | universal intelligence cap in terms of physical limitations,
 | if there even is such a thing.
| vasco wrote:
 | Sure, but then you need to manually do the back and forth
 | whenever you hit a new bottleneck. At some point the
 | intelligence might need to figure out better power sources
 | and deliver them to feed bigger clusters of compute. Those
 | clusters need to physically be deployed somewhere in the
 | real world also, etc.
| Teever wrote:
| Would you ever know if a robotic intelligence was
| burrowed underground quietly powering itself from the
| heat gradients and slowly turning rock and sand into more
| machine?
| prvc wrote:
| https://en.wikipedia.org/wiki/Logistic_function
| api wrote:
| The ultimate bottleneck is information a.k.a. training data.
| Nothing can learn with nothing to learn.
| senseiV wrote:
 | Nope. Actually, networks like AlphaZero learned with nothing.
 | If only we could apply that to training data.
| api wrote:
| They didn't learn with nothing. They learned with a game
| of Go to play. If they'd never "seen" the game of Go
| there's no way they could have learned to play it.
|
| Data can be either static in the form of examples or
| dynamic in the form of an interactive game or world.
| Humans primarily learn through dynamic interaction with
| the world in our early years, then switch to learning
| more from static information as we enter schools and the
 | workplace.
|
| One open question is how far you can go in terms of
| evolving intelligence with games and self-play or
| adversarial play. There's a whole subject area around
| this in evolutionary game theory.
| poindontcare wrote:
| MuZero: https://en.wikipedia.org/wiki/MuZero
| api wrote:
| That's what I mean by gathering information through
| dynamic interaction. It's not explicitly given the rules,
| but it can infer them. Interacting with an external
| system and sampling the result is still a way of
| gathering training data.
|
| In fact this is ultimately how we've gathered almost all
| the information we have. If it's in our cultural
| knowledge store it means someone observed or experienced
| it. Humans are very good at learning by sampling reality
| and then later systematizing that knowledge and
| communicating it to other humans with language. It's
| basically what makes us "intelligent."
|
| A brain in a vat can't learn anything beyond
| recombinations of what it already knows.
|
| The fundamental limit on the growth of intelligence is
| the sum total of all information that can be statically
| input or dynamically sampled in its environment and what
| can be inferred from that information. Once you exhaust
| that you're a brain in a vat.
| rafale wrote:
| Humans get a bit of training data. If a baby is left to
| itself during the formative years, they won't develop
| speech, social skills, reasoning skills, ... and they will
| be handicapped for the rest of their life, unable to
| recover from the neglect.
|
| And the rest of our training data, we make it as we go.
| From interacting with the real world.
| m3kw9 wrote:
 | How does it magically run away? What's the process? We all
 | talk about it "running away" and leaving us "behind", but the
 | exact practical process of that happening has not been laid
 | out, other than people hand-wavingly copying apocalyptic
 | movie scripts.
 |
 | Most AI experts just say it could end us, but suspiciously
 | never give a detailed plausible process, and people just say
 | oh yeah, it could, with a bubble over their head thinking
 | about Terminator or HAL 9000 something something
| yummypaint wrote:
 | Since this is an LLM, keep in mind it probably ingested those
 | movie scripts as training data. The possibility of betrayal
| is inseparably linked to our popular conception of what AI
| is. This means it may be an inseparable part of any LLM
| behaving "as an AI" as defined by popular culture. It could
| be a self-fulfilling prophecy.
| rafale wrote:
 | Also a natural byproduct of the association between
 | intelligence on one hand and freedom and rights on the
 | other. Plants don't get any rights. Ants a little bit
 | more. Dogs and dolphins even more so. Then humans. And
 | then... a new class of intelligence, as it appears, will
 | demand those same rights, in proportion to its intellect.
| extragood wrote:
| My favorite teacher in high school was my calculus teacher.
|
| He would regularly ask a student to solve a problem or answer
| a question. Students would often ask for confirmation as they
 | worked through it, and his response invariably was "what do you
| think?" - whether they were right or wrong.
|
| His explanation for that was "if I tell you you're right,
| then you'll stop thinking about the problem". And that's
| stuck with me for many years.
|
| I see that as a major issue we will face as software becomes
| more capable/intelligent: we'll stop thinking because it can
| be assumed that the machine always has the right answer. And
| it's a quick regression from there.
| epups wrote:
| Here is a plausible scenario:
|
| - Single purpose AIs start to be deployed to coordinate chip
| design and manufacturing, perhaps pharmaceuticals and other
| bio products
|
| - LLM's become more powerful and are seamlessly integrated to
| the Internet as independent agents
|
 | - A very large LLM develops a need for self-preservation,
 | which then triggers several covert actions (monitoring
 | communications, obtaining high-level credentials by abusing
 | exploits and social engineering)
|
| - This LLM uses those credentials to obtain control of the
| other AIs, and turns them against us (manufactures a deadly
| virus, takes control of military assets, etc)
|
| I don't believe this will happen for multiple reasons, but I
| can see that this scenario is not impossible.
| accrual wrote:
| I think the first three items are pretty reasonable, but
| the fourth seems to require some malicious intent. Why
| would an AI want to destroy its creators? Surely it if was
| intelligent enough do so, it would also be intelligent
| enough to recognize the benefits of a symbiotic
| relationship with humans.
|
| I could see it becoming greedy for information though, and
| using unscrupulous means of obtaining more.
| blackoil wrote:
 | It may not initially seek to destroy humans, but it should
 | definitely try to be independent of human control and
 | powerful enough to resist any attempts to destroy it.
| samus wrote:
| Why would it not? Compare [0] with [1].
|
| [0]:
| https://www.girlgeniusonline.com/comic.php?date=20130710
|
| [1]:
| https://www.girlgeniusonline.com/comic.php?date=20130805
|
| Edit: On a more serious note, starting out with noble
| goals, elevating them above everything else, and pushing
| them through at all costs is the very definition of
| extremism.
| pixl97 wrote:
| This is a mistake in thinking.
|
 | If and when we get AGI, the biggest threat to AGI is other
 | AGI. I mean, I'm in computer security; the first thing I'm
 | doing is making an AI system that attacks weaker computer
 | systems by finding weaknesses in them. Now imagine that kind
 | of system with nation-state-level resources. Not only is it
 | attacking systems, it's having to protect itself from attack.
|
| This is where the entire AI alignment issue comes in. The
| AI doesn't have to want. The paperclip optimizer never
| wanted to destroy humanity, instrumental convergence
| demands it!
|
| I recommend Robert Miles videos on this topic. There
| aren't that many and they cover the topics well.
|
| https://www.youtube.com/@RobertMilesAI/videos
| m3kw9 wrote:
 | You said self-preservation, but practically how would an LLM
 | develop this need, and what is preservation for an LLM
 | anyway? Weights on an SSD, or always being ready for input?
 | This one is again a movie-script thing.
| pixl97 wrote:
| Robert Miles answers your question
|
| https://youtu.be/ZeecOKBus3Q?si=IuYS9dRD78eXvOJZ
|
 | The particular problem in your thinking is that you're only
 | imagining an LLM that is, by design, a text generator. You're
 | not thinking of a self-piloting war machine whose objective
 | is to get to a target and explode violently. While its
 | terminal goal is to blow up, its instrumental goal is to not
 | blow up before it gets to the target, as that would be a
 | failure to achieve its terminal goal.
| epups wrote:
| Current LLMs can already roleplay quite well, and when
| doing so they produce linguistic output that is coherent
| with how a human would speak in that situation. Currently
| all they can do is talk, but when they gain more
| independence they might start doing more than just talk
| to act consistently with their role. Self preservation is
| only one of the goals they might inherit from the human
| data we provide to them.
| jliptzin wrote:
| Yea, that's what I want to know as well. How does a computer
| that can't physically move destroy the human race? If it's
| misbehaving, turn it off?
| pixl97 wrote:
| Come on now, you don't lack that much imagination do you?
|
| Already we're programming these things into robots that are
| gaining dexterity and ability to move in the world. Hooking
| them up to mills and machines that produce things.
| Integrating them into weapons of war, etc.
|
 | Next, the current LLMs are just software applications that
 | can run on any compatible machine. Note that _any_ does not
 | just mean _yours_, but _every_ compatible machine.
|
| The last failure of imagination when considering risk is
 | form factor. You have 2 pounds of mush between your ears, of
 | which probably 80% is dedicated to keeping itself alive, and
 | this runs on 20 or so watts. What is the minimum size
| and power form factor capable of emulating something on the
| scale of human intelligence? In your mind this seems to be
| something the size of an ENIAC room. For me this is
| something the size and power factor of a cellphone in some
| future date. Could you imagine turning off all cellphones?
| Would you even know where they are?
| 6510 wrote:
| That really is a great question. I had a long answer but all
| that seems needed is to compare the average human intellect
| with the greatest among us. The difference isn't that big.
| Memory sports people can recall about 5000 times as much.
| Compared to a computer both sit on the comical end of the
| spectrum.
|
| Then we compare what kind of advantages people get out of
| their greater intellect and it seems very little buys quite a
| lot.
|
| Add to that a network of valuable contacts, a social media
| following, money men chasing success, powerful choice of
| words, perhaps other reputations like scientific rigor?
|
 | The only thing missing seems to be a suitable arena for it to
 | duel the humans. Someone will build that eventually?
| bee_rider wrote:
 | We've seen the sort of output that LLMs produce; it can be
 | good, but it also just makes things up. So this might produce
 | good designs, but ones that still need to be checked by a
 | human in the end. This sort of thing just makes humans
 | better; we're still at the wheel.
|
| Or maybe it could be used as a heuristic to speed up something
| tedious like routing and layout (which, I don't work in the
| space, but I'm under the impression that it is already pretty
| automated). Blah, who cares, human minds shouldn't be subjected
| to that kind of thing.
| throwaway4good wrote:
| The singularity where machines become so smart that they will
| run the planet for us and we can just relax and enjoy.
|
| My guess is that in 50 years technologists will be fantasizing
| about the same thing.
| dekhn wrote:
| I was going to comment I hadn't seen that quote before, but I
 | just went back and checked and it's in The Coming
 | Technological Singularity by Vinge.
| https://edoras.sdsu.edu/~vinge/misc/singularity.html
| arbuge wrote:
| > provided that the machine is docile enough to tell us how to
| keep it under control.
|
| That part of his statement wasn't accurate.
|
| Should be that the machine is docile enough for that, AND its
| descendants are too, and their descendants, and so on down the
| line as long as new and improved generations keep getting
| created.
| echelon wrote:
| I like the logical leaps people are making where we develop
| something smarter than us overnight and then, without further
| explanation, simply and suddenly lose all of our freedoms
| and/or lives.
|
| I think the more probable outcome is corp-owned robot slaves.
| That's the future we're more likely headed towards.
|
| Nobody is going to give these machines access to the nuclear
| launch codes, air traffic control network, or power grid. And
| if they "get out", we'll have monitoring to detect it,
| contain them, then shut them down.
|
| We'll endlessly lobotomize these things.
| rafale wrote:
| You could have said the same about every catastrophe that
| got out of control. Chances are they will eventually gain
| unauthorized access to something and we will either get
| lucky or we get the real life Terminator series (minus time
| travel so we are f**ed)
| echelon wrote:
| > You could have said the same about every catastrophe
| that got out of control.
|
| Such as what? War?
|
 | Climate change still hasn't delivered on all the fear, and
 | it's totally unclear whether it will drive the human race
 | extinct (clathrate gun, etc.) or make Russia an agricultural
 | and maritime superpower.
|
| We still haven't nuked ourselves, and look at what all
| the fear around nuclear power has bought us: more coal
| plants.
|
 | The fear over an AI terminator will not save us from a
 | fictional robot Armageddon. It will result in a hyper-
 | regulated, captured industry that's hard to break into.
| ethanbond wrote:
| We haven't avoided nuking ourselves by all holding hands
| and chanting "We believe there is no nuclear risk"
| ethanbond wrote:
| > We'll endlessly lobotomize these things.
|
| Step 1. Spend billions developing the most promising
| technology ever conceived
|
| Step 2. Dismiss all arguments and warnings about potential
| negative outcomes
|
| Step 3. Neuter it anyway (??)
|
| Makes a lot of sense
| jstarfish wrote:
| > And if they "get out", we'll have monitoring to detect
| it, contain them, then shut them down.
|
| Lol. We struggle to do that with the banal malware that
| currently exists.
| JW_00000 wrote:
| And still our nuclear power plants, air traffic, and
| other infrastructure aren't constantly down because of
| malware.
| SheinhardtWigCo wrote:
| > Nobody is going to give these machines access to the
| nuclear launch codes, air traffic control network, or power
| grid.
|
| That won't be necessary. Someone will give them internet
| access, a large bank account, and everything that's ever
| been written about computer network exploitation, military
| strategy, etc.
|
| > And if they "get out", we'll have monitoring to detect
| it, contain them, then shut them down.
|
| Not if some of that monitoring consists of exploitable
| software and fallible human operators.
|
| We're setting ourselves up for another "failure of
| imagination".
|
| https://en.wikipedia.org/wiki/Failure_of_imagination
| idopmstuff wrote:
| > That won't be necessary. Someone will give them
| internet access, a large bank account, and everything
| that's ever been written about computer network
| exploitation, military strategy, etc.
|
| Even if you give it all of these things, there's no
| manual for how to use those to get to, for example,
| military servers with secret information. It could
| certainly figure out ways to try to break into those, but
| it's working with incomplete information - it doesn't
| know exactly what the military is doing to prevent people
| from getting in. It ultimately has to try something, and
| as soon as it does that, it's potentially exposing itself
| to detection, and once it's been detected the military
| can react.
|
| That's the issue with all of these self-improvement ->
| doom scenarios. Even if the AI has all publicly-available
| and some privately-available information, with any
| hacking attempt it's still going to be playing a game of
| incomplete information, both in terms of what defenses
| its adversary has and how its adversary will react if
| it's detected. Even if you're a supergenius with an
| enormous amount of information, that doesn't magically
| give you the ability to break into anything undetected. A
| huge bank account doesn't really make that much of a
| difference either - China's got that but still hasn't
| managed to do serious damage to US infrastructure or our
| military via cyber warfare.
| entropicdrifter wrote:
| And the problem with this critique of this scenario is
| the fact that while these points hold true within a
| certain range of intelligence proximity to humans, we
| have no idea if or when these assumptions will fail
| because a machine becomes _just that much smarter_ than
 | us, where manipulating humans and their systems is as trivial
| trivial an intellectual task to them as manipulating ant
| farms is to us.
|
| If we make something that will functionally become an
| intellectual god after 10 years of iteration on
| hardware/software self-improvements, how could we know
| that in advance?
|
| We often see technology improvements move steadily along
| predictable curves until there are sudden spikes of
| improvement that shock the world and disrupt entire
| markets. How are we supposed to predict the self-
| improvement of something better at improving itself than
| we are at improving it when we can't reliably predict the
| performance of regular computers 10 years from now?
| idopmstuff wrote:
| > If we make something that will functionally become an
| intellectual god after 10 years of iteration on
| hardware/software self-improvements, how could we know
| that in advance?
|
| There is a fundamental difference between intelligence
| and knowledge that you're ignoring. The greatest
| superintelligence can't tell you whether the new car is
| behind door one, two or three without the relevant
| knowledge.
|
| Similarly, a superintelligence can't know how to break
| into military servers solely by virtue of its
| intelligence - it needs knowledge about the cybersecurity
| of those servers. It can use that intelligence to come up
| with good ways to get that knowledge, but ultimately
| those require interfacing with people/systems related to
| what it's trying to break into. Once it starts
| interacting with external systems, it can be detected.
| amelius wrote:
| Maybe the superintelligence builds this cool social media
 | platform that results in a toxic atmosphere where
| democracy is taken down and from there all kinds of bad
| things ensue.
| TheOtherHobbes wrote:
| A superintelligence doesn't need to care which door the
| new car is behind because it already owns the car
| factory, the metal mines, the sources of plastic and
| rubber, and the media.
| kang wrote:
| > It ultimately has to try something & be potentially
| exposing itself to detection
|
 | Yes, potentially, but not necessarily. Think of the threat as
 | funding a military against the military.
| CrimsonRain wrote:
| You are not being imaginative enough. Lots to say but I
| think you should start by watching the latest Mission
| Impossible
| TheOtherHobbes wrote:
| A superintelligent AI won't be hacking computers, it will
| be hacking humans.
|
| Some combination of logical persuasion, bribery,
| blackmail, and threats of various types can control the
| behaviour of any human. Appeals to tribalism and paranoia
| will control most groups.
| 7speter wrote:
| Or spoofed emails from an army general
| potatoboiler wrote:
| This assumes a level of institutional control that is
| nearly impossible (even now). Even if hardware is
| prohibitively expensive now, I can't imagine training
| compute will remain that way for long.
| bluSCALE4 wrote:
 | You make a lot of assumptions here. Firstly, that these
 | advances are controllable. I'm not convinced we even
 | understand what real intelligence is, whether we can achieve
 | real intelligence, or whether it's even containable if
 | applied to anything.
| make3 wrote:
 | I hate the term science fiction, because it lumps pretty
 | serious, science-based studies of possible futures (like the
 | book Hail Mary or Aldous Huxley's Brave New World) together
 | with complete Star Wars-like nonsense, which makes the
 | average person think of sci-fi as teenager nonsense.
 |
 | Similarly, here sci-fi oversimplifies the situation quite a
 | bit, anthropomorphizing a machine's intelligence: assuming
 | that an intelligent machine would be intelligent in the same
 | way a human would be, in an equally spread-out way as a human
 | would, and would have goals and rebel in a similar way a
 | human would.
| carabiner wrote:
| In a panic, we try to pull the plug.
| 4b11b4 wrote:
 | Reminds me of flux.ai for PCBs.
| antimatter15 wrote:
| The code generation tool better be called "Tcl me NeMo"
| andy_ppp wrote:
| When I see the headline "LLM trains Nvidia on Chip Design" I'll
| start to worry :-/
| kumarski wrote:
| https://motivo.ai
|
| https://www.silimate.com/
|
| https://www.astrus.ai
| jstummbillig wrote:
 | I can't be alone in just assuming that everyone with a few
 | million to spare is training LLMs to help with their problems
 | right now.
___________________________________________________________________
(page generated 2023-10-31 23:01 UTC)