[HN Gopher] Developing AI models or giant GPU clusters? Uncle Sa...
___________________________________________________________________
Developing AI models or giant GPU clusters? Uncle Sam would like a
word
Author : rntn
Score : 57 points
Date : 2023-11-05 19:45 UTC (3 hours ago)
(HTM) web link (www.theregister.com)
(TXT) w3m dump (www.theregister.com)
| wslh wrote:
| Is this some kind of new Clipper chip [1]? I understand the
| differences, but I see this as a type of allegory.
|
| [1] https://en.wikipedia.org/wiki/Clipper_chip
| mpalmer wrote:
| I'm not sure what connection you're seeing here. How does a set
| of reporting requirements resemble the attempt to encourage
| telecom businesses to add a backdoor in communications
| infrastructure?
| wslh wrote:
| I explicitly used the word allegory.
|
| First, the term backdoor applies only loosely to the Clipper
| chip, since it would have been public knowledge that the chip
| could be used to access private information. I think lock is a
| better security term; backdoors are generally secret. My memory
| from that time is that the term backdoor was also used
| ironically.
|
| Second, the linked article is about disclosing private
| information because of a security concern, not because the
| government wants stats to inform the public about how to run
| AI nodes.
| mpalmer wrote:
| No, I think backdoor is the appropriate term, not least of
| all because the first sentence of the Wikipedia entry you
| linked identifies the Clipper chip as such.
|
| As long as we're nitpicking word choice, the word allegory
| does not apply here. An allegory is an intentional narrative
| device employed by an author or artist. Two things you find to
| be similar do not constitute an allegory.
|
| The connection you identify is pretty tenuous. With the Clipper
| chip, the government wanted access to _live communications_
| , not information about the infrastructure the
| communications were passing through. That is not the case
| with the recent EO.
| wslh wrote:
| Sorry; to put it differently, it reminds me of the Clipper
| chip because of the security concerns around the use of a
| technology in a specific way. AI now, crypto before. Please
| take it as a personal association, then.
| amzn12333 wrote:
| Just do it in China or Russia then.
| threeseed wrote:
| There are export controls for the highest end GPUs.
|
| And history has shown that companies don't just move to
| China/Russia because the US market is so lucrative.
| cscurmudgeon wrote:
| Where did Apple manufacture its phones?
| threeseed wrote:
| Apple never moved to China.
|
| And their phones are manufactured in China, India and
| Vietnam.
| Dalewyn wrote:
| 1. Highest-end silicon is not strictly necessary. Most people
| here can surely appreciate just how fast microprocessors in
| general are today?
|
| 2. The export controls have exceptions, loopholes, and people
| with strong wills.[1][2]
|
| 3. China is slowly but steadily freeing itself from import
| dependence.[3]
|
| [1]: https://tech.slashdot.org/story/23/08/21/203241/china-
| keeps-...
|
| [2]:
| https://mobile.slashdot.org/story/23/09/09/1849221/huawei-
| sh...
|
| [3]: https://mobile.slashdot.org/comments.pl?sid=23071327&cid
| =638...
| roenxi wrote:
| For most of history, China wasn't the wealthy superpower it
| has been since around 2020.
|
| There is an argument that China has the world's largest
| economy and it is nestled in the middle of the region that is
| best known for high-tech manufacturing. It is still possible
| that they'll fumble this somehow, but the fundamentals are
| solidly on the side of China becoming the place to do AI
| training.
| cscurmudgeon wrote:
| That's what will happen, and we will be wondering how China
| got to be number one in AI (like we do now for China and
| manufacturing).
| Tyr42 wrote:
| China has similar rules, but tighter, unless it's for national
| security.
| akira2501 wrote:
| The order itself is ridiculous election year pandering. It
| imagines we are suddenly in an "age of AI" and that its
| clueless meddling is required to establish an industry that
| already exists and to prevent algorithmic dark patterns that
| are already entrenched from forming.
|
| It then sets every federal agency out on a quest to identify and
| then create a plan to ameliorate all the "scary AI boogey men"
| sci-fi fever dreams that have been associated with the deployment
| of a technology that doesn't even actually exist yet.
|
| And, of course, more H1-B visas, because... you know... we
| wouldn't want to be "left behind."
|
| [0]: https://www.whitehouse.gov/briefing-room/presidential-
| action...
| ketzo wrote:
| Edit: I read H1-B as a generalization for "skilled immigration"
| - I now see that people have issues with the H1-B program,
| specifically.
|
| The H1-B thing is kind of a non-sequitur. I think anyone
| involved in tech (software, hardware, anything) should be
| _wildly_ in favor of massive H1-B increases, AI-related or not.
|
| The U.S. has the opportunity to cement itself as the center of
| the world as far as technological progress from now into
| eternity. Why wouldn't we take every brilliant immigrant we can
| get?
|
| If you're a software developer and you are afraid for your
| job/wages, I get that intuitively; but how many times do we
| have to learn the lesson that "more people making software
| leads to _more_ software jobs"?
| jgalt212 wrote:
| > I think anyone involved in tech (software, hardware,
| anything) should be wildly in favor of massive H1-B
| increases, AI-related or not.
|
| Not if you like high wages, and you want them to stay that
| way.
| mschuster91 wrote:
| There's way too much work to do in the AI space and - even
| worldwide - not enough skilled people by far.
|
| The danger, IMHO, isn't wage dilution anyway (that can be
| counteracted by policy) - it is that sensitive knowledge will
| make its way back to China and other current or potentially
| hostile nations, and it would not be the first time that has
| happened.
| ketzo wrote:
| My final paragraph addresses that, but to expand:
|
| Devs have been worried about offshoring/immigrants
| replacing them/lowering their wages for decades.
|
| The only outcome we have _ever_ observed from having more
| people building software is that dramatically more software
| jobs have become possible and in-demand.
|
| Also, and I understand why you might not make this
| argument, but let's be honest: software engineering wages
| are _extraordinarily_ high. Slowing or even reversing that
| growth by small amounts _on average_ would still leave
| millions of extremely well-paid jobs.
| srackey wrote:
| Except we have no control to compare to.
|
| Software is eating the world after all. Seems likely that
| demand would be high anyway. Perhaps if there were no
| H1Bs, entry level grads would make $250k, with the
| average senior dev making $750k+.
| akira2501 wrote:
| > The U.S. has the opportunity to cement itself as the center
| of the world as far as technological progress from now into
| eternity.
|
| I don't accept this premise, and I don't think it serves as a
| reasonable excuse for government interference in labor
| markets. Even if you do accept it, the solution sacrifices
| long-term labor stability for short-term labor monopolization.
|
| Either way, I don't see this as a positive outcome, and I
| regret every administration's attempt to expand the program
| using any excuse that happens across its desk.
|
| > but how many times do we have to learn the lesson that
| "more people making software leads to more software jobs"
|
| The connection between this outcome and increased H1-B visas
| for mostly _corporate sponsors_ is sketchy, at best.
| threeseed wrote:
| a) There has been no change to H1B visa intake since 1990.
|
| b) There is no evidence that H1B visas have caused labor
| instability in the IT market.
| Al-Khwarizmi wrote:
| Why do you see giving more visas as a government
| interference in labor markets?
|
| In my view it's the opposite, a perfectly free labor market
| would be one where anyone can apply to a job. Restricting
| immigration by denying visas _is_ a government interference
| in the market. So more visas means less interference.
|
| (Note: I'm not claiming anything about whether it's a good
| idea or not).
| xtreme wrote:
| The government is already interfering with the labor market
| by denying work visas to many US-educated students, limiting
| the number of employment-based green cards, handing out
| diversity visas through a random lottery, and in general
| dictating who should or should not be eligible for work
| permits.
| arthurcolle wrote:
| Not trying to speak for GP but I think they would
| probably tell you that what you're talking about is
| _also_ an interference and should go away as well :)
| dukeyukey wrote:
| H1-B increases are a band-aid at best. They are a shitty,
| substandard substitute for an actual, proper, modern working
| visa system.
| oceanplexian wrote:
| Oh hell no. H1B isn't there to bring in the best and
| brightest; those workers would qualify for skilled worker
| visas. It's there for low-skill workers, who are set up to be
| abused by corporations under the threat of deportation, while
| replacing American workers.
|
| The US doesn't need to "cement itself" as anything. Thanks,
| but no thanks. Ironically, your kind of attitude is the same
| one that gave rise to the insane, populist politics of the
| last few years and has done more harm to immigrants than any
| other single policy.
| fnordpiglet wrote:
| There are only 30,000 skilled worker visas issued per year.
| There are additionally 65,000 H1B visas issued, with an
| additional 20,000 issued to folks with a masters or higher
| from a US university.
|
| H1-B is specifically for specialized occupations and
| generally requires a minimum of a bachelor's degree and
| specialized skills in demand. It is, in fact, called the
| H1-B Specialty Occupations Visa.
|
| I suspect you're thinking of the Diversity Visa program,
| which offers a lottery of 55,000 visas annually to anyone
| (except for some eyebrow-raising exceptions).
|
| There are also other programs like migrant worker programs
| that allow unskilled labor into the country for a limited
| time to fill seasonal work gaps.
| idrios wrote:
| The US should be giving more freedom to existing H1-B holders
| so they aren't indentured to the companies that employ them
| for 10+ years, at H1-B minimum salary, while waiting for them
| to generously sponsor their green card. Make H1-B not tied to
| employment so companies stop abusing them.
| threeseed wrote:
| a) The election is in 2024 and this isn't a top 10 issue for
| voters.
|
| b) It is inarguable that we are in an age of AI.
|
| c) There are legitimate concerns with AI that shouldn't be
| blindly waved away as "boogey man". Especially with what we
| have been seeing in Ukraine with their use of autonomous and
| remotely controlled drones. Adding an AI layer into the mix
| needs to be regulated.
|
| d) The number of H1B visas is set by Congress, not the
| President.
| akira2501 wrote:
| > The election is in 2024 and this isn't a top 10 issue for
| voters.
|
| How many days from today until that election day?
|
| > It is inarguable that we are in an age of AI.
|
| We have companies hawking large language models. It's
| entirely arguable that we are in an "age of AI." You're about
| to be in an age of "no more easy CMOS gains." The
| intersection between these two points is going to be
| interesting.
|
| > There are legitimate concerns with AI that shouldn't be
| blindly waved away as "boogey man"
|
| This is why we have courts and a legislature with committee
| powers. I do not believe that an eager top down federal
| agency approach is going to solve real problems without
| creating more encumbrances than it's worth.
|
| This is also why it's pandering. Look at the list of issues
| they bring up; those most definitely rank with voters.
|
| > The number of H1B visas is set by Congress, not the
| President.
|
| Well then it's potentially even a bigger problem. They're
| going to change prioritization for those applicants against a
| limited pool.
| downvotetruth wrote:
| Wonder how much of a write off Nvidia will claim for Mellanox due
| to the 100 GB/s limit.
| Ukv wrote:
| > One requires reporting of any model trained using more than
| 10^26 integer or floating point operations total
|
| _On an unrelated note, OpenAI announces GPT5 will be trained
| with fixed-point arithmetic._
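For scale: a widely used rule of thumb puts dense-transformer training cost at roughly 6 x parameters x tokens FLOPs, which gives a quick way to see where the order's 10^26-operation reporting threshold sits. The model sizes and token counts below are illustrative assumptions, not figures from any announced run:

```python
# Rough check of which (hypothetical) training runs would cross the
# executive order's 10^26-operation reporting threshold, using the
# common approximation: training FLOPs ~= 6 * parameters * tokens.
THRESHOLD = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

runs = {
    "70B params, 2T tokens": (70e9, 2e12),    # ~8.4e23 FLOPs
    "1T params, 15T tokens": (1e12, 15e12),   # ~9.0e25 FLOPs
    "2T params, 20T tokens": (2e12, 20e12),   # ~2.4e26 FLOPs
}

for name, (p, t) in runs.items():
    flops = training_flops(p, t)
    flag = "REPORT" if flops > THRESHOLD else "below threshold"
    print(f"{name}: {flops:.1e} FLOPs -> {flag}")
```

By this estimate, the threshold lies somewhat above the largest publicly discussed runs to date, though the 6ND rule is only an approximation and ignores rematerialization and other overheads.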
| zitterbewegung wrote:
| Why are they using parameters or training ops when objective
| tests exist, or comparisons to other state-of-the-art systems,
| which would make more sense? This executive order is a joke
| when you could just train slightly less and get the same
| output...
| WalterBright wrote:
| By hobbling American AI development, it just hands AI leadership
| to other countries.
| mdale wrote:
| The regulations as presently defined are pretty lax IMHO; it
| seems people jump to the "all regulation is an unbearable
| constraint" perspective without taking a measured look at what
| is actually being proposed.
|
| I get the "don't give an inch or they will take a mile"
| concern, but some regulatory structure is probably warranted
| by a fundamental shift on par with the advent of the Internet.
|
| Back then, with the DMCA, we both got it right, in that big
| Internet companies could thrive in the US, and got it wrong,
| in the problematic concentration of monopolistic power in
| those companies.
|
| We will have to do some iteration to see what makes sense with
| the advent of this new computing model and its capabilities at
| this scale.
| arthurcolle wrote:
| I know that it's more the capabilities angle that the US
| Government is mainly interested in here, rather than copyright
| infringement... But just curious: could packages like Google's
| fully-homomorphic-encryption or the difficult-to-build Pyfhel
| be used to mitigate government intrusion into model training /
| datasets?
|
| I know there is some work being done to "extract" training
| data from models (although given the compression, I'm not
| entirely sure you could really extract _everything_), but I'm
| curious whether anyone is seriously working on training models
| on purely encrypted data.
| PeterisP wrote:
| FHE does still add a huge overhead (measured in orders of
| magnitude, not percentages), so it would not be applicable for
| large models comparable with state-of-the-art models. You
| can apply FHE to neural networks in general (e.g.
| https://developers.googleblog.com/2023/08/expanding-our-full...
| lists it as one of potential use cases), but the unsaid
| assumption is that they are relatively small NNs, and that's in
| an era where we know that simply using a larger model brings
| meaningful improvements to the outcome, and the main limiting
| factor for people training their own models is the cost of
| compute.
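To make "computing on encrypted data" concrete, here is a toy additively homomorphic scheme: textbook Paillier with deliberately tiny, insecure parameters. This illustrates only the homomorphic principle, not FHE, and is not the API of Google's library or Pyfhel. Adding plaintexts becomes multiplying ciphertexts modulo n^2, and every such operation costs big-number modular exponentiation, which is where the orders-of-magnitude overhead comes from:

```python
# Toy Paillier cryptosystem (additively homomorphic): multiplying two
# ciphertexts yields an encryption of the SUM of the plaintexts.
# Tiny primes for illustration only -- completely insecure.
import math
import random

p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard simplified generator
lam = math.lcm(p - 1, q - 1)   # private key (Carmichael function)
mu = pow(lam, -1, n)           # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """Encrypt m < n with fresh randomness r coprime to n."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

a, b = encrypt(12), encrypt(30)
total = (a * b) % n2           # homomorphic addition of 12 and 30
print(decrypt(total))          # 42
```

Paillier supports only addition of ciphertexts; schemes that also support multiplication (true FHE, as needed for neural-network training) are dramatically more expensive still, which is consistent with the overhead described above.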
| thomastjeffery wrote:
| Dear Uncle Sam,
|
| You have been sorely misinformed about AI. Even the name itself
| has been used to mislead you! Artificial Intelligence _does not
| exist today_. While new systems may be "intelligent" in their
| designs, none of them is "an intelligence".
|
| Artificial Intelligence has been the pursuit of Computer Science
| since the earliest days of software design, back when the AI
| department at Bell Labs developed Programming Languages. What is
| called "AI" today is no more than the newest efforts in that
| _pursuit_. Despite the excitement at these newer efforts, _the
| goal that is AI_ is as mysterious as it ever was.
|
| So what's new? Inference Models. These allow computers to
| navigate _ambiguous data_. This was entirely impossible before,
| and is only somewhat possible today. While computers do not get
| completely stuck on ambiguity the way they used to, they are
| still unable to _conclusively resolve_ that ambiguity. They can
| only be trained to guess. With careful training on very large
| datasets, some impressive results have been obtained.
| Unfortunately, those impressive results are always closely tied
| to embarrassing mistakes. This is a _feature_ of contemporary
| inference models.
|
| The goal in mind is to perform this process well enough to have a
| novel and useful system. Many believe that once a model is big
| enough and trained well enough that it will become _reliably_
| more accurate. There is no conclusive evidence that is the case.
| While they are often introduced as a "limitation", behaviors
| like "overconfidence" and "hallucinations" are _features_ of
| these systems. In order to remove a feature from a system, a
| _new_ system must be invented. In the meantime, let's consider
| what _does_ exist, rather than get lost in our own dreams.
|
| So what _should_ you be worried about? Contemporary inference
| models are powerful enough to create _convincing results_. What
| does that mean for the people of the United States? We need to
| recognize the reality in front of us: people can tell lies. Data
| alone is never a reliable _source_ of information. This has
| always been true, but the inherent difficulty in storytelling has
| made some lies _impractical_ to tell. As technology improves,
| so does each person's ability to tell a story.
|
| We have all watched the journalistic integrity of our world
| suffer at the hands of Social Media, and at the failure of large
| corporations to moderate content. At best, people have grown to
| distrust scientific discovery and leadership; and at worst,
| untamed hate speech has led to genocide.
|
| So what can we do about it? Readers need to be able to
| differentiate content, not by its substance, but by its _source_.
| The good news is that this has been a solved problem for 50
| years. All an author needs to do to attach their identity to
| their writing is to provide readers their unique public key
| and a signature of their work. Unfortunately, the best tools
| to do
| this, including GPG, are very technical and difficult for the
| layman to use. It should be the priority of the United States,
| for the sake of national security, to improve this landscape by
| creating (or motivating the creation of) easy-to-use public-key
| encryption software.
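As a sketch of the sign-and-verify workflow the letter describes: the toy below uses textbook RSA with a deliberately tiny, insecure key and no padding, purely for exposition. Real tools like GPG use large keys and proper padding/signature schemes:

```python
# Toy illustration of signing content and verifying its source.
# Textbook RSA with a tiny key and no padding -- never use for real.
import hashlib

# Key generation with toy primes
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent

def sign(message: bytes) -> int:
    """Hash the message and raise it to the private exponent."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)    # only the private key holder can do this

def verify(message: bytes, signature: int) -> bool:
    """Anyone with (n, e) can check the signature against the hash."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"I wrote this."
sig = sign(msg)
print(verify(msg, sig))            # True
print(verify(b"Tampered.", sig))   # almost surely False (tiny modulus)
```

In practice the author publishes (n, e) once and attaches a signature to each piece of writing; readers recompute the hash and check it against the signature, which is exactly the workflow GPG's detached signatures automate.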
| Spivak wrote:
| Y'all just need to read the executive order (or at least the
| White House fact sheet) itself; it's got way more stuff than
| this article covers, and to my eyes it is overwhelmingly
| positive. It's like the White House read all of the complaints
| people on HN have had about inappropriate uses of LLMs and
| bundled them into one huge omnibus bill telling every federal
| agency to stop people from behaving recklessly with AIs that
| are a mirror of the average Redditor.
|
| Outside of reporting that you're working on a huge new model,
| and maybe some down-the-road future NIST guidelines, they
| don't actually restrict the production of models at all. It's
| all about telling the humans jumping on the
| move-fast-break-things train that no, you can't use these
| tools for <obviously dystopian thing> like renter screening,
| price fixing, and judicial sentencing.
|
| Good on the White House for recognizing that the harm these
| models produce comes almost entirely from the humans
| connecting them to the real world.
___________________________________________________________________
(page generated 2023-11-05 23:01 UTC)