[HN Gopher] Anyone else witnessing a panic inside NLP orgs of bi...
___________________________________________________________________
Anyone else witnessing a panic inside NLP orgs of big tech
companies?
Author : georgehill
Score : 186 points
Date : 2023-03-16 11:00 UTC (12 hours ago)
(HTM) web link (old.reddit.com)
(TXT) w3m dump (old.reddit.com)
| dserban wrote:
| The PR folks at my current company are in full panic mode on
| LinkedIn, judging from the passive-aggressive tone of their posts
| (sometimes very nearly begging customers not to use ChatGPT and
| friends).
|
| They fully understand that LLMs are stealing lunch money from
| established information retrieval industry players selling
| overpriced search algorithms. For a long time, my company was
| deluded about being protected by insurmountable moats. I'm
| watching our PR folks going through the five stages of grief very
| loudly and very publicly on social media (particularly noticeable
| on LinkedIn).
|
| Here's a new trend happening these days. Upon releasing new non-
| fiction books to the general public, authors are simultaneously
| offering an LLM-based chatbot box where you can ask the book any
| question.
|
| There is no good reason this should not work everywhere else, in
| exactly the same way. Take for example a large retailer who has a
| large internal knowledge base. Train an LLM on that corpus, ask
| the knowledge base any question. And retail is a key target
| market of my company.
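|
| A minimal sketch of how that could be wired up (in practice this
| tends to be retrieval plus prompting rather than actual training;
| the openai calls match that library's pre-1.0 API, and the helper
| names are hypothetical):
|
|   import math
|   import openai  # assumes OPENAI_API_KEY is set
|
|   def embed(text):
|       # One embedding per knowledge-base chunk; a real
|       # system would precompute and cache these.
|       resp = openai.Embedding.create(
|           model="text-embedding-ada-002", input=text)
|       return resp["data"][0]["embedding"]
|
|   def cosine(a, b):
|       dot = sum(x * y for x, y in zip(a, b))
|       na = math.sqrt(sum(x * x for x in a))
|       nb = math.sqrt(sum(y * y for y in b))
|       return dot / (na * nb)
|
|   def answer(question, chunks):
|       # Rank chunks by similarity to the question, then
|       # answer only from the best ones to limit making
|       # things up.
|       q = embed(question)
|       top = sorted(chunks, reverse=True,
|                    key=lambda c: cosine(q, embed(c)))[:3]
|       context = "\n---\n".join(top)
|       resp = openai.ChatCompletion.create(
|           model="gpt-3.5-turbo",
|           messages=[
|               {"role": "system", "content":
|                "Answer only from the provided context. "
|                "Say 'I don't know' if it is not covered."},
|               {"role": "user", "content":
|                "Context:\n" + context +
|                "\n\nQuestion: " + question}])
|       return resp["choices"][0]["message"]["content"]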
|
| Needless to say I'm looking for employment elsewhere.
| swatcoder wrote:
| > There is no good reason this should not work everywhere else,
| in exactly the same way. Take for example a large retailer who
| has a large internal knowledge base. Train an LLM on that
| corpus, ask the knowledge base any question.
|
| Since LLMs can't scope themselves to be strictly true or
| accurate, there are indeed good reasons, like liability for
| false claims and added traditional support burden from
| incorrect guidance.
|
| Everybody is getting so far ahead of themselves with this stuff,
| but we're just not there yet and don't know _for sure_ how far
| we're going to get.
| iandanforth wrote:
| "LLM's can't scope themselves to be strictly true or
| accurate"
|
| This isn't true, though the techniques to do so 1. are not
| yet widespread and 2. decrease the generality of the model and
| its perceived effectiveness.
| shawntan wrote:
| I'm interested to hear what these techniques are.
| Decreasing the generality will help, but I fail to see how
| that scopes the output. At best that mitigates the errors
| to an extent.
| astockwell wrote:
| If they are accurate for ~80% of the questions, they will be
| as accurate as any 1st or 2nd line help desk.
| mashygpig wrote:
| > Here's a new trend happening these days. Upon releasing new
| non-fiction books to the general public, authors are
| simultaneously offering an LLM-based chatbot box where you can
| ask the book any question.
|
| Can you link to an example?
| org3 wrote:
| https://portal.konjer.xyz/
| throwayyy479087 wrote:
| Some of the responses I've had so far to this are
| remarkable. Kind of scary.
| dserban wrote:
| I saw at least two examples of this here on HN. One of the
| books was about tech entrepreneurship 101, and I remember
| asking how to launch if you're a sole developer with no legal
| entity behind the product. I remember the answer being fairly
| coherent and useful. I don't have the URL handy, I suspect if
| you search HN for "entrepreneur book" you'll find it.
| [deleted]
| craftyguy98 wrote:
| Haha, you work at Algolia. RIP F o7
| twawaaay wrote:
| I think the education goal for people has shifted. I teach my
| kids to be flexible and embrace change. Invest in abilities that
| transfer well to the various things you could be doing during
| your life. Be a problem solver.
|
| In the future -- forget about a cosy job you can keep for the
| rest of your life. You no longer have any guarantees, even if you
| own the business and even if you are a farmer.
|
| What you absolutely don't want is to spend X years at uni
| learning something, and then 5-10 years into your "career" find
| out it was obsoleted overnight and you now don't have a plan B.
| SketchySeaBeast wrote:
| > What you absolutely don't want is to spend X years at uni
| learning something, and then 5-10 years into your "career" find
| out it was obsoleted overnight and you now don't have a plan B.
|
| That seems to run directly counter to the current trend of
| admin assistant jobs requiring 2-year specialized admin
| assistant diplomas. Tech (and I would guess the world of the
| business MBA) is a unique space where people are learning and
| changing so quickly, but for a lot of those outside the bubble
| things seem to be calcifying and requiring more and more
| training at the expense of the worker.
| yoyohello13 wrote:
| Really the only safe career in the medium term is going to be
| manual labor. There is always a need to send a bunch of humans
| into the middle of nowhere to dig ditches.
| throwayyy479087 wrote:
| https://grist.org/energy/electrician-shortage-electrify-
| ever...
|
| Extremely relevant story
| rr888 wrote:
| Liberal arts education will one day be back in fashion.
| twawaaay wrote:
| Oh I do believe it. There will always be a market for snobs
| who will want to pay extra for handmade things over AI-generated
| ones. The issue here is that it is all driven by fads and is
| unstable. If you want to make money you will have to be flexible.
| version_five wrote:
| This is imo a wake-up call about the value of having "AI teams"
| embedded in companies.
|
| Bad analogy: if you had an integrated circuit team in your
| product company building custom CPUs and Intel came out with the
| 8080 (or whatever was the first modern commercial chip), it was
| probably time to disband the org and use the commercial tech.
| rdedev wrote:
| My university professor, who specialises in NLP, kinda feels
| like there's no point to research in the time of ChatGPT. He says
| that for now it's not possible to scale retrieval easily when
| using these LLMs, so that's what he is looking into.
| bsder wrote:
| I guess I'm not panicked about my job in the face of AI because
| _objective correctness_ is required. I _dream_ about the day that
| OpenAI can write the 100 lines of code that connect the BLE
| stack, the ADC sensor and the power management code so that my
| IoT sensor doesn't crash once every 8 days.
|
| I see the AI stuff as _very_ different from, say, the
| microcomputer revolution. People had _LOTS_ of things they wanted
| to use computers for, but the computers were simply too
| expensive.
|
| As soon as microprocessors arrived, people had _LOTS_ of things
| they were already waiting to apply them to. Factory automation
| was _screaming_ for computers. Payroll management was _screaming_
| for computers.
|
| I don't see that with the current AI stuff. What thing was
| waiting for NLP/OpenAI to get good enough?
|
| Yes, things like computer games opened up whole new vistas, and
| maybe AI will do that, but that's a 20 year later thing. What
| stuff was screaming for AI right now? Maybe transcription?
|
| When I see the search bar on any of my favorite forums suddenly
| become useful, I'll believe that OpenAI stuff actually works.
|
| Finally, the real problem is that OpenAI needs to cough up what I
| want but then it needs to cough up the _original references_ to
| what I want. I normally don't make other humans do that. If I'm
| asking someone for advice, I've already ascertained that I can
| trust them and I'm probably going to accept their answers. If
| it's random conversation and interesting or unusual, I'll mark
| it, but I'm not going to incorporate it until I verify.
|
| Although, given the current political environment, perhaps I
| _should_ ask other humans to give me more references.
| [deleted]
| TMWNN wrote:
| Is the entire field of data science (itself maybe a decade old in
| terms of being a college major?) now obsolete, in terms of being
| a distinct job field? Are all data science majors now going to be
| "just" coming up with the proper prompts to get GPT to correctly
| massage datasets?
| theGnuMe wrote:
| No. It's always been about posing the right question.
| drewda wrote:
| I wonder if this will be a repeat of what happened with speech
| recognition. It used to be a specialized field dominated by
| smaller companies like Nuance.
|
| More recently Google, Microsoft, Apple, etc. decided they wanted
| to have speech recognition as an internal piece of their
| platforms.
|
| Google poached lots of Nuance's talent. And then Microsoft bought
| what remained of the company.
|
| Now speech recognition is a service integrated into the larger
| tech companies' platforms, and also uses their more statistical/ML
| approaches, rather than being a component created by specialist
| companies/groups.
|
| (I'm sure I'm grossly simplifying this -- just seeing a potential
| parallel.)
| dongobread wrote:
| I worked in a research capacity in the voice assistant org of a
| big tech company until very recently. There was a lot of panic
| when ChatGPT came out, as it became clear that the vast bulk of
| the org's modeling work and research essentially had no future. I
| feel bad for some of my colleagues who were really specialized in
| specific NLP technology niches (e.g. building NLU ontologies)
| which have been made totally obsolete by these generalized LLMs.
|
| Personally - I'm moving to more of a focus on analytical
| modeling. There is really nothing interesting about deep learning
| to me anymore. The reality is that any new useful DL models will
| be coming out of mega-teams in a few companies, where improving
| output through detailed understanding of modeling is less cost
| effective than simply increasing data quality and scale. It's all
| very boring to me.
| mr_toad wrote:
| " Seeking an improvement that makes a difference in the shorter
| term, researchers seek to leverage their human knowledge of the
| domain, but the only thing that matters in the long run is the
| leveraging of computation. "
|
| http://www.incompleteideas.net/IncIdeas/BitterLesson.html
| shawntan wrote:
| I've seen many interpretations of this article and I'm
| curious as to the mainstream CS reading of it.
|
| One could look at the move from linear models to non-linear
| models or the use of ConvNets (yes I know ViTs exist, to my
| knowledge the base layers are still convolution layers) as
| 'leveraging human knowledge'. Only after those shifts were
| made did the leveraging of computation help. It would seem to
| me that the naive reading of that quote only rings true
| between breakthroughs.
| hn_throwaway_99 wrote:
| Wow - this is just wild. I've seen lots of arguments around "AI
| won't take everyone's job, it will just open up new areas for new
| jobs." Even if you take that with the benefit of the doubt (which
| I don't really think is warranted):
|
| 1. You don't need to take everyone's job. You just need to take a
| shitload of people's jobs. I think a lot of our current
| sociological problems, problems associated with wealth
| inequality, etc., are due to the fact that lots of people no
| longer have competitive enough skills because technology made
| them obsolete.
|
| 2. The state of AI progress makes it impossible for humans in
| many fields to keep up. Imagine if you spent your entire career
| working on NLP, and now find GPT-4 will run rings around whatever
| you've done. What do you do now?
|
| I mean, does anyone think that things like human translators,
| medical transcriptionists, court reporters, etc. will exist as
| jobs at all in 10-20 years? Maybe 1-2 years? It's fine to say
| "great, that can free up people for other thing", but given our
| current economic systems, how are these people supposed to eat?
|
| EDIT: I see a lot of responses along the lines of "Have you seen
| the bugs Google/Bing Translate has?" or "Imagine how frustrated
| you get with automated chat bots now!" Gang, the _whole point_ is
| that GPT-4 blows these existing models out of the water. People
| _who work in these fields_ are blown away by the huge advances in
| quality of output in just a short time. So I'm a bit baffled why
| folks are comparing the annoyances of ordering at a McDonald's
| automated kiosk to what state-of-the-art LLMs can do. And a
| reminder that the first LLM was only created in 2018.
| JamesAdir wrote:
| You are a founder of a startup. A notable VC wants to invest
| millions of dollars but insists that the contract be in
| their language, which is Finnish. Would you trust GPT to
| translate the contract, or reach out to a professional human
| translator? We've had Google Translate since 2006, and there are
| still millions of translators at work all around the world. I
| wouldn't be so quick to dismiss those jobs.
| parker_mountain wrote:
| I don't think it's so simple.
|
| A few counter-notes
|
| - Google Translate and its ilk have already significantly cut
| down the number of translators required for multinational
| companies. Google Translate in 2006 is also a bad example; it
| really only got excellent in the past few years.
|
| - I would trust GPT to write the first draft, and then hire a
| translator to check it. That goes from many billable hours to
| one, or two. That is a material loss of work for said
| translator.
|
| - High profile translations, as your example is, are a sharp
| minority of existing translator jobs.
| q845712 wrote:
| I was just using Bing Translate last night, and it was
| literally making up English words that do not exist - I tried
| to google for them to see if it was just some archaic word,
| and it was complete fabrication. So I dunno how many years
| are left before we all trust machine translation
| unflinchingly, but I agree today's not the day.
| hn_throwaway_99 wrote:
| Try it on GPT-4, not Google or Bing Translate:
| https://news.ycombinator.com/item?id=35180715
| hn_throwaway_99 wrote:
| I think you are vastly underestimating how poorly Google
| Translate, Bing Translate and others compare to GPT-4:
| https://news.ycombinator.com/item?id=35180715
| pxc wrote:
| > I mean, does anyone think that things like human translators,
| medical transcriptionists, court reporters, etc. will exist as
| jobs at all in 10-20 years? Maybe 1-2 years? It's fine to say
| "great, that can free up people for other thing", but given our
| current economic systems, how are these people supposed to eat?
|
| And it doesn't mean that the replacements will even be much
| good. They will probably suck in ways that will become familiar
| and predictable, and at the same time irritating and
| inescapable. Think of the outsourced, automated voice systems
| at your doctor's office, self-checkout at the grocery store,
| those touchscreen kiosks at McDonalds, etc.
|
| I already find myself ready to scream
|
| > GIVE ME A FUCKING HUMAN BEING
|
| every now and then. That's only going to get worse.
| malermeister wrote:
| > given our current economic systems, how are these people
| supposed to eat?
|
| I've said it before and I'll say it again. This right here is
| the crux of the issue. The only way people get to eat is if we
| change the economic systems.
|
| Capitalism supercharged by AI will lead to misery for almost
| everyone, with a few Musks, Bezoses and Thiels being our
| neofeudal overlords.
|
| The only hope is a complete break in economic systems, towards
| a techno-utopian socialism. AI could free us from having to do
| work to survive and usher in a Star Trek-like vision of the
| future where people are free to pursue their passions for their
| own sake.
|
| We're at a fork in the road. We need to make sure we take the
| right path.
| mostlysimilar wrote:
| It will take massive cooperation. Given how rough it was to
| make it through the pandemic... how can we hope to come
| together on something this daunting?
| malermeister wrote:
| I hope I'm wrong, but I worry that the change will come the
| same way it came to Tsarist Russia or to the Ancien Regime.
|
| Things will get worse and worse until they boil over.
| chaostheory wrote:
| > I mean, does anyone think that things like human translators,
| medical transcriptionists, court reporters, etc. will exist as
| jobs at all in 10-20 years?
|
| Before mechanical alarm clocks, there were people paid to tap
| on windows to wake sleepers up.
| jMyles wrote:
| > given our current economic systems
|
| What can possibly be the benefit of requiring this constraint?
|
| Remove the idea that this is necessary and watch how much
| relaxation comes to the deliberation on this topic.
|
| "Current economic systems" will simply have to yield. Along
| with states. This has been obvious for decades now. Deep
| breaths, everybody. :-)
| hn_throwaway_99 wrote:
| > What can possibly be the benefit of requiring this
| constraint?
|
| It's not "requiring this constraint". If you have some
| plausible pathway to get from our current system to some
| "Star Trek-like nirvana", I'm all ears. Hand-wavy-ness
| doesn't cut it.
|
| > "Current economic systems" will simply have to yield.
|
| Why? For most of human history there were a few overlords and
| everyone else was starving half the time. Even look at now.
| I'm guessing you probably live a decent existence in a decent
| country, but meanwhile billions of people around the world
| (who can't compete skills-wise with upper income countries)
| barely eke out an existence.
|
| For the world that just lived through the pandemic, do you
| honestly see systems changing when worldwide cooperation and
| benevolence are prerequisites?
| WalterBright wrote:
| Think of people who have jobs like archaeology, digging up
| bones. The only way these jobs can exist is if technology has
| taken over much of the grunt work of production.
|
| As for human translators, the need for them far, far exceeds
| the number of them. Have you ever needed translation help? I
| sure have, but no human translator was available or was too
| expensive.
| layer8 wrote:
| > was too expensive.
|
| This is probably the real problem. Translators are paid shit
| nowadays for what is a really high-skill job. I have
| translators in the extended family who had to give up on that
| line of work because the pay wouldn't sustain them anymore.
| adelie wrote:
| yep, exactly. the issue isn't that there will no longer be
| a need for human translators - machine translation makes
| subtle mistakes that legal/technical fields will need a
| human to double-check.
|
| the issue is that many translation jobs will be, and already
| are being, replaced with 'proofread machine translation
| output' jobs that simply don't pay enough. translation
| checking is careful, detailed work that often takes almost
| as much time as translating passages yourself, yet it pays
| a third or less of the rate because 'the machine is doing
| most of the work.'
| layer8 wrote:
| I don't think it's really because "the machine is doing
| most of the work", but because there's no good way for
| clients to assess the quality of the supplemental human
| work, and therefore the market gets flooded with subpar
| translators who do the task sloppily on the cheap, in a
| way that still passes as acceptable.
| nidnogg wrote:
| When you have to use documents within another country
| that doesn't list their original language as official, not
| much, if anything at all, is machine-translated AFAIK. Is
| this not the case for most legal paperwork as well? You
| almost always need certified translation (by a human), for
| which you have to pay out a reasonable sum. And if it's not a
| good translator, you pay double.
|
| e.g. Italian citizenship can cost as much as a brand new car
| in Brazil and almost half of that cost could come from
| certified translation hurdles.
| Tiktaalik wrote:
| > does anyone think that things like human translators, medical
| transcriptionists, court reporters, etc. will exist as jobs at
| all in 10-20 years? Maybe 1-2 years?
|
| Maybe the very, very basic transcription/translation stuff
| might go away, but arguably this race-to-the-bottom market was
| already being killed by Google Translate, as bad as it is,
| anyway.
|
| In areas where quality is required (e.g. localizing video games
| from Japanese to English and vice versa), people would be
| (justifiably) fussy about poor localization quality even when
| the translation was being done by humans, so I have to imagine
| that people will continue to be fussy and there will still be
| significant demand for a quality job done by people who aren't
| just straight translating text, but _localizing_ text for a
| different audience from another culture.
| dogcomplex wrote:
| It is very obvious there is a mass unemployment wave coming -
| or at least a mass "retraining" wave, though the new jobs
| "teaching AIs" or whatever remain to be seen. I hope everyone
| currently just questioning whether this will happen now is
| prepared to state it with conviction in the coming months and
| fight for some sort of social protection program for all these
| displaced people, because the profits from this new world
| aren't getting distributed without a fight.
| psychphysic wrote:
| If not unemployment and retraining then a lot of people are
| going to need to miraculously become better at their jobs.
|
| I somehow imagine it'll be the worst of both worlds but I'm a
| glass half empty kind of guy.
| moffkalast wrote:
| Well it won't be miraculously, it'll be by using the AI
| tools to augment their work if anything. But probably
| unemployment.
| JohnFen wrote:
| Retraining only works if there are jobs available.
| jll29 wrote:
| > Imagine if you spent your entire career working on NLP, and
| now find GPT-4 will run rings around whatever you've done. What
| do you do now?
|
| I have been doing NLP since 1993. Before ca. 1996, there were
| mostly rule-based systems that were just toys. They lacked
| robustness. Then statistical systems came up and things like
| spell-checking (considering context when doing it), part of
| speech tagging and eventually even parsing started to work.
| Back then, people could still only analyze sentences with fewer
| than 40 words - the rest was often cut off. Then came more and
| more advanced machine learning models (decision trees, HMMs,
| CRFs), first a whole zoo, and then support vector regressors
| (SVM/SVR) ate everything else for breakfast. Then in machine
| learning a revival of neural networks happened, because better
| training algorithms were discovered, more data became available
| and cheap GPUs were suddenly available because kids needed them
| for computer games. This led to what some call the "deep
| learning revolution". Tasks like speech recognition where
| people for decades tried to squeeze out another half percent
| drop in error rate suddenly made huge jumps, improving quality
| by 35% - so jaws dropped. (But today's models like BERT still
| only can process 512 words of text.)
|
| So it is understandable that people worry on several fronts: about
| losing jobs, about NLP being rendered "redundant". I think that is
| not merited. Deep neural models have their own set of problems,
| which need to be solved. In particular, lack of transparency
| and presence of different types of bias, but also the size and
| energy consumption. Another issue is that for many tasks, not
| much data is actually available. The big corps like Google/Meta
| etc. push the big "foundational" models because in the consumer
| space there is ample data available. But there are very
| important segments (notably in the professional space -
| applications for accountants, lawyers, journalists,
| pharmacologists - all of which I have conducted projects
| in/for), where training data can be constructed for a lot of
| money, but it will never reach the size of the set of today's
| FB likes. There will always be a need for people who build
| bespoke systems or customize systems for particular use cases
| or languages, so my bet is things will stay fun and exciting.
|
| Also note that "NLP" is a vast field that includes much more
| than just word based language models. The field of
| propositional (logical) semantics, which is currently
| disconnected from the so-called foundational models, is much
| more fascinating than, say, ChatGPT if you ask me. The people
| there, linguist-logicians like Johan Bos identify laws that
| restrict what a sentence can mean, given its structure, and
| rules how to map from sentences like "The man gave the girl a
| rose" to their functor-argument structure - something like
| "give(man_0, rose_1)" - which models the "who did what to
| whom?". When such symbolic approaches are integrated with
| neural foundational models, there will be a much bigger
| breakthrough than what we are seeing today (mark my words!).
| Because these tools, for instance Lambda Discourse
| Representation Theory and friends, permit you to represent how
| the meaning of "man bites dog" is different from "dog bites
| man".
|
| So whereas today's models SEEM a bit intelligent but are
| actually only sophisticated statistical parrots, the future
| will bring something more principled. Then the
| "hallucinations" of models will stop.
|
| I am glad I am in the field of NLP - it has been getting more
| exciting every year since 1993, and the best time still lies
| ahead!
| yunyu wrote:
| BERT can process 512 tokens. LLAMA and FLAN-UL2 can process
| 2048 tokens. GPT-4 can process 32768 tokens, and is much
| better at ignoring irrelevant context.
|
| These general models can be fine tuned with domain specific
| data with a very small number of samples, and have
| surprisingly good transfer performance (beating classical
| models). New research like LoRA/PEFT is making things like
| continuous finetuning possible. Statistical models also do a
| much better job at translating sentences to formal structure
| than the old ways ever did - so I wouldn't necessarily view
| those fields as disconnected.
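|
| For reference, a minimal sketch of LoRA-style fine-tuning with
| the Hugging Face peft library (model name and hyperparameters
| are placeholders, not a recommendation):
|
|   from transformers import AutoModelForCausalLM
|   from peft import LoraConfig, get_peft_model
|
|   model = AutoModelForCausalLM.from_pretrained(
|       "facebook/opt-350m")
|   config = LoraConfig(task_type="CAUSAL_LM", r=8,
|                       lora_alpha=16, lora_dropout=0.05,
|                       target_modules=["q_proj", "v_proj"])
|   model = get_peft_model(model, config)
|   # Only the small adapter matrices train; the base model
|   # stays frozen (typically <1% of weights).
|   model.print_trainable_parameters()
|   # ...then train as usual, e.g. with transformers.Trainer,
|   # on the small domain-specific dataset.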
|
| I agree with the general sentiment, there are still major
| issues with the newer generation of models and things aren't
| fully cracked yet.
| mr_toad wrote:
| > Another issue is that for many tasks, not much data is
| actually available. The big corps like Google/Meta etc. push
| the big "foundational" models because in the consumer space
| there is ample data available. But there are very important
| segments (notably in the professional space - applications
| for accountants, lawyers, journalists, pharmacologists - all
| of which I have conducted projects in/for), where training
| data can be constructed for a lot of money, but it will never
| reach the size of the set of today's FB likes.
|
| This is a really important point. GPT-x knows nothing about
| my database schema, let alone the data in that schema; it
| can't learn it, and it's too big to fit in a prompt.
|
| Until we have AI that can learn _on the job_ it's like some
| delusional consultant who thinks they have all the solutions
| on day 1 and understands nothing about the business.
| eternalban wrote:
| > how are these people supposed to eat?
|
| My gut feeling is that AI is the 'social historic change' that
| will make UBI politically viable and a reality.
| yyyk wrote:
| There are three 'markets' for translators:
|
| * Verbal translation, where accuracy is usually important
| enough to want to also have a human onboard, since humans still
| have an easier time with certain social cues.
|
| * High-culture translation, where a lot comes down to personal
| choice and explaining it. GPT can give out many versions but
| can't yet sufficiently explain its reasoning, nor would its
| tastes necessarily match those of humans.
|
| * Technical translations for manuals and such. This market will
| be under severe threat from GPTs, though for high-accuracy
| cases one would still want a human editor just in case.
|
| All in all, GPT will contract the market, but many human
| translators will be fine. There are still areas where you'd
| want a human, and deskilling isn't a big threat - a human can
| decide to immerse and get experience directly, and many will
| still do so by necessity.
| screye wrote:
| > human translators, medical transcriptionists, court reporters
|
| Yes, they will all be called 'ai data labellers'.
|
| For a long time, "People don't just want jobs, they want good
| jobs" was the slogan of industries that automated the boring
| stuff. Now AI is suddenly good at all the jobs people actually
| want and the only thing it can't do is self-improve. In an AI
| future, mediocre anything will not exist anymore.
|
| Either you are brilliant enough to be sampling from 'out of
| distribution', or you're among the other 99 percent of normies
| who follow the standard "learn -> imitate -> internalize ->
| practice" cycle. That other 99% is now and eternally inferior
| to an AI.
| UmYeahNo wrote:
| >In an AI future, mediocre anything will not exist anymore.
|
| Right! Aren't we all mediocre before we're excellent? Isn't
| every entry level job some version of trying to get past
| being mediocre? i.e. Isn't a jr developer "mediocre" compared
| to a senior dev? If AI replaces the jr dev, how will anyone
| become a senior dev if they never got the chance to gain
| experience to become less mediocre?
| mxkopy wrote:
| What should happen is a thorough investigation of our
| assumptions about economics to see if they hold true. 20-30
| years ago saying "just get a robot to do it" would've been met
| with great cynicism, but now it's not that unthinkable.
| Especially once we apply what we learn to robotics - at that
| point doing things at scale is just playing an RTS
| timoth3y wrote:
| The problem is not that automation will eliminate our jobs.
|
| The problem is that we have created an economy where that is a
| bad thing.
| elwell wrote:
| The problem is that humans are often selfish.
| pcthrowaway wrote:
| I don't think it's the economy, it's the policy. Automating a
| shit-ton of jobs is _great_ for the economy. The economy is
| just fine if 90% of people are starving because big corps are
| saving shit-tons of money.
|
| The government of a wealthy country should ensure that its
| citizens are able to eat, and have a sheltered place to
| sleep, without them needing to work. Because the way things
| are going, there won't be enough work to go around. Even now,
| with the supposed "labour shortage" there are record numbers
| of homeless people, and people living paycheck-to-paycheck.
| Housing is more unaffordable than ever. Minimum wage is not
| keeping up with the economic realities.
|
| Governments need to step in; they need to change policy so
| big corps are paying more taxes, and that tax goes to a basic
| income that can cover the cost of housing and the cost of
| food. Maybe not right away, maybe it starts at $100/month.
| But eventually the goal should be to get everyone on a basic
| income that can cover the necessities, then if they want to
| be able to enjoy luxuries (concerts, gourmet food, hobbies,
| streaming services, etc.) they can choose to work.
| pyuser583 wrote:
| The problem starts long before AI takes the jobs.
|
| I used to do a job that was eventually automated. We did the
| one and only thing the computer couldn't do - again and again
| in a very mechanical fashion.
|
| It was a shit job. You might get promoted to supervisor - but
| that was like being a supervisor at McDonalds.
|
| Why not treat the job seriously? Why didn't the company use it
| as a way to recruit talent? Why didn't the workers unionize?
|
| Because we all knew it would be automated anyway.
|
| We were treated like robots, and we treated the org like it was
| run by robots.
|
| There's a huge shadow over the economy that treats most new
| jobs like shit jobs.
| xwdv wrote:
| Even in a world of perfect AI, there will be plenty of jobs.
| Anything involving movement and manipulation of matter will
| still require humans for the time being. We're not at a point
| yet where an intelligent AI could simply build you a house
| without human labor involved.
|
| Many of these jobs are cheap and easy to understand and quick
| to train in. These aren't the kind of jobs people probably
| wanted, but they'll be there.
| yoyohello13 wrote:
| Now 90% of humanity can toil for 12 hours a day in the fields
| to support the 10% who own all the machines. Super awesome!
| xwolfi wrote:
| Come on, we could do it when we abandoned the horses, we can
| do it again.
| mitthrowaway2 wrote:
| Do you mean "the glue factory is always hiring"?
| csa wrote:
| > I think a lot of our current sociological problems, problems
| associated with wealth inequality, etc.,
|
| I see where you're coming from, but is this really the main
| source of the inequality?
|
| Based on numbers relating to workers' diminishing share of
| profits, it seems to be that the capital class has been able to
| take a bigger piece of the profit pie without sharing. In the
| past, companies have shared profits more widely due to
| benevolence (it happens), government edict (e.g., ww2 era), or
| social/political pressure (e.g., post-war boom).
|
| Fwiw, I think that the mid-20th century build up of the middle
| class was an anomaly (sadly), and perhaps we are just reverting
| to the norm in terms of capital class and worker class
| extremes.
|
| I see tons of super skilled folks still getting financially
| fucked by the capital class simply because there is no real
| option other than to try to become part of the
| capital class.
| mr_toad wrote:
| There is no sharing and there never was. Companies don't
| share profits with workers and they never have. Workers get
| paid on the _marginal_ value of their productivity, not some
| portion of the total or average.
| [deleted]
| WalterBright wrote:
| > Based on numbers relating to workers' diminishing share of
| profits, it seems to be that the capital class has been able
| to take a bigger piece of the profit pie without sharing.
|
| Consider the elephant in the room:
|
| https://www.federalbudgetinpictures.com/federal-spending-
| per...
|
| Where does that money come from?
| hn_throwaway_99 wrote:
| > the capital class has been able to take a bigger piece of
| the profit pie without sharing.
|
| In the current world, where do you think a lot of the capital
| class is able to get their capital?
|
| Technological progress, and especially the Internet, has made
| much bigger markets out of what were previously lots of
| little markets, and now the "winner take all/most" dynamics
| mean that where you previously could have, for example,
| lots of "winners" in every city (e.g. local newspapers
| selling classified ads), now Google, FB and Amazon
| gobble up most ad dollars - I think someone posted that
| Amazon's ad business alone is bigger than _all_ US (maybe
| more than that?) newspaper ad businesses.
| ChrisMarshallNY wrote:
| I have family that has been on the front lines of fighting
| global poverty and corruption, for their entire life (more
| than 50 years - at the very highest levels).
|
| I submit that it is not hyperbole to say that probably 95% of
| all global human problems can have their root cause traced to
| poverty. That is not a scientific number, so don't ask for a
| citation (it ain't happening).
| xp84 wrote:
| I think you and the one you're replying to are both very
| right.
|
| Yes, more of this money is going, instead of to middle-class
| workers, straight to the capital class who own the "machines"
| that do the work people used to do. Except instead of it
| being a factory that makes industrial machines owned by some
| wealthy industrialist, the machines are things like Google
| and AWS and the owners are the small number of people with
| significant stock holdings.
|
| It's really striking though that a person graduating high
| school in say, 1970, could easily pick from a number of
| career choices even without doing college or even learning an
| in-demand trade, like plumbing, welding, etc. Factory work
| still existed and had a natural career progression that
| wasn't basically minimum wage, and the same went for retail.
| Sure, McDonalds burger flippers didn't expect then to own the
| restaurant in 10 years, but you could take lots of retail or
| clerical jobs, advance through hard work and support a family
| on those wages. Those are the days that are super gone and I
| totally agree with you both that something has changed for
| the worse for everyone who's not already wealthy.
| prottog wrote:
| > but you could take lots of retail or clerical jobs,
| advance through hard work and support a family on those
| wages. Those are the days that are super gone
|
| Only in certain places, and only mostly due to crazy
| policies that made housing ridiculously unaffordable. I'm
| in an area where my barber lives on 10 acres of land he
| didn't inherit and together with his wife raises two
| children. This type of relaxed life is possible to do in
| wide swathes of the country outside of the tier-one cities
| that have global competition trying to get in and live
| there, as long as you make prudent choices.
|
| I think 20- to 30-something engineers who have spent their
| entire adult lives in major coastal cities have a huge
| blind spot to how middle America lives.
| amrocha wrote:
| That kind of life is not achievable on minimum wage, even
| if you choose to live in a small city
| HPsquared wrote:
| Only about 1% of workers are on minimum wage, you
| wouldn't expect an average lifestyle from that.
| aleph_minus_one wrote:
| > It's really striking though that a person graduating high
| school in say, 1970, could easily pick from a number of
| career choices even without doing college or even learning
| an in-demand trade, like plumbing, welding, etc. [...]
| Those are the days that are super gone
|
| Isn't this rather a strong argument for the claim that what
| high school teaches today is strongly mismatched with
| what the labour market demands? In other words: the pupils
| are taught skills for many years of their life that are
| rather worthless for the job market.
| mbgerring wrote:
| You can still do that with plumbing and welding
| xp84 wrote:
| Sorry, my phrasing was bad. Totally agree, even today
| trades are still AMAZING for this. I meant even if you
| were to _set aside_ the trades, 50 years ago there was
| plenty of stuff you could at least support a family on
| without even that level of specialized skill. You could
| "start in the mailroom" or on the sales floor and end up
| in middle management after 20 years, in a variety of
| companies, most of which don't even exist anymore, or if
| they do, they employ far fewer workers domestically today
| due to a combo of offshoring and automation.
| zwkrt wrote:
| IMO the "main source of inequality" is that tech allows a
| small number of people to use technological and fiscal
| leverage to make an outsized impact on society as a whole.
| Anyone who has a job that produces value in a 1:1 way is
| positioned to be 'disrupted'. NLP, etc, just provides more
| tools for companies to increase their leverage in the market.
| My bet is that GPT-4 is probably better at being a paralegal
| than at least some small number of paralegals. GPT-5 will be
| better at that job than a larger percentage.
|
| Anyone who only has the skills to affect the lives and/or
| environments of the people in their immediate surroundings is
| going to find themselves on the 'have-nots' end of the
| spectrum in the coming decades.
| mostlysimilar wrote:
| This is possibly a death spiral. GPT is only possible because
| it's been trained on the work humans have learned to do and
| then put out in the world. Now GPT is as good as them and will
| put them all out of work. How can it improve if the people who
| fed it are now jobless?
| MonkeyMalarky wrote:
| Also what happens to the intuition and unwritten skills that
| humans learned and passed on over time? Sure, the model has
| probably internalized them implicitly from the training data.
| But what happens in a case where you need to have a human
| perform the task again (say after a devastating war)? The
| ones with the arcane knowledge are gone, and now humans are
| starting from scratch.
| mostlysimilar wrote:
| Incredible that we've been writing speculative fiction
| about this for decades and still we sleepwalk right into
| it. I'd love to be wrong, but I think we're all still too
| divided and self-interested for this kind of technology to
| be successfully integrated. A lot of people are going to
| suffer.
| salad-tycoon wrote:
| It's not just sci-fi. It has already happened in the past
| with construction. Things like pyramids and certain
| cathedrals and whatnot are no longer possible even with
| machines. At least this is what I've read and heard; I'm
| not actually an engineer or architect.
|
| Tangent: I'm looking for some sci-fi about this topic.
| Any suggestions?
| 2OEH8eoCRo0 wrote:
| Literally everything you do online is training data. This
| comment and discussion is future training data. Your browser
| history is logged somewhere and will be training data. Your
| OS probably spies on what you do...training data. It's
| training data all the way down. And they've hardly begun to
| take into account the physical world, video, music, etc. as
| training data.
| jgust wrote:
| Presumably this problem is solved with technology
| improvements or the need is recognized to hire experts
| capable of generating high quality training material. In
| either situation, there's going to be extreme discomfort.
| mostlysimilar wrote:
| GPT is good because of collective knowledge, lots of data.
| What do you have in mind by "hire experts"? Isn't that what
| we have now? Many experts in many fields, hired to do their
| work. Cut this number down and you reduce training data.
| jgust wrote:
| Let's assume that GPT eliminates an entire field of
| experts, runs out of training data, and whoever is at the
| helm of that GPT program decides that it's lucrative
| enough to obtain more/better data. One alternative is
| subsidizing these experts to do this type of work and
| plug it directly into the model. I don't expect the
| nature of the work to change, more likely it's the
| signature on the check and the availability of the
| datasets.
| yoyohello13 wrote:
| There is a problem: how will people become experts in the
| field? If all entry-level positions are taken by AI, nobody
| will be able to become an expert.
| WalterBright wrote:
| Imagine the devastation wrought by automatic looms, that put
| all the weavers out of a job!
|
| 97% of jobs used to be working on the farm. Now it's
| something like 2%.
| moffkalast wrote:
| Can't wait for the economy that is 97% twitch streamers,
| because that's all that humans are left qualified for. /s
| msm_ wrote:
| You joke, but an economy that is 97% artists (aka content
| creators) sounds... good? Isn't this the utopian end goal
| after we automate the scarcity out of our lives?
| salad-tycoon wrote:
| Have you seen some of that content? This sounds like a
| level in Dante's Inferno: all day, every day, all "these"
| (and myself, probably) people going blah blah blah into
| the ether. Navel gazing to the extreme.
| moffkalast wrote:
| In theory it's great, in practice... who knows. The cynic
| in me would expect it to go worse than anyone could ever
| imagine. If everything is automated, why do you still
| need humans?
| 1attice wrote:
| This hoary take irks me. There were _still places for human
| endeavour to go_ when the looms were automated.
|
| That is no longer the case.
|
| Think of it instead as cognitive habitat. Sure, there has
| been habitat loss in the past, but those losses have been
| offset by habitat gains elsewhere.
|
| This time, I don't see anywhere for habitat gains to come,
| and I see a massive, enormous, looming (ha!) cognitive
| habitat loss.
|
| -- EDIT:
|
| Reply to reply, posted as edit because I hit the HN rate
| limit:
|
| > Your job didn't exist then. Mine didn't, either.
|
| Yes, that was my point. New habitat opened up. I infer (but
| cannot prove) that the same will not be true this time. At
| the least, the newly-created habitat (prompt engineer,
| etc.) will be minuscule compared to what has been lost.
|
| Reasoning from historical lessons learned during the
| introduction of TNT was of course tried when nuclear arms
| were created as well. Yet lessons from the TNT era proved
| ineffective at describing the world that was ushered into
| being. Firebombing, while as destructive as a small nuclear
| warhead, was _hard_, requiring fantastic air and ground
| support to achieve. Whereas dropping nukes is easy. It was
| precisely that ease-of-use that raised the profile of game
| theory and Mutually Assured Destruction, tit-for-tat, and
| all the other novelties occurrent in the nuclear world and
| not the one it supplanted.
|
| Arguing from what happened with looms feels like the sort
| of undergrad maneuver that makes for a good term paper, but
| lousy economic policy. _So_ many disanalogies.
| WalterBright wrote:
| > There were still places for human endeavour to go when
| the looms were automated.
|
| Your job didn't exist then. Mine didn't, either.
| [deleted]
| xp84 wrote:
| Presumably it will improve the same way humans did -- once
| it's roughly on par with us it'll be just as capable of
| innovating and trying new things. The only difference is that
| for humans, trying a truly new approach to something isn't
| really done that often by most. "GPT-9" might regularly and
| automatically try recomputing all the "tricky problems" it
| remembers from the past with updated models, or with a few
| tweaked parameters and then analyze whether any of these
| experiments provided "better" solutions. And it might do this
| operation during all idle cycles continuously.
|
| Honestly as a human who grasps how the economy works, this
| doesn't sound like a good thing, but I don't see any path to
| trying the fundamental changes that would be required for
| really good general AI to not be an absolute Depression
| generator.
|
| The only thing I'm wondering is, will the wealthiest ones,
| who actually have any power to influence these fundamental
| thing, figure this out before it's too late? I really doubt
| your Musks and Bezoses would enjoy living out their lives on
| ring-fenced compounds or remote islands while the rest of the
| world devolves into the Hunger Games.
| bloppe wrote:
| Technology never affects the economy in isolation. It acts in
| concert with policy. Broadly speaking, inequality rises when
| capital is significantly more valuable than labor. The value of
| either depends on taxes, the education system, technology, and
| many other factors. We're never going to stop technology. We
| just have to adjust the other knobs and levers to make its
| impact positive.
| martindbp wrote:
| Not big tech (or PhD level research), but half the work I did on
| my side project (subtitles for Chinese learning/OCR) is sort of
| obsolete now, and most of the rest will be within a year or two. I
| put months into an NLP pipeline to segment Chinese sentences,
| classify pinyin and translate words in-context, something
| ChatGPT is great at out of the box. My painstaking heuristic for
| determining show difficulty using word frequencies and comparing
| distributions to children's shows is now the simple task of
| giving part of the transcript and asking ChatGPT how difficult it
| is. Next up, the OCR I did will probably be solved by GPT-4.
| It seems the writing is on the wall: most tasks on standard media
| (text/images/video) will be "good enough" for non-critical use.
| The only remaining advantages of bespoke solutions are speed and
| cost, and those will also be fleeting.
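|
| Roughly, the whole difficulty heuristic collapses to a call like
| this (prompt wording made up; uses the openai library's pre-1.0
| chat API):
|
|   import openai
|
|   def difficulty(transcript_excerpt):
|       # Ask the model to grade an excerpt instead of
|       # comparing word-frequency distributions by hand.
|       resp = openai.ChatCompletion.create(
|           model="gpt-3.5-turbo",
|           messages=[{"role": "user", "content":
|               "Rate how difficult this Chinese transcript "
|               "is for a learner, on a 1-10 scale. Reply "
|               "with the number only.\n\n"
|               + transcript_excerpt}])
|       return int(
|           resp["choices"][0]["message"]["content"].strip())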
|
| But it's also extremely exciting, we'll be able to build really
| great things very easily, and focus our efforts elsewhere. Today
| anyone can throw together a language-learning tutor to rival
| Duolingo. As long as you're in it for solving problems, you
| shouldn't be too threatened by whichever part of your tool set
| is currently becoming obsolete.
| epups wrote:
| Everyone here is saying that people can simply transition easily
| into startups and other big companies. To a certain extent that's
| true, but what exactly are they going to do? As technology
| consolidates into one or two major LLMs, likely only accessible
| by API, I feel most orgs would be better served by relying
| heavily on finetuning or optimizing those for their purpose.
| Previous experience with NLP certainly helps with that, although
| this type of work would not necessarily be as exciting as trying
| to build the next big thing, which everyone was scrambling for
| before.
|
| OpenAI could build a state-of-the-art tool with a few hundred
| developers - to me, that means that money will converge to them
| and other big orgs rather than the opposite.
| Yoric wrote:
| That's definitely a risk.
|
| With a PhD in the domain, I consider myself pretty good at (a
| subset of) distributed programming. But these days, when
| companies hire for distributed programming, they seem to want
| developers who know a specific set of tools and APIs. I'm better
| suited to reimplementing them from scratch.
| jurassic wrote:
| Maybe this is alarmist, but I don't see how LLMs don't collapse
| our entire economic system over the next decade or so. This is
| coming for all of us, not just the NLP experts in big company
| research groups. Being able to cheaply/instantly perform
| virtually any task is great until you realize there is now nobody
| left to buy your product or service because the entire middle
| class has been put out of work by LLMs. And the service
| industries that depend on those middle class knowledge workers
| will be out of work because nobody can afford to purchase their
| services. I don't see how this doesn't end with guillotines
| coming out for the owner class and/or terrorism against the
| companies powering this revolution. I hope I'm wrong.
| qwerty3344 wrote:
| There are entire sectors of the economy that LLMs can't touch -
| hospitality, manufacturing, caregivers, religious sectors,
| live-action entertainment, etc. Sure some of these will be
| replaced by robots but there will always be new jobs too.
| seydor wrote:
| white collar workers detest these jobs
|
| the only reason they studied, went to university etc was to
| avoid doing manual labour. this has been happening for
| decades, a century. they'll be depressed
| krapp wrote:
| Just give them the same lecture they like to trot out about
| supply and demand and how automation simply creates new
| opportunities. And then have an AI compose a dirge to play
| on the world's smallest violin for them.
| seydor wrote:
| it's not even their fault. societies, cities have been
| built to produce this kind of person
| krapp wrote:
| It isn't anyone's fault but the capitalist class. Still,
| real life holds no sympathy for people who consider any
| work beneath their dignity.
|
| They'll be depressed? Tough shit, we're _all_ depressed.
| But I hear there's dignity and self respect in a
| lifetime of backbreaking labor. Hard times create strong
| men and whatnot.
| jurassic wrote:
| No, there are not. Everything in the economy is connected and
| you can't have a vibrant industry without customers. The
| customers of hospitality/entertainment/healthcare/etc
| businesses are largely the middle class who will be put out
| of work by LLMs. So the person who today makes $200/night in
| tips waiting tables at a nice restaurant.... who will be
| buying those meals?
| woah wrote:
| Someone who uses an LLM as a tool to perform a useful
| service
| seydor wrote:
| That would be another LLM or a robot then
| toss1 wrote:
| The owner class gets enlightened and makes sure that the govt
| taxes them and implements a solid Universal Basic Income.
|
| This is part of what the original UBI concept was about.
|
| If this doesn't happen, yes, there will likely be violence
| until it is fixed.
|
| The other view is that many technologies that were supposed to
| reduce work actually net added work, because now more
| sophisticated tasks could be done by the humans, so the net was
| similar to the highway paradox where more and wider highways
| breed more traffic by induced demand.
|
| Where would this demand come from? IDK, but at least initially,
| these LLMs make such massive errors that keeping a lid on the
| now-hyper-industrial-scale bullshit[0] spewed by these machines
| will make many more full time jobs.
|
| Seriously, just today I was amazed at how the GPT model tried
| to not only BS me with completely fabricated author names for
| an article that I had it summarize, but it repeatedly did so
| even after being successively prompted more and more
| specifically about where it could find the actual author (hint:
| right after the byline starting with the word "Author"). It just
| kept apologizing and then doubling down on more fantastic lies,
| as if it were very motivated to hide the truth (I know it's
| not, that's just how fantablous it was).
|
| [0] Bullshit being defined as speech or writing telling a good
| tale but with zero regard to the truth or falsehood of any part
| of it -- with no malice but nonetheless a salad of truth and
| lies.
| djous wrote:
| During my master's degree in data science, we had several
| companies visit our faculty to recruit students. Not a single one
| was a specialized NLP company, but many of them had NLP projects
| going on.
|
| Most of those projects were the usual "solution looking for a
| problem to solve". Even those projects that might have had _some_
| utility would have been way more effective to buy/license as a
| product than to develop as an in-house solution. Because really,
| what's the use of throwing a dozen 25-to-30-year-olds with non-
| specialized knowledge at a problem, when there are companies full
| of guys with PhDs in NLP that devote all their resources to NLP?
| Yeah, you can pipe together some python, but these kinds of
| products will always be subpar and more expensive long-term than
| just buying a proper solution from a specialized company.
|
| To me it was pretty clear that those projects were just PR so
| that c-levels could sell how they were preparing their company
| for a digital world. Can't say I'm sorry for all the people
| working on those non-issues though. From the attitude of
| recruiters and employees, you'd think they were about to find a
| cure for cancer. Honestly, I can't wait for GPT and other
| productivity tools to wreak havoc upon the tech labour market.
| Some people in tech really need to be taken down a notch or two.
| version_five wrote:
| > those projects were just PR so that c-levels could sell how
| they were preparing their company for a digital world
|
| This is exactly it. The 2017-2019 corporate version of "invest
| in AI" meant to build an in-house team to do ML experiments on
| internal data, and then usually evolved a bit to get some "ml-
| ops" thrown in so they could "deploy" the models they built. I
| spent some time with a few companies doing this and it always
| reminded me of "The Cat in the Hat Comes Back", when the cat let
| all the little cats out of his hat and they went to work on the
| snow spots... just doing busy work...
|
| Anyway it's a symptom of the hype cycle - AI was the next
| electricity, but there were no actual products and nothing
| clear to do with it, just hire a bunch of kids to act like they
| were in a kaggle competition, or worse a bunch of PhDs to be
| under-utilized building scikit-learn models.
|
| Now that there are (potentially) products coming along that at
| least bypass the low-level layer of ML, having an internal team
| makes no sense. Maybe the most logical thing that will happen
| is the pendulum will swing too far, and this bubble will
| consist more of businessy types using ChatGPT without remotely
| understanding it or realizing it's just a computer program.
| DebtDeflation wrote:
| >The 2017-2019 corporate version of "invest in AI" meant
| building an in-house team to do ML experiments on internal data,
| which then usually evolved a bit to get some "ml-ops" thrown in
| so they could "deploy" the models they built.
|
| You nailed it, although very few models actually ever got
| deployed to Prod at Fortune 500 non-tech companies and the
| few that did delivered little value. I'm a consultant and
| most internal AI/ML/DS teams that I interacted with were just
| running experiments on internal data as you said, and the
| results would get pasted into Powerpoint, a narrative
| created, and then presented to executives, who did little or
| nothing with the "insights". Reminded me of the "Big Data"
| boom a few years earlier where every company created a Big
| Data Team who then promptly stood up a Hadoop cluster on
| prem, ingested every log file they could find, and
| then..................did nothing with it.
| jazzyjackson wrote:
| > having an internal team makes no sense.
|
| Disagree. I was on one of these R&D/prototyping teams running
| ML experiments and you're right, it was the company wanting
| to present itself as future-leaning, ready to adapt, and I
| would say that at this point it was a good move to have
| employees who understand where the tech is going.
|
    | Companies with internal teams that are able to implement open
    | source models are in a much better negotiating position for
    | the B2B contracts they're looking at for integrating GPT into
    | their workflow; they won't _need_ GPT as much if they can
    | fall back on their own models, and they will be better able
    | to sit down with the sales engineers and call bullshit when
    | they're being sold snake oil.
| visarga wrote:
    | You tend to oversimplify the GPTs - they don't just work all
    | the time. You have to test how well they work, then you have
    | to select the best prompt and demonstrations, then you have
    | to update your prompt as new data comes along. There is
    | plenty of work parsing various inputs into a format the model
    | can understand and then parsing its outputs, especially for
    | information extraction.
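    |
    | To make the glue work concrete, a minimal sketch (assuming
    | the 2023-era `openai` Python client; the model name, prompt
    | and JSON schema are illustrative, not from any particular
    | system):
    |
    |     import json
    |     import openai
    |
    |     PROMPT = ('Extract the fields from the text below and '
    |               'answer ONLY with JSON '
    |               '{"company": ..., "amount": ...}\n\nText: {text}')
    |
    |     def extract(text):
    |         resp = openai.ChatCompletion.create(
    |             model="gpt-3.5-turbo",
    |             messages=[{"role": "user",
    |                        "content": PROMPT.replace("{text}", text)}],
    |             temperature=0,  # keep extraction output stable
    |         )
    |         raw = resp["choices"][0]["message"]["content"]
    |         try:
    |             return json.loads(raw)
    |         except json.JSONDecodeError:
    |             # the model ignored the format instruction -- this
    |             # branch is exactly the upkeep described above
    |             return {"error": "unparseable", "raw": raw}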
| icedistilled wrote:
  | Counterpoint: if one doesn't have one's own baseline model, how
  | does one know the vendor is providing value?
  |
  | Yeah, having a whole big team create the internal baseline is
  | not cost-effective, but having at least one or two people work
  | on something so you actually know the vendor is worth their
  | cost is important.
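  |
  | As a sketch of how little that baseline costs (scikit-learn;
  | the data-loading helper is hypothetical, any labelled sample of
  | your own data works):
  |
  |     from sklearn.dummy import DummyClassifier
  |     from sklearn.feature_extraction.text import TfidfVectorizer
  |     from sklearn.linear_model import LogisticRegression
  |     from sklearn.metrics import accuracy_score
  |     from sklearn.model_selection import train_test_split
  |     from sklearn.pipeline import make_pipeline
  |
  |     texts, labels = load_internal_sample()  # hypothetical helper
  |     X_tr, X_te, y_tr, y_te = train_test_split(texts, labels,
  |                                               random_state=0)
  |
  |     majority = DummyClassifier(strategy="most_frequent")
  |     majority.fit(X_tr, y_tr)
  |     simple = make_pipeline(TfidfVectorizer(),
  |                            LogisticRegression(max_iter=1000))
  |     simple.fit(X_tr, y_tr)
  |
  |     # judge the vendor against these numbers, not against zero
  |     print(accuracy_score(y_te, majority.predict(X_te)))
  |     print(accuracy_score(y_te, simple.predict(X_te)))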
| danaris wrote:
  | > Honestly, I can't wait for GPT and other productivity tools
  | to wreak havoc upon the tech labour market. Some people in
  | tech really need to be taken down a notch or two.
|
| You have to remember that when these sorts of things happen,
| the ones who get "taken down" in ways that actually affect
| their lives are invariably the ones who already have the least.
| The ones who "need" that takedown will be just fine, unless
| they've made incredibly stupid investment decisions.
| avmich wrote:
| > the ones who get "taken down" in ways that actually affect
| their lives are invariably the ones who already have the
| least
|
    | I'm not sure that was the case with personal computing in the
    | 1980s. Which significant part of society had the least and
    | got "taken down"?
| ChuckNorris89 wrote:
      | Personal computing didn't automate many things that only
      | humans could previously do. The personal computer let you
      | move the data haystack from paper to a digital medium, but
      | you still had to know the right SW incantations and
      | meticulously dig through it to find the needle.
      |
      | ChatGPT and other ML apps can find the needle in the data
      | haystack for you. To look up stuff on a PC you still needed
      | to know where your stuff lived, filesystem details and how
      | to formulate queries. You no longer need to learn to "speak
      | machine"; finally the machines can understand human
      | language and do what you tell them to do.
|
      | Of course, ChatGPT & friends can also say dumb shit or just
      | hallucinate stuff, so you still need a human in the loop
      | to double-check everything.
| osigurdson wrote:
| >> "solution looking for a problem to solve"
|
  | I wonder if this is as bad as everyone thinks. When a new
  | technology arrives that is not completely understood, isn't
  | the right approach to try to find some applications for it?
  | Sure, most will fail, but some valid use cases will likely
  | emerge.
|
| I'm pretty sure almost all technologies at some point were
| solutions looking for a problem to solve. Examples include the
| internet, the computer and math.
| 72deluxe wrote:
| The computer was always designed to be a computational
| machine. It didn't just appear and then someone thought "what
| could I actually use this for?"
|
    | Also, the Internet came out of DARPA's need to share data
    | between geographically remote military facilities. It wasn't
    | like they wired up devices and then thought "what could we
    | use this for?".
| osigurdson wrote:
| Do you assert that we had a good understanding of all of
| the problems that a computer could solve before making it?
| This seems absurd to me.
| karpierz wrote:
| GPs point is that the technologies you've mentioned
| solved real problems before they were adapted for
| different use cases. They didn't make Darpanet and then
| think "man, if only there was some use for this" until
| the Internet came along. They designed it to send signals
| between distant nodes while being resilient to individual
| nodes being nuked.
|
| Only after DARPAnet solved that problem did it get
| adapted to some other problems (ex: how do I send cat
| pictures to people)?
| Technotroll wrote:
    | R&D is fraught with risk, but some risks are more rewarding
    | than others. These companies don't just sit on useless
    | knowledge. Take Google, which now sits as a "loser" in the
    | current AI "competition"; their projects are far from
    | worthless. Because they've built up expertise, they're now in
    | a very good position to overtake Microsoft on AI, even though
    | they currently seem a bit behind. (And frankly, in many
    | fields they're already far ahead.) So OK, perhaps a behemoth
    | like Google is a bad example, but I still think the same
    | thing holds for smaller companies. If you just read the news,
    | you would think that a technological race like this has only
    | one winner, but that just isn't true. Even so-called
    | "worthless projects" can help increase understanding and
    | expertise in quite important areas that, while not "worth"
    | anything currently, may still have huge value in the future.
    | The only way to know is to stay in the race.
| JKCalhoun wrote:
    | > I wonder if this is as bad as everyone thinks.
    |
    | I think it is. If they actually do end up finding a problem
    | to solve, that would be serendipitous, but I imagine the vast
    | majority of the time they find themselves in the business of
    | trying to convince the rest of us to buy a thing that we
    | don't need. And while the latter may drive the economy to
    | some degree, as I get older I detest it more and more.
| osigurdson wrote:
| No one actually needs anything - perhaps food and water but
| even survival is not strictly necessary.
|
| The problem with "stuff we don't need" arguments is they
| are fundamentally nihilistic.
|
| Everyone needs a flying car so let's get on with it.
| 72deluxe wrote:
| This appears to be the computing model of the past 20
| years, from what I can tell?
|
| There have been no real advancements since the desktop
| model of the late 1990s. We might have more animations and
| applications running in virtual machines for security
| purposes, but literally nothing new has come out.
|
        | Even all the web apps are reimplementations of basic
        | desktop capabilities from the decades before, but slower
        | and with more RAM usage. They might be easier to write (I
        | personally don't think so - RAD apps from the 90s were
        | quicker to write and use) but the actual utility hasn't
        | changed; if anything it's just shoving all of your data
        | from your microcomputer to someone else's microcomputer,
        | and being tracked and losing control of said data whilst
        | you're at it!
|
| And we have easier access to videos on the Internet, I
| guess??
|
| It all seems to be missing the point of actually having a
| computational device locally. There is no computation going
| on. It's all digital paper pushing.
| tracerbulletx wrote:
    | It might not be optimal if we knew the future, but to me it's
    | just a natural organic process: organizations, and factions
    | inside organizations, are slime molds. A new value gradient
    | appears in the environment and we all spread out and crawl in
    | a million different outgrowths, feeling blindly in the
    | general direction of something that feels like a good idea,
    | until one of the tendrils hits actual value and becomes a
    | path of least resistance and the other ones dry out and die.
| JohnFen wrote:
| > I'm pretty sure almost all technologies at some point were
| solutions looking for a problem to solve. Examples include
| the internet, the computer and math.
|
| I think the opposite -- nearly all technologies came about as
| a result of people trying to solve existing real problems.
| Examples include the internet, the computer and math.
| (Although I don't think "math" counts as a technology.)
|
    | The internet came about from ARPANET, which was solving the
    | problem of network resiliency. Computers automated what used
| to be a human job ("computer") of doing very large amounts of
| computations. That automation was solving the problem of
| needing to do more computations than could be done with
| armies of people.
| echelon wrote:
  | > Honestly, I can't wait for GPT and other productivity tools
  | to wreak havoc upon the tech labour market. Some people in
  | tech really need to be taken down a notch or two.
|
| That's an odd reason to want this.
| MomoXenosaga wrote:
| Less bullshit jobs. Society needs doctors, nurses, plumbers
| and teachers not tech bros.
| tomp wrote:
| With this kind of mindset, we'd still be using lead pipes
| and letting blood.
|
| Doctors and plumbers might make society work, but
| technology drives society forward.
| throwayyy479087 wrote:
| Sure. But recruiting scheduling coordinators do not.
| Those people would better serve society stringing up new
| HVDC lines, which the current model does not incentivize.
| siva7 wrote:
      | You do realise those tech "bros" are what enable doctors,
      | nurses, plumbers and teachers to have a better work life?
| themaninthedark wrote:
| I am not sure I can agree.
|
| Doctors and nurses now spend more time entering data than
| talking to patients.
|
| Teachers now spend more time entering grades into online
| systems and fielding messages from parents.
|
| Not sure how tech is helping or hurting plumbers except
| for the standard GPS tracking that bosses use to follow
| them around.
| laserlight wrote:
      | AI or technology won't reduce bullshit jobs. On the
      | contrary, they might increase bullshit jobs, because there
      | would be more resources to allocate to those jobs.
| harimau777 wrote:
      | Two problems. First: who is going to pay to retrain people
      | for those jobs?
      |
      | Second: except perhaps for doctors (and even then residency
      | is BS), all of those jobs are treated or paid like crap.
| tpoacher wrote:
| take it from me; "doc bros" are far, _far_ worse.
| JKCalhoun wrote:
| I agree with your sentiment but disagree that AI research
| is in any way the domain of tech bros.
|
      | I'm starting to see the term "tech bros" appear more and
      | more on HN - beforehand I more frequently saw it outside
      | of this site.
|
| Some people on HN I have seen really come down on those
| that use the term. I don't.
|
| Perhaps those of us in the industry ought to recognize that
| the term exists because of a growing resentment among
| people outside of the tech industry.
|
      | Your comment hints at why that is, too.
| [deleted]
| meany wrote:
        | It's evidence of resentment, but not of well-reasoned
        | discourse against something the tech industry is doing.
        | Characterizations like this flatten a group into a
        | single entity that is easier to hate and to assign
        | intentions to. It's not constructive to any conversation
        | that moves a discussion forward. A person who is mad at
        | "tech bros" is likely more upset about systemic forces
        | that they want to blame on a target. It's logically
        | equivalent to making sweeping statements blaming
        | immigrants for suppressed wages.
| stonogo wrote:
| Comparing affluent ivory-tower digital landlords to
| vulnerable people being blamed for things outside their
| control is definitely one of the decisions of all time.
| It also seems like a lot of exercise just to feel
| justified in discarding a large group of opinions.
|
| People start generalizing about groups like this when
| they've stopped caring about negative policy consequences
| which affect those groups. Politicians who blame wage
| stagnation on immigrants do not expect to have those
| immigrants who gain citizenship vote for them. Why do you
| think people might have stopped caring what happens to
| the group designated "tech bros"?
| piva00 wrote:
| Society definitely needs those, but the incentives of the
| system most societies live under do not align to those
| needs. We are 100% into a society of wants, not needs, and
| the rewards are for those who sell stuff for these wants.
| Our needs went into the "cost center" of society's
| calculation, not an investment, and so it's been a race to
| the bottom for those professions.
|
| While adtech, crypto and other bullshit gets massive
| funding because it can turn a profit.
|
| The incentives to have a good society don't align with the
| incentives of financial capitalism.
| steponlego wrote:
  | Why start a startup of any kind if there's a bigger company
  | full of people already competing in the same space?
| twodave wrote:
| Because product execution at SO many places sucks. LLMs won't
| help with that, either. They'll just help people market their
| crappy products more cheaply. Woe to the marketers, however.
| api wrote:
| It does seem like the (misnamed because it's not open) OpenAI is
| very far ahead of most other efforts, especially at the edges in
| areas like instruction training and output filtering.
|
| Playing with LLaMA 65B gave me a sense of what the median raw
| effort is probably like. It seems to take a lot of work to fine-
| tune and harness these systems and get them reliably producing
| useful output.
| CuriouslyC wrote:
| I don't think it's possible to build a moat around models at
| all. The model architectures are public, and there are already
| distributed group training projects so the compute isn't a
| barrier. The only moat is data.
| levidos wrote:
    | OpenAI stopped publishing the architecture of GPT-4, so I'm
    | worried that architectures will not be as openly available
    | in the future.
| belter wrote:
| A whole thread of AI experts discussing how AI is making them
| obsolete... back to gardening...
| izacus wrote:
| This fad too shall pass. And the tech will end up where it
| always does: helping some, changing some but nowhere near as
| much as the gold rush profiteers would make you believe.
| emptysongglass wrote:
| This is not an event that calls for pithy adages. The fruits
| of ML are not a fad just like personal computing was not a
| fad. It's a watershed event that cuts across every knowledge
| worker's domain. If you're not currently using these LLMs it
| may not be obvious to you but those of us that have tried to
| apply them to our current fields see huge gains in
| productivity. Just in my own little slice of knowledge work,
| I've seen yield increases that have saved me multiple days of
| work on a single project.
|
| Everyone is going to feel this, most prominently people in
| the sorts of industries that frequent HN. If you haven't yet,
| you will or you will be forced to when you discover everyone
| in your field is out-producing you armed with these tools.
| izacus wrote:
| Uh-huh.
|
| How's them NFTs and Blockchain doing the watershed world
| changin these days?
| macinjosh wrote:
| Is the compute for running an LLM cheap enough to scale at the
| moment? LLMs seem to be a great generalist solution, but could
| specifically targeted NLP solutions still outperform in terms of
| speed/cost when you are processing high volumes of inputs?
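|
| For a sense of the trade-off, a back-of-envelope sketch (the local
| model is a real Hugging Face checkpoint; the volume and the API
| price are assumptions for illustration):
|
|     from transformers import pipeline
|
|     # small distilled classifier, runs locally in milliseconds
|     clf = pipeline("sentiment-analysis",
|                    model="distilbert-base-uncased-finetuned-sst-2-english")
|     print(clf("the return process was painless"))
|
|     docs_per_day = 10_000_000
|     tokens_per_doc = 200
|     price_per_1k = 0.002  # assumed gpt-3.5-turbo-era API rate
|     print(docs_per_day * tokens_per_doc / 1000 * price_per_1k)
|     # -> 4000.0 dollars per day at this volume, before latency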
| davidkuennen wrote:
| I tried translating something from English to German (my native
| language) yesterday with ChatGPT4 and compared it to Microsoft
| Translate, Google Translate and DeepL.
|
| My ranking:
|
| 1. ChatGPT4 - flawless translation. I was blown away
|
| 2. DeepL - very close, but one mistake
|
| 3. Google Translate - good translation, some mistakes
|
| 4. Microsoft Translate - bad translation, many mistakes
|
| I can understand the panic.
| og_kalu wrote:
  | I tested these before GPT-4, but 100%: bi/multilingual LLMs
  | are the key to solving machine translation.
| https://github.com/ogkalu2/Human-parity-on-machine-translati...
| davidktr wrote:
| Fellow German here. Funny thing about DeepL: It translates
| "pathetisch" as "pathetic". For example: "Das war eine
| pathetische Rede." -> "That was a pathetic speech."
|
| I guess we have to get used to software redefining the meaning
| of words. It was kind of funny when that happened regarding
| Google Maps / neighborhood names, but with LLMs it's a
| different ballgame.
| DangerousPie wrote:
| Another German here, and I have to admit I would have
| actually translated "pathetisch" as "pathetic" as well. I
| guess my German vocabulary has suffered quite a bit over the
| years of living abroad.
| pohuing wrote:
| Pathetic can mean emotional in English as well. Though I only
| discovered that by reading the dictionary.
|
| For anyone who doesn't speak German, pathetisch means with
| pathos, impassioned.
| harimau777 wrote:
| This strikes me as a good example of how nuanced language
| can be.
|
| A native English speaker probably would only use "pathetic"
| to mean "emotional" if the emotions were specifically
| negative. They also would use pathetic to describe someone
| experiencing non-emotional suffering such as injury or
| poverty.
|
| Therefore, a native English speaker probably would not use
| "pathetic" to mean "emotional" in everyday writing.
| However, I could definitely see someone using it to mean
| emotional when they were being more poetic. For example, I
| could see someone calling an essay on the emotional toll of
| counseling "The Pathetic Class" in order to imply that
| social workers are a class that society has tasked with
| confronting negative emotions.
| pyuser583 wrote:
      | That's a definition you see as a technical term in ancient
      | philosophy. Beyond literal translations from Greek, it
      | doesn't come up much.
| sinuhe69 wrote:
    | I think we should not undervalue DeepL. Not only is its
    | default translation already very good, it also allows users
    | to select different alternatives and remembers those
    | preferences. That is not possible, or at least not easy,
    | with GPT.
    |
    | And as with anything else, it will keep improving with time.
    | LLMs are not the answer to all linguistic problems.
| davidkuennen wrote:
  | The most amazing thing about ChatGPT translation is that you
  | can even instruct it how to translate, for example whether to
  | "duzen" or "siezen" in German. I simply tell it which to use
  | and it does. Absolutely amazing. It's like actually working
  | with a real translator.
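  |
  | Roughly like this, as a sketch (assuming the 2023-era `openai`
  | client; the model name and text are illustrative):
  |
  |     import openai
  |
  |     resp = openai.ChatCompletion.create(
  |         model="gpt-4",
  |         messages=[
  |             {"role": "system",
  |              "content": "Translate English to German. Address "
  |                         "the reader formally (siezen), in a "
  |                         "business-letter tone."},
  |             {"role": "user",
  |              "content": "Thanks for your patience while we fix "
  |                         "this."},
  |         ],
  |     )
  |     print(resp["choices"][0]["message"]["content"])
  |     # swapping "siezen" for "duzen" in the instruction flips
  |     # the register -- something classic MT engines barely expose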
| siva7 wrote:
    | That's something I'm really sorry about, but those jobs will
    | likely be the first to fade away. Where I live there is a
    | whole university faculty dedicated to the profession of
    | language translator.
| yieldcrv wrote:
      | goodbye to teaching English in Asia! come on home, y'all!
| epups wrote:
  | I'm actually not sure what will become of tools like DeepL.
  | Whatever edge they may have from dataset tuning and other
  | tricks under the hood is likely superseded by a better
  | architecture, which in turn requires a ton of capital to
  | train. By the time they come up with a GPT-4 equivalent, we
  | will be using GPT-5.
| zirgs wrote:
| Does it translate hate speech too?
| macawfish wrote:
| Of course it can
| zirgs wrote:
      | But I thought ChatGPT has guardrails that prevent it from
      | outputting hate speech, praising certain politicians and
      | so on.
| groffee wrote:
| [dead]
| MonkeyMalarky wrote:
| I'm not at a big tech company, and we don't sell algorithms, but
| my team does use a lot of NLP stuff in internal algorithms. The
| only panic I have is trying to keep up and take the time to learn
| the new stuff. If anything, things like GPT-4 are going to make
| my team 10x more successful without having to hire an army of
| PhDs.
| jarebear6expepj wrote:
  | The PhD army will rise up against us one day... as soon as
  | they finish their TA appointments.
| not-chatgpt wrote:
  | What does your team do? It feels like GPT-4 can handle any
  | task out there. The only drawbacks are latency and cost.
| MonkeyMalarky wrote:
    | The price isn't even that bad: even at the most expensive
    | tier, 6 cents per 1k tokens, it won't cost me much. It's the
    | context size that's amazing. Gone are the days of only being
    | able to pass ~500 tokens into something like BERT.
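    |
    | Back-of-envelope (using the March 2023 GPT-4 8k price list
    | as I understand it: $0.03/1k prompt tokens, $0.06/1k
    | completion tokens):
    |
    |     prompt_toks, completion_toks = 6_000, 1_000
    |     cost = (prompt_toks / 1000 * 0.03
    |             + completion_toks / 1000 * 0.06)
    |     print(cost)  # -> 0.24 dollars to process a whole document
    |     # a BERT-style encoder tops out around 512 tokens, so the
    |     # 6,000-token prompt above simply would not fit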
| gniv wrote:
| I remember thinking about this when AlphaFold was announced. Did
| it happen back then? Were there large shifts in
| companies/universities that were doing folding research?
| jhrmnn wrote:
| I've been thinking about this. My current theory is that
| molecular simulation is a much more heterogeneous activity than
| language modeling. Language is always the same _kind_ of data.
| Molecular simulations span orders of magnitude in space and
| time and depending on that, data and even objectives have very
| different form. AlphaFold is just one small piece in this
| puzzle and it's very easy for a research project to incorporate
| AlphaFold into an existing pipeline and shift its goal.
| tippytippytango wrote:
| Not even experts in the domain could see themselves being
| replaced and pivot in time. What hope does an ordinary person
| have in preparing for what's coming? Telling people to retrain
| will not be an acceptable answer because no one can predict which
| skills will be safe from AI in 5 years.
| twa34532 wrote:
| oh no?!
|
| so finally the tech sector is experiencing for itself what it has
| done to other lines of professions for the past decades, namely
| eradicating them (rightfully) with innovation?
|
| well, the same advice applies then:
|
| * embrace it, move on and retrain for another profession
| * learn empathy from the panic and hurt
| credit_guy wrote:
| They may panic, but they shouldn't. They can quickly pivot. GPT
| models can be used off the shelf, but they can also be custom-
| trained. Every large org has a huge internal set of documents,
| plus a large external set of documents relevant to its work
| (research articles, media articles, domain-relevant rules and
| regulations). They can train a GPT bot on their particular
| corpus. And that is now. Soon (I'd give it at most one year),
| we'll be able to train GPT bots on videos.
|
| All this training does not happen by itself.
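|
| In practice, most of the "trained on your documents" bots today
| are retrieval rather than weight updates: embed the corpus once,
| fetch the closest chunks per question, and stuff them into the
| prompt. A minimal sketch (assuming the 2023-era `openai` client;
| the corpus is illustrative):
|
|     import numpy as np
|     import openai
|
|     def embed(text):
|         r = openai.Embedding.create(model="text-embedding-ada-002",
|                                     input=text)
|         return np.array(r["data"][0]["embedding"])
|
|     corpus = ["Returns are accepted within 30 days.",
|               "Store hours are 9am to 5pm."]
|     vecs = np.stack([embed(d) for d in corpus])
|
|     def answer(question):
|         q = embed(question)
|         sims = vecs @ q / (np.linalg.norm(vecs, axis=1)
|                            * np.linalg.norm(q))
|         context = corpus[int(sims.argmax())]  # best-matching chunk
|         r = openai.ChatCompletion.create(
|             model="gpt-3.5-turbo",
|             messages=[{"role": "user",
|                        "content": "Answer from this context only:\n"
|                                   + context + "\n\nQ: " + question}])
|         return r["choices"][0]["message"]["content"]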
| nr2x wrote:
| 100%. Anybody with experience in distributed systems,
| networking, or SRE knows the plumbing can be as challenging as
| the "big idea". Training these models is a plumbing job. And
| that's actually really hard to pull off.
| MonkeyMalarky wrote:
| Yeah this thread has been the motivation for me to sign up on
| the wait list and cost out what it would take to try fine-
| tuning their older models on our data. There's still plenty of
| work out there when it comes to building a solution to a
| problem.
| hnbad wrote:
| When I was studying Computational Linguistics I kept running into
| the unspoken question: given that Google Translate already
| exists, what is even the point of all of this? We were learning
| all these ideas about how to model natural language and tag parts
| of speech using linguistic theory so we could eventually discover
| that utopian solution that would let us feed two language models
| into a machine to make it perfectly translate a sentence from one
| language into another. And here was Google Translate being "good
| enough" for 80% of all use cases using a "dumb" statistical model
| that didn't even have a coherent concept of what a language is.
|
| It's been close to two decades and I still wonder if that "pure"
| approach has any chance of ever turning into something useful.
| Except now it's not just language but "AI" in general: ChatGPT is
| not an AGI, it's a model fed with prose that can generate
| coherent responses for a given input. It doesn't always work out
| right and it "hallucinates" (i.e. bullshits) more than we'd like
| but it feels like this is a more economically viable shot at most
| use cases for AGI than doing it "right" and attempting to create
| an actual AGI.
|
| We didn't need to teach computers how language works in order to
| get them to provide adequate translations. Maybe we also don't
| need to teach them how the world works in order to get them to
| provide answers about it. But it will always be an 80% solution
| because it's an evolutionary dead end: it can't know things, we
| have only figured out how to trick it into pretending that it
| does.
| dogcomplex wrote:
| Ask a toddler how the world works and you'll get a very similar
| response. It is entirely likely the 80%-of-human-intelligence
| barrier is not a "dead end" but merely a temporary limitation
| until these models are made to hone their understanding and
| update over time (i.e. get feedback) instead of going for zero-
| shot perfection. The GPT models incorporating video should
| start developing this "memory" naturally as they incorporate
| temporal coherence (time) into the model.
|
  | The fact we got this far through brute force is just insanely
  | telling. This is a natural phenomenon we're stumbling upon,
  | not something crafted by humans.
  |
  | Also - fun fact, the Facebook LLaMA model that fits on a
  | Raspberry Pi and is almost as good as GPT-3? Also basically
  | brute force. They just trained a smaller model for a lot
  | longer. Food for thought.
| nl wrote:
| > Computational Linguistics I kept running into the unspoken
| question
|
| I've done a lot of work in NLP and the times when computational
| linguistics has been useful is very rare. The only time I
| shipped something to production that used it was a classifier
| for documents that needed to evaluate them on a sentence by
| sentence basis for possible compliance issues. Computational
  | linguistics was useful then because I could rewrite multi-
  | clause sentences into simpler single-clause sentences, which
  | the classifier could get better accuracy on.
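  |
  | One common heuristic for that kind of clause splitting,
  | sketched with spaCy (not the actual pipeline I shipped): treat
  | each conjoined verb in the dependency parse as its own clause.
  |
  |     import spacy
  |
  |     nlp = spacy.load("en_core_web_sm")
  |
  |     def split_clauses(sentence):
  |         doc = nlp(sentence)
  |         heads = [t for t in doc
  |                  if t.dep_ in ("ROOT", "conj")
  |                  and t.pos_ in ("VERB", "AUX")]
  |         clauses = []
  |         for head in heads:
  |             # the head's subtree, minus nested clauses' subtrees
  |             nested = {w for h in heads
  |                       if h is not head and h.head is head
  |                       for w in h.subtree}
  |             toks = [t for t in head.subtree if t not in nested]
  |             clauses.append(" ".join(t.text for t in toks))
  |         return clauses
  |
  |     print(split_clauses("The invoice was approved and the "
  |                         "goods were shipped late."))
  |     # roughly: two clauses, one per verb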
|
| > And here was Google Translate being "good enough" for 80% of
| all use cases using a "dumb" statistic model that didn't even
| have a coherent concept of what a language is.
|
  | I assume you are aware of Frederick Jelinek's quote "Every
  | time I fire a linguist, the performance of the speech
  | recognizer goes up"?[1]
|
| That was in 1998. It's been pretty clear for a long time that
| computational linguistics can provide some tools to help us
| understand language but it is insufficiently reliable to use
| for unconstrained tasks.
|
| [1] https://en.wikipedia.org/wiki/Frederick_Jelinek
| leroy-is-here wrote:
| I personally think that humans easily apply structure to
| language that doesn't really exist. In fact, we restructure our
| languages daily, as individuals, when communicating verbally
| and through text. We make up words and shorthands and
| abbreviations and portmanteaus. But I think the brain simply
| makes connections between words and things and the structure of
| speaking those words is interpreted like audio or visuals in
| our brains -- just patterns to be placed.
|
| Really, words, utterances by themselves, carry meaning.
| Language is just a structure for _us_, so to speak, that we
| agree on for ease of communication. I think this is why
  | probabilistic models do so well: the ideas we all have are
  | mostly similar, and it really is just about mapping from one
  | kind of word to another, or one kind of phrase to another.
|
| Feel free to respond, I'm most certainly out of my depth here.
| esperent wrote:
  | Google Translate works amazingly well on languages with a
  | similar grammar (or at least it does on European languages,
  | which I have the experience to judge).
  |
  | However, translation of more distant languages is pretty
  | terrible. Vietnamese to English is something I use Google
  | Translate for every day, and it's a mess. I can usually guess
  | what the intended meaning was, but if you're translating a
  | paragraph or more it won't even translate the same important
  | subject words consistently throughout. Throw in any kind of
  | slang or abbreviations (which Vietnamese people use a _lot_
  | when messaging each other) and it's completely lost.
| hnfong wrote:
  | I learnt the very basics of computational linguistics since it
  | was related to a side project. I kept wondering why people
  | were spending huge amounts of resources on tagging and
  | labelling corpora of thousands of words, when to me it seemed
  | that in theory it should be possible to feed Wikipedia (in a
  | given language) into a program and have it spit out some
  | statistically correct rules about words and grammar.
  |
  | I guess the same intuition led to these new AI technologies...
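  |
  | The intuition in miniature (the corpus file is hypothetical;
  | any plain-text Wikipedia excerpt works). Scale this up and you
  | have essentially an n-gram language model:
  |
  |     import re
  |     from collections import Counter
  |
  |     text = open("wiki_sample.txt", encoding="utf-8").read()
  |     words = re.findall(r"\w+", text.lower())
  |     bigrams = Counter(zip(words, words[1:]))
  |
  |     # "grammar" falls out as conditional frequency: what
  |     # follows "the" most often in this corpus?
  |     after_the = {b: c for (a, b), c in bigrams.items()
  |                  if a == "the"}
  |     print(sorted(after_the.items(), key=lambda kv: -kv[1])[:10])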
| sp332 wrote:
| English Wikipedia is the largest. Wikipedia in other
| languages would be less useful.
| vkazanov wrote:
| The secret is that there are no grammars in our brains. Rules
| are statistical, not precise. Rules, idioms are fluid and...
| statistical.
|
| We're a bit more specialised than these new models. But
| that's it, really.
| xp84 wrote:
      | ^ This. I think the more we internalize the fact that
      | we're _also_ basically LLMs, the more we'll realize that
      | there likely isn't some hard barrier beyond which no AI
      | can climb. If you watch the things kids who are learning
      | language say, you'll see the same kinds of slip-ups that
      | betray the fact that they don't yet understand all the
      | words themselves, but nobody thinks that 2-year-olds
      | aren't people or thinks they will never learn to
      | understand these concepts.
| hnbad wrote:
| I think a huge part is that computational linguistics still
| chases the idea of a universal language model, which may
| simply not be possible. I haven't followed the science in
| general linguistics but something feels off when most of the
| information ends up being tagged onto nil particles (i.e.
| parts of speech present neither in utterances nor written
| language and not affecting intonation or otherwise being
| detectable except by contrasting the structure with related
| languages).
| hnfong wrote:
      | In a sense the model _is_ universal. It's just a 100GB
      | (give or take) neural network.
      |
      | And apparently (or so I heard) feeding transformer models
      | training data in language A can improve their ability to
      | understand language B. So maybe there's something truly
      | universal in some sense.
| tkgally wrote:
| * * *
| lisasays wrote:
| _Given that Google Translate already exists, what is even the
| point of all of this?_
|
| Because for the other 20 percent it's plainly -not- good
| enough. It can't even produce an acceptable business letter in
| a resource-rich target language, for example. It just gets you
| "a good chunk of the way there."
|
  | And there's no evidence that either (1) throwing exponentially
  | more data at the problem will see matching gains in accuracy,
  | or (2) this additional data will even be available.
| jjoonathan wrote:
    | Yeah... Google Translate is still occasionally translating
    | good/item as "baby" on Taobao. "Return Defective Baby" was
    | hilarious for a year or two, but that was ~8 years ago IIRC,
    | and now it just stands as a reminder that Google Translate
    | still has a considerable way to go.
| JohnFen wrote:
| Indeed. Google Translate is just barely useful. Whenever I
| use it to translate to English, what I get is generally
| poor. It's good enough to understand the gist of what the
| original text said, but that's about it. Fortunately, most
| of the time, understanding the gist is enough.
| bippingchip wrote:
| As one of the comments on the Reddit post says - it's not just
| big tech companies, but also entire university teams who feel the
| goalposts suddenly moving miles ahead. Imagine working on your
| PhD on chatbots since the start of 2022. Your entire PhD topic
| might be irrelevant already...
| ChuckNorris89 wrote:
  | _> Imagine working on your PhD on chatbots since the start of
  | 2022. Your entire PhD topic might be irrelevant already..._
|
| In fairness most PhD topics people work on these days, outside
| of the select few top research universities in the world, are
| obsolete before they begin. At least from what my friends in
| the field tell me.
| Yoric wrote:
| Anecdata of one: I finished my PhD about 20 years ago in
| programming language theory. I created something innovative
| but not revolutionary. Given how slowly industry is catching
| up on my domain, it will probably take another 20-30 years
| before something similarly powerful makes it into an
| industrial programming language.
|
| Counter-anecdata of one: On the other hand, one of the
| research teams of which I've been a member after my PhD was
| basically inventing Linux containers (in competition with
| other teams). Industry caught up pretty quickly on that.
| Still, academia arrived first.
|
    | edit Rephrased to decrease pedantry.
| nemaar wrote:
| > something as powerful as what I created
|
| Could you give us more detail? It sounds intriguing.
| Yoric wrote:
| I developed a new static analysis (a type system, to be
| precise) to guarantee statically that a
| concurrent/distributed system could fail gracefully in
| case of (D)DoS or other causes of resource exhaustion.
| Other people in that field developed comparable tools to
| statically guarantee algorithmic space or time complexity
| of implementations (including the good use of
| timeouts/resource sandboxes if necessary). Or type
| system-level segregation between any number of layers of
| classified/declassified information within a system. Or
| type systems to guarantee that binary (byte)code produced
| on a machine could find all its dependencies on another
| machine. Or type systems to prove that an algorithm was
| invariant with respect to all race conditions. Or to
| guarantee that a non-blocking algorithm always
| progresses. Or to detect deadlocks statically. etc.
|
| All these things have been available in academia for a
| long time now. Even languages such as Rust or Scala, that
| offer cutting edge (for the industry) type systems, are
| mostly based on academic research from the 90s.
|
| For comparison, garbage-collectors were invented in the
| 60s and were still considered novelties in the industry
| in the early 2000s.
| codethief wrote:
          | Is there a good resource (a review paper maybe?) to
          | get an overview of such programming language / type
          | system topics?
| pyuser583 wrote:
    | Isn't that the sort of thing advisors are supposed to
    | caution against?
    |
    | And aren't PhDs supposed to have a theoretical underpinning?
| simonh wrote:
| I'm not too worried about that. We don't actually understand
| fully how LLMs function internally, so research on how language
| works and how to process it is still useful in advancing our
  | understanding. It may not lead to products that can compete
  | with GPT, but PhDs aren't about commercialisation; they're
  | about advancing human knowledge.
| oldgradstudent wrote:
| > We don't actually understand fully
|
| A touch of understatement.
| echelon wrote:
| All these people don't understand how hireable and desirable
| they are now. They need to get out of academia and plugged into
| AI positions at tech companies and startups.
|
| Their value just went up tremendously, even if their PhD thesis
| got cancelled.
|
| Easily millionaires waiting to happen.
|
| ---
|
| edit: Can't respond to child comment due to rate limit, so
| editing instead.
|
| > That is not how it works at all.
|
| Speak for yourself. I'm hiring folks off 4chan, and they're
| kicking ass with pytorch and can digest and author papers just
| fine.
|
  | People stopped caring about software engineering and data
  | science degrees in the late 2010s.
|
| People will stop caring about AI/ML PhDs as soon as the
| challenge to hire talent hits - and it will hit this year.
| goethes_kind wrote:
| That is not how it works at all. You won't get hired if you
| don't have the academic pedigree in the first place. That
| means a completed Ph.D and good publications in good
| journals.
| Der_Einzige wrote:
      | Sorry, you don't need the Ph.D.; publications at top-10
      | NLP venues are enough.
| Yoric wrote:
| Hired in academia? Sure.
|
      | Hired in industry? That's the opposite. I had a friend
      | who had to hide that they had a PhD to be hired...
| goethes_kind wrote:
        | I guess we are living in two different universes. Any
        | job ad for an ML role or ML-adjacent role says Ph.D.
        | required or Ph.D. preferred. Maybe it is also a matter
        | of location. I am in Germany.
        |
        | For a plain SWE role a Ph.D. might be a disadvantage
        | here too, but for anything ML-related it is mandatory
        | from what I can see.
| visarga wrote:
          | In my experience as an interviewer, 90% of candidates,
          | PhD or not, actually have a mediocre grasp of ML. It
          | is a rare happy day when I get a good candidate. We
          | interview for months for one hire. I get to interview
          | candidates worldwide, so I've seen people from many
          | countries.
| nl wrote:
| Was this hiring for ML positions?
|
            | As someone who hired for this, we'd generally have
            | HR use a PhD (or _maybe_ a Master's degree) as a
            | filter before I even saw candidates.
            |
            | It's true that a PhD doesn't guarantee anything
            | though. I once interviewed a candidate with 2 PhDs
            | who couldn't explain the difference between
            | regression and classification (which was sort of our
            | "ok, let's calm your nerves" question).
| antegamisou wrote:
          | Yeah, you don't want to be anywhere near a place that
          | claims to hire HS graduates/4chan posters for
          | disciplines requiring advanced knowledge for
          | successful product development, unless, idk, they have
          | demonstrated mathematical talent through well-
          | established means, e.g. math olympiads or a thesis in
          | a relevant discipline.
          |
          | Almost all the time, these are shitty startups, where
          | bankruptcy is a matter of time, run by overpromising,
          | underdelivering grifter CTOs pursuing a get-rich-quick
          | scheme using whatever is trendy right now - crypto,
          | AI, whatever has the most density on the front page.
| kelipso wrote:
| Yeah true, I've had to work with too many fresh college
| grads to not relate to this. People try to take some rare
| case and generalize when that's really not applicable.
| yawnxyz wrote:
| As much as I'd wish to say "you're wrong, people care about
| intelligent, passionate people who do great work, not PhDs"
| you're right about much of the work out there.
|
        | We've tried many times to work with CSIRO (the NSF of
        | Australia) and it's fallen flat. They love impressive
        | resumes and nothing else. I had a chat with their
        | "Director of ML" who had never heard the words
        | "word2vec" or "pytorch" before. (And I'm a UX designer!)
|
| I think at most corporate firms you'll end up running into
| more resume stuffers than people who actually know how to
| use ML tools.
| Technotroll wrote:
      | Sorry, that's patently untrue. Perhaps it's anecdotal, but
      | I know a host of undergrads who got headhunted into quite
      | elite tech positions either directly from the uni where I
      | studied, or due to private projects they were in. And I
      | even know a few who don't have any uni education who got
      | hired into very high technical positions. Usually they
      | were nerdy types who had worked with or had exposure to
      | large systems for whatever reason, or who showed some
      | promise due to previous work, demos or programs they'd
      | made. But sure, most people have to go the edu route. It's
      | the safest way into tech, as you are - at least in
      | principle - fully vetted before you apply. Thinking that
      | you can get a data science or hacker job just by
      | installing Kali is ofc also very untrue.
| goethes_kind wrote:
| I think my post is more representative of the truth than
| yours. I am sure you are telling the truth, but these
| unique talents you are talking about are not
| representative of the bulk of people working in research.
| echelon wrote:
| (My posting rate limit went away)
|
| The demand for AI/ML will fast outstrip available talent.
| We'll be pulling students right out of undergrad if they
| can pass an interview.
|
| I'm hiring folks off Reddit and 4chan that show an
| ability to futz with PyTorch and read papers.
|
| Also, from your sibling comment:
|
| > Maybe it is also a matter of location. I am in Germany.
|
| Huge factor. US cares about getting work done and little
| else. Titles are honestly more trouble than they're worth
| and you sometimes see negative selection for them in
| software engineering. I suspect this will bleed over into
| AI/ML in ten years.
|
| Work and getting it done is what matters. If someone has
| an aptitude for doing a task, it doesn't matter where it
| came from. If they can get along with your team, do the
| work, learn on the job and grow, bring them on.
| goethes_kind wrote:
| Thanks for the insight. I hope you are right of course.
| Unfortunately, Germany is a bit hopeless in this respect.
| levidos wrote:
          | I'm a DevOps engineer and I became super interested in
          | AI recently. Any tips on how I can shift to an AI/ML
          | career?
| theGnuMe wrote:
          | Just as an FYI, some of the top AI folks at OpenAI
          | don't have PhDs. I remember reading that on Twitter (I
          | think).
| goethes_kind wrote:
| This is where it pays off to be researching something
| completely esoteric rather than something immediately
| applicable. I mostly scoffed at such research in the past, but
| now I see the value of it. The guy researching QML algorithms
| for NLP is not panicking yet, I think.
| sgt101 wrote:
  | Perhaps - but normally you'll have a narrowly defined and very
  | specific technical topic/hypothesis that you're working on,
  | and many/most of these aren't going to be closed off by GPT-4.
  |
  | Will this affect the job market (both academic and commercial)
  | for these folks? It's very hard to say. Clearly lots of value
  | will be generated by the new generation of models. There will
  | be a lot of catch-up and utilisation work where people will
  | want to have models in house and with specific features that
  | the hyperscale models don't have (for example constrained
  | training sets). I'm wondering how many commercial illustrators
  | have had their practices disrupted by Stable Diffusion. Will
  | the same dynamics (whatever they are) apply to the use of
  | LLMs?
| hn_throwaway_99 wrote:
    | > but normally you'll have a narrowly defined and very
    | specific technical topic/hypothesis that you're working on,
    | and many/most of these aren't going to be closed off by
    | GPT-4
|
| Pretty hard disagree. Even if your NLP PhD topic is looking
| at hypotheses on underlying processes about how languages
| work (and LLMs can't give you this insight), 9 times out of
| 10 it's with an eye for some sort of "applicability" of this
| for the future. GPT-4 just cut off the applicability parts of
| this for huge swaths of NLP research.
| wunderland wrote:
| Some big tech companies are witnessing a panic inside their
| entire org because they focus almost entirely on their
| competitors (except for the business divisions which are
| monopolies).
| KrugerDunnings wrote:
| Some people look at ChatGPT and think it's all over, and others
| look at it and start imagining all the things they can use it
| for.
| oars wrote:
| If you were an NLP researcher at a university whose years of
| experience are facing an existential threat because this rapid
| innovation is making your area obsolete, what would be some good
| areas to pivot to or refocus on?
| echelon wrote:
| Get out of academia and into industry.
|
  | Why the hell stay in academia? This is clearly the next
  | technological wave, and you shouldn't sleep on it. Especially
| when you're so well positioned to take advantage of your
| experience. You can make $500,000/yr (maybe more with all the
| new startups and options) and be on the bleeding edge.
|
| If you want to go back to academia later, you can comfortably
| do so. Most don't, but that doesn't mean it isn't an option.
| Beaver117 wrote:
| $500,000 is not a lot after all the inflation we had.
|
| $100,000 in 1970 is worth almost $800,000 today.
|
| Yes, downvote me all you want. But if you're an NLP expert
| thinking of working for a company that will make billions off
| your work, you can and should demand millions at least.
| matthewdgreen wrote:
| If you go into industry you'll be given a chance to deploy
| these models and rush them into products. You'll also make
| good money. If you go into academia (or research, whether
| it's in academia or industry) you'll be given the chance to
| try to understand what they're doing. I can see the appeal of
| making money and rushing products out. But it wouldn't even
| begin to compete with my curiosity. Makes me wish I was
| younger and could start my research career over.
|
| ETA: And though it may take longer, people who understand
| these models will eventually be in possession of the most
| valuable skill there is. Perhaps one of the last valuable
| human skills, if things go a certain direction.
| thwayunion wrote:
| Do both.
|
| Getting your hands dirty is the best way to understand how
| something works. Think about all the useless SE and PL work
| that gets done by folks who never programmed for a living,
| and how often faculty members in those fields with 10 yoe
| in industry spend their first few years back in academia
| just slamming ball after ball way out of the park.
|
      | More importantly, $500K gross is $300K net. Times 5 is
      | $1.5M, or times 10 is $3M. That's pretty good "fuck you"
      | money. On top of which, some industry street cred allows
      | new faculty to opt out of a lot of the ridiculous BS that
      | happens in academia. Seen this time and again.
      |
      | I think the easiest and best path for a fresh NLP PhD grad
      | right now is to find the highest-paying industry position,
      | stick it out 5-10 years, then return as a professor of
      | practice and tear it up pre-tenure (or just say f u to the
      | tenure track, because who needs tenure when you've got a
      | flush brokerage account?)
| siva7 wrote:
      | This is as likely to happen as someone fully understanding
      | how the brain works. I don't think you're missing out on
      | much in academia.
| CuriouslyC wrote:
| Plot twist: as these models increase in function,
| complexity and size, behaviors given activations will be as
| inscrutable to us as our behaviors are given gene and
| neuron activations.
| akavi wrote:
| The danger is that the opportunity academia is giving you
| is something more like "you'll be given the chance to try
| to understand what they were doing 5 years ago".
| tokai wrote:
| NLP is nowhere near being solved.
| mach1ne wrote:
| Depending on definition, it is solved.
| [deleted]
| gattilorenz wrote:
| You're using the _wrong_ definition, then. /s
|
| Where is some evidence that NLP is 'solved'? What does it
| even mean? OpenAI itself acknowledges the fundamental
| limitations of ChatGPT and the method of training it, but
| apparently everybody is happily sweeping them under the
| rug:
|
| "ChatGPT sometimes writes plausible-sounding but incorrect
| or nonsensical answers. Fixing this issue is challenging,
| as: (1) during RL training, there's currently no source of
| truth; (2) training the model to be more cautious causes it
| to decline questions that it can answer correctly; and (3)
| supervised training misleads the model because the ideal
| answer depends on what the model knows, rather than what
| the human demonstrator knows." (from
| https://openai.com/blog/chatgpt )
|
| Certainly ChatGPT/GPT-4 are impressive accomplishments, and
| it doesn't mean they won't be useful, but we were pretty
| sure in the past that we had "solved" AI or that we were
| just about to crack it, just give it a few years... except
| there's always a new rabbit hole to fall into waiting for
| you.
| gonzo41 wrote:
        | It'd be great if GPT could provide its sources for the
        | text it generated.
        |
        | I've been asking it about lyrics from songs that I know
        | of, but where I can't find the original artist listed. I
        | was hoping ChatGPT had consumed a stack of lyrics and I
        | could just ask it, "What song has this chorus or one
        | similar to X...?" It didn't work. Instead it firmly
        | stated the wrong answer. And when I gave it time ranges
        | it just noped out of there.
        |
        | I think if I could ask it a question and it could say,
        | "I've used these 20-100 sources directly to synthesize
        | this information," it'd be very helpful.
| IanCal wrote:
| Have you tried bing chat? That search & sourcing is
| exactly what it does.
| snowwrestler wrote:
| Sure, but the sources list is generated by the same
| system that generated the text, so it's equally subject
| to hallucinations. Some examples in here:
|
| https://dkb.blog/p/bing-ai-cant-be-trusted
|
| To answer the question above, these systems cannot
| provide sources because they don't work that way. Their
| source for everything is, basically, everything. They are
| trained on a huge corpus of text data and every output
| depends on that entire training.
|
| They have no way to distinguish or differentiate which
| piece of the training data was the "actual" or "true"
| source of what they generated. It's like the old
| questions "which drop caused the flood" or "which pebble
| caused the landslide".
| gonzo41 wrote:
| Not yet, I'll try at work on my windows box. Thanks.
| throwaway4aday wrote:
| Is the goal of NLP for the model to actually understand
| the language it is processing? By understand I mean
| having the ability to relate the language to the real
| world and reason about it the same way a human would. To
| me, that goes far beyond NLP into true AI territory where
| the "model" is at the least conscious of its environment
| and possesses a true memory of past experiences. Maybe it
        | would not be consciously aware of itself but it would
        | be damn close.
|
| I think LLMs have essentially solved the natural language
| processing problem but they have not solved reasoning or
| logical abilities including mathematics.
| gattilorenz wrote:
| LLMs have (maybe/probably) solved the language modeling
| problem, sure. That's hardly NLP, right? NLG is more than
| "producing text with no semantics" and both NLG and NLU
| are only part of NLP.
|
          | ChatGPT cannot even reason reliably about what it
          | knows and doesn't know... it's the Library of Babel,
          | but every book is written in excellent English.
| chartpath wrote:
| Even if that were true, LLMs don't give any kind of
| "handles" on the semantics. You just get what you get and
| have to hope it is tuned for your domain. This is 100% fine
| for generic consumer-facing services where the training
| data is representative, but for specialized and jargon-
| filled domains where there has to be a very opinionated
| interpretation of words, classical NLU is really the only
| ethical choice IMHO.
| nothrowaways wrote:
  | Only if you want to keep doing it the old lemmatization way.
___________________________________________________________________
(page generated 2023-03-16 23:01 UTC)