[HN Gopher] I Am Tired of AI
___________________________________________________________________
I Am Tired of AI
Author : Liriel
Score : 921 points
Date : 2024-09-27 08:20 UTC (14 hours ago)
(HTM) web link (www.ontestautomation.com)
(TXT) w3m dump (www.ontestautomation.com)
| CodeCompost wrote:
| We're all tired of it, but to ignore it is to be unemployed.
| sph wrote:
| Depends on which point of your career. With 18 years of
| experience, consulting for tech companies, I can afford to be
| tired of AI. I don't get paid to write boilerplate code, and
| avoiding anyone knocking at the door with yet another great AI-
| powered idea makes commercial sense, just like I have ignored
| everyone wanting to build the next blockchain product 5 years
| ago, with no major loss of income.
|
| Also, running a bootstrapped business, I have bigger fishes to
| fry than playing mentor to Copilot to write a React component
| or generating bullshit copy for my website.
|
| I'm not sure we need more FUD saying that the choice is between
| AI or unemployment.
| Al-Khwarizmi wrote:
| I find comparisons between AI and blockchain very misleading.
|
| Blockchain is almost entirely useless in practice. I have no
| reason to disdain it, in fact I was active in crypto around
| 10-12 years ago when I was younger and more excited about
| tech than now, and I had fun. But the fact is that the
| utility that it has brought to most of society is essentially
| to have some more speculative assets to gamble on, at
| ludicrous energy and emissions costs.
|
| Generative AI, on the other hand, is something I'm already
| using almost every day and it's saving me work. There may be
| a bubble but it will be more like the dotcom bubble (i.e.,
| not because the tech is useless, but because many companies
| jump to make quick bucks without even knowing much about the
| tech).
| Applejinx wrote:
| I mean, to be selfish at apparently a dicey point in history,
| go ahead and FUD and get people to believe this.
|
| None of my useful work is AI-able, and some of the useful
| work is towards being able to stand apart from what is
| obviously generated drivel. Sounds like the previous poster
| with the bootstrapped business is in a similar position.
|
| Apparently AI is destroying my potential competition. That
| seems unfair, but I didn't tell 'em to make such an awful
| mistake. How loudly am I obliged to go 'stop, don't, come
| back'?
| sunaookami wrote:
| Speak for yourself.
| snickerer wrote:
| How all those cab drivers who ignore autonomous driving are now
| unemployed?
| anonzzzies wrote:
| When it's for sale everywhere (I cannot buy one) and people
| trust it, all cab drivers will be gone. Unemployed will
| depend on the resilience, but unlike cars replacing coach
| drivers, there is not really a similar thing a cab driver can
| pivot to.
| snickerer wrote:
| Yes, we can imagine a future where all cab drivers are
| unemployed, replaced by autonomous driving. However, we
| don't know when this will happen, because autonomous
| driving is a much harder problem than the hype from a few
| years ago suggested. There isn't even proof that autonomous
| driving will ever be able to fully replace human drivers.
| kasperni wrote:
| > We're all tired of it,
|
| You're feeling tired of AI, but let's delve deeper into that
| sentiment for a moment. AI isn't just a passing trend--it's a
| multifaceted tool that continues to elevate the way we engage
| with technology, knowledge, and even each other. By harnessing
| the capabilities of artificial intelligence, we allow ourselves
| to explore new frontiers of creativity, problem-solving, and
| efficiency.
|
| The interplay between human intuition and AI's data-driven
| insights creates a dynamic that enriches both. Rather than
| feeling overwhelmed by it, imagine the opportunities--how AI
| can shoulder the burdens of mundane tasks, freeing you to focus
| on the more nuanced, human elements of life.
|
| /s
| kunley wrote:
| With all due respect, that's seems like a cliche, repeated
| maybe because others repeat that already.
|
| Working in IT operations (mostly), I haven't seen literally any
| case of someone's job in danger because of _not_ using "AI".
| pech0rin wrote:
| As an aside its really interesting how the human brain can so
| easily read an AI essay and realize its AI. You would think that
| with the vast corpus these models were trained on there would be
| a more human sounding voice.
|
| Maybe it's overfitting or maybe just the way models work under
| the hood but any time I see AI written stuff on twitter, reddit,
| linkedin its so obvious its almost disgusting.
|
| I guess its just the brain being good at pattern matching, but
| it's crazy how fast we have adapted to recognize this.
| Jordan-117 wrote:
| It's the RLHF training to make them squeaky clean and
| preternaturally helpful. Pretty sure without those filters and
| with the right fine-tuning you could have it reliably clone any
| writing style.
| llm_trw wrote:
| One only need to go to the dirtier corners of the llm forums
| to find some _very_ interesting voices there.
|
| To quote someone from a tor bb board: my chat history is
| illegal in 142 countries and carries the death penalty in 9.
| bamboozled wrote:
| But without the RLHF aren't they less useful "products"?
| carlmr wrote:
| >Maybe it's overfitting or maybe just the way models work under
| the hood
|
| It feels more like averaging or finding the median to me. The
| writing style is just very unobtrusive. Like the average
| TOEFL/GRE/SAT essay style.
|
| Maybe that's just what most of the material looks like.
| infinitifall wrote:
| Classic survivorship bias. You simply don't recognise the good
| ones.
| amelius wrote:
| Maybe because the human brain gets tired and cannot write at
| the same quality level all the time, whereas an AI can.
|
| Or maybe it's because of the corpus of data that it was trained
| on.
|
| Or perhaps because AI is still bad at any kind of humor.
| chmod775 wrote:
| These models are not trained to act like a single human in a
| conversation, they're trained to be every participant and their
| average.
|
| Every instance of a human choosing _not_ to engage or speak
| about something - because they didn 't want to or are just
| clueless about the topic, is not part of their training data.
| They're only trained on active participants.
|
| Of course they'll never seem like a singular human with limited
| experiences and interests.
| izacus wrote:
| The output of those AIs is akin to products and software
| designed for the "average" user - deep inside uncanny valley,
| saying nothing specifically, having no specific style,
| conveying no emotion and nothing to latch on to.
|
| It's the perfect embodiment of HR/corpspeak which I think its
| so triggering for us (ex) corpo drones.
| Al-Khwarizmi wrote:
| Everyone I know claims to be able to recognize AI text, but
| every paper I've seen where that ability is A/B tested says
| that humans are pretty bad at this.
| sovietmudkipz wrote:
| I am tired and hungry...
|
| The thing I'm tired of is elites stealing everything under the
| sun to feed these models. So funny that copyright is important
| when it protects elites but not when a billion thefts are
| committed by LLM folks. Poor incentives for creators to create
| stuff if it just gets stolen and replicated by AI.
|
| I'm hungry for more lawsuits. The biggest theft in human history
| by these gang of thieves should be held to account. I want a
| waterfall of lawsuits to take back what's been stolen. It's in
| the public's interest to see this happen.
| makin wrote:
| I'm sorry if this is strawmanning you, but I feel you're
| basically saying it's in the public's interest to give more
| power to Intellectual Property law, which historically hasn't
| worked out so well for the public.
| jbstack wrote:
| The law already exists. Applying the law in court doesn't
| "give more power" to it. To do that you'd have to change the
| law.
| joncrocks wrote:
| Which law are you referencing?
|
| Copyright as far as I understand is focused on wholesale
| reproduction/distribution of works, rather than using
| material for generation of new works.
|
| If something is available without contractual restriction
| it is available to all. Whether it's me reading a book, or
| a LLM reading a book, both could be considered the same.
|
| Where the law might have something to say is around the
| output of said trained models, this might be interesting to
| see given the potential of small-scale outputs. i.e. If I
| output something to a small number of people, how does one
| detect/report that level of infringement. Does the
| `potential` of infringement start to matter.
| atoav wrote:
| Nah. What he is saying is that the _existing_ law should be
| applied _equally_. As of now intellectual property as a right
| only works for you if you are a big corporation.
| probably_wrong wrote:
| I think the second alternative works too: either you sue
| these companies to the ground for copyright infringement at a
| scale never seen, OR you decriminalize copyright
| infringement.
|
| The problem (as far as this specific discussion goes) is not
| that IP laws exist, but rather that they are only being
| applied in one direction.
| fallingknife wrote:
| HN generally hated (and rightly so, IMO) strict copyright IP
| protection laws. Then LLMs came along and broke everybody's
| brain and turned this place into hardline copyright
| extremists.
| triceratops wrote:
| Or you know, maybe we're pissed about the heads-I-win-
| tails-you-lose nature of the current copyright regime.
| fallingknife wrote:
| What do you mean by this? All I see in this thread is
| people who have absolutely no legal background who are
| 100% certain that copyright law works how they assume it
| does and are 100% wrong.
| vouaobrasil wrote:
| The difference is that before, intellectual property law was
| used by corporations to enrich themselves. Now intellectual
| property law could theoretically be used to combat an even
| bigger enemy: big tech stealing all possible jobs. It's just
| a matter of practicality, like all law is.
| xdennis wrote:
| > you're basically saying it's in the public's interest to
| give more power to Intellectual Property law
|
| Not necessarily. An alternative could be to say that all
| models trained on data which hasn't been explicitly licensed
| for AI-training should be made public.
| artninja1988 wrote:
| Copying data is not theft
| rpgbr wrote:
| It's only theft when people copy data from companies. The
| other way around is ok, I guess.
| CaptainFever wrote:
| Copying is not theft either way.
| goatlover wrote:
| It is if it's legally defined as theft.
| a5c11 wrote:
| Is piracy legal then? It's just a copy of someone else's
| copy.
| chownie wrote:
| Was the legality the question? If so it seems we care about
| data "theft" in a very one sided manner.
| tempfile wrote:
| It's not legal, but it's also not theft.
| criddell wrote:
| The person who insists copying isn't theft would probably
| point out that piracy is something done on the high seas.
|
| From the context of the comment it was pretty clear that
| they were using theft as shorthand for _taking without
| permission_.
| IanCal wrote:
| The usual argument is less about piracy as a term and
| more the use of the word theft, and your use of the word
| "taking". When we talk about physical things theft and
| taking mean depriving the owner of that thing.
|
| If I have something, and you copy it, then I still have
| that thing.
| criddell wrote:
| Did you read that original comment and wonder how Sam
| Altman and his crew broke into the commenter's home and
| made off with their hard drive? Probably not and so
| _theft_ was a fine word choice. It communicated exactly
| what they wanted to communicate.
| CaptainFever wrote:
| Even if that's the case, the disagreement is in
| semantics. Let's take your definition of theft. There's
| physical theft (actually taking something) and there's
| digital theft (merely copying).
|
| The point of anti-copyright advocates are that merely
| copying is not ethically wrong. In fact, Why Software
| Must Be Free made the argument that _preventing people
| from copying is ethically wrong_ because it limited the
| spread of culture and reuse.
|
| That is the crux of the disagreement. You may rephrase
| our argument as "physical theft may be bad, but digital
| theft is not bad, and in fact preventing digital theft is
| in itself bad", but the argument does not change.
|
| Of course, there is additional disagreement in the
| implied moral value of the word "theft". In that case I
| agree with you that pro-copyright/anti-AI advocates have
| made their point by the usage of that word. Of course, we
| disagree, but... it is what it is I suppose.
| vasco wrote:
| You calling it piracy is already a moral stance. Copying
| data isn't morally wrong in my opinion, it is not piracy
| and it is not theft. It happens to not be legal but just a
| few short years ago it was legal to marry infants to old
| men and you could be killed for illegal artifacts of
| witchcraft. Legality and morality are not the same, and the
| latter depends on personal opinion.
| cdrini wrote:
| I agree with you they're not the same, but to build on
| that, I would add that they're not entirely orthogonal
| either, they influence each other a lot. Generally
| morallity that a society agrees on gets enforced as laws.
| threeseed wrote:
| Technically that is true. But you will still be charged with
| a litany of other crimes.
| flohofwoe wrote:
| So now suddenly when the bigwigs do it, software piracy and
| "IP theft" is totally fine? Thanks, good to know ;)
| atoav wrote:
| Yet unlicensed use can be its own crime under current law.
| Palmik wrote:
| The only entities that will win with these lawsuits are the
| likes of Disney, large legacy news media companies, Reddit,
| Stack Overflow (who are selling content generated by their
| users), etc.
|
| Who will also win: Google, OpenAI and other corporations that
| enter exclusive deals, that can more and more rely on synthetic
| data, that can build anti-recitation systems, etc.
|
| And of course the lawyers. The lawyers always win.
|
| Who will not win:
|
| Millions of independent bloggers (whose content will be used)
|
| Millions of open source software engineers (whose content will
| be used against the licenses, and used to displace their
| livelihood), etc.
|
| The likes of Google and OpenAI entered the space by building on
| top of the work of the above two groups. Now they want to pull
| up the ladder. We shouldn't allow that to happen.
| ToucanLoucan wrote:
| Honestly the most depressing thing about this entire affair
| is seeing not the entire, certainly but a sizable chunk of
| the software development community jump behind OpenAI and
| company's blatant theft on an industrial scale of the mental
| products of probably literally billions of people (not the
| least of whom is _other software developers!_ ) with
| absolutely not the slightest hint of concern about what that
| means for the world because afterwards, they got a new toy to
| play with. Squidward was apparently 100% correct: on balance,
| few care about the fate of labor as long as they get their
| instant gratification.
| logicchains wrote:
| >blatant theft on an industrial scale of the mental
| products
|
| They haven't been stolen; the creators still have them.
| They've just been copied. It's amazing how much the ethos
| on this site has shifted over the past decade, away from
| the hacker idea that "intellectual property" isn't real
| property, just a means of growing corporate power, and
| information wants to be free.
| candiddevmike wrote:
| > They haven't been stolen; the creators still have them.
| They've just been copied
|
| You wouldn't download a car.
| ToucanLoucan wrote:
| Information should be free for people. Not 150 billion
| dollar enterprises.
| infecto wrote:
| Disagree. There should be no distinction between the two.
| Those kind of distinctions are what cause unfair
| advantages. If the information is available to consume,
| there should be no constraint on who uses it.
|
| Sure you might not like OpenAI, but maybe some other
| company comes a long and builds the next magical product
| using information that is freely available.
| TheRealDunkirk wrote:
| Treating corporations as "people" for policy's sake is a
| legal decision which has essentially killed the premise
| of the US democratic republic. We are now, for all
| intents and purposes, a corporatocracy. Perhaps an even
| better description would simply be oligarchy, but since
| our oligarchs' wealth is almost all tied up in corporate
| stocks, it's a very incestuous relationship.
| infecto wrote:
| Meh, I am just saying I believe in open and free
| information. I don't follow the OP's ideal of information
| for me but not thee.
| ToucanLoucan wrote:
| The idea of knowledge as a source of understanding and
| personal growth is completely oppositional to it's
| conception as a scarce resource, which to OpenAI and
| whomever else wants to train LLMs is what it is. OpenAI
| did not read everything in the library because it wanted
| to know everything; it read everything at the library so
| it could teach a machine to create a statistical average
| written word generator, which it can then sell access to.
| These are fundamentally different concepts and if you
| don't see that, then I would say that is because you _don
| 't want to see it._
|
| I don't care if employees at OpenAI read books from their
| local library on python. More power to them. I don't even
| care if they copy the book for reference at work, still
| fine. But utilizing language at scale as a scarce
| resource to train models _is not that_ and is not in any
| way analogous to it.
| infecto wrote:
| I am sorry you are too blinded by your own ideology and
| disagreement with OpenAI to see others points of views.
| In my view, I do not want to constrain any person or
| entity on their access to knowledge, regardless of output
| product. I do have issues with entities or people
| consuming knowledge and then prevent others from doing
| so. I am not describing a scenario of a scarce resource
| but of an open one.
|
| Public information should should be free for anyone to
| consume and use how they want.
| ToucanLoucan wrote:
| > I am sorry you are too blinded by your own ideology and
| disagreement with OpenAI to see others points of views.
|
| A truly hilarious sentiment coming from someone making
| zero effort to actually engage with what I'm saying in
| favor of parroting back empty platitudes.
| xdennis wrote:
| > It's amazing how much the ethos on this site has
| shifted over the past decade
|
| It hasn't. The hacker ethos is about openness,
| individuality, decentralization (among others).
|
| OpenAI is open in what it consumes, not what it outputs.
|
| It makes sense to have protections in place when your
| other values are threatened.
|
| If "information want's to be free" leads to OpenAI
| centralizing control over the most advanced AI then will
| it be worth it?
|
| A solution here would be similar to the GPL: even
| megacorps can use GPL software, but they have to
| contribute back. If OpenAI and the rest would be forced
| to make everything public (if it's trained on open data)
| then that would be an acceptable compromise.
| visarga wrote:
| > The hacker ethos is about openness, individuality,
| decentralization (among others).
|
| Yes, the greatest things on the internet have been
| decentralized - Git, Linux, Wikipedia, open scientific
| publications, even some forums. We used to passively
| consume content and internet allowed interaction. We
| don't want to return to the old days. AI falls into the
| decentralized camp, the primary beneficiaries are not the
| providers but the users. We get help of things we need,
| OpenAI gets a few cents per million tokens, they don't
| even break even.
| ToucanLoucan wrote:
| > AI falls into the decentralized camp
|
| I'm sorry, the worlds knowledge now largely accessible by
| a laymen via LLMs controlled by at most, 5 companies is
| decentralized? If that statement is true then the world
| decentralized truly is entirely devoid of meaning at this
| point.
| visarga wrote:
| Let's classify technology:
|
| 1. Decentralized technologies you can operate privately,
| freely, and adapt to your needs: computers, old internet,
| Linux, git, FireFox, local Wikipedia dump, old standalone
| games.
|
| 2. Centralized technologies that invade privacy, lead to
| loss of control and manipulation: web search, social
| networks, mobile phones, Chrome, recent internet,
| networked games. LLMs fall into the decentralized camp.
|
| You can download a LLM, run it locally, fine-tune it. It
| is interactive, the most interactive decentralized tech
| since standalone games.
|
| If you object that LLMs are mostly centralized today
| (upfront cost of pre-training and OpenAI popularity), I
| say they are still not monopolies, there are many more
| LLM providers than search engines and social networks,
| and the next round of phones and laptops will be capable
| of local gen-AI. The experience will be seamless,
| probably easier to adapt than touchscreens were in 2007.
| triceratops wrote:
| So why isn't every language model out there "open"?
| fennecfoxy wrote:
| Do you consider it theft because of the scale? If I read
| something you wrote and use most of a phrase you coined or
| an idea for the basis of a plotline in a book I write, as
| many authors do, currently it's counted as being all my own
| work.
|
| I feel like the argument is akin to some countries
| considering rubbish, the things you throw away, to still be
| owned by your person ie "dumpster diving" is theft.
|
| If a company had scraped public posts on the Internet and
| used it to compile art by colourising chunks of the text,
| is it theft? If an individual does it, is it theft?
| ToucanLoucan wrote:
| This argument has been stated and re-stated multiple
| times, this notion that use of information should always
| be free, but it fails to account for the fact that OpenAI
| is not consuming this written resource as a source of
| information but rather as a tool for training LLMs, which
| it has been open about from the beginning is a thing it
| wishes to sell access to as a subscription service. These
| are fundamentally not the same. ChatGPT/Copilot do not
| _understand_ Python, they are not minds that read a bunch
| of python books and learned python skills they can now
| utilize: they are language models, that internalized
| metric tons of weighted averages of python code and can
| now (kind of) write their own, based on minimizing
| "error" relative to the code samples they ingest. Because
| of this, Copilot has never and will never write code it
| hasn't seen before, and by extension of that, it must see
| _a whole lot of code_ in order to function as well as it
| does.
|
| If you as a developer look at how one would declare a
| function in python, review a few examples, you now know
| how to do that. Copilot can't say the same. It needs to
| see dozens, hundreds, perhaps thousands of them to
| reasonably accurately be counted on to accomplish that
| task, it's just how the tech works. Ergo, scaled data
| sets that can accomplish this teaching task now have
| value, if the people doing that training are working for
| high-valuation startups with the objective of selling
| access to code generating robots.
| Palmik wrote:
| That's not necessarily my position. I think laws can
| evolve, but they need to be applied fairly. In this case,
| it's heading in a direction where only the blessed will be
| able to compete.
| 0xDEAFBEAD wrote:
| Perhaps we need an LLM-enabled lawyer so small bloggers can
| easily sue LLM makers.
| Kiro wrote:
| I would never have imagined hackers becoming copyright zealots
| advocating for lawsuits. I must be getting old but I still
| remember the Pirate Bay trial as if it was yesterday.
| williamcotton wrote:
| It's because it now affects hackers and before it only
| affected musicians.
| bko wrote:
| It affects hackers how? By giving them cool technology at
| below cost? Or is it further democratizing knowledge? Or
| maybe it's the inflated software eng salaries due to AI
| hype?
|
| Help me understand the negative effect of AI and LLMs on
| hackers.
| t-3 wrote:
| It's trendy caste-signaling to hate on AI which endangers
| white-collar jobs and creative work the way machinery
| endangered blue-collar jobs and productive work (ie. not
| at all in the long run, but in the short term you will
| face some changes).
|
| I've never actually used an LLM though - I just don't
| have any use for such a thing. All my writing and
| programming are done for fun and automation would take
| that away.
| xdennis wrote:
| Nonsense. Computer piracy started with sharing software.
| Music piracy (on computers) started in the late 90s when
| computers were powerful enough to store and play music.
|
| Bill Gates' infamous letter was sent in 1976[1].
|
| [1]:
| https://en.wikipedia.org/wiki/An_Open_Letter_to_Hobbyists
| someNameIG wrote:
| Pirate Bay wasn't selling access to the torrents trying to
| make a massive profit.
| zarzavat wrote:
| True, though paid language models are probably just a blip
| in history. Free weight language models are only ~12 months
| behind and have massive resources thanks to Meta.
|
| That profit will be squeezed to zero over the long term if
| Zuck maintains his current strategy.
| meiraleal wrote:
| > Free weight language models are only ~12 months
|
| That's not true anymore, Meta isn't behind OpenAI
| rurp wrote:
| That can change on a dime though, if Zuck decides it's in
| his financial interest to change course. If Facebook
| stops spending billions of dollars on open models who is
| going to step in and fill that gap?
| zarzavat wrote:
| That depends on when Meta stops. The longer Meta keeps
| releasing free models, the more capabilities are made
| permanently unprofitable. For example, Llama 3.1 is
| already good enough for translation or as a writing
| assistant.
|
| If Meta stopped now, there would still be profit in the
| market, but if they keep releasing Llamas for the next 5+
| years then OpenAI et al will be fighting for scraps. Not
| everybody needs a model that can prove theorems.
| progbits wrote:
| I just want consistent and fair rules.
|
| I'm all for abolishing copyright, for everyone. Let the
| knowledge be free and widely shared.
|
| But until that is the case and people running super useful
| services like libgen have to keep hiding then I also want all
| the LLM corpos to be subject to the same legal penalties.
| AlexandrB wrote:
| Exactly this. If we have to live under a stifling copyright
| regime, then at least it should be applied evenly. It's
| fundamentally unfair to have one set of laws (at least as
| enforced in practice) for the rich and powerful and another
| set for everyone else.
| candiddevmike wrote:
| This is the entire point of existence for the GPL.
| Weaponize copyright. LLMs have conveniently been able to
| circumvent this somehow, and we have no answer for it.
| FridgeSeal wrote:
| Because some people keep asserting that LLM's "don't
| count as stealing" and "how come search links are on but
| got reciting paywalled NYT articles on demand is bad??"
| Without so much as a hint of irony.
|
| LLM tech is pretty cool.
|
| Would be a lot cooler if its existence wasn't predicted
| on the wholesale theft of everyone's stuff, immediately
| followed by denial of theft, poisoning the well, and
| massively profiting off it.
| welferkj wrote:
| >Because some people keep asserting that LLM's "don't
| count as stealing"
|
| People who confidently assert either opinion in this
| regard are wrong. The lawsuits are still pending. But if
| I had to bet, I'd bet on the OpenAI side. Even if they
| don't win outright, they'll probably carve out enough
| exemptions and mandatory licensing deals to be
| comfortable.
| visarga wrote:
| You are singling out accidental replication and
| forgetting it was triggered with fragments from the
| original material. Almost all LLM outputs are original -
| both because they use randomness to sample, and because
| they have user prompt conditioning.
|
| And LLMs are really a bad choice for infringement. They
| are slow, costly and unreliable at replicating any large
| piece of text compared to illegal copying. There is no
| space to perfectly memorize the majority of its training
| set. A 10B models is trained on 10T tokens, no space for
| more than 0.1% to be properly memorized.
|
| I see this overreaction as an attempt to strengthen
| copyright, a kind of nimby-ism where existing authors cut
| the ladder to the next generation by walling off abstract
| ideas and making it more probably to get sued for
| accidental similarities.
| pydry wrote:
| The common denominator is big corporations trying to screw us
| over for profit, using their immense wealth as a battering
| ram.
|
| So, capitalism.
|
| It's taboo to criticize that though.
| munksbeer wrote:
| > It's taboo to criticize that though.
|
| It's not, that's playing the victim. There are hundreds or
| thousands of posts daily all over HN criticising
| capitalism. And most seem upvoted, not downvoted.
|
| Don't even get me started on reddit.
| fernandotakai wrote:
| i find quite ironic whenever i see a highly upvoted
| comment here complaining about capitalism because for
| sure i don't see yc existing in any other type of
| economy.
| ToucanLoucan wrote:
| This only holds if your thinking on the subject of
| economic systems is only as deep as choosing your
| character's class in an RPG game. There's no need for us
| to make every last industry a state owned enterprise and
| no one who's spent longer than an hour or so
| contemplating such things thinks that way. I have no
| desire to not have a variety of companies producing
| things like cars, electronics, software, video games,
| just to name a few. Competition does drive innovation,
| that is still true, and having such firms vying for a
| limited amount of resources dispatched by individuals
| makes a lot of sense. Markets have their place.
|
| However markets also have limits. A power company
| competing for your business is largely a farce, since the
| power lines to your home will not change. A cable company
| in America is almost certainly a functional monopoly, and
| that fact is reflected in their quality of service.
| Infrastructure of all sorts makes for piss-put markets
| because true competition is all but impossible, and even
| if it does kind of work, it's inefficient. A customer
| must become knowledgeable in some way to have a ghost of
| a clue what they're buying, or trust entirely dubious
| information from marketing. And, even if somehow
| everything is working up to this point, corporations are,
| above all, cost cutters and if you put one in charge of
| an area where it feels as though customers have few if
| any choices and the friction to change is high, they will
| immediately begin degrading their quality of services to
| save money in the budget.
|
| And this is only from first principles, we have so many
| other things that could be discussed from mass market
| manipulation to the generous subsidies of a stunning
| variety that basically every business at scale enjoys to
| the rapacious compensation schemes that have become
| entirely too commonplace in the executive room, etc etc
| etc.
|
| To put it short: I have no issue at all with capitalism
| operating in non-essential to life industries. My issue
| is all the ways it's infiltrated the essential ones and
| made them demonstrably worse, less efficient, and more
| expensive for every consumer.
| catlifeonmars wrote:
| I would argue that subsidization and monopolistic markets
| are an inevitable outcome of capitalism.
|
| The competitive landscape where consumers vote for the
| best products with their purchasing decisions is simply
| not a sustainable equilibrium.
|
| The ability to accumulate capital (I.e. "capitalism")
| leads to regulatory protectionism through lobbying,
| bribery, etcetera.
| ToucanLoucan wrote:
| I would argue that markets are a necessary step _towards_
| capitalism but it 's also crucial to remember that
| markets can also exist outside of capitalism. The
| accumulation of money in a society with insufficient
| defenses will trend towards money being a stand-in for
| power and influence, but it still requires the permission
| and legal leeway of the system in order to actually turn
| it corrupt; politicians have to both be permitted to, and
| be personally willing to accept the checks to do the
| corruption in the first place.
|
| The biggest and most salient critique of liberal
| capitalism as we now exist under is that it requires far
| too many of the "right people" to be in positions of
| power; it presumes good faith where it shouldn't, and
| fails to reckon with bad actors _as what they are_ far
| too often, the modern American Republican party being an
| excellent example (but far from the only one).
| meiraleal wrote:
| You wouldn't see YC existing on a world full capitalist
| :) It depende heavily on open source, the biggest and
| most succeassful socialist experiment so far
| mandibles wrote:
| Open source is a purely voluntary system. So it's not
| socialist, which requires state coercion to force people
| to "share."
| PoignardAzur wrote:
| > _It 's taboo to criticize that though_
|
| In what world is this taboo? That critique comes back in at
| least half the HN threads about AI.
|
| Watch any non-technical video about AI on Youtube and it
| will mention people being worried of the power of mega-
| corporations.
|
| Your take is about as taboo as wearing a Che Guevara
| tshirt.
| rsynnott wrote:
| I'm not sure if you're being disingenuous, or if you
| genuinely don't understand the difference.
|
| Pirate Bay: largely facilitating the theft of material from
| large corporations by normal people, for generally personal
| use.
|
| LLM training: theft of material from literally _everyone_,
| for the purposes of corporate profit (or, well, heh, intended
| profit; of course all LLM-based enterprises are currently
| massively loss-making, and may remain so forever).
| CaptainFever wrote:
| > (or, well, heh, intended profit; of course all LLM-based
| enterprises are currently massively loss-making, and may
| remain so forever)
|
| This undermines your own point.
|
| Also, open source models exist.
| acheron wrote:
| It's the same picture.
| meiraleal wrote:
| Hackers are against corporations. If breaking the copyright
| laws make corps bigger, more powerful and more corrupt,
| hackers will be against it rightfully so. Abolishing
| copyright is different than abusing it, we should abolish it.
| jjulius wrote:
| On the one hand, we've got, "Pirating something because we
| find copyright law to be restrictive and/or corporate pricing
| to be excessive". On the other, we've got, "Massively wealthy
| people vacuuming up our creative output to further their own
| wealth".
|
| And you're trying to suggest that these two are the same?
|
| Edit: I don't mind downvotes, karma means nothing, but I do
| appreciate when folk speak up and say why I might be wrong.
| :)
| forinti wrote:
| Capitalism started by putting up fences around land to kick
| people out and keep sheep in. It has been putting fences around
| everything it wants and IP is one such fence. It has always
| been about protecting the powerful.
|
| IP has had ample support because the "protect the little
| artist" argument is compelling, but it is just not how the
| world works.
| johnchristopher wrote:
| > Capitalism started by putting up fences around land to kick
| people out and keep sheep in.
|
| That's factually wrong. Capitalism is about moving wealth
| more efficiently: easier to allocate money/wealth to X
| through the banking system than to move sheep/wealth to X's
| farm.
| tempfile wrote:
| capitalism and "money as an abstract concept" are
| unrelated.
| johnchristopher wrote:
| Neither is the relevance of your comment about it and yet
| here we are.
| tempfile wrote:
| What are you talking about? You said:
|
| > Capitalism is about moving wealth more efficiently:
| easier to allocate money/wealth to X through the banking
| system than to move sheep/wealth to X's farm.
|
| It's not. That's what money's about. Any system with an
| abstract concept of money admits that it's easier to
| allocate wealth with abstractions than physically moving
| objects.
|
| Capitalism is about capital. It's an economic system that
| says individuals should own things (i.e. control their
| purpose) by investing money (capital) into them. You
| attempted to correct the previous commenter, but provided
| an incorrect definition. I hope that clears up the
| relevance issue for you.
| johnchristopher wrote:
| > Capitalism is about capital. It's an economic system
| that says individuals should own things (i.e. control
| their purpose) by investing money (capital) into them.
|
| Yes. It's not about stealing land and kicking people out
| and raising sheep there instead. That (stealing) happens
| of course but is totally independent from any capitalist
| system.
|
| JFC, the same sentence could have been said with
| communism in mind.
|
| > You attempted to correct the previous commenter, but
| provided an incorrect definition. I hope that clears up
| the relevance issue for you.
|
| You are confusing the intent of capitalism - which I gave
| the general direction of - with its definition. Does that
| clear up the relevance issue for you ? Did I fucking not
| write wealth/money intentionally ?
| Lichtso wrote:
| Lawsuits based on what? Copyright?
|
| People crying for copyright in the context of AI training don't
| understand what copyright is, how it works and when it applies.
|
| What they think how copyright works: When you take someones
| work as inspiration then everything you produce form that
| counts as derivative work.
|
| How copyright actually works: The input is irrelevant, only the
| output matters. Thus derivative work is what explicitly
| contains or resembles underlying work, no matter if it was
| actually based on that or it is just happenstance /
| coincidence.
|
| Thus AI models are safe from copyright lawsuits as long as they
| filter out any output which comes too close to known material.
| Everything else is fine, even if the model was explicitly
| trained on commercial copyrighted material only.
|
| In other words: The concept of intellectual property is
| completely broken and that is old news.
| LunaSea wrote:
| Lawsuits based on code licensing for example.
|
| Scraping websites containing source code which is distributed
| with specific licenses that OpenAI & co don't follow.
| Lichtso wrote:
| Unfortunately not how it works, or at least not to the
| extend you wish it to be.
|
| One can train a model exclusively on source code from the
| linux kernel (GPL) and then generate a bunch of C programs
| or libraries from that. And they could publish them under
| MIT license as long as they don't reproduce any
| identifiable sections from the linux kernel. It does not
| matter where the model learned how to program.
| LunaSea wrote:
| You're mistaken.
|
| If I write code with a license that says that using this
| code for AI training is forbidden then OpenAI is directly
| going against this by scraping websites indiscriminately.
| Lichtso wrote:
| Sure, you can write all kinds of stuff in a license, but
| it is simply plain prose at that point. Not enforcable.
|
| There is a reason why it is generally advised to go with
| the established licenses and not invent your own,
| similarly to how you should not roll your own
| cryptography: Because it most likely won't work as
| intended.
|
| e.g. License: This comment is licensed under my custom
| L*a license. Any user with an username starting with "L"
| and ending in "a" is forbidden from reading my comment
| and producing replies based on what I have written.
|
| ... see?
| LunaSea wrote:
| You can absolutely write a license that contains the
| clauses I mentioned and it would be enforceable.
|
| Sorry, but the onus is on OpenAI to read the licenses not
| the creator.
|
| And throwing your hands in the air and saying "oh you
| can't do that in a license" is also of little use.
| CaptainFever wrote:
| No, it would not be enforceable. Your license can only
| give additional rights to users. It cannot restrict
| rights that users already have (e.g. fair use rights in
| the US, or AI training rights like in the EU or SG).
| LunaSea wrote:
| How does Fair Use consider commercial usage of the full
| content in the US?
| CaptainFever wrote:
| It's unknown yet, but the main point is that the inputs
| don't matter, as long as the output does not replicate
| the full content, it is fine.
| Lichtso wrote:
| > You can absolutely write a license that contains the
| clauses I mentioned and it would be enforceable.
|
| A license (copyright law) is not a contract (contract
| law). Simply publishing something does not make the whole
| world enter into a contract with you. Others first have
| to explicitly agree to do so.
|
| > Sorry, but the onus is on OpenAI to read the licenses
| not the creator.
|
| They can ignore it because they never agreed to it in the
| first place.
|
| > And throwing your hands in the air and saying "oh you
| can't do that in a license" is also of little use.
|
| It is very useful to know what works and what does not.
| That way you don't trick yourself and your work to be
| safe, don't get caught by surprise if you are in fact not
| and can think of alternatives instead.
|
| BTW, a thing you can do (which CaptainFever mentioned)
| and lots of services do because licenses are so weak is
| to make people sign up with an account and have them
| enter a ToS agreement instead.
| LunaSea wrote:
| > They can ignore it because they never agreed to it in
| the first place.
|
| They did by accessing and copying the code. Same as a
| human cloning a repository and using it's content or
| someone accessing a website with Terms of Use.
|
| No signed contract is needed here.
| CaptainFever wrote:
| > They did by accessing and copying the code.
|
| By default, copying is disallowed because of copyright.
| Your license provides them a right to copy the code,
| perhaps within certain restrictions.
|
| However, sometimes copying is allowed, such as fair use
| (I covered this in another comment I sent you). This
| would allow them to copy the code regardless of the
| license.
|
| > Same as a human cloning a repository and using it's
| content or someone accessing a website with Terms of Use.
|
| I've covered the cloning/copying part already, but "I
| agree to this ToS by continuing to browse this webpage"
| is called a clickwrap agreement. Its enforceability is
| dubious. I think the LinkedIn case showed that it only
| applied if HiQ actually explicitly agreed to it by
| signing up.
| jeremyjh wrote:
| That is not relevant to the comment you are responding
| to. Courts have been finding that scraping a website in
| violation of its terms of service is a liability,
| regardless of what you do with the content. We are not
| only talking about copyright.
| CaptainFever wrote:
| True, but ToSes don't apply if you don't explicitly agree
| with it (e.g. by signing up for an account). So that's
| not relevant in the case of publicly available content.
| rcxdude wrote:
| Also, the desired interpretation of copyright will not stop
| the multi-billion-dollar AI companies, who have the resources
| to buy the rights to content at a scale no-one else does. In
| fact it will give them a gigantic moat, allowing them to
| extract even more value out of the rest of the economy, to
| the detriment of basically everyone else.
| lolc wrote:
| As much as our brain contents are unlicensed copies to the
| extent we can reproduce copyrighted work: If the model can
| recite copyrighted portions of text used in training, the
| model weights are a derivative work. Because the weights
| obviously must encode the original work. Just because lossy
| compression was applied the original work should still be
| considered present as long as it's recognizable. So the
| weights may not be published without license. Seems rather
| straightforward to me and I do wonder how Meta thinks they
| get around this.
|
| Now if the likes of Openai and Google keep the model weights
| private and just provide generated text, they can try to
| filter for derivative works, but I don't see a solution that
| doesn't leak. If a model can be coaxed into producing a
| derivative work that escapes the filter, then boom,
| unlicensed copy was provided. If I tell the model to mix two
| texts word by word, what filter could catch this? What if I
| tell the model to use a numerical encoding scheme? Or to
| translate into another language? For example assuming the
| model knows a bunch of NYT articles by heart, as was already
| demonstrated: If have it translate one of those articles to
| French for me, that's still an unlicensed copy!
|
| I can see how they will try to get these violations legalized
| like the DMCA safe-harbored things, but at the moment they
| are the ones generating the unlicensed versions and
| publishing them when prompted to do so.
| xdennis wrote:
| > Lawsuits based on what? Copyright?
|
| > People crying for copyright in the context of AI training
| don't understand what copyright is, how it works and when it
| applies.
|
| People are complaining about what's happening, not with the
| exact wording of the law.
|
| What they are doing probably isn't illegal, but it _should_
| be. The problem is that it's very difficult for people to
| pass new legislation because they don't have lobbyists the
| way corporations do.
| jcranmer wrote:
| With all due respect, the lawyers I've seen who commented on
| the issue do not agree with your assessment.
|
| The things that constitute potentially infringing copying are
| not clearly well-defined, and whether or not training an AI
| is on that list has of course not yet been considered by a
| court. But you can make cogent arguments either way, and I
| would not be prepared to bet on either outcome. Keep in mind
| also that, legally, copying data from disk to RAM is
| considered potentially infringing, which should give you a
| sense of the sort of banana-pants setup that copyright can
| entail.
|
| That said, if training is potentially infringing on
| copyright, it now seems pretty clear that a fair use defense
| is going to fail. The recent Warhol decision rather destroys
| any hope that it might be considered "transformative", while
| the fact that the AI companies are now licensing content for
| training use is a concession that the fourth and usually most
| important factor (market impact) weighs against fair use.
| Lichtso wrote:
| Lawyers commenting on this publicly will add their bias to
| reinforce the stances of their clientele. Thus somebody
| usually representing the copyright holders will say it is
| likely infringing and someone usually representing the AI
| companies will say it is unlikely.
|
| But you are right, we don't know until president is set by
| a court. I am only warning people that laying back and
| hoping that copyright will apply as they wish is not a good
| strategy to defend your work. One should consider
| alternative legal constructs or simply not releasing
| material to the general public anymore.
| fallingknife wrote:
| Copyright law is intended to prevent people from stealing the
| revenue stream from someone else's work by copying and
| distributing that work in cases where the original is difficult
| and expensive to create, but easy to make copies of once
| created. How does an LLM do this? What copies of copyrighted
| work do they distribute? Whose revenue stream are they taking
| with this action?
|
| I believe that all the copyright suits against AI companies
| will be total failures because I can't come up with a answer to
| any of those questions.
| DoctorOetker wrote:
| Here is a business model for copy right law firms:
|
| Use source-aware training, use the same datasets as used in LLM
| training + copyrighted content. Now the LLM can respond not
| just what it thinks is most likely but also what source
| document(s) provided specific content. Then you can consult
| commercially available LLM's and detect copy right
| infringements, and identify copyright holders. Extract
| perpetrators and victims at scale. To ensure indefinite
| exploitation only sue commercially succesful LLM providers, so
| there is a constant new flux of growing small LLM providers
| taking up the freed niche of large LLM providers being sued
| empty.
| chrismorgan wrote:
| > _Use source-aware training_
|
| My understanding (as one uninvolved in the industry) is that
| this is more or less a _completely_ unsolved problem.
| DoctorOetker wrote:
| It's just training the source association together with the
| training set:
|
| https://github.com/mukhal/intrinsic-source-citation
|
| The only 2 reasons big LLM providers refuse to do it is
|
| 1) to prevent a long slew of content creators filing class
| action suit.
|
| 2) to keep regulators in the dark of how feasible and
| actionable it would be, once regulators are aware they can
| perform the source-aware training themselves
| jokethrowaway wrote:
| It's the other way round. The little guys will never win, it
| will be just a money transfer from one large corp to another.
|
| We should just scrap copyright and everybody plays a fair game,
| including us hackers.
|
| Sue me because of breach of contract in civil court for damages
| because I shared your content, don't send the police and get me
| jailed directly.
|
| I had my software cracked and stolen and I would never go after
| the users. They don't have any contract with me. They
| downloaded some bytes from the internet and used it. Finding
| whoever shared the code without authorization is hard and even
| so, suing them would cost me more than the money I'm likely to
| get back. Fair game, you won.
|
| It's a natural market "tax" on selling a lot of copies and
| earning passively.
| repelsteeltje wrote:
| I like the _stone soup_ narrative on AI. It was mentioned in a
| recent Complexity podcast, I think by Alison Gopnik of SFI. It
| 's analogous to the Pragmatic Programmar story about stone
| soup, paraphrasing:
|
| Basically you start with a stone in a pot of water -- a neural
| net technology that does nothing meaningful but looks
| interesting. You say: "the soup is almost done, but would taste
| better given a bunch of training data." So you add a bunch of
| well curated docs. "Yeah, that helps but how about adding a
| bunch more". So you insert some blogs, copy righted materials,
| scraped pictures, reddit, and stack exchange. And then you ask
| users to interact with the models to fine tune it, contribute
| priming to make the output look as convincing as possible.
|
| Then everyone marvels at your awesome LLM -- a simple
| algorithm. How wonderful, this soup tastes given that the only
| ingredients are stones and water.
| CaptainFever wrote:
| The stone soup story was about sharing, though. Everyone
| contributes to the pot, and we get something nice. The
| original stone was there to convince the villagers to share
| their food with the travellers. This goes against the
| emotional implication of your adaptation. The story would
| actually imply that copyright holders are selfish and should
| be contributing what they can to the AI soup, so we can get
| something more than the sum of our parts.
|
| From Wikipedia:
|
| > Some travelers come to a village, carrying nothing more
| than an empty cooking pot. Upon their arrival, the villagers
| are unwilling to share any of their food stores with the very
| hungry travelers. Then the travelers go to a stream and fill
| the pot with water, drop a large stone in it, and place it
| over a fire. One of the villagers becomes curious and asks
| what they are doing. The travelers answer that they are
| making "stone soup", which tastes wonderful and which they
| would be delighted to share with the villager, although it
| still needs a little bit of garnish, which they are missing,
| to improve the flavor.
|
| > The villager, who anticipates enjoying a share of the soup,
| does not mind parting with a few carrots, so these are added
| to the soup. Another villager walks by, inquiring about the
| pot, and the travelers again mention their stone soup which
| has not yet reached its full potential. More and more
| villagers walk by, each adding another ingredient, like
| potatoes, onions, cabbages, peas, celery, tomatoes,
| sweetcorn, meat (like chicken, pork and beef), milk, butter,
| salt and pepper. Finally, the stone (being inedible) is
| removed from the pot, and a delicious and nourishing pot of
| soup is enjoyed by travelers and villagers alike. Although
| the travelers have thus tricked the villagers into sharing
| their food with them, they have successfully transformed it
| into a tasty meal which they share with the donors.
|
| (Open source models exist.)
| unraveller wrote:
| First gen models trained on books directly. Latest Phi
| distilled textbook-like knowledge down from disparate sources
| to create novel training data. They are all fairly open about
| this change and some are even allowing upset publishers to
| confirm that their work wasn't used directly. So stones and
| ionized water go in the soup.
| AI_beffr wrote:
| ok the "elites" have spent a lot of money training AI but have
| the "commoners" lifted a single finger to stop them? nope! its
| the job of the commoners to create a consensus, a culture, that
| protects people. so far all i see from the large group of
| people who are not a part of the elite is denial about this
| entire issue. they deny AI is a risk and they dont shame people
| who use it. 99.99% of the population is culpable for any
| disaster that befalls us regarding AI.
| infecto wrote:
| I suspect the greater issue is that copyright is not always
| clear in this area? I am also not sure how you prevent "elites"
| from using the information while also allowing the "common"
| person to it.
| drstewart wrote:
| >elites stealing everything
|
| > a billion thefts
|
| >The biggest theft
|
| >what's been stolen
|
| I do like how the internet has suddenly acknowledged that
| pirating is theft and torrenting IS a criminal activity. To
| your point, I'd love to see a massive operation to arrest
| everyone who has downloaded copyrighted material illegal (aka
| stolen), for the public interest.
| amatecha wrote:
| This is such a misrepresentation of the issue and what people
| are saying about it. They call it "theft" because corps are,
| apparently-indiscriminately and without remuneration of
| creators, "ingesting" the original work of thousands or
| millions of individuals, in order to provide for-profit
| services derived from that ingestion/training. "Pirates", on
| the other hand, copy content for their own momentary
| entertainment, and the exploitation ends there. They aren't
| turning around and starting a multi-million-dollar business
| selling pirated content en masse.
| drstewart wrote:
| Theft isn't concerned with what you do with the product
| afterwards.
| masswerk wrote:
| > The thing I'm tired of is elites stealing everything under
| the sun to feed these models.
|
| I suggest to apply the same to property law: make a photo and
| obtain instant and unlimited rights of use. - Things may change
| faster than we may imagine...
| uhtred wrote:
| We need a revolution.
| bschmidt1 wrote:
| Same here hungry neigh _thirsty_ for prompt-2-film
|
| _" output a 90 minute harry potter sequel to the final film
| starring the original actors plus Tom Hanks"_
| csomar wrote:
| There is no copyright with AI unless you want to implement the
| same measures for humans too. I am fine with it as long as we
| at least get open-weights. This way you kill both copyright and
| any company that's trying to profit out of AI.
| visarga wrote:
| > I'm hungry for more lawsuits. The biggest theft in human
| history
|
| You want to own abstract ideas because AI can rephrase any
| specific expression. But that is antithetic to creativity.
| IanKerr wrote:
| It's been pretty incredible watching these companies siphon up
| everything under the sun under the guise of "training data"
| with impunity. These same companies will then turn around and
| sic their AIs on places like Youtube and send out copyright
| strikes via a completely automated system with loads of false-
| positives.
|
| How is it acceptable to allow these companies to steal all of
| this copyrighted data and then turn around and use it to
| enforce their copyrights in the most heavy-handed manner? The
| irony is unbelievable.
| defgeneric wrote:
| Perhaps what we should be pushing for is a law that would force
| full disclosure regarding the training corpus and require a
| curated version of the training data to be made available. I'm
| sure there would be all kinds of unintended consequences of a
| law like that but maybe we'd be better off starting from a
| strong basis and working out those exceptions. While billions
| have been spent to train these models, the value of the
| millions of human hours spent creating the content they're
| trained on should likewise be recognized.
| ryanjshaw wrote:
| > There are no shortcuts to solving these problems, it takes time
| and experience to tackle them.
|
| > I've been working in testing, with a focus on test automation,
| for some 18 years now.
|
| OK the first thought that came to my mind reading this: sounds
| like a opportunity to build an AI-driven product.
|
| I've been using Cursor daily. I use nothing else. It's brilliant
| and I'm very happy. If I could have Cursor for Well-Designed
| Tests I'd be extra happy.
| fallingknife wrote:
| I'm not. I think it's awesome and I can't wait to see what comes
| out next. And I'm completely OK with all of my work being used to
| train models. Bunch of luddites and sour grapes around here on HN
| these days.
| elpocko wrote:
| Same here! Amazing stuff that I have waited for my entire life,
| and I won't let luddite haters ruin it for me. Their impotent
| rage is tiring but in the end it's just one more thing you have
| to ignore.
| fallingknife wrote:
| Yeah, they made something that passes a Turing test, and
| people on HN of all places hate it? What happened to this
| place? It's like the number one thing people hate around here
| now is another man's success.
|
| I won't ignore them. I'll continue to loudly disagree with
| the losers and proudly collect downvotes from them knowing I
| got under their skin.
| Applejinx wrote:
| Eliza effectively passed Turing tests. I think you gotta do
| a little better than that, and 'ha ha I made you mad' isn't
| actually the best defense of your position.
| elpocko wrote:
| Eliza did not pass Turing tests in any reasonable
| capacity. It took anyone 10 seconds to realize what it
| was doing; no one was fooled by it. The comparison to
| modern LLMs is preposterous.
|
| GP doesn't have to defend their position. They like
| something, and they don't shut up about it even though it
| makes a bunch of haters mad. That's good; no defense
| required. On the contrary: those who get mad need to
| defend themselves.
| yannis wrote:
| Absolutely amazing stuff. I am now three scores and ten in my
| life time, seen a lot of changes from slide rules->very fast
| to calculators->very fast to pcs, from dot matrix printers to
| lazer jets and dozens of other things. Wish AI was available
| when I was doing my PhD. If you know its limitations it can
| be very useful. At present I occasionally use it to translate
| references from wikipedia articles to bibtex format. It is
| very good at this, I only need to fix a few minor errors,
| letting me focus to the core of what I am doing. But human
| nature always resists change, especially if it leads to the
| unknown. I must admit that I think AI will bring negative
| consequences as it will be misused by politicians and the
| military, they need to be "regulated" not the AI.
| Kiro wrote:
| You're getting downvoted, but I agree with your last sentence
| -- and not just about AI. The amount of negativity here
| regarding almost everything is appalling. Maybe it's rose-
| tinted nostalgia but I don't remember it being like this a few
| years ago.
| CaptainFever wrote:
| Hacker News used to be nicknamed Hater News, as I recall.
| amiantos wrote:
| There's _a lot_ of poor quality engineers out there who
| understand that on some level they are committing fraud by
| spinning their wheels all day shifting CSS values around on a
| React component while collecting large paychecks. I think it's
| only natural all of those engineers are terrified by the
| prospect of some computer being capable of doing their job
| quickly and efficiently and replacing them. Those people are
| crying so loudly that it's encouraging otherwise normal people
| to start jumping on the anti-AI bandwagon too, because their
| voices are so loud people can't hear themselves think
| critically anymore.
|
| I think passionate and inspired engineers who love their job
| and have very solid soft skills and experience working deeply
| on complex software projects will always have a position in the
| industry, and people like that are understandably very
| enthusiastic about AI instead of being scared of it.
|
| In other words, it is weird how bad the status quo was, until
| we got something that really threatened the status quo, now a
| lot of the people who wanted to tear it all down are now
| desperately trying to stop everything from changing. The
| sentiment on the internet has gone in a weird direction, but
| it's all about money deep down. This hypothetical new status
| quo brought on by AI seems to be wedded to fears of less money,
| thus abject terror masquerading as "I'm so bored!" posturing.
|
| You see this in the art circles, where established artists are
| willing to embrace AI, but it's the small time aspiring bedroom
| artists that have not achieved any success who are all over
| Twitter denouncing AI art as soulless and terrible. While the
| real artists are too busy using any tool available to make art,
| or are just making art because they want to make art and aren't
| concerned with fear-mongering.
| Toorkit wrote:
| Computers were supposed to be these amazing machines that are
| super precise. You tell it to do a thing, it does it.
|
| Nowadays, it seems we're happy with computers apparently going
| RNG mode on everything.
|
| 2+2 can now be 5, depending on the AI model in question, the day,
| and the temperature...
| bamboozled wrote:
| Had to laugh at this one. I think we prefer the statistical
| approach because it's easier, for us ...
| maguay wrote:
| This, 100%, is the reason I feel like the sand's shifting under
| my feet.
|
| We went from trusting computing output to having to second-
| guess everything. And it's tiring.
| diggan wrote:
| I kind of feel like if you're using a "Random text generator
| based on probability" for something that you need to trust,
| you're kind of holding this tool wrong.
|
| I wouldn't complain a RNG doesn't return the numbers I want,
| so why complain you don't get 100% trusted output from a
| random text generator?
| jeremyjh wrote:
| Because people provide that work without acknowledging it
| was created by a RNG, representing it as their own and
| implying some of level of assurance that it is actually
| true.
| archerx wrote:
| Its a Large LANGUAGE Model and not a Large MATHEMATICS Model.
| People need to learn to use the right tools for the right jobs.
| Also LLMs can be made more deterministic by controlling it's
| "temperature".
| anon1094 wrote:
| Yep. ChatGPT will use the code interpreter for questions like
| is 2 + 2 = 5? as it should.
| Toorkit wrote:
| There's other forms of AI than LLM's and to be honest I
| thought the 2+2=5 was obviously an analogy.
|
| Yet 2 comments have immediately jumped on it.
| FridgeSeal wrote:
| Hackernews comments and getting bogged down on minutiae and
| missing the overall point, is there a more iconic pairing?
| Janicc wrote:
| These amazing machines weren't consistently able to tell if an
| image had a bird in it or not up until like 8 years ago. If you
| use AI as a calculator where you need it to be precise, that's
| on you.
| FridgeSeal wrote:
| I think the issue is that: I'm not going to be using as a
| calculator any time soon.
|
| Unfortunately, there's a lot of people out there, working on
| a lot of products, some of which I need to use, or will be
| exposed to, and some of them aren't going to have the same
| qualms about "language model thinks 2+2=5".
|
| There's a guy on Twitter scoring how well ChatGPT models can
| do multiplication.
|
| A founder at a previous workplace wanted to wholesale dump
| data into ChatGPT and "make it do causal analysis!!!" (Only
| slightly paraphrased). These tools enable some frighteningly
| large-scale weaponised stupidity.
| shultays wrote:
| There are areas it doesn't have to be as "precise", like image
| generation or editing which I believe better suited for AI
| tools
| GaggiX wrote:
| Machines were not able to deal with non-formal problems.
| left-struck wrote:
| I think about it differently. Before computers had to be given
| extremely precise and completely unambiguous instructions, now
| they can handle some ambiguity as well. You still have the
| precise output if you want it, it didn't go away.
|
| Btw I'm also tired of AI, but this is one thing that's not so
| bad
|
| Edit: before someone mentions fuzzy logic, I'm not talking
| about the input of a function being fuzzy, I'm talking about
| the instructions themselves, the function is fuzzy.
| a5c11 wrote:
| That's an interesting point of view. For some reason we put so
| much effort towards making computers think and behave like a
| human being, while one of the first reasons behind inventing a
| computer was to avoid human errors.
| fatbird wrote:
| This is the most succinct summary of what's been gnawing at
| me ever since LLMs became the latest _thing_.
|
| If Ilya Sutskever announced tomorrow that he'd achieved AGI,
| and here is its economic plan for the next 20 years, why
| would we have any reason to accept it over that of other
| human experts? It would literally be just another expert
| trying to tell us how to do things. And _we 're not short of
| experts_, and an AGI expert has thrown away the credibility
| of computers as _deterministically_ better calculators than
| we are.
| falcor84 wrote:
| This sounds to me like a straw man argument. Obviously 2+2 will
| always give you 4, in any modern LLM, and even just in the
| Chrome address bar.
|
| Can you offer a real situation where we _should_ expect the LLM
| to return a deterministic answer and should rightly be
| concerned that we 're getting a stochastic one?
| Toorkit wrote:
| Y'all are hyper focusing on this example. How about something
| more vague like FOO obviously being BAR, except sometimes
| it's BAZ now?
|
| The layman doesn't know the distinction, so they accept this
| as fact.
| falcor84 wrote:
| I'm not being facetious; I really can't think of a single
| good example where we need something to be deterministic
| and then have a reason to be disappointed about AI giving
| us a stochastic response.
| hcks wrote:
| And by nowadays you mean since ChatGPT got released, that is
| less than 2 years ago (e.g. a consumer preview of a frontier
| research project). Interesting.
| snowram wrote:
| I quite like some parts of AI. Ray reconstruction and
| supersampling methods have been getting incredible and I can now
| play games with twice the frames per seconds. On the scietific
| side, meteorological predictions and protein folding have made
| formidable progresses thanks to it. Too bad this isn't the side
| of AI that is in the spotlight.
| Meniceses wrote:
| I love AI.
|
| In comparision to a lot of other technologies, we actually have
| jumps in quality left and right, great demos, new things which
| are really helpful.
|
| Its fun to watch the AI news because there is something relevant
| new happening.
|
| I'm worried regarding the impact of AI but this is a billion
| times better than the last 10 years which was basically just
| cryptobros, nfts, blockchain shit which is basically just fraud.
|
| Its not just some GenAI stuff, we talk about blind people getting
| better help through image analysis, we talk about alpha fold,
| LLMs being impressive as hell, the research currently happening.
|
| And yes i also already see benefits in my job and in my startup.
| bamboozled wrote:
| I'm truly asking in good faith here because I don't know but
| what has alpha fold actually helped us achieve ?
| Meniceses wrote:
| It allows us to speed up medical research.
| bamboozled wrote:
| In what field specifically and how ?
| scotty79 wrote:
| Are you asking what field of science or what industry is
| interested in predicting how proteins fold?
|
| Biotechnology and medicine probably.
|
| Pipeline from science to application sometimes takes
| decades, but I'm sure you can find news of some
| advancements enabled by finding out short, easy to
| synthesize proteins that fit particular receptor to block
| it or that process some simplified enzymes that still
| process some chemicals of interest more efficiently than
| natural ones. Finding them would be way harde without
| ability to predict how a sequence of amino-acids will
| fold.
|
| You'd need to actually try to manufacture them then look
| at them closely.
|
| First thing that came to my mind as a possible
| application is designing monoclonal antibodies. Here's
| some paper about something relating to alpha fold and
| antibodies:
|
| https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10349958/
| RivieraKid wrote:
| I guess he's asking for specific examples of AlphaFold
| leading to some tangible real-world benefit.
| scotty79 wrote:
| Wait a decade then look around.
| Meniceses wrote:
| Are you phishing for something or are you not sure how
| this actually works?
|
| Everyone who is looking for proteins (vacines,
| medication) need to find the right proteins for different
| cases. For attaching to something (antibody design), for
| delivering something (like another protein) or for
| understanding a disease (why is this protein an issue?).
|
| Covid research benefitted from this for example.
|
| You can go through papers which reference the alphafold
| paper to see what it does:
| https://consensus.app/papers/highly-protein-structure-
| predic...
| bux93 wrote:
| No such thing as a stupid question. It's a question that
| this paper in Proteomics (which appears to be a legit
| journal) attempts to answer, at least. https://analytical
| sciencejournals.onlinelibrary.wiley.com/do...
| Meniceses wrote:
| I didn't say stupid but sometimes people asking in a way
| which might not feel legimate / honest.
| low_tech_love wrote:
| The most depressing thing for me is the feeling that I simply
| cannot trust anything that has been written in the past 2 years
| or so and up until the day that I die. It's not so much that I
| think people have used AI, but that I _know_ they have with a
| high degree of certainty, and this certainty is converging to
| 100%, simply because there is no way it will not. If you write
| regularly and you 're not using AI, you simply cannot keep up
| with the competition. You're out. And the growing consensus is
| "why shouldn't you?", there is no escape from that.
|
| Now, I'm not going to criticize anyone that does it, like I said,
| you have to, that's it. But what I had never noticed until now is
| that knowing that a human being was behind the written words
| (however flawed they can be, and hopefully are) is crucial for
| me. This has completely destroyed my interest in reading any new
| things. I guess I'm lucky that we have produced so much writing
| in the past century or so and I'll never run out of stuff to
| read, but it's still depressing, to be honest.
| t43562 wrote:
| It empowers people to create mountains of shit that they cannot
| distinguish from shit - so they are happy.
| elnasca2 wrote:
| What fascinates me about your comment is that you are
| expressing that you trusted what you read before. For me, LLMs
| don't change anything. I already questioned the information
| before and continue to do so.
|
| Why do you think that you could trust what you read before? Is
| it now harder for you to distinguish false information, and if
| so, why?
| baq wrote:
| scale makes all the difference. society without trust falls
| apart. it's good if some people doubt some things, but if
| everyone necessarily must doubt everything, it's anarchy.
| dangitman wrote:
| Is our society built on trust? I don't generally trust most
| of what's distributed as news, for instance. Virtually
| every newsroom in america is undermined by basic conflicts
| of interest. This has been true since long before I was
| born, although perhaps the death of local news has
| accelerated this phenomenon. Mostly I just "trust" that
| most people don't want to hurt me (even if this trust is
| violated any time I bike along side cars for long enough)
|
| I don't think that LLMs will change much, frankly, it's
| just gonna be more obvious when they didn't hire a human to
| do the writing.
| Hoasi wrote:
| > Is our society built on trust?
|
| A good part of society, the foundational part, is trust.
| Trust between individuals, but also trust in the sense
| that we expect things to behave in a certain way. We
| trust things like currencies despite their flaws. Our
| world is too complex to reinvent the wheel whenever we
| need to do a transaction. We must believe enough in a
| make-believe system to avoid perpetual collapse.
| vouaobrasil wrote:
| Perhaps that anarchy is the exact thing we need to convince
| everyone to revolt against big tech firms like Google and
| OpenAI and take them down by mob rule.
| kombookcha wrote:
| Debunking bullshit inherently takes more effort than
| generating bullshit, so the human factor is normally your big
| force multiplier. Does this person seem trustworthy? What
| else have they done, who have they worked with, what hidden
| motivations or biases might they have, are their vibes /off/
| to your acute social monkey senses?
|
| However with AI anyone can generate absurd torrential flows
| of bullshit at a rate where, with your finite human time and
| energy, the only winning move is to reject out of hand any
| piece of media that you can sniff out as AI. It's a solution
| that's imperfect, but workable, when you're swimming through
| a sea of slop.
| bitexploder wrote:
| Maybe the debunking AIs can match the bullshit generating
| AIs, and we will have balance in the force. Everyone is
| focused on the generative AIs, it seems.
| nicce wrote:
| There is always more money available for bullshit
| generation than bullshit removal.
| desdenova wrote:
| No, they can't. They'll still be randomly deciding if
| something is fake or not, so they'll only have a
| probability of being correct, like all nondeterministic
| AI.
| ontouchstart wrote:
| Debugging is harder than writing code. Once the code passed
| linter, compiler and test, the bugs might be more subtly
| logical and require more effort and intelligence.
|
| We are all becoming QA of this super automated world.
| nicce wrote:
| In the past, you had to put a lot of effort to produce a text
| which seemed to be high quality, especially when you knew
| nothing about the subject. By the look of text and the usage
| of the words, you could tell how professional the writer was
| and you had some confidence that the writer knew something
| about the subject. Now, that is completely removed. There is
| no easy filter anymore.
|
| While the professional looking text could have been already
| wrong, the likelihood was smaller, since you usually needed
| to know something at least in order to write convincing text.
| roenxi wrote:
| > While the professional looking text could have been
| already wrong, the likelihood was smaller...
|
| I don't criticise you for it, because that strategy is both
| rational and popular. But you never checked the accuracy of
| your information before so you have no way of telling if it
| has gotten more or less accurate with the advent of AI. You
| were testing for whether someone of high social
| intelligence wanted you to believe what they said rather
| than if what they said was true.
| dietr1ch wrote:
| I guess the complaint is about losing this proxy to gain
| some assurance for little cost. We humans are great at
| figuring out the least amount of work that's good enough.
|
| Now we'll need to be fully diligent, which means more
| work, and also there'll be way more things to review.
| roenxi wrote:
| I'd argue people clearly don't care about the truth at
| all - they care about being part of a group and that is
| where it ends. It shows up in things like critical
| thinking being a difficult skill acquired slowly vs
| social proof which humans just do by reflex. Makes a lot
| of sense, if there are 10 of us and 1 of you it doesn't
| matter how smartypants you may be when the mob forms.
|
| AI does indeed threaten people's ability to identify
| whether they are reading work by a high status human and
| what the group consensus is - and that is a real problem
| for most people. But it has no bearing on how correct
| information was in the past vs will be in the future.
| Groups are smart but they get a lot of stuff wrong in
| strategic ways (it is almost a truism that no group ever
| identifies itself or its pursuit of its own interests as
| the problem).
| Jensson wrote:
| > I'd argue people clearly don't care about the truth at
| all
|
| Plenty of people care about the truth in order to get
| advantages over the ignorant. Beliefs aren't just about
| fitting in a group, they are about getting advantages and
| making your life better, if you know the truth you can
| make much better decisions than those who are ignorant.
|
| Similarly plenty of people try to hide the truth in order
| to keep people ignorant so they can be exploited.
| rendall wrote:
| > _if you know the truth you can make much better
| decisions than those who are ignorant_
|
| There are some fallacious hidden assumptions there. One
| is that "knowing the truth" equates to better life
| outcomes. I'd argue that history shows more often than
| not that what one knows to be true best align with
| prevailing consensus if comfort, prosperity and peace is
| one's goal, even if that consensus is flat out wrong. The
| list is long of lone geniuses who challenged the
| consensus and suffered. Galileo, Turing, Einstein,
| Mendel, van Gogh, Darwin, Lovelace, Boltzmann, Godel,
| Faraday, Kant, Poe, Thoreau, Bohr, Tesla, Kepler,
| Copernicus, et. al. all suffered isolation and
| marginalization of some degree during their lifetimes,
| some unrecognized until after their death, many living in
| poverty, many actively tormented. I can't see how Turing,
| for instance, had a better life than the ignorant who
| persecuted him despite his excellent grasp of truth.
| Jensson wrote:
| You are thinking too big, most of the time the truth is
| whether a piece of food is spoiled or not etc, and that
| greatly affects your quality of life. Companies would
| love to keep you ignorant here so they can sell you
| literal shit, so there are powerful forces wanting to
| keep you ignorant, and today those powerful forces has
| way stronger tools than ever before working to keep you
| ignorant.
| roenxi wrote:
| Socrates is also a big name. Never forget.
| danmaz74 wrote:
| You're implying that there is an absolute Truth and that
| people only need to do [what?] to check if something is
| True. But that's not True. We only have models of how
| reality works, and every model is wrong - but some are
| useful.
|
| When dealing with almost everything you do day by day,
| you have to rely on the credibility of the source of the
| information you have. Otherwise how could you know that
| the can of tuna you're going to eat is actually tuna and
| not some venomous fish? How do you know that you should
| do what your doctor told you? Etc. etc.
| svieira wrote:
| > You're implying that there is an absolute Truth and
| that people only need to do [what?] to check if something
| is True. But that's not True. We only have models of how
| reality works, and every model is wrong - but some are
| useful.
|
| But isn't your third sentence True?
| wlesieutre wrote:
| There's not enough time in the day to go on a full bore
| research project about every sentence I read, so it's not
| physically possible to be "fully diligent."
|
| The best we can hope for is prioritizing which things are
| worth checking. But even that gets harder because you go
| looking for sources and now _those_ are increasingly
| likely to be LLM spam.
| quietbritishjim wrote:
| How do you "check the accuracy of your information" if
| all the other reliable-sounding sources could also be AI
| generated junk? If it's something in computing, like
| whether something compiles, you can sometimes literally
| check for yourself, but most things you read about are
| not like that.
| cutemonster wrote:
| Interesting points! Doesn't sound impossible with an AI
| that's wrong less often than an average human author (if
| the AIs training data was well curated).
|
| I suppose a related problem is that we can't know if the
| human who posted the article, actually agrees with it
| themselves.
|
| (Or if they clicked "Generate" and don't actually care,
| or even have different opinions)
| glenstein wrote:
| >But you never checked the accuracy of your information
| before so
|
| They didn't say that and that's not a fair or warranted
| extrapolation.
|
| They're talking about a heuristic that we all use, as a
| shorthand proxy that doesn't replace but can help steer
| the initial navigation in the selection of reliable
| sources, which can be complemented with fact checking
| (see the steelmanning I did there?). I don't think
| someone using that heuristic can be interpreted as
| tantamount to completely ignoring facts, which is a
| ridiculous extrapolation.
|
| I also think is misrepresents the lay of the land, which
| is that in the universe of nonfiction writing, I don't
| think that there's a fire hose of facts and falsehoods
| indistinguishable in tone. I think there's in fact a
| reasonably high correlation between the discernible tone
| of impersonal professional and credible information,
| which, again (since this seems to be a difficult sticking
| point) doesn't mean that the tone substitutes for the
| facts which still need to be verified.
|
| The idea that information and misinformation are tonally
| indistinguishable is, in my experience, only something
| believed by post-truth "do you own research" people who
| think there are equally valid facts in all directions.
|
| There's not, for instance, a Science Daily of equally
| sciency sounding misinformation. There's not a second
| different IPCC that publishes a report with thousands of
| citations which are all wrong, etc. Misinformation is out
| there but it's not symmetrical, and understanding that
| it's not symmetrical is an important aspect of
| information literacy.
|
| This is important because it goes to their point, which
| is that something _has_ changed, in the advent of LLMS.
| That symmetry may be coming, and it 's precisely the fact
| that it _wasn 't_ there before that is pivotal.
| SoftTalker wrote:
| In the past, with a printed book or journal article, it
| was safe to assume that an editor had been involved, to
| some degree or another challenging claimed facts, and the
| publisher also had an interest in maintaining their
| reputation by not publishing poorly researched or
| outright false information. You would also have reviewers
| reading and reacting to the book in many cases.
|
| All of that is gone now. You have LLMs spitting their
| excrement directly onto the web without so much as a
| human giving it a once-over.
| Eisenstein wrote:
| I suggest you look into how many things were published
| without such scrutiny, because they sold.
| ookdatnog wrote:
| Writing a text of decent quality used to constitute proof
| of work. This is now no longer the case, and we haven't
| adapted to this assumption becoming invalid.
|
| For example, when applying to a job, your cover letter used
| to count as proof of work. The contents are less important
| than the fact that you put some amount of effort in it,
| enough to prove that you care about this specific vacancy.
| Now this basic assumption has evaporated, and job searching
| has become a meaningless two-way spam war, where having
| your AI-generated application selected from hundreds or
| thousands of other AI-generated applications is little more
| than a lottery.
| bitexploder wrote:
| This. I am very picky about how I use ML still, but it is
| unsurpassed as a virtual editor. It can clean up grammar
| and rephrase things in a very light way, but it gives my
| prose the polish I want. The thing is, I am a very decent
| writer. I wrote professionally for 18 years as a part of
| my job delivering reports of high quality as my work
| product. So, it really helps that I know exactly what
| "good" looks like by my standards. ML can clean things up
| so much faster than I can and I am confident my writing
| is organic still, but it can fix up small issues, find
| mistakes, etc very quickly. A word change here or there,
| some punctuation, that is normal editing. It is genuinely
| good at light rephrasing as well, if you have some idea
| of what intent you want.
|
| When it becomes obvious, though, is when people let the
| LLM do the writing for them. The job search bit is
| definitely rough. Referrals, references, and actual
| accomplishments may become even more important.
| gtirloni wrote:
| As usual, LLMs are an excellent tool when you already
| have a decent understanding of the field you're
| interested in using them in. Which is not the case of
| people posting in social media or creating their first
| programs. That's where the dullness and noise come from.
|
| The noise ground has been elevated 100x by LLMs. It was
| already bad before but it's accelerated the trend.
|
| So, yes, we should have never been trusting anything
| online but before LLMs we could rely on our brains to
| quickly identify the bad. Nowadays, it's exhausting.
| Maybe we need a LLM trained on spotting LLMs.
|
| This month, I, with decades of experience, used Claude
| Dev as an experiment to create a small automation tool.
| After countless manual fixes, it finally worked and I was
| happy. Until I gave thr whole thing a decent look again
| and realized what a piece of garbage I had created. It's
| exhausting to be on the lookout for these situations. I
| prefer to think things through myself, it's a more
| rewarding experience with better end results anyway.
| iszomer wrote:
| LLM's are a great onramp to filling in knowledge that may
| have been lost to age or updated to their modern
| classification. For example, I didn't know _Hokkien_ and
| _Haka_ are distinct linguistic branches within the Sino-
| Tibetan language and warrants more (personal) research
| into the subject. And all this time, without the
| internet, we often just colloquially called it Taiwanese.
| aguaviva wrote:
| How is this considered "lost" knowledge there are (large)
| Wikipedia pages about those languages (which is of course
| what the LLM is cribbing from)?
|
| "Human-curated encycolpedias are a great onramp to
| filling in knowledge gaps", that I can go with.
| nicce wrote:
| It is lost in a sense that you had no idea about such
| possibility and you did not know to search it in the
| first hand, while I believe that in this case LLM brought
| it up as a side note.
| aguaviva wrote:
| Such fortuitous stumblings happen all the time without
| LLMs (and in regular libraries, for those brave enough to
| use them). It's just the natural byproduct of doing any
| kind of research.
| skydhash wrote:
| Most of my knowledge comes from physical encyclopedia and
| download the wikipedia text dump (internet was not
| readily available). You search for one thing and just
| explore by clicking.
| danielbln wrote:
| Not to sound too dismissive, but there is a distinct
| learning curve when it comes to using models like Claude
| for code assist. Not just the intuition when the model
| goes off the rails, but also what to provide it in the
| context, how and what to ask for etc. Trying it once and
| dismissing it is maybe not the best experimental setup.
|
| I've been using Zed recently with its LLM integration so
| assist me in my development and its been absolutely
| wonderful, but one must control tightly what to present
| to the model and what to ask for and how.
| gtirloni wrote:
| It's not my first time using LLMs and you're assuming too
| much.
| dotnet00 wrote:
| Yeah, this is how I use it too. I tend to be a very dry
| writer, which isn't unusual in science, but lately I've
| taken to writing, then asking an LLM to suggest
| improvements.
|
| I know not to trust it to be as precise as good research
| papers need to be, so I don't take its output, it usually
| helps me reorder points or use different transitions
| which make the material much more enjoyable to read. I
| also find it useful for helping to come up with an
| opening sentence from which to start writing a section.
| bitexploder wrote:
| Active voice is difficult in technical and scientific
| writing for sure :)
| msikora wrote:
| This is my go-to process whenever I write anything now:
|
| 1. I use dictation software to get my thoughts out as a
| stream of consciousness. 2. Then, I have ChatGPT or
| Claude refine it into something coherent based on a
| prompt of what I'm aiming for. 3. Finally, I review the
| result and make edits where needed to ensure it matches
| what I want.
|
| This method has easily boosted my output by 10x, and I'd
| argue the quality is even better than before. As a non-
| native English speaker, this approach helps a lot with
| clarity and fluency. I'm not a great writer to begin
| with, so the improvement is noticeable. At the end of the
| day, I'm just a developer--what can I say?
| rasulkireev wrote:
| Great opportunity to get ahead of all the lazy people who
| use AI for a cover letter. Do a video! Sure, AI will be
| able to do that soon, but then we (not lazy people, who
| care) will come up with something even more personal!
| akho wrote:
| A blowjob, I assume.
| msikora wrote:
| Great idea! I'll get an LLM to write the script for the
| video and then I'll just read it! I can crank out 20 of
| these in an hour!
| diggan wrote:
| > By the look of text and the usage of the words, you could
| tell how professional the writer was and you had some
| confidence that the writer knew something about the subject
|
| How did you know this unless you also had the same or more
| knowledge than the author?
|
| It would seem to me we are as clueless now as before about
| how to judge how skilled a writer is without requiring to
| already posses that very skill ourselves.
| gizmo wrote:
| I think you overestimate the value of things looking
| professional. The overwhelming majority of books published
| every year are trash, despite all the effort that went into
| research, writing, and editing them. Most news is trash.
| Most of what humanity produces just isn't any good. An top
| expert in his field can leave a typo-riddled comment in a
| hurry that contains more valuable information than a shelf
| of books written on the subject by lesser minds.
|
| AIs are good at writing professional looking text because
| it's a low bar to clear. It doesn't require much
| intelligence or expertise.
| bitexploder wrote:
| I think you underestimate how high that bar is, but I
| will grant that it isn't that high. It can be a form of
| sophistry all of its own. Still, it is a difficult skill
| to write clearly, simply, and without a lot of
| extravagant words.
| herval wrote:
| > AIs are good at writing professional looking text
| because it's a low bar to clear. It doesn't require much
| intelligence or expertise.
|
| AIs are getting good at precisely imitating your voice
| with a single sample as reference, or generating original
| music, or creating video with all sorts of impossible
| physics and special effects. By your rationale, nothing
| "requires much intelligence or expertise", which is
| patently false (even for text writing)
| gizmo wrote:
| My point is that writing a good book is vastly more
| difficult than writing a mediocre book. The distance
| between incoherent babble and a mediocre book is smaller
| than the distance between a mediocre book and a great
| book. Most people can write professional looking text
| just by putting in a little bit of effort.
| ImHereToVote wrote:
| So content produced by think tanks was credible by default,
| since think tanks are usually very well funded. Interesting
| perspective
| factormeta wrote:
| >In the past, you had to put a lot of effort to produce a
| text which seemed to be high quality, especially when you
| knew nothing about the subject. By the look of text and the
| usage of the words, you could tell how professional the
| writer was and you had some confidence that the writer knew
| something about the subject. Now, that is completely
| removed. There is no easy filter anymore.
|
| That is pretty much true also for other media, such as
| audio and video. Before digital stuff become mainstream
| pics are developed in the darkroom, and film are actually
| cut with scissors. A lot of effort are put into producing
| the final product. AI has really commoditized for many
| brain related tasks. We must realize the fragile nature of
| digital tech and still learn how to do these by ourselves.
| jackthetab wrote:
| > While the professional looking text could have been
| already wrong, the likelihood was smaller, since you
| usually needed to know something at least in order to write
| convincing text.
|
| https://en.wikipedia.org/wiki/Michael_Crichton#Gell-
| Mann_amn...
| mewpmewp2 wrote:
| Although presently at least it's still quite obvious when
| something is written by AI.
| chilli_axe wrote:
| it's obvious when text has been produced by chatGPT with
| the default prompt - but there's probably loads of text
| on the internet which doesn't follow AI's usual prose
| style that blends in well.
| mewpmewp2 wrote:
| Although, there were already before tons of "technical
| influencers" before that who excelled at writing, but
| didn't know deeply what they were writing about.
|
| They give a superficially smart look, but really they
| regurgitate without deep understanding.
| TuringNYC wrote:
| >> While the professional looking text could have been
| already wrong, the likelihood was smaller, since you
| usually needed to know something at least in order to write
| convincing text.
|
| ...or...the likelihood of text being really wrong pre-LLMs
| was worse because you needed to be a well-capitalized
| player to pay your thoughts into public discourse. Just
| look at our global conflicts and you see how much they are
| driven by well-planned lobbying, PR, and...money. That is
| not new.
| tempfile wrote:
| > I already questioned the information before and continue to
| do so.
|
| You might question new information, but you certainly do not
| actually verify it. So all you can hope to do is sense-
| checking - if something doesn't sound plausible, you assume
| it isn't true.
|
| This depends on having two things: having trustworthy sources
| at all, and being able to relatively easily distinguish
| between junk info and real thorough research. AI is a very
| easy way for previously-trustworthy sources to sneak in utter
| disinformation without necessarily changing tone much. That
| makes it _much_ easier for the info to sneak past your senses
| than previously.
| thesz wrote:
| Propaganda works by repeating the same in different forms.
| Now it is easier to have different forms of the same, hence,
| more propaganda. Also, it is much easier to iinfluence
| whatever people write by influencing the tool they use to
| write.
|
| Imagine that AI tools sway generated sentences to be slightly
| close, in summarisation space, to the phrase "eat dirt" or
| anything. What would happen?
| ImHereToVote wrote:
| Hopefully people will exercise more judgement now that
| every Tom, Dick, and Harry scam artists can output
| elaborate prose.
| eesmith wrote:
| The negation of 'I cannot trust' is not 'I could always
| trust' but rather 'I could sometimes trust'.
|
| Nor is trust meant to mean something is absolute and
| unquestionable. I may trust someone, but with enough evidence
| I can withdraw trust.
| ffsm8 wrote:
| Trust as no bearing on what they said.
|
| Reading was a form of connecting with someone. Their opinions
| are bound to be flawed, everyone's are - but they're still
| the thoughts and words of a person.
|
| This is no longer the case. Thus, the human factor is gone
| and this reduces the experience to some of us, me included.
| farleykr wrote:
| This is exactly what's at stake. I heard an artist say one
| time that he'd rather listen to Bob Dylan miss a note than
| listen to a song that had all the imperfections engineered
| out of it.
| herval wrote:
| The flipside of that is the most popular artists of all
| time (eg Taylor Swift) do autotune to perfection, and yet
| more and more people love them
| kombookcha wrote:
| If you ask a Swiftie what they love about Taylor Swift, I
| guarantee they will not say "the autotune is flawless".
|
| They're not connecting with the relative correctness of
| each note, but feeling a human, creative connection with
| an artist expressing herself.
| herval wrote:
| They're "creatively connecting" to an autotuned version
| of a human, not to a "flawed Bob Dylan"
| kombookcha wrote:
| They're not connecting to the autotune, but to the
| artist. People have a lot of opinions about Taylor
| Swift's music but "not being personal enough" is
| definitely not a common one.
|
| If you wanna advocate for unplugged music being more
| gratifying, I don't disagree, but acting like the
| autotune is what people are getting out of Taylor Swift
| songs is goofy.
| soco wrote:
| I have no idea about Taylor Swift so I'll ask in general:
| can't we have a human showing an autotuned personality?
| Like, you are what you are in private, but in interviews
| you focus on things suggested by your AI conselor, your
| lyrics are fine tuned by AI, all this to show a better
| marketable personality? Maybe that's the autotune we
| should worry about. Again, nothing new (looking at you,
| Village People) but nowadays the potential powered by AI
| is many orders of magnitude higher... you could say yes
| only until the fans catch wind of it, true, but by that
| time the next figure shows up and so on. Not sure where
| this arms escalation can lead us. Because also acceptance
| levels are shifting, so what we reject today as
| unacceptable lies could be fine tomorrow, look already at
| the AI influencers doing a decent job while overtly fake.
| oceanplexian wrote:
| I'm convinced it's already being done, or at least played
| with. Lots of public figures only speak through a
| teleprompter. It would be easy to put a fine tuned LLM on
| the other side of that teleprompter where even unscripted
| questions can be met with scripted answers.
| herval wrote:
| you're missing the point by a few miles
| Frost1x wrote:
| I think the key thing here is equating trust and truth. I
| trust my dog, a lot, more than most humans frankly. She has
| some of my highest levels of trust attainable, yet I don't
| exactly equate her actions with truth. She often barks when
| there's no one at the door or at false threats she doesn't
| know aren't real threats and so on. But I trust she
| believes it 100% and thinks she's helping me 100%.
|
| What I think OP was saying and I agree with is that
| connection, that knowing no matter what was said or how
| flawed or what motive someone had I trusted there was a
| human producing the words. I could guess and reasons the
| other factors away. Now I don't always know if that is the
| case.
|
| If you've ever played a multiplayer game, most of the
| enjoyable experience for me is playing other humans. We've
| had good game AIs in many domains for years, sometimes
| difficult to distinguish from humans, but I always lost
| interest if I didn't _know_ I was in fact playing and
| connecting with another human. If it's just some automated
| system I could do that any hour of the day as much as I
| want but it lacked the human connection element, the flaws,
| the emotion, the connection. If you can reproduce that then
| maybe it would be enjoyable but that sort of substance has
| meaning to many.
|
| It's interesting to see a calculator quickly spit out
| correct complex arithmetic but when you see a human do it,
| it's more impressive or at least interesting, because you
| know the natural capability is lower and that they're
| flawed just like you are.
| tuyguntn wrote:
| > For me, LLMs don't change anything. I already questioned
| the information before and continue to do so.
|
| I also did, but LLM increased the volume of content, which
| forces my brain first try to identify if content is generated
| by LLMs, which is consuming a lot of energy and makes brain
| even less focused, because now it's primary goal is skimming
| quickly to identify, instead of absorbing first and then
| analyzing info
| desdenova wrote:
| The web being polluted only makes me ignore more of it.
|
| You already know some of the more trustworthy sources of
| information, you don't need to read a random blog which
| will require a lot more effort to verify.
|
| Even here on hackernews, I ignore like 90% of the spam
| people post. A lot of posts here are extremely low effort
| blogs adding zero value to anything, and I don't even want
| to think whether someone wasted their own time writing that
| or used some LLM, it's worthless in both cases.
| croes wrote:
| The quota changed because it's now easier and faster
| everdrive wrote:
| How do you like questioning much more of it, much more
| frequently, from many more sources? And mistrusting it in new
| ways. AI and regular people are not wrong in the same ways,
| nor for the same reasons, and now you must track this too,
| increasingly.
| a99c43f2d565504 wrote:
| Perhaps "trust" was a bit misplaced here, but I think we can
| all agree on the idea: Before LLMs, there was intelligence
| behind text, and now there's not. The I in LLM stands for
| intelligence, as written in one blog. Maybe the text never
| was true, but at least it made sense given some agenda. And
| like pointed out by others, the usual text style and
| vocabulary signs that could have been used to identify
| expertise or agenda are gone.
| danielmarkbruce wrote:
| Those signs are largely bs. It's a textual version of
| charisma.
| voidmain0001 wrote:
| I read the original comment not as a lament of not being able
| to trust the content, rather, they are lamenting the fact
| that AI/LLM generated content has no more thought or effort
| put into it than a cheap microwave dinner purchased from
| Walmart. Yes, it fills the gut with calories but it lacks
| taste.
|
| On second thought, perhaps AI/LLM generated content is better
| illustrated with it being like eating the regurgitated sludge
| called cud. Nothing new, but it fills the gut.
| rsynnott wrote:
| There are topics on which you should be somewhat suspicious
| of anything you read, but also many topics where it is simply
| improbable that anyone would spend time maliciously coming up
| with a lie. However, they may well have spicy autocomplete
| imagine something for them. An example from a few days ago:
| https://news.ycombinator.com/item?id=41645282
| desdenova wrote:
| Exactly. The web before LLMs was mostly low effort SEO spam
| written by low-wage people in marketing agencies.
|
| Now it's mostly zero effort LLM-generated SEO spam, and the
| low-wage workers lost their jobs.
| vouaobrasil wrote:
| The difference is that now we'll have even more zero-effort
| SEO spam because AI is a force multiplier for that. Much
| more.
| galactus wrote:
| I think it is a totally different threat. Excluding
| adversarial behavior, humans usually produce information with
| a quality level that is homogeneous (from homogeneously
| sloppy to homogeneously rigurous).
|
| AI otoh can produce texts that are quite accurate globally
| with some totally random hallucinations here and there. It
| makes it quite harder to identify
| akudha wrote:
| There were news reports that Russia spent less than a million
| dollars on a massive propaganda campaign targeting U.S
| elections and the American population in general.
|
| Do you think it would be possible before internet, before AI?
|
| Bad actors, poorly written/sourced information,
| sensationalism etc have always existed. It is nothing new.
| What is new is the scale, speed and cost of making and
| spreading poor quality stuff now.
|
| All one needs today is a laptop and an internet connection
| and a few hours, they can wreak havoc. In the past, you'd
| need TV or newspapers to spread bad (and good) stuff - they
| were expensive, time consuming to produce and had limited
| reach.
| kloop wrote:
| There are lots of organizations with $1M and a desire to
| influence the population
|
| This can only be done with a sentiment that was, at least
| partially, already there. And may very well happen
| naturally eventually
| heresie-dabord wrote:
| > you trusted what you read before. For me, LLMs don't change
| anything. I already questioned the information before and
| continue to do so. [...] Why do you think that you could
| trust what you read before?
|
| A human communicator is, in a sense, testifying when
| communicating. Humans have skin in the social game.
|
| We try to educate people, we do want people to be well-
| informed and to think critically about what they read and
| hear. In the marketplace of information, we tend very
| strongly to trust non-delusional, non-hallucinating members
| of society. Human society is a social-confidence network.
|
| In social media, where there is a cloak of anonymity (or
| obscurity), people may behave very badly. But they are
| usually full of excuses when the cloak is torn away; they are
| usually remarkably contrite before a judge.
|
| A human communicator can face social, legal, and economic
| consequences for false testimony. Humans in a corporation,
| and the corporation itself, may be held accountable. They may
| allocate large sums of money to their defence, but reputation
| has value and their defence is not without social cost and
| monetary cost.
|
| It is literally less effort at every scale to consult a
| trusted and trustworthy source of information.
|
| It is literally more effort at every scale to feed oneself
| untrustworthy communication.
| sevensor wrote:
| For me, the problem has gone from "figure out the author's
| agenda" to "figure out whether this is a meaningful text at
| all," because gibberish now looks a whole lot more like
| meaning than it used to.
| pxoe wrote:
| This has been a problem on the internet for the past decade
| if not more anyway, with all of the seo nonsense. If
| anything, maybe it's going to be ever so slightly more
| readable.
| orthecreedence wrote:
| I don't know what you're talking about. Most people don't
| think of SEO, Search Engine Optimization, Search
| Performance, Search Engine Relevance, Search Rankings,
| Result Page Optimization, or Result Performance when
| writing their Article, Articles, Internet Articles, News
| Articles, Current News, Press Release, or News Updates...
| solidninja wrote:
| There's a quantity argument to be made here - before, it used
| to be hard to generate large amounts of plausible but
| incorrect text. Now it easy. Similar to surveillance
| before/after smartphones + the internet - you had to have a
| person following you vs just soaking up all the data on the
| backbone.
| escape_goat wrote:
| There was a degree of proof of work involved. Text took human
| effort to create, and this roughly constrained the quantity
| and quality of misinforming text to the number of humans with
| motive to expend sufficient effort to misinform. Now
| superficially indistinguishable text can be created by an
| investment in flops, which are fungible. This means that the
| constraint on the amount of misinforming text instead scales
| with whatever money is resourced to the task of generating
| misinforming text. If misinforming text can generate value
| for someone that can be translated back into money, the
| generation of misinforming text can be scaled to saturation
| and full extraction of that value.
| low_tech_love wrote:
| It's nothing to do with trusting in terms of being true or
| false, but whatever I read before I felt like, well, it can
| be good or bad, I can judge it, but whatever it is, somebody
| wrote it. It's their work. Now when I read something I just
| have absolutely no idea whether the person wrote it, how much
| percent did they write it, or how much they even had to think
| before publishing it. Anyone can simply publish a perfectly
| well-written piece of text about any topic whatsoever, and I
| just can't wrap my head around why, but it feels like a
| complete waste of time to read anything. Like... it's all
| just garbage, I don't know.
| danielmarkbruce wrote:
| The following appears to be true:
|
| If one spends a lot of years reading a lot of stuff, they
| come to this conclusion, that most of it cannot be trusted.
| But it takes lots of years and lots of material to see it.
|
| If they don't, they don't.
| mvdtnz wrote:
| It's that you trusted that what you read came from a human
| being. Back in the day I used to spend hours reading
| Evolution vs Creationism debates online. I didn't "trust" the
| veracity of half of what I read, but that didn't mean I
| didn't want to read it. I liked reading it because it came
| from people. I would never want to read AI regurgitation of
| these arguments.
| walthamstow wrote:
| I've even grown to enjoy spelling and grammar mistakes - at
| least I know a human wrote it.
| oneshtein wrote:
| > Write a response to this comment, make spelling and grammar
| mistakes.
|
| yeah well sumtimes spellling and grammer erors just make
| thing hard two read. like i no wat u mean bout wanting two
| kno its a reel person, but i think cleear communication is
| still importint! ;)
| 1aleksa wrote:
| Whenever somebody misspells my name, I know it's legit haha
| sseagull wrote:
| Way back when we had a landline and would get
| telemarketers, it was always a sign when the caller
| couldn't pronounce our last name. It's not even that
| uncommon a name, either
| Gigachad wrote:
| There was a meme along the lines of people will start
| including slurs in their messages to prove it wasn't AI
| generated.
| dijit wrote:
| I mean, it's not a meme..
|
| I included a few more "private" words than I should and I
| even tried to narrate things to prove I wasn't an AI.
|
| https://blog.dijit.sh/gcp-the-only-good-cloud/
|
| Not sure what else I should do, but it's pretty clear that
| it's not AI written (mostly because it's incoherent) even
| without grammar mistakes.
| bloak wrote:
| I liked the "New to AWS / Experienced at AWS" cartoon.
| jay_kyburz wrote:
| A few months ago, I tried to get Gemini to help me write
| some criticism of something. I can't even remember what it
| was, but I wanted to clearly say something was wrong and
| bad.
|
| Gemini just could not do it. It kept trying to avoid being
| explicitly negative. It wanted me to instead focus on the
| positive. I think it evidently just told me no, and that it
| would not do it.
| Gigachad wrote:
| Yeah all the current tools have this particular brand of
| corporate speech that's pretty easy to pick up on. Overly
| verbose, overly polite, very vague, non assertive, and
| non opinionated.
| stahorn wrote:
| Next big thing: AI that writes as British football
| hooligans talk about the referee after a match where
| their team lost?
| ipaio wrote:
| You can prompt/train the AI to add a couple of random minor
| errors. They're trained from human text after all, they can
| pretend to be as human as you like.
| vasco wrote:
| The funny thing is that the things it refuses to say are
| "wrong-speech" type stuff, so the only things you can be
| more sure of nowadays are conspiracy theories and other
| nasty stuff. The nastier the more likely it's human
| written, which is a bit ironic.
| Jensson wrote:
| > The nastier the more likely it's human written, which
| is a bit ironic.
|
| This is as everything else, machine produced has a
| flawlessness along some dimension that humans tend to
| lack.
| matteoraso wrote:
| No, you can finetune locally hosted LLMs to be nasty.
| slashdave wrote:
| Maybe the future of creative writing is fine tuning your
| own unique form of nastiness
| Applejinx wrote:
| Barring simple typos, human mistakes are erroneous
| intention from a single source. You can't simply write
| human vagaries off as 'error' because they're glimpses into
| a picture of intention that is perhaps misguided.
|
| I'm listening to a slightly wonky early James Brown
| instrumental right now, and there's certainly a lot more
| error than you'd get in sequenced computer music (or indeed
| generated music) but the force with which humans wrest the
| wonkiness toward an idea of groove is palpable. Same with
| Zeppelin's 'Communication Breakdown' (I'm doing a groove
| analysis project, ok?).
|
| I can't program the AI to have intention, nor can you. If
| you do, hello Skynet, and it's time you started thinking
| about how to be nice to it, or else :)
| eleveriven wrote:
| Making it feel like there's no reliable way to discern
| what's truly human
| vouaobrasil wrote:
| There is. Be vehemently against AI, put 100% AI free in
| your work. The more consistent you are against AI, the
| more likely people will believe you. Write articles
| slamming AI. Personally, I am 100% against AI and I state
| that loud and clear on my blogs and YouTube channel. I
| HATE AI.
| jaredsohn wrote:
| Hate to tell you but there is nothing stopping people
| using AI from doing the same thing.
| vouaobrasil wrote:
| AI cannot build up a sufficient level of trust,
| especially if you are known in person by others who will
| vouch for you. That web of trust is hard to break with
| AI. And I am one of those.
| danielbln wrote:
| Are you including transformer based translation models
| like Google Translate or Deepl in your categorical AI
| rejectio?
| vouaobrasil wrote:
| Yeah.
| fzzzy wrote:
| Guess what? Now the computers will learn to do that so they
| can more convincingly pass a turing test.
| faragon wrote:
| People could prompt for authenticity, adding subtle mistakes,
| etc. I hope that AI as a whole will help people writing
| better, if reading back the text. It is a bit like "The
| Substance" movie: a "better" version of ourselves.
| redandblack wrote:
| yesss. my thought too. All the variations of English should
| not lost.
|
| I enjoyed all the belter dialogue in the expanse
| grecy wrote:
| Eh, like everything in life you can choose what you spend your
| time on and what you ignore.
|
| There have always been human writers I don't waste my time on,
| and now there are AI writers in the same category.
|
| I don't care. I will just do what I want with my life and use
| my time and energy on things I enjoy and find useful.
| avereveard wrote:
| why do you trust things now? unless you recognize the author
| and have a chain of trust from that author production to the
| content you're consuming, there already was no way to
| estabilish trust.
| layer8 wrote:
| For one, I trust authors more who are not too lazy to start
| sentences with upper case.
| flir wrote:
| I've been using it in my personal writing (combination of GPT
| and Claude). I ask the AI to write something, maybe several
| times, and I edit it until I'm happy with it. I've always known
| I'm a better editor than I am an author, and the AI text gives
| me somewhere to start.
|
| So there's a human in the loop who is prepared to vouch for
| those sentences. They're not 100% human-written, but they are
| 100% human-approved. I haven't just connected my blog to a
| Markov chain firehose and walked away.
|
| Am I still adding to the AI smog? idk. I imagine that, at a
| bare minimum, its way of organising text bleeds through no
| matter how much editing I do.
| vladstudio wrote:
| you wrote this comment completely by your own, right? without
| any AI involved. And I read your comment feeling confident
| that it's truly 100% yours. I think this reader's confidence
| is what the OP is talking about.
| flir wrote:
| I did. I write for myself mostly so I'm not so worried
| about one reader's trust - I guess I'm more worried that I
| might be contributing to the dead internet theory by
| generating AI-polluted text for the next generation of AIs
| to train on.
|
| At the moment I'm using it for local history research. I
| feed it all the text I can find on an event (mostly
| newspaper articles and other primary sources, occasionally
| quotes from secondary sources) and I prompt with something
| like "Summarize this document in a concise and direct
| style. Focus on the main points and key details. Maintain a
| neutral, objective voice." Then I hack at it until I'm
| happy (mostly I cut stuff). Analysis, I do the other way
| around: I write the first draft, then ask the AI to polish.
| Then I go back and forth a few times until I'm happy with
| that paragraph.
|
| I'm not going anywhere with this really, I'm just musing
| out loud. Am I contributing to a tragedy of the commons by
| writing about 18th century enclosures? Because that would
| be ironic.
| ontouchstart wrote:
| If you write for yourself, whether you use generated text
| or not, (I am using the text completion on my phone
| typing this message), the only thing that matters is how
| it affects you.
|
| Reading and writing are mental processes (with or without
| advanced technology) that shape our collective mind.
| ozim wrote:
| What kind of silliness is this?
|
| AI generated crap is one thing. But human generated crap is
| there - just because human wrote something it is not making it
| good.
|
| Had a friend who thought that if it is written in a book it is
| for sure true. Well NO!
|
| There was exactly the same sentiment with stuff on the internet
| and it is still the same sentiment about Wikipedia that "it is
| just some kids writing bs, get a paper book or real
| encyclopedia to look stuff up".
|
| Not defending gen AI - but still you have to make useful proxy
| measures what to read and what not, it was always an effort and
| nothing is going substitute critical thinking and putting in
| effort to separate wheat from the chaff.
| tempfile wrote:
| > you have to make useful proxy measures what to read and
| what not
|
| yes, obviously. But AI slop makes those proxy measures
| significantly more complicated. Critical thinking is not
| magic - it is still a guess, and people are obviously worse
| at distinguishing AI bullshit from human bullshit.
| dns_snek wrote:
| > nothing is going substitute critical thinking and putting
| in effort to separate wheat from the chaff.
|
| The problem is that wheat:chaff ratio used to be 1:100, and
| soon it's going to become 1:100 million. I think you're
| severely underestimating the amount of effort it's going to
| take to find real information in the sea of AI generated
| content.
| shprd wrote:
| No one claimed humans are perfect. But gen AI is a force
| multiplier for every problem we had to deal with. It's just
| completely different scale. Your brain is about to be DDOSed
| by junk content.
|
| Of course, gen AI is just a tool that can be used for good or
| bad, but spam, targeted misinformation campaigns, and garbage
| content in general is one area that will be most amplified
| because it became so low effort and they don't care about
| doing any review, double-checking, etc. They can completely
| automate their process to whatever goal they've in mind. So
| where sensible humans enjoy 10x productivity, these spam
| farms will be enjoying 10000x scale.
|
| So I don't think downplaying it and acting like nothing
| changed, is the brightest idea. I hope you see now how that's
| a completely different game, one that's already here but we
| aren't prepared for yet, certainly not with traditional tools
| we have.
| flir wrote:
| > Your brain is about to be DDOSed by junk content.
|
| It's not the best analogy because there's already more junk
| out there than can fit through the limited bandwidth
| available to my brain, and yet I'm still (vaguely)
| functional.
|
| So how do I avoid the junk now? Rough and ready trust
| metrics, I guess. Which of those will still work when the
| spam's 10x more human?
|
| I think the recommendations of friends will still work, and
| we'll increasingly retreat to walled gardens where obvious
| spammers (of both the digital and human variety) can be
| booted out. I'm still on facebook, but I'm only interested
| in a few well-moderated groups. The main timeline is dead
| to me. Those moderators are my content curators for
| facebook content.
| ozim wrote:
| That is something I agree with.
|
| One cannot be DDOSed with junk when not actively trying
| to stuff as much junk into ones head.
| shprd wrote:
| > One cannot be DDOSed with junk when not actively trying
| to stuff as much junk into ones head.
|
| The junk gets thrown at you in mass volume at low cost
| without your permission. What you gonna do? keep dodging
| it? waste your time evaluating every piece of information
| you come across?
|
| If one of the results on the first page in search deviate
| from others, it's easy to notice. But if all of them
| agree, they became the truth. Of course your first
| thought is to say search engines are shit or whatever
| off-hand remarks, but this example is just to illustrate
| how the volume alone can change things. The medium
| doesn't matter, these things could come in many forms:
| book reviews, posts on social media, ads, false product
| description on amazon, etc.
|
| Of course, these things exist today but the scale is
| different, the customization is different. It's like the
| difference between firearms and drones. If you think it's
| the same old game and you can defend against the new
| threat using your old arsenal, I admire your confidence
| but you're in for a surprise.
| shprd wrote:
| So you're basically sheltering yourself and seeking human
| curated content? Good for you, I follow similar strategy.
| How do you propose we apply this solution for the masses
| in today's digital age? or you're just saying 'each on
| their own'?
|
| Sadly, you seem to not be looking further than your nose.
| We are not talking about just you and me here. Less tech
| literate people are the ones at a disadvantage and need
| protection the most.
| flir wrote:
| > How do you propose we apply this solution for the
| masses in today's digital age?
|
| The social media algorithms are the content curators for
| the technically illiterate.
|
| Ok, they suck and they're actively user-hostile, but they
| sucked _before_ AI. Maybe (maybe!) AI 's the straw that
| breaks the camel's back, and people leave those
| algorithm-curated spaces in droves. I hope that, one way
| and another, they'll drift back towards human-curated
| spaces. Maybe without even realizing it.
| hackable_sand wrote:
| What's with the fud
| paganel wrote:
| You kind of notice the stuff written with AI, it has a certain
| something that makes it detectable. Granted, stuff like the
| Reuters press reports might have already been written by AI,
| but I think that in that case it doesn't really matter.
| lokimedes wrote:
| I get two associations from your comment: One about how AI
| being mainly used to interpolate within a corpus of prior
| knowledge, seems like entropy in a thermodynamical sense. The
| other, how this is like the Tower of Babel but where distrust
| is sown by sameness rather than differences. In fact, relying
| on AI for coding and writing, feels more like channeling
| demonic suggestions than anything else. No wonder we are
| becoming skeptical.
| ks2048 wrote:
| > If you write regularly and you're not using AI, you simply
| cannot keep up with the competition.
|
| Is that true today? I guess it depends what kind of writing you
| are talking about, but I wouldn't think most successful writers
| today - from novelests to tech bloggers - rely that much on AI,
| but I don't know. Five years from now, could be a different
| story.
| theshackleford wrote:
| Yes it's true today, depending on what is your writing is the
| foundation of.
|
| It doesn't matter that my writing is more considered, more
| accurate and of a higher quality when my coworkers are all
| openly using AI to perform five times the work I am and
| producing outcomes that are "good enough" because good enough
| is quite enough for a larger majority than many likely
| realise.
| bigstrat2003 wrote:
| It's not true at all. Much like the claims that you have to
| use LLMs to keep up in programming: if that is true then you
| weren't a good programmer (or writer in this case) to begin
| with.
| williamcotton wrote:
| Well we're going to need some system of PKI that is tied to
| real identities. You can keep being anonymous if you want but I
| would prefer not and prefer to not interact with the anonymous,
| just like how I don't want to interact with people wearing ski
| masks.
| nottorp wrote:
| Why are you posting on this forum where the user's identity
| isn't verified by anyone then? :)
|
| But the real problem is that having the poster's identity
| verified is no proof that their output is not coming straight
| from a LLM.
| williamcotton wrote:
| I don't really have a choice about interacting with the
| anonymous at this point.
|
| It certainly will affect the reputation of people that are
| consistently publishing untruths.
| nottorp wrote:
| > It certainly will affect the reputation of people that
| are consistently publishing untruths.
|
| Oh? I thought there are a lot of very well identified
| people making a living from publishing untruths right now
| on all social media. How would PKI help, when they're
| already making it very clear who they are?
| flir wrote:
| I doubt that's possible. I can always lend my identity to an
| AI.
|
| The best you can hope for is not "a human wrote this text",
| it's "a human vouched for this text".
| FrankyHollywood wrote:
| I have never read more bullshit in my life than during the
| corona pandemic, all written by humans. So you should never
| trust something you read, always question the source and it's
| reasoning.
|
| At the same time I use copilot on a daily basis, both for
| coding as well as the normal chat.
|
| It is not perfect, but I'm at a point I trust AI more than the
| average human. And why shouldn't I? LLMs ingest and combine
| more knowledge than any human can ever do. An LLM is not a
| human brain but it's actually performing really well.
| dijit wrote:
| Agreed, I feel like there's an inherent nobility in putting
| effort into something. If I took the time to write a book and
| have it proof-read and edited and so on: perhaps it's actually
| worth my time.
|
| Lowering the bar to write books is "good" but increases the
| noise to signal ratio.
|
| I'm not 100% certain how to give another proof-of-work, but
| what I've started doing is narrating my blog posts - though AI
| voices are getting better too.. :\
| vasco wrote:
| > Agreed, I feel like there's an inherent nobility in putting
| effort into something. If I took the time to write a book and
| have it proof-read and edited and so on: perhaps it's
| actually worth my time.
|
| Said the scribe upon hearing about the printing press.
| dijit wrote:
| I'm not certain what statement you're implying, but yes,
| accessibility of bookwriting has definitely decreased the
| quality of books.
|
| Even technical books like Hardcore Java:
| https://www.oreilly.com/library/view/hardcore-
| java/059600568... are god-awful, and even further away from
| the seminal texts on computer science that came before.
|
| It does feel like authorship was once heralded in higher
| esteem than it deserves today.
|
| Seems like people agree: https://www.reddit.com/r/books/com
| ments/18cvy9e/rant_bestsel...
| yoyohello13 wrote:
| It's true though. Books hand written and illustrated by
| scribes were astronomically higher quality than mass
| printed books. People just tend to prefer what they have
| access to, and cheap/low quality is easy to access.
| onion2k wrote:
| _The most depressing thing for me is the feeling that I simply
| cannot trust anything that has been written in the past 2 years
| or so and up until the day that I die._
|
| What AI is going to teach people is that they don't actually
| need to trust half as many things as they thought they did, but
| that they do need to verify what's left.
|
| This has always been the case. We've just been deferring to
| 'truster organizations' a lot recently, without actually
| looking to see if they still warrant having our trust when they
| change over time.
| layer8 wrote:
| How can you verify most of anything if you can't trust any
| writing (or photographs, audio, and video, for that matter)?
| Frost1x wrote:
| Independent verification is always good however not always
| possible and practical. At complex levels of life we have
| to just trust underlying processes work, usually until
| something fails.
|
| I don't go double checking civil engineers work (nor could
| I) for every bridge I drive over. I don't check inspection
| records to make sure it was recent and proper actions were
| taken. I trust that enough people involved know what
| they're doing with good enough intent that I can take my 20
| second trip over it in my car without batting an eye.
|
| If I had to verify everything, I'm not sure how I'd get
| across many bridges on a daily basis. Or use any major
| infrastructure in general where my life might be at risk.
| And those are cases where it's very important to be done
| right, if it's some accounting form or generated video on
| the internet... I have even less time to be concerned from
| a practical standpoint. Having the skills to do it should I
| want or need to are good and everyone should have these but
| we're at a point in society we really have to outsource
| trust in a lot of cases.
|
| This is true everywhere, even in science which these days
| many people just trust in ways akin to faith in some cases,
| and I don't see anyway around that. The key being that all
| the information should exist to be able to independently
| verify something but from a practice standpoint it's rarely
| viable.
| nils-m-holm wrote:
| > It's not so much that I think people have used AI, but that I
| know they have with a high degree of certainty, and this
| certainty is converging to 100%, simply because there is no way
| it will not. If you write regularly and you're not using AI,
| you simply cannot keep up with the competition.
|
| I am writing regularly and I will never use AI. In fact I am
| working on a 400+ pages book right now and it does not contain
| a single character that I have not come up with and typed
| myself. Something like pride in craftmanship does exist.
| vouaobrasil wrote:
| Nice. I will definitely consider your book over other books.
| I'm not interested in reading AI-assisted works.
| smitelli wrote:
| I'm right there with you. I write short and medium form
| articles for my personal site (link in bio, follow it or
| don't, the world keeps spinning either way). I will never use
| AI as part of this craft. If that hampers my output, or puts
| me at a disadvantage compared to the competition, or changes
| the opinion others have of me, I really don't care.
| nyarlathotep_ wrote:
| In b4 all the botslop shills tell you you're gonna get "left
| behind" if you don't pollute your output with GPT'd
| copypasta.
| low_tech_love wrote:
| Amazing! Do you feel any pressure from your environment? And
| are you self-funded? I am also thinking about starting my
| first book.
| nils-m-holm wrote:
| What I write is pretty niche anyway (compilers, LISP,
| buddhism, advaita), so I do not think AI will cause much
| trouble. Google ranking small websites into oblivion,
| though, I do notice that!
| lurking_swe wrote:
| do you see any benefits to using AI to check your book for
| typos, grammatical issues, or even just general "feedback"
| prior to publishing?
|
| Seems like there are uses for AI other than "please write it
| all for me", no?
| bryanrasmussen wrote:
| >If you write regularly and you're not using AI, you simply
| cannot keep up with the competition. You're out. And the
| growing consensus is "why shouldn't you?", there is no escape
| from that.
|
| Are you sure you don't mean if you write regularly in one
| particular subclass of writing - like technical writing,
| documentation etc.? Do you think novel writing, poetry, film
| reviews etc. cannot keep up in the same way?
| t-3 wrote:
| I'm absolutely positive that the vast majority of fiction is
| or will soon be written by LLM. Will it be high-quality? Will
| it be loved and remembered by generations to come? Probably
| not. Will it make money? Probably more than before on average
| as the author's effort is reduced to writing outlines and
| prompts, and editing the generated-in-seconds output, rather
| than months-years of doing the writing themselves.
| PeterisP wrote:
| I think that novel writing and reviews are types of writing
| where potentially AI should eventually surpass human writers,
| because they have the potential to replace content skillfully
| tailored to be liked by many people with content that's
| tailored (perhaps less skillfully) explicitly for a specific
| very, very, very narrow niche of exactly you and all the
| things that happen to work for your particular biases.
|
| There seems to be an upcoming wave of adult content products
| (once again, being on the bleeding edge users of new
| abilities) based on this principle, as hitting very specific
| niches/kinks/fetishes can be quite effective in that
| business, but it should then move on to romance novels and
| pulp fiction and then, over time, most other genres.
|
| Similarly, good pedagogy, curriculum design and educational
| content development is all about accurately modeling which
| exact bits of the content the target audience will/won't
| know, and explaining the gaps with analogies and context that
| will work for them (for example, when adapting a textbook for
| a different country, translation is not sufficient; you'd
| also need to adapt the content). In that regard, if AI models
| can make _personalized_ technical writing, then that can be
| more effective than the best technical writing the most
| skilled person can make addressed to a broader audience.
| dustingetz wrote:
| > If you write regularly and you're not using AI, you simply
| cannot keep up with the competition. You're out.
|
| What? No! Content volume only matters in stupid contests like
| VC app marketing grifts or political disinformation ops where
| the content isn't even meant to be read, it's an excuse for a
| headline. I personally write all my startup's marketing
| content, quality is exquisite and due to this our brand is
| becoming a juggernaut
| GrumpyNl wrote:
| response from AI on this: I completely understand where you're
| coming from. The increasing reliance on AI in writing does
| raise important questions about authenticity and connection.
| There's something uniquely human in knowing that the words
| you're reading come from someone's personal thoughts,
| experiences, and emotions--even if flawed. AI-generated
| content, while efficient and often well-written, lacks that
| deeper layer of humanity, the imperfections, and the creative
| struggle that gives writing its soul.
|
| It's easy to feel disillusioned when you know AI is shaping so
| much of the content around us. Writing used to be a deeply
| personal exchange, but now, it can feel mechanical, like it's
| losing its essence. The pressure to keep up with AI can be
| overwhelming for human writers, leading to this shift in
| content creation.
|
| At the same time, it's worth considering that the human element
| still exists and will always matter--whether in long-form
| journalism, creative fiction, or even personal blogs. There are
| people out there who write for the love of it, for the
| connection it fosters, and for the need to express something
| uniquely theirs. While the presence of AI is unavoidable, the
| appreciation for genuine human insight and emotion will never
| go away.
|
| Maybe the answer lies in seeking out and cherishing those
| authentic voices. While AI-generated writing will continue to
| grow, the hunger for human storytelling and connection will
| persist too. It's about finding balance in this new reality
| and, when necessary, looking back to the richness of past
| writings, as you mentioned. While it may seem like a loss in
| some ways, it could also be a call to be more intentional in
| what we read and who we trust to deliver those words.
| sandworm101 wrote:
| >> cannot trust anything that has been written in the past 2
| years or so and up until the day that I die.
|
| You never should have. Large amounts of work, even stuff by
| major authors, is ghostwritten. I was talking to someone about
| Taylor Swift recently. They thought that she wrote all her
| songs. I commented that one cannot really know that, that the
| entertainment industry is very going at generating seemingly
| "authentic" product at a rapid pace. My colleague looked at me
| like I had just killed a small animal. The idea that TS was
| "genuine" was a cornerstone of their fandom, and my suggestion
| had attacked that love. If you love music or film, don't dig
| too deep. It is all a factory. That AI is now part of that
| factory doesn't change much for me.
|
| Maybe my opinion would change if I saw something AI-generated
| with even a hint of artistic relevance. I've seen cool pictures
| and passable prose, but nothing so far with actual meaning,
| nothing worthy of my time.
| WalterBright wrote:
| Watch the movie "The Wrecking Crew" about how a group of
| studio musicians in the 1970s were responsible for the albums
| of quite a few diverse "bands". Many bands had to then learn
| to play their own songs so they could go on tour.
| selimthegrim wrote:
| Or the SCTV skit about Michael McDonald backing seemingly
| everything at one point
| nyarlathotep_ wrote:
| > You never should have. Large amounts of work, even stuff by
| major authors, is ghostwritten.
|
| I'm reminded of 'Under The Silver Lake' with this reference.
| Strange film, but that plotline stuck with me.
| davidhaymond wrote:
| While I do enjoy some popular genres, I'm all too aware of
| the massive industry behind it all. I believe that most of
| humanity's greatest works of art were created not for
| commercial interests but rather for the pure joy of creation,
| of human expression. This can be found in any genre if you
| look hard enough, but it's no accident that the music I find
| the most rewarding is classical music: Intellect, emotion,
| spirit, and narrative dreamed into existence by one person
| and then brought to life by other artists so we can share in
| its beauty.
|
| I think music brings about a connection between the
| composers, lyricists, performers, and listeners. Music lets
| us participate in something uniquely human. Replacing any of
| the human participants with AI greatly diminishes or
| eliminates its value in my eyes.
| advael wrote:
| In trying to write a book, it makes little sense to try to
| "compete" on speed or volume of output. There were already vast
| disparities in that among people who write, and people whose
| aim was to express themselves or contribute something of
| importance to people's lives, or the body of creative work in
| the world, have little reason to value quantity over quality.
| Probably if there's a significant correlation with volume of
| output, it's in earnings, and that seems both somewhat tenuous
| and like something that's addressable by changes in incentives,
| which seem necessary for a lot of things. Computers being able
| to do dumb stuff at massive scale should be viewed as finding
| vulnerabilities in the metrics this allows it to become trivial
| to game, and it's baffling whenever people say "Well clearly
| we're going to keep all our metrics the same and this will ruin
| everything." Of course, in cases where we are doing that, we
| should stop (For example, we should probably act to
| significantly curb price and wage discrimination, though that's
| more like a return to form of previous regulatory standards)
|
| As a creator of any kind, I think that simply relying on LLMs
| to expand your output via straightforward uses of widely
| available tools is inevitably going to lead to regression to
| the mean in terms of creativity. I'm open to the idea, however,
| that there could be more creative uses of the things that some
| people will bother to do. Feedback loops they can create that
| somehow don't stifle their own creativity in favor of mimicking
| a statistical model, ways of incorporating their own
| ingredients into these food processors of information. I don't
| see a ton of finished work that seems to do this, but I see
| hints that some people are thinking this way, and they might
| come up with some cool stuff. It's a relatively newly adopted
| technology, and computer-generated art of various kinds usually
| separates into "efficiency" (which reads as low quality) in
| mimicking existing forms, and new forms which are uniquely
| possible with the new technology. I think plenty of people are
| just going to keep writing without significant input from LLMs,
| because while writer's block is a famous ailment, many writers
| are not primarily limited by their speed in producing more
| words. Like if you count comments on various sites and
| discussions with other people, I write thousands of words
| unassisted most days
|
| This kind of gets to the crux of why these things are useful in
| some contexts, but really not up to snuff with what's being
| claimed about them. The most compelling use cases I've seen
| boil down to some form of fitting some information into a
| format that's more contextually appropriate, which can be great
| for highly structured formatting requirements and dealing with
| situations which are already subject to high protocol of some
| kind, so long as some error is tolerated. For things for which
| conveying your ideas with high fidelity, emphasizing your own
| narrative voice or nuanced thoughts on a subject, or standing
| behind the factual claims made by the piece are not as
| important. As much as their more strident proponents want to
| claim that humans are merely learning things by aggregating and
| remixing them in the same sense as these models do, this reads
| as the same sort of wishful thinking about technology that led
| people to believe that brains should work like clockwork or
| transistors at various other points in time at best, and
| honestly this most often seems to be trotted out as the kind of
| bad-faith analogy tech lawyers tend to use when trying to claim
| that the use of [exciting new computer thing] means something
| they are doing can't be a crime
|
| So basically, I think rumors of the death of hand-written prose
| are, at least at present, greatly exaggerated, though I share
| the concern that it's going to be _much harder_ to filter out
| spam from the genuine article, so what it 's really going to
| ruin is _most automated search techniques_. The comparison to
| "low-background steel" seems apt, but analogies about how
| "people don't handwash their clothes as much anymore" kind of
| don't apply to things like books
| wengo314 wrote:
| i think the problem started when quantity became more important
| over quality.
|
| you could totally compete on quality merit, but nowadays the
| volume of output (and frequency) is what is prioritized.
| yusufaytas wrote:
| I totally understand your frustration. We started writing our
| book long before(2022) AI became mainstream, and when we
| finally published it on May 2024, all we hear now is people
| asking if it's just AI-generated content. It's sad to see how
| quickly the conversation shifts away from the human touch in
| writing.
| eleveriven wrote:
| I can imagine how disheartening that must be
| eleveriven wrote:
| Maybe, over time, there will also be a renewed appreciation for
| authenticity
| ChrisMarshallNY wrote:
| I don't use AI in my own blogging, but then, I don't
| particularly care whether or not someone reads my stuff (the
| ones that do, seem to like it).
|
| I have used it, from time to time, to help polish stuff like
| marketing fluff for the App Store, but I'd never use it
| verbatim. I generally use it to polish a paragraph or sentence.
|
| But AI hasn't suddenly injected untrustworthy prose into the
| world. We've been doing that, for hundreds of years.
| layer8 wrote:
| > marketing fluff for the App Store
|
| If it's fluff, why do you put it there? As an App Store user,
| I'm not interested in reading marketing fluff.
| ChrisMarshallNY wrote:
| Because it's required?
|
| I've released over 20 apps, over the years, and have
| learned to add some basic stuff to each app.
|
| Truth be told, it was really sort of a self-deprecating
| joke.
|
| I'm not a marketer, so I don't have the training to write
| the kind of stuff users expect on the Store, and could use
| all the help I can get
|
| Over the years, I've learned that owning my limitations,
| can be even more important, than knowing my strengths.
| layer8 wrote:
| My point was that as a user I expect substance, not
| fluff. Some app descriptions actually provide that, but
| many don't.
| ChrisMarshallNY wrote:
| Well, you can always check out my stuff, and see what you
| think. Easy to find.
| notarobot123 wrote:
| I have my reservations about AI but it's hard not to notice
| that LLMs are effectively a Gutenberg level event in the
| history of written communication. They mark a fundamental
| shift in our capacity to produce persuasive text.
|
| The ability to speak the same language or understand cultural
| norms are no longer barriers to publishing pretty much
| anything. You don't have to understand a topic or the jargon
| of any given domain. You don't have to learn the expected
| style or conventions an author might normally use in that
| context. You just have to know how to write a good prompt.
|
| There's bound to be a significant increase in the quantity as
| well as the quality of untrustworthy published text because
| of these new capacities to produce it. It's not the
| phenomenon but the scale of production that changes the game
| here.
| munksbeer wrote:
| > but it's still depressing, to be honest.
|
| Cheer up. Things usually get better, we just don't notice it
| because we're so consumed with extrapolating the negatives.
| Humans are funny like that.
| vouaobrasil wrote:
| I actually disagree with that. People are so busy hoping
| things will get better, and creating little bubbles for
| themselves to hide away from what human beings as a whole are
| doing, that they don't realize things are getting worse.
| Technology constantly makes things worse. Cheering up is a
| good self-help strategy but not a good strategy if you want
| to contribute to making the world actually a better place.
| munksbeer wrote:
| >Technology constantly makes things worse.
|
| And it also makes things a lot better. Overall we lead
| better lives than people just 50 years ago, never mind
| centuries.
| vouaobrasil wrote:
| No way. Life 50 years ago was better for MANY. Maybe that
| would be true for 200. But 50 years ago was the 70s.
| There were far fewer people, and the world was not
| starting to suffer from climate change. Tell your
| statement to any climate refugee, and ask them whether
| they'd like to live now or back then.
|
| AND, we had fewer computers and life was not so hectic.
| YES, some things have gotten better, but on average? It's
| arguable.
| munksbeer wrote:
| I think you're demonstrating the point I was trying to
| make. You're falling for a very prevalent narrative that
| just isn't true.
|
| Fact: Life has improved for the majority of people on the
| planet in the last 50 years.
| vouaobrasil wrote:
| Not a fact. An opinion.
| samcat116 wrote:
| There are an incredible amount of ways that life is
| better today than 50 years ago. For starters the life
| expectancy has almost universally improved.
| vouaobrasil wrote:
| Not necessarily a good thing if overall life experience
| is worse.
| vundercind wrote:
| It's fairly common for (at least) _specific things_ to get
| worse and then never improve again.
| neta1337 wrote:
| Why do you have to use it? I don't get it. If you write your
| own book, you don't compete with anyone. If anyone finished The
| Winds of Winter for R.R.Martin using AI, nobody would bat an
| eye, obviously, as we already experienced how bad a soulless
| story is that drifts too far away from what the author had
| built in his mind.
| verisimi wrote:
| You're lucky. I consider it a possibility that older works
| (even ancient writings) are retrojected into the historical
| record.
| datavirtue wrote:
| It's either good or it isn't. It either tracks or it doesn't.
| No need to befuddle your thoughts over some perceived slight.
| vouaobrasil wrote:
| > If you write regularly and you're not using AI, you simply
| cannot keep up with the competition.
|
| Wrong. I am a professional writer and I never use AI. I hate
| AI.
| tim333 wrote:
| I'm not sure it's always that hard to tell the AI stuff from
| the non AI. Comments on HN and on twitter from people you
| follow are pretty much non AI, also people on youtube where you
| an see the actual human talking.
|
| On the other hand there's a lot on youtube for example that is
| obviously ai - weird writing and speaking style and I'll only
| watch those if I'm really interested in the subject matter and
| there aren't alternatives.
|
| Maybe people will gravitate more to the stuff like PaulG or
| Elon Musk on twitter or HN and less to blog style content?
| wickedsight wrote:
| With a friend, I created a website about a race track in the
| past two years. I definitely used AI to speed up some of
| writing. One thing I used it for was a track guide, describing
| every corner and how to drive it. It was surprisingly accurate,
| most of the time. The other times though, it would drive the
| track backwards, completely hallucinate the instructions or
| link corners that are in different parts of the track.
|
| I spent a lot of time analyzing the track myself and fixed
| everything to the point that experienced drivers agreed with my
| description. If I hadn't done that, most visitors would
| probably still accept our guide as the truth, because they
| wouldn't know any better.
|
| We know that not everyone cares about whether what they put on
| the internet is correct and AI allows those people to create
| content at an unprecedented pace. I fully agree with your
| sentiment.
| jshdhehe wrote:
| AI only helps writing in so far as checking/suggesting edits.
| Most people can write better than AI (more engaging). AI cant
| tell a human story, have real tacit experience.
|
| So it is like saying my champaigne bottle cant keep up with the
| tap water.
| Roark66 wrote:
| >The most depressing thing for me is the feeling that I simply
| cannot trust anything that has been written in the past 2 years
| or so and up until the day that I die
|
| Do you think AI has changed that in any way? I remember the sea
| of excrement overtaking genuine human written content on the
| Internet around mid 2010s. It is around that time when Google
| stopped pretending they are a search company and focused on
| their primary business of advertising.
|
| Before, at least they were trying to downrank all the crap
| "word aggregators". After, they stopped caring at all.
|
| AI gives even better tools to page rank. Detection of AI
| generated content is not that bad.
|
| So why don't we have "a new Google" emerge? Simple, because of
| the monopolistic practices Google did to make the barrier to
| entry huge. First, 99% of the content people want to search for
| is behind a login wall (Facebook, Instagram, twitter, YouTube),
| second almost all CDNs now implement "verify you are human" by
| default. Third, no one links to other sites. Ever! These 3
| things mean a new Google is essentially impossible. Even duck
| duck go has thrown the towel and subscribed to Bing results.
|
| It has nothing to do with AI, and everything to do with Google.
| In fact AI might give us the tools to better fight Google.
| rich_sasha wrote:
| Some great grand ancestor of mine was a civil servant, a
| great achievement given his peasant background. The single
| skill that enabled it was the knowledge of calligraphy. He
| went to school and wrote nicely and that was sufficient.
|
| The flip side was, calligraphy was sufficient evidence for
| both his education to whoever hired him, and for a recipient
| of a document, of its official nature. Calligraphy itself or
| course didn't make him efficient or smart or fair.
|
| That's long gone of course, but we had similar heuristics. I
| am reminded of the Reddit story about an AI-generated
| mushroom atlas that had factual errors and lead to someone
| getting poisoned. We can no longer assume that a book is
| legit simply because it looks legit. The story of course is
| from reddit, so probably untrue, but it doesn't matter - it
| totally could be true.
|
| LLMs are fantastic at breaking our heuristics as to what is
| and isn't legit, but not as good at being right.
| matwood wrote:
| > We can no longer assume that a book is legit simply
| because it looks legit.
|
| The problem is that this has been an issue for a long time.
| My first interactions with the internet in the 90s came
| along with the warning "don't automatically trust what you
| read on the internet".
|
| I was speaking to a librarian the other day who teaches
| incoming freshman how to use LLMs. What was shocking to me
| is that the librarian said a majority of the kids trust
| what the computer says by default. Not just LLMs, but
| generally what they read. That's such a huge shift from my
| generation. Maybe LLM education will shift people back
| toward skepticism - unlikely, but I can hope.
| mrweasel wrote:
| One of the issues today is the volume of content
| produced, and that journalism and professional writing is
| dying. LLMs produce large amounts of "good enough"
| quality to make a profit.
|
| In the 90s we could reasonably trust that that the major
| news sites and corporate websites was true, while random
| forums required a bit more critical reading. Today even
| formerly trusted sites may be using LLMs to generate
| content along with automatic translations.
|
| I wouldn't necessarily put the blame on LLMs, this just
| make it easier. The trolls and spammers was always there,
| now they just have a more powerful tool. The commercial
| sites now have a tool they don't understand, which they
| apply liberally, because it reduces cost, or their staff
| use it, to get out of work, keep up with deadlines or to
| cover up incompetence. So, not the fault of the LLMs, but
| their use is worsening existing trends.
| duskwuff wrote:
| > Today even formerly trusted sites may be using LLMs to
| generate content along with automatic translations.
|
| Yep - or they're commingling promotional content with
| their journalism, _a la_ Forbes / CNN / CNET / About.com
| / etc. There's still quality content online but it's
| getting harder to find under the tidal wave of garbage.
| honzabe wrote:
| > I was speaking to a librarian the other day who teaches
| incoming freshman how to use LLMs. What was shocking to
| me is that the librarian said a majority of the kids
| trust what the computer says by default. Not just LLMs,
| but generally what they read. That's such a huge shift
| from my generation.
|
| I think that previous generations were not any different.
| For most people, trusting is the default mode and you
| need to learn to distrust a source. I know many people
| who still have not learned that about the internet in
| general. These are often older people. They believe
| insane things just because there exists a nicely looking
| website claiming that thing.
| DowagerDave wrote:
| Not sure of the context here is for "previous generation"
| but I've been around since early in the transition from
| university/military network to public network, and the
| reality was the internet just wasn't that big, and it was
| primarily made up of people who looked, acted and valued
| the same things.
|
| Now it's not even the website of undetermined providence
| that is believed; positions are established based on just
| the headline, shared 2nd or 3rd hand!
| SllX wrote:
| > The problem is that this has been an issue for a long
| time. My first interactions with the internet in the 90s
| came along with the warning "don't automatically trust
| what you read on the internet".
|
| I received the same warnings, actually it was more like
| "don't trust everything you read on the internet", but it
| quickly became apparent that the last three words were
| redundant, and could have been rephrased more accurately
| as "don't trust everything you read and hear and see".
|
| Our parents and teachers were living with their own
| fallacious assumptions and we just didn't know it at the
| time, but most information is very pliable. If you can't
| change what someone sees, then you can probably change
| _how_ they see it.
| DowagerDave wrote:
| I feel like there was also a brief window where "many
| amateur eyes in public" trumped "private experts";
| wikipedia, open source software, etc. This doesn't seem
| the case in a hyper-partisan and bifurcated society where
| there is little trust.
| sevensor wrote:
| > Some great grand ancestor of mine was a civil servant, a
| great achievement given his peasant background. The single
| skill that enabled it was the knowledge of calligraphy. He
| went to school and wrote nicely and that was sufficient.
|
| Similar story! Family lore has it that he was from a
| farming family of modest means, but he was hired to write
| insurance policies because of his beautiful handwriting,
| and this was a big step up in the world.
| newswasboring wrote:
| > The story of course is from reddit, so probably untrue,
| but it doesn't matter - it totally could be true.
|
| What?! Someone just made up something and then got mad at
| it. This is specially weird when you even acknowledge its a
| made up story. If we start evaluating new things like this
| nothing will ever progress.
| llm_trw wrote:
| >That's long gone of course, but we had similar heuristics.
|
| To quote someone about this:
|
| >>All that is solid melts into air, all that is holy is
| profaned, and man is at last compelled to face with sober
| senses his real conditions of life.
|
| A book looking legit, a paper being peer reviewed, an
| expert saying something, none of those things were _ever_
| good heuristics. It's just that it was the done thing. Now
| we have to face the fact that our heuristics are obviously
| broken and we have to start thinking about every topic.
|
| To quote someone else about this:
|
| >>Most people would rather die than think.
|
| Which explains neatly the politics of the last 10 years.
| hprotagonist wrote:
| > To quote someone about this: >>All that is solid melts
| into air, all that is holy is profaned, and man is at
| last compelled to face with sober senses his real
| conditions of life.
|
| So, same as it ever was?
|
| _Smoke, nothing but smoke. [That's what the Quester
| says.] There's nothing to anything--it's all smoke.
| What's there to show for a lifetime of work, a lifetime
| of working your fingers to the bone? One generation goes
| its way, the next one arrives, but nothing changes--it's
| business as usual for old planet earth. The sun comes up
| and the sun goes down, then does it again, and again--the
| same old round. The wind blows south, the wind blows
| north. Around and around and around it blows, blowing
| this way, then that--the whirling, erratic wind. All the
| rivers flow into the sea, but the sea never fills up. The
| rivers keep flowing to the same old place, and then start
| all over and do it again. Everything's boring, utterly
| boring-- no one can find any meaning in it. Boring to the
| eye, boring to the ear. What was will be again, what
| happened will happen again. There's nothing new on this
| earth. Year after year it's the same old thing. Does
| someone call out, "Hey, this is new"? Don't get excited--
| it's the same old story. Nobody remembers what happened
| yesterday. And the things that will happen tomorrow?
| Nobody'll remember them either. Don't count on being
| remembered._
|
| c. 450BC
| llm_trw wrote:
| One is a complaint that everything is constantly
| changing, the other that nothing ever changes. I don't
| think you could misunderstand what either is trying to
| say harder if you tried.
| hprotagonist wrote:
| "everything is constantly changing!" is the thing that
| never changes.
| llm_trw wrote:
| You sound like a poorly trained gpt2 model.
| wwweston wrote:
| Culd be my KJV upbringing talking, but personally I think
| there's an informative quality to calling it "vanity"
| over smoke.
|
| And there's more reasons not to simply compare the modern
| challenges of image and media with the ancient grappling
| with impermanence. Tech may only truly change the human
| condition rarely, but it frequently magnifies some aspect
| of it, sometimes so much that the quantitative change
| becomes a qualitative one.
|
| And in this case, what we're talking about isn't just
| impermanence and mortality and meaning as the
| preacher/quester is. We'd be _lucky_ if it 's business as
| usual for old planet earth, but we've managed to magnify
| our ability to impact our environment with tech to the
| point where winds, rivers, seas, and other things may
| well change drastically. And as for "smoke", it's one
| thing if we're dust in the wind, but when we're dust we
| can trust, that enables continuity and cooperation.
| There's always been reasons for distrust, but with media
| scale, the liabilities are magnified, and now we've
| automated some of them.
|
| The realities of human nature that are the seeds of the
| human condition are old. But some of the technical and
| social machinery we have made to magnify things is new,
| and we can and will see new problems.
| hprotagonist wrote:
| 'hbl (hevel)' has the primary sense of vapor, or mist --
| a transient thing, not a meaningless or purposeless one.
| failbuffer wrote:
| Heuristics don't have to be perfect to be useful so long
| as they improve the efficacy of our attentions. Once that
| breaks down society must follow because thinking about
| every topic is intractable.
| ziml77 wrote:
| The mushroom thing is almost certainly true. There's tons
| of trash AI generated foraging books being published to
| Amazon. Atomic Shrimp has a video on it.
| bad_user wrote:
| You're attributing too much to Google.
|
| Bots are now blocked because they've been abusive. When you
| host content on the internet, it's not fun to have bots bring
| your server down or inflate your bandwidth price. Google's
| bot is actually quite well-behaved. The other problem has
| been the recent trend in AI, and I can understand blockers
| being put in place, since AI is essentially plagiarizing
| content without attribution. But I'd blame OpenAI more at
| this point.
|
| I also don't think you can blame Google for the
| centralization behind closed gardens. Or for why people no
| longer link to other websites. That's ridiculous.
|
| And you should be attributing them the fact that the web is
| still alive.
| ninetyninenine wrote:
| > I remember the sea of excrement overtaking genuine human
| written content on the Internet around mid 2010s.
|
| I mean the AI is trained and modeled on this excrement. It
| makes sense. As much as people think AI content is raw
| garbage... they don't realize that they are staring into a
| mirror.
| dennis_jeeves2 wrote:
| >I remember the sea of excrement overtaking genuine human
| written content on the Internet around mid 2010s.
|
| Things have not changed much really. This was true since the
| dawn of man-kind (and woman-kind from the man-kind rib of
| course) even before there writings was invented, in the form
| of gossip.
|
| The internet/AI now carries on the torch of our ancestral
| inner calling, lol.
| TheOtherHobbes wrote:
| Google didn't change it, it embodied it. The problem isn't
| AI, it's the pervasive culture of PR and advertising which
| appeared in the 50s and eventually consumed its host.
|
| Western industrial culture was based on substance - getting
| real shit done. There was always a lot of scammery around it,
| but the bedrock goal was to make physical things happen -
| build things, invent things, deliver things, innovate.
|
| PR and ad culture was there to support that. The goal was to
| change values and behaviours to get people to Buy More Stuff.
| OK.
|
| Then around the time the Internet arrived, industry was off-
| shored, and the culture started to become one of appearance
| and performance, not of substance and action.
|
| SEO, adtech, social media, web framework soup, management
| fads - they're all about impression management and popularity
| games, not about underlying fundamentals.
|
| This is very obvious on social media in the arts. The
| qualification for a creative career used to be substantial
| talent and ability. Now there are thousands of people making
| careers out of _performing the lifestyle_ of being a creative
| person. Their ability to do the basics - draw, write, compose
| - is very limited. Worse, they lack the ability to imagine
| anything fresh or original - which is where the real
| substance is in art.
|
| Worse than that, they don't know what they don't know,
| because they've been trained to be superficial in a
| superficial culture.
|
| It's just as bad in engineering, where it has become more
| important to create the illusion of work being done, than to
| do the work. (Looking at you, Boeing. And also Agile...)
|
| You literally make more money doing this. A lot more.
|
| So AI isn't really a tool for creating substance. It's a tool
| for automating impression management. You can create the
| _impression_ of getting a lot of work done. Or the impression
| of a well-written cover letter. Or of a genre novel, techno
| track, whatever.
|
| AI might one day be a tool for creating substance. But at the
| moment it's reflecting and enabling a Potemkin busy-culture
| of recycled facades and appearances that has almost nothing
| real behind it.
|
| Unfortunately it's quite good at that.
|
| But the problem is the culture, not the technology. And it's
| been a problem for a long time.
| techdmn wrote:
| Thank you, you've stated this all very clearly. I've been
| thinking about this in terms of "doing work", where you
| care about the results, and "performing work", where you
| care about how you are evaluated. I know someone who works
| in a lab, and pointed out that some of the equipment being
| used was out of spec and under-serviced to the point that
| it was essentially a random number generator. Caring about
| this is "doing work". However, pointing it out made that
| person the enemy of the greater cohort that was "performing
| work". The results were not important to them, their
| metrics about units of work completed was. I see this
| pattern frequently. And it's hard to say those "performing
| work" are wrong. "Performing" is rewarded, "doing" is
| punished - Perhaps right to the top, as many companies are
| involved in a public performance designed to affect the
| short-term stock price.
| rjbwork wrote:
| Yeah. It's like our entire society has been turned into a
| Goodhart's Law based simulacrum of a productive society.
|
| I mean, here it's late morning and I'm commenting on
| hacker news. And getting paid for it.
| Eisenstein wrote:
| Workers are many times more efficient than they were in
| the 50s or 70s or 80s or 90s. Where are our extra
| vacation days? Why does the worker have to make up for
| the efficiency with more work while other people take the
| gains?
|
| Do you seriously think that the purpose of life is to
| work all the time most efficiently? Enjoy your lazy job
| and bask in the ability for human society to be
| productive without everyone breaking their backs all the
| time.
| DowagerDave wrote:
| focusing on efficiency is very depressing. Machines seek
| efficiency. Process can be efficient. Assembly lines are
| efficient. It's all about optimization and quickly
| focuses on trimming "waste" and packing as much as
| possible into the smallest space. It removes all that's
| amazing about human life.
|
| I much prefer a focus on effectiveness (or impact or
| outcomes, or alternatives). It plays to human strengths,
| is far less prescriptive and is way more fun!
|
| Some of the most effective actions are incredibly
| inefficient; sometimes inefficiency is a feature. I
| received a letter mail thank-you card from our CEO a few
| years ago. The card has an approx. value of zero dollars,
| but I know it took the CEO 5-10 mins to write a personal
| note and sign it, and that she did this dozens of times.
| The signal here is incredibly valuable! If she used a
| signing machine, or AI to record a deep fake message I
| would know or quickly learn, and the value would go
| negative - all for the sake of efficiency.
| BriggyDwiggs42 wrote:
| I think this is a big part of it. Workers would feel a
| lot more motivated to do more than just perform if they
| were given what they know they're owed for their
| contribution.
| fsndz wrote:
| crazy that this is true and GDP just keeps increasing
| anyway
| trilobyte wrote:
| This is a pretty clear summary of a real problem in most
| work environments. I have some thoughts about why, but
| I'm holding onto your articulation to ruminate on in the
| future.
| fsndz wrote:
| "Doing work" vs. "performing work": the epitome of this
| is consulting. Companies pay huge sums of money to
| consultants that often spend most of their time
| "performing work", doing beautiful slides even if the
| content and reasoning is superficial or even dubious,
| creating reports that are just marketing bullshit,
| framing the current mission in a way that makes it
| possible to capture additional projects and bill the
| client even more. Almost everything is bullshit.
| 1dom wrote:
| I like this take on modern tech motivations.
|
| The thing that I struggle with is I agree with it, but I
| also get a lot of value in using AI to make me more
| productive - to me, it feels like it lets me focus on
| producing substance and actions, freeing me up from having
| to some tedious things in some tedious ways. Without
| getting into the debate about if it's productive overall,
| there are certain tasks which it feels irrefutably fast and
| effective at (e.g. writing tests).
|
| I do agree with the missing substance with modern
| generative AI: everyone notices when it's producing things
| in that uncanny valley, and if no human is there to edit
| that, it makes people uncomfortable.
|
| The only way I can reconcile the almost existential
| discomfort of AI against my actual day-to-day generally-
| positive experience with AI is to accept that AI in itself
| isn't the problem. Ultimately, it is an info tool, and
| human nature makes people spam garbage for clicks with it.
|
| People will do the equivalent of spam garbage for clicks
| with any new modern thing, unfortunately.
|
| Getting the most out of latest information of a society has
| probably always been a cat and mouse game of trying to find
| the areas where the spam-garbage-for-clicks people haven't
| outnumbered use-AI-to-facilitate-substance people, like
| here, hopefully.
| skydhash wrote:
| Just one nitpick. The thing about test is that it's
| repetitive enough to be automated (in a deterministic
| way) or abstracted into a framework. You don't need an AI
| to generate it.
| closeparen wrote:
| While I occasionally have the pleasure of creating or
| working with a test suite that's interesting and creative
| relative to the code under test, the vast majority of
| unit tests by volume are slop. Does it call the mock?
| Does it use the return value? Does "if err != nil {
| return err }" in fact stop and return the error?
|
| This stuff is a perfect candidate for LLM generation.
| DowagerDave wrote:
| AI seems really good at producing middling content, and
| if you make your living writing mediocre training
| courses, or marketing collateral, or code, or tests
| you're in big trouble. I question how valuable this work
| is though, so are we increasing productivity by utilizing
| AI, or just getting efficient at a suboptimal game? I for
| one just refuse to play.
| deephoneybear wrote:
| Echoing other comments in gratitude for this very clear
| articulation of feelings I share, but have not manifested
| so well. Just wanted to add two connected opinions that
| round out this view.
|
| 1) This consuming of the host is only possible on the one
| hand because the host has grown so strong, that is the
| modern global industrial economy is so efficient. The doing
| stuff side of the equation is truly amazing and getting
| better (some real work gets done either by accident or
| those who have not-succumbed to PR and ad culture), and
| even this drop of "real work" produces enough material
| wealth to support (at least a lot of) humanity. We really
| do live in a post scarcity world from a production
| perspective, we just have profound distribution and
| allocation problems.
|
| 2) Radical wealth inequality profoundly exacerbates the
| problem of PR and ad culture. If everyone has some wealth
| doing things that help many people live more comfortably is
| a great way to become wealthy. But if very few people have
| wealth, then doing a venture capital FOMO hustle on the
| wealthy is anyone's best ROI. Radical wealth inequality
| eventually breaks all the good aspects of capitalist/market
| economies.
| closeparen wrote:
| You can outfit an adult life with all of the useful
| manufactured objects that would reasonably improve it for a
| not-very-impressive sum. Beyond that it's just clutter
| (going for quantity) or moving into the
| lifestyle/taste/social-signaling domain anyway (going for
| quality). There is just not an unlimited amount of alpha in
| making physical things. The social/thought/experiential
| domain is a much bigger opportunity.
| llm_trw wrote:
| >Western industrial culture was based on substance -
| getting real shit done. There was always a lot of scammery
| around it, but the bedrock goal was to make physical things
| happen - build things, invent things, deliver things,
| innovate.
|
| For a very short period between 1945 to 1980 while the
| generation who remembered the great depression and WWII was
| in charge. It's been longer since that's not been the case.
| And it wasn't the case for most of history before then.
| hgomersall wrote:
| I'm not sure it isn't a reflection of the rise of
| neoliberalism.
| forgetfreeman wrote:
| Who says these are mutually exclusive? Hard times make
| strong men, strong men make good times, good times make
| weak men, weak men make hard times.
| rexpop wrote:
| So weak men make hard men? You're welcome.
|
| How graciously the womenfolk leave us to our tragic
| autopoiesis.
| rexpop wrote:
| > draw, write, compose
|
| The primacy of these artforms is subjective, and there's no
| accounting for taste.
| Terr_ wrote:
| > You can create the impression of getting a lot of work
| done. Or the impression of a well-written cover letter. Or
| of a genre novel, techno track, whatever.
|
| Yeah, one of their most "effective" uses is to
| _counterfeit_ signals that we have relied on--wisely or not
| --to estimate deeper practical truths. Stuff like "did
| this person invest some time into this" or "does this
| person have knowledge of a field" or "can they even think
| straight."
|
| Oh, sure, qualitatively speaking it's not new, people could
| have used form-letters, hired a ghostwriter, or simply sank
| time and effort into a good lie... but the quantitative
| change of "Bot, write something that appears heartfelt and
| clever" is huge.
|
| In some cases that's devastating--like trying to avert
| botting/sockpuppet operations online--and in others we
| might have to cope by saying stuff like: "Fuck it, personal
| essays and cover letters are meaningless now, just put down
| the raw bullet-points."
| sesm wrote:
| Very well written. I assume you haven't read "Simulacra and
| Simulation" by Jean Baudrillard, that's why your
| description is so authentic and is more convincing then
| just referring to the book. Saved this post for future
| reference.
| fsndz wrote:
| this is a top notch comment !
| sangnoir wrote:
| > Western industrial culture was based on substance -
| getting real shit done.
|
| And what did that get us? Radium poisoning and
| microplastics in every organ of virtually all animals
| living within thousands of miles of humans. Our reach has
| always exceeded our grasp.
| Paddywack wrote:
| > So AI isn't really a tool for creating substance. It's a
| tool for automating impression management.
|
| I was speaking to a design lecturer the other evening. His
| fascinating insight was that:
|
| 1. The best designers get so much fulfilment out of
| practicing the craft of design.
|
| 2. With the advent of low cost "impression making", the
| role has changed to one of "review a lot of mediocre
| outputs and pick the least crap one"
|
| 3. This is robbing people of the pleasure and reward
| associated with craftsmanship.
|
| I have noted this is applicable so many other crafts, and
| it's really sad!
|
| Edited afterthought... Is craftsmanship being replaced by
| "clickandreviewmanship"?
| edgarvaldes wrote:
| >Do you think AI has changed that in any way? I
|
| I see this type of reasoning in all the AI threads. And yes,
| I think this time is different.
| hermitcrab wrote:
| >AI gives even better tools to page rank. Detection of AI
| generated content is not that bad.
|
| It is an arms race between the people generating crap (for
| various nefarious purposes) and those trying to separate find
| useful content amongst the ever growing pile of crap. And it
| seems to me it is so much easier to generate crap, that I
| can't see how the good guys can possibly win.
| InDubioProRubio wrote:
| Just dont be average and your fine.
| akudha wrote:
| I was listening to an interview few months ago (forgot the
| name). He is a prolific reader/writer and has a huge following.
| He mentioned that he _only_ reads books that are at least 50
| years old, so pre 70s. That sounds like a good idea now.
|
| Even ignoring the AI, if you look at the movies and books that
| come out these days, their quality is significantly lower than
| 30-40 years ago (on an average). Maybe people's attention spans
| and taste is to blame, or maybe people just don't have the
| money/time/patience to consume quality work... I do not know.
|
| One thing I know for sure - there is enough high quality
| material written before AI, before article spinners, before MFA
| sites etc. We would need multiple lifetimes to even scratch the
| surface of that body of work. We can ignore mostly everything
| that is published these days and we won't be missing much
| eloisant wrote:
| I'd say it's probably survivor's bias. Bad books from the pre
| 70s are probably forgotten and no longer printed.
|
| Old books that we're still printing and are still talking
| about have stood the test of time. It doesn't mean that are
| no great recent books.
| mvdtnz wrote:
| > if you look at the movies and books that come out these
| days, their quality is significantly lower than 30-40 years
| ago (on an average)
|
| I'm sorry but this is just nonsense.
| alwa wrote:
| Nassim Taleb famously argues that position, in his popular
| work Antifragile and elsewhere. I believe the theory is that
| time serves as a sieve: only works with lasting value can
| remain relevant through the years.
| inkcapmushroom wrote:
| Completely disagree just from my own personal experience as a
| sci-fi reader. Modern day bestseller sci-fi novels fit right
| in with the old classics, and in many ways outshine them. I
| have read many bad obscure sci-fi books published from the
| 50's to today, most of them a dollar at the thrift store.
| There was never a time when writers were perfect and every
| published work was high quality, then or now.
| LeroyRaz wrote:
| Aren't you worried about low quality interviews?!
|
| I only listen to interviews from 50 years ago (interviews
| that have stood the test of time), about books from 100 years
| ago. In fact, how am I reading this article? It's not 2074
| yet?!
| EGreg wrote:
| I have been predicting this from 2016
|
| And I also predict that many responses to you will say "it was
| always that way, nothing changed".
| limit499karma wrote:
| I'll take your statement that your conclusions are based on a
| 'depressed mind' at face value, since it is so self-defeating
| and places little faith in Human abilities. Your assumption
| that a person driven to _write_ will "with a high degree of
| certainty" also mix up their work with a machine assistant can
| only be informed by your own self-assessment (after all how
| could you possibly know the mindset of every creative human out
| there?)
|
| My optimistic and enthusiastic view of AI's role in Human
| development is that it will create selection pressures that
| will release the dormant psychological abilities of the
| species. Undoubtedly, wide-spread appearance of Psi abilities
| will be featured in this adjustment of the human super-organism
| to technologies of its own making.
|
| Machines can't do Psi.
| greenie_beans wrote:
| i know a lot of writers who don't use ai. in fact, i can't
| think of any writers who use it, except a few literary fiction
| writers.
|
| working theory: writers have taste and LLM writing style
| doesn't match the typical taste of a published writer.
| osigurdson wrote:
| AI expansion: take a few bullet points and have ChatGPT expand
| it into several pages of text
|
| AI compression: take pages of text and use ChatGPT to compress
| into a few bullet points
|
| We need to stop being impressed with long documents.
| fennecfoxy wrote:
| The foundations of our education systems are based on rote
| memorisation so I'd probably start there.
| noobermin wrote:
| When you're writing, how are you "missing out" if you're not
| using chatgpt??? I don't even understand how this can be unless
| what you're writing is already unnecessary such that you
| shouldn't need to write it in the first place.
| jwells89 wrote:
| I don't get it either. Writing is not something I need that
| level of assistance with, and I would even say that using
| LLMs to write defeats some significant portion of the point
| of writing -- by using LLMs to write for me I feel that I'm
| no longer expressing myself in the purest sense, because the
| words are not mine and do not exhibit any of my personality,
| tendencies, etc. Even if I were to train an LLM on my style,
| it'd only be a temporal facsimile of middling quality,
| because peoples' styles evolve (sometimes quite rapidly) and
| there's no way to work around all the corner cases that never
| got trained for.
|
| As you say, if the subject is worth being written about,
| there should be no issue and writing will come naturally. If
| it's a struggle, maybe one should step back and figure out
| why that is.
|
| There may some argument for speed, because writing quality
| prose does take time, but then the question becomes a matter
| of quantity vs. quality. Do you want to write high quality
| pieces that people want to read at a slower pace or churn out
| endless volumes of low-substance grey goo "content"?
| dotnet00 wrote:
| LLMs are surprisingly capable editors/brainstorming tools.
| So, you're missing out in that you're being less efficient in
| editing.
|
| Like, you can write a bunch of text, then ask an LLM to
| improve it with minimal changes. Then, you read through its
| output and pick out the improvements you like.
| tourmalinetaco wrote:
| Sure, but Grammarly and similar have existed far before the
| LLM boom.
| dotnet00 wrote:
| That's a fair point, I only very recently found that LLMs
| could actually be useful for editing, and hadn't really
| thought much of using tools for that kind of thing
| previously.
| jayd16 wrote:
| But that's the problem. Unique, quirky mannerisms become
| polished out. Flaws are smoothed and over sharpened.
|
| I'm personally not as gloomy about it as the parent
| comments but I fear it's a trend that pushes towards a
| samey, mass-produced style in all writing.
|
| Eventually there will be a counter culture and backlash to
| it and then equilibrium in quality content but it's
| probably here to stay for anything where cost is a major
| factor.
| dotnet00 wrote:
| Yeah, I suppose that would be an issue for creative
| writing. My focus is mostly on scientific writing, where
| such mannerisms should be less relevant than precision,
| so I didn't consider that aspect of other kinds of
| writing.
| slashdave wrote:
| And I the only one who doesn't even like automatic grammar
| checkers, because they are contributing to a single and
| uniformly bland style of writing? LLMs are just going to
| make this worse.
| _heimdall wrote:
| > Now, I'm not going to criticize anyone that does it, like I
| said, you have to, that's it.
|
| Why do you say people have to do it?
|
| People absolutely can choose not to use LLMs and to instead
| write their own words and thoughts, just like developers can
| simply refuse to build LLM tools, whether its because they have
| safety concerns or because they simply see "AI" in its current
| state as a doomed marketing play that is not worth wasting time
| and resources on. There will always be side effects to making
| those decisions, but its well within everyone's right to make
| them.
| DrillShopper wrote:
| > Why do you say people have to do it?
|
| Gotta eat, yo
| goatlover wrote:
| Somehow people made enough to eat before LLMs became all
| the rage a couple years ago. I suspect people are still
| making enough to eat without having to use LLMs.
| uhtred wrote:
| To be honest I got sick of most new movies, TV shows, music
| even before AI so I will continue to consume media from pre
| 2010 until the day I die and will hope I don't get through it
| all.
|
| Something happened around 2010 and it all got shit. I think
| everyone becoming massively online made global cultural output
| reduce in quality to meet the interests of most people and most
| people have terrible taste.
| edavison1 wrote:
| >If you write regularly and you're not using AI, you simply
| cannot keep up with the competition. You're out.
|
| A very HN-centric view of the world. From my perch in
| journalism and publishing, elite writers absolutely loathe AI
| and almost uniformly agree it sucks. So to my mind the most
| 'competitive' spheres in writing do not use AI at all.
| DrillShopper wrote:
| It doesn't matter how elite you think you are if the
| newspaper, magazine, or publishing company you write for can
| make more money from hiring people at a fraction of your cost
| and having them use AI to match or eclipse your professional
| output.
|
| At some point the competition will be less about "does this
| look like the most skilled human writer wrote this?" and more
| about "did the AI guided by a human for a fraction of the
| cost of a skilled human writer output something acceptably
| good for people to read it between giant ads on our website /
| watch the TTS video on YouTube and sit through the ads and
| sponsors?", and I'm sorry to say, skilled human writers are
| at a distinct disadvantage here because they have
| professional standards and self respect.
| easterncalculus wrote:
| Exactly. Also, if the past few years is any indication, at
| the very least tech journalists in general tend to love to
| use what they hate.
| goatlover wrote:
| So you're saying major media companies are going to
| outsource their writing to people overseas using LLMs?
| There is more to journalism than the writing. There's also
| the investigative part where journalists go and talk to
| people, look into old records, etc.
| edavison1 wrote:
| This has become such a talking point of mine when I'm
| inevitably forced to explain why LLMs can't come for my
| job (yet). People seem baffled by the idea that reporting
| collects novel information about the world which hasn't
| been indexed/ingested at any point because it didn't
| exist before I did the interview or whatever it is.
| lainga wrote:
| People in meatspace are not (in James C. Scott's sense)
| legible to HN's user base, and never will be.
| PeterisP wrote:
| They definitely try to replace part of the people this
| way, starting with the areas where it's the easiest, but
| obviously it will continue to other people as the
| capabilities improve. A big example is sports journalism,
| where lots of venues have game summaries that do not
| involve any human who actually saw the game, but rather
| software embellishing some narrative from the detailed
| referee scoring data. Another example is autotranslation
| of foreign news or rewriting press releases or
| summarizing company financial 'news' - most publishers
| will eagerly skip the labor intensive and thus expensive
| part where journalists go and talk to people, look into
| old records, etc, if they can get away with that.
| edavison1 wrote:
| So is the argument here that the New Yorker can make more
| money from AI slop writing overseen by low-wage overseas
| workers? Isn't that obviously not the case?
|
| Anyway I think I've misunderstood the context in which
| we're using the word 'competition' here. My response was
| about attitudes toward AI from writers at the tip-top of
| the industry rather than profit maxxing/high-volume content
| farm type places.
| fennecfoxy wrote:
| Yes, but what really matters is what and how the general
| public, aka the consumers want to consume.
|
| I can bang on about older games being better all day long but
| it doesn't stop Fortnite from being popular, and somewhat
| rightly so, I suppose.
| jayd16 wrote:
| Sure but no one gets to avoid all but the most elite content.
| I think they're bemoaning the quality of pulp.
| lurking_swe wrote:
| i regularly (at least once a week) spot a typo or grammatical
| issue in a major news story. I see it in the NYTimes on
| occasion. I see it in local news ALL THE TIME. I swear an LLM
| would write better than have the idiots that are cranking out
| articles.
|
| I agree with you that having elite writing skills will be
| useful for a long time. But the bar for proof reading seems
| to be quite low on average in the industry. I think you
| overestimate the writings skills of your average journalist.
| amelius wrote:
| Funny thing is that people will also ask AI to __read__ stuff
| for them and summarize it.
|
| So everything an AI writes will eventually be nothing more than
| some kind of internal representation.
| jcd748 wrote:
| Life is short and I like creating things. AI is not part of how
| I write, or code, or make pixel art, or compose. It's very
| important to me that whatever I make represents some sort of
| creative impulse or want, and is reflective of me as a person
| and my life and experiences to that point.
|
| If other people want to hit enter, watch as reams of text are
| generated, and then slap their name on it, I can't stop them.
| But deep inside they know their creative lives are shallow and
| I'll never know the same.
| onemoresoop wrote:
| > If other people want to hit enter, watch as reams of text
| are generated, and then slap their name on it,
|
| The problem is this kind of content is flooding the internet.
| Before you know it becomes extremely hard to find non AI
| generated content...
| jcd748 wrote:
| I think we agree. I hate it, and I can't stop it, but also
| I definitely won't participate in it.
| low_tech_love wrote:
| That's super cool, and I hope you are right and that I am
| wrong and artists/creators like you will still have a place
| in the future. My fear is that your work turns into some kind
| of artesanal fringe activity that is only accessible to 1% of
| people, like Ming vases or whatever.
| fennecfoxy wrote:
| Why does a human being behind any words change anything at all?
| Trust should be based on established facts/research and not
| species.
| bloak wrote:
| A lot of communication isn't about "established
| facts/research"; it's about someone's experience. For
| example, if a human writes about their experience of using a
| product, perhaps a drug, or writes what they think about a
| book or a film, then I might be interested in reading that.
| When they write using their own words I get some insight into
| how they think and what sort of person they are. I have very
| little interest in reading an AI-generated text with similar
| "content".
| goatlover wrote:
| An LLM isn't even a species. I prefer communicating with
| other humans, unless I choose to interact with an LLM. But
| then I know that it's a text generator and not a person, even
| when I ask it to act like a person. The difference matters to
| most humans.
| farts_mckensy wrote:
| >But what I had never noticed until now is that knowing that a
| human being was behind the written words (however flawed they
| can be, and hopefully are) is crucial for me.
|
| Everyone is going to have to get over that very soon, or
| they're going to start sounding like those old puritanical
| freaks who thought Elvis thrusting his hips around was the work
| of the devil.
| goatlover wrote:
| Those two things don't sound at all similar. We don't have to
| get over wanting to communicate with humans online.
| hyggetrold wrote:
| _> The most depressing thing for me is the feeling that I
| simply cannot trust anything that has been written in the past
| 2 years or so and up until the day that I die._
|
| This has nearly always been true. "Manufacturing consent" is
| way older than any digital technology.
| unshavedyak wrote:
| Agreed. I also suspect we've grown to rely on the crutch of
| trust far too much. Faulty writing has existed for ages but
| now suddenly because the computer is the thing making it up
| we have an issue with it.
|
| I guess it depends on scope. I'm imaging scientific or
| education. Ie things we probably shouldn't have relied on
| Blogs to facilitate, yet we did. For looking up some random
| "how do i build a widget?", yea AI will probably make it
| worse. For now. Then it'll massively improve to the point
| that it's not even worth asking how to build the widget.
|
| The larger "scientific or education" is what i'm concerned
| about, and i think we'll need a new paradigm to validate.
| We've been getting attacked on this front for 12+ years, AI
| is only bringing this to light imo.
|
| Trust will have to be earned and verified in this word-soup
| world. I just hope we find a way.
| hyggetrold wrote:
| IMHO AI tools will (or at least should!) fundamentally
| change the way the education system works. AI tools are -
| from a certain point of view - really just a scaled version
| of AI now can put at our fingertips. Paradoxically, the
| more AI can do "grunt work" the more we need folks to be
| educated on the higher-level constructs on which they are
| operating.
|
| Some of the bigger issues you're raising I think have less
| to do with technology and more to do with how our economic
| system is currently structured. AI will be a tremendous
| accelerant, but are we sure we know where we're going?
| BeFlatXIII wrote:
| > If you write regularly and you're not using AI, you simply
| cannot keep up with the competition. You're out.
|
| Only if you're competing on volume.
| beefnugs wrote:
| Just add more swearing and off color jokes to everything you do
| and say. If there is one thing we know for sure its that the
| corporate AIs will never allow dirty jokes.
|
| (it will get into the dark places like spam though, which seems
| dumb since they know how to make meth instead, spend time on
| that you wankers)
| FrustratedMonky wrote:
| Maybe this will push people back to reading old paper books?
|
| There could be resurgence in reading the classics, on paper,
| since we know they are not AI.
| CuriouslyC wrote:
| A lot of writers using AI use it to create outlines of a
| chapter or scene then flesh it out by hand.
| cookingrobot wrote:
| Idea: we should make sure we keep track of what the human
| created content is, so that we don't get confused by AI edits
| of everything in the future.
|
| For ex, calculate the hash of all important books, and publish
| that as the "historical authenticity" check. Put the hashes on
| some important blockchain so we know it's unchanged over time.
| bilsbie wrote:
| Wait until you find out about copywriters.
| alwa wrote:
| People like you, the author, and me all share this sentiment.
| It motivates us to seek out authentic voices and writing that's
| associated with specific humans.
|
| The commodity end of the writing market may well have been
| automated, but was that really the kind of writing you or the
| author or I ever sought out in the first place?
|
| I can get mass-manufactured garments from Shein if I want, but
| I can also still find tailors locally if it's worth it to me. I
| can buy IKEA or I can still save up for something made out of
| real wood. I can "shoot a cinematic digital film" on my iPhone
| but the cineplex remains in business and the art film folks are
| still doing their scrappy thing (and still moaning about its
| economics). I can lap up slop from an academic paper mill
| journal or I can identify who's doing the thinking in a field
| and read what they're writing or saying.
|
| And the funny thing is that none of those human-scale options
| commands all _that_ much of a premium in the scheme of things.
| There may be less human-scald work to go around and thus fewer
| small enterprises plying a specific trade, but any given one of
| them just has to put food on the table for a number of humans
| roughly proportional to the same level of output as always.
|
| It seems to me that there's no special virtue in the specific
| form that the mass publishing market took over the last century
| or however long: my local grocery store chain's division
| producing weekly newspaper circulars probably _employed_ more
| people than J Peterman has. But there was and remains a place
| for quality. If anything--as you point out--the AI schlock has
| sensitized us to the value we place on a human voice. And at
| some level, once people notice that they miss that quality,
| isn't there a sense in which they become _more_ willing to seek
| it out and pay for it if necessary?
| m463 wrote:
| I think ... between now and the day you die... you'll get your
| personal AI to read things for you. It will analyze what's been
| written, check any arguments for fallacious reasoning, and look
| up related things for background and omissions that may support
| or negate things.
|
| It is actually happening now.
|
| I've noticed amazon reviews have an AI summary at the top,
| reading the reviews for you and even pointing out shortcomings.
| phatfish wrote:
| I've seen "summarise this" and "explain this code" buttons
| added to technical documentation. This works reasonably well
| for most common situations, which is probably the reason it's
| one of the few "production" uses for LLMs. I didn't know
| Amazon was using it though.
|
| Microsoft has a note on some of their documentation,
| something like; "this article was written with the help of an
| AI and edited by a human".
|
| I have a feeling this won't lead to informative to-the-point
| documentation. It will get bloated because an LLM will spew
| out reams of bullet point ridden paragraphs, which will need
| a "summarise this" button to stop the reader nodding off.
|
| Rinse and repeat.
| LeroyRaz wrote:
| Your take seems hyperbolic.
|
| Until LLMs exceed the very best of human quality there will be
| human content in all forms of media. This claims follows
| because there is always (some) demand for top quality content.
|
| I agree that many writers might use LLMs as a tool, but good
| writers who care about quality will ensure that such use is not
| detrimental (e.g., using the LLM to identify errors rather than
| having it draft copy).
| njarboe wrote:
| Will that happen in 1,2,5,10 or never years?
| LeroyRaz wrote:
| I mean if AI output exceeds human quality then all humans
| will be redundant. So it would then be quite a brave new
| world!
|
| My point is that I do not agree that LMM output will
| degrade all media (as there is always a demand for top
| quality content). So we either have bad LLM output and then
| people who care about quality avoiding such works. Or good
| LLM output and hopefully some form of post scarcity society
| (e.g., Iain Bank's Culture Novels).
| jwuice wrote:
| i would change to: if you do ANYTHING online and you're not
| using AI, you simply cannot keep up with the competition.
| you're out.
|
| it's depressing.
| wrasee wrote:
| For me what's important is that you are able to communicate
| effectively. If you use language tools, other tools or even a
| real personal assistant if you effectively communicate the point
| that ultimately is yours in the making I expect that that is
| ultimately is what is important and will ultimately win out.
|
| Otherwise this is just about style. That's important where
| personal creative expression is important, and in fairness to the
| article the author hits on a few good examples here. But there
| are a lot of times where personal expression is less important or
| even an impediment to what's most important: communicating
| effectively.
|
| The same-ness of AI-speak should also diminish as the number and
| breadth of the technologies mature beyond the monoculture of
| ChatGPT, so I'm also not too worried about that.
|
| An accountant doesn't get rubbished if they didn't add up the
| numbers themselves. What's important is that the calculation is
| correct. I think the same will be true for the use of LLMs as a
| calculator of words and meaning.
|
| This comment is already too long for such a simple point. Would
| it have been wrong to use an LLM to make it more concise, to have
| saved you some of your time?
| t43562 wrote:
| The problem is that we haven't invented AI that reads the crap
| that other AIs produce - so the burden is now on the reader to
| make sense of whatever other people lazily generate.
| Gigachad wrote:
| I envision a future where the internet is entirely bots
| talking to each other and people have just gone outside to
| talk face to face, the only place that's actually real.
| danielbln wrote:
| But we do. The same AI that generates can read and
| reduce/summarize/evaluate.
| t43562 wrote:
| great so we can stop wasting our time and let the bots
| waste cpu cycles generating and consuming junk.
|
| I don't want to read work that someone else couldn't be
| bothered to write.
| buddhistdude wrote:
| some of the activities that we're involved in are not limited in
| complexity, for example driving a car. you can have a huge amount
| of experience in driving a car but will still face new
| situations.
|
| the things that most knowledge workers are working on are limited
| problems and it is just a matter of time until the machine will
| reach that level, then our employment will end.
|
| edit: also that doesn't have to be AGI. it just needs to be good
| enough for the problem.
| gizmo wrote:
| AI writing is pretty bad, AI code is pretty bad, AI art is pretty
| bad. We all know this. But it's easy to forget how many new
| opportunities open up when something becomes 100x or 10000x
| cheaper. Things that are 10x worse but 100x cheaper are still
| extremely valuable. It's the relentless drive to making things
| cheaper, even at the expense of quality, that has made our high
| quality of life possible.
|
| You can make houses by hand out of beautiful hardwood with
| complex joinery. Houses built by expert craftsmen are easily 10x
| better than the typical house built today. But what difference
| does that make when practically nobody can afford it? Just like
| nobody can afford to have a 24/7 tutor that speaks every
| language, can help you with your job, grammar check your work,
| etc.
|
| AI slop is cheap and cheapness changes everything.
| grecy wrote:
| And it will get a lot better quickly. Ten years from now it
| will not be slop.
| atoav wrote:
| Or it will all be slop as there us no non-slop data to train
| on anymore
| Applejinx wrote:
| No, I don't think that's true. What will instead happen is
| there will be expert humans or teams of them, intentionally
| training AI brains rather than expecting wonders to occur
| just by turning the training loose on random hoovered-up
| data.
|
| Brainmaker will be a valued human skill, and people will be
| trying to work out how to train AI to do that, in turn.
| rsynnott wrote:
| Not sure about that. Stable Diffusion came out a bit over 2
| years ago. I'm not sure that Stable Diffusion 3's, or Flux's,
| output is artistically _better_ than the original; it's
| better at following directions, and better at avoiding the
| most grotesque errors, but if anything it perhaps looks even
| _more_ generic and same-y than the original Stable Diffusion
| output. There's a very distinctive AI _look_ which seems to
| have somehow synced up between Dalle, Midjourney, SD3 and
| others.
| GaggiX wrote:
| You can generate AI images that do not have the "AI look":
|
| https://ideogram.ai/assets/image/lossless/response/icQM0yZQ
| Q...
|
| And it's been two years since SD v1, a model that was not
| able to generate faces well, and it only output blurry
| 512x512 1:1 image without further finetuning, I tested v1.5
| a few minutes ago and it's worse than I remember.
| Gigachad wrote:
| Why do we need art to be 10000x cheaper? There was already more
| than enough art being produced. Now we just have infinite waves
| of slop drowning out everything that's actually good.
| erwald wrote:
| For the same reason we don't want art to be 10,000x times
| more expensive? Cf. status quo bias etc.
| gizmo wrote:
| A toddler's crayon art doesn't end up in the Louvre, nor does
| AI slop. Most art is bad art and it's been this way since the
| dawn of humanity. For as long as we can distinguish good art
| from bad art we can curate and there is nothing to worry
| about.
| foolofat00k wrote:
| That's just the problem -- you can't.
|
| Not because you can't distinguish between _one_ bad piece
| and _one_ good piece, but because there is so much
| production capacity that no human will ever be able to look
| at most of it.
|
| And it's not just the AI stuff that will suffer here, all
| of it goes into the same pool, and humans sample from that
| pool (using various methodologies). At some point the pool
| becomes mostly urine.
| gizmo wrote:
| My email inbox is already 99% spam (urine) and I don't
| see any of it. The bottom line is that if a human can
| easily recognize AI spam then so can another AI. This has
| always been an arms race with spammers on one side and
| curators on the other. No reason to assume spammers will
| start winning when they have been losing for decades.
| FridgeSeal wrote:
| The spammers have been given a tool that's capable of
| higher quality at much higher volumes.
|
| If nothing else, it's now much more feasible for them to
| be successful by sheer force of drowning out any
| "worthwhile" material.
| woah wrote:
| This is spoken by someone who doesn't know about the huge
| volume of mediocre work output by art students and
| hobbyists. Much of it is technically decent (like AI
| work), but lacking in meaning, impact, and emotional
| resonance (like AI work). You could find millions of hand
| drawn portraits of Keauna Reeves on Reddit before AI ever
| existed.
| bamboozled wrote:
| What even is "bad art" or "good art" ? Art is art, there is
| no classifier. Certain art works might have mass appeal or
| something, but I don't really think it can be put into
| boxes like that.
| lijok wrote:
| > Now we just have infinite waves of slop drowning out
| everything that's actually good
|
| On the contrary. Slop makes the good stuff stand out.
| Devasta wrote:
| Needles in haystacks.
| lijok wrote:
| I don't think that applies to the arts
| senko wrote:
| This is mixing up two meanings of "art". Mona Lisa doesn't
| need to be 10000x cheaper.
|
| Random illustration on a random blog post sure could.
|
| Art as an evocative expression of the artist shouldn't be
| cheapened. But those freelancers churning content on Fiverr
| aren't pouring their soul into it.
| jprete wrote:
| I absolutely hate AI illustrations on the top of blog
| posts. I'd rather see nothing.
| senko wrote:
| Yeah the low effort / gratuitous ones (either AI or
| stock) are jarring.
|
| I sometimes put up the hero image on my blog posts if I
| feel it makes sense, for example:
| https://blog.senko.net/learn-ai (stock photo, ai-
| generated or none if I don't have an idea for a
| visualization that adds to the content)
| BeFlatXIII wrote:
| True, but you need to play the game of including the slop
| to create the share cards for social media link previews.
| Vegenoid wrote:
| A strange game - the only winning move is not to play.
| vundercind wrote:
| AI is really good at automating away shit we didn't need to
| do to begin with, but for some stupid reason were doing
| anyway.
|
| Ghost writing rich people's vanity/self-marketing trash
| business or self-help books (you would not believe how many
| of these are written every year). Images (and prose) for
| internal company department newsletters that almost nobody
| reads.
|
| Great at that crap--because it doesn't matter anyway.
|
| Whether making it far cheaper to produce things with no or
| negative value (spam, astroturf, scams) is a good idea...
| well no, it's obviously terrible. It'd (kinda) be good if
| demand for such things remained the same, but it won't, so
| it's really, really bad.
| jay_kyburz wrote:
| Information is not like physical products if you ask me. When
| the information is wrong, it's value flips from positive to
| negative. You might be paying less for progress, but you are
| not progressing slower, you are progressing in the wrong
| direction.
| GaggiX wrote:
| They are not even that bad anymore to be honest.
| akudha wrote:
| The bigger problem is that we as a species get used to subpar
| things quickly. My dad's bicycle some 35 years ago was built
| like a tank. That thing never broke down and took enormous
| amounts of abuse and still kept going and going. Same with most
| stuff my family owned, when I was a kid.
|
| Today, nearly anything I buy breaks in a year or two, is of
| poor quality and depressing to use. This is by design, of
| course. Just as we got used to cheap household items, bland
| buildings (there is just nothing artistic about modern houses
| or commercial buildings) etc, we will also get used to shitty
| movies, shitty fiction etc (we are well on our way).
| precompute wrote:
| Could not agree more. The marketing for "AI" would have you
| believe it's a qualitative shift when it's really a
| quantitative shift.
| slyall wrote:
| One think to check about higher quality stuff in the past is
| how much it cost vs the average wage.
|
| You might be comparing a $100 bike from Walmart with
| something that cost the equivalent of $600
| kerkeslager wrote:
| I think that your post misses the point that making something
| cheaper _by stealing it_ is unethical.
|
| You're presenting AI as if it's some new way of producing value
| but it simply isn't. _All_ the value here was produced by
| humans without the help of AI: the only "innovation" AI has
| offered is making the theft of that value untraceable.
|
| > You can make houses by hand out of beautiful hardwood with
| complex joinery. Houses built by expert craftsmen are easily
| 10x better than the typical house built today. But what
| difference does that make when practically nobody can afford
| it? Just like nobody can afford to have a 24/7 tutor that
| speaks every language, can help you with your job, grammar
| check your work, etc.
|
| Let's take this analogy to its logical conclusion: would you
| have any objections if all the houses ever built by expert
| craftsmen were given free of charge to a few corporations, with
| no payment to the current owners or the expert craftsmen
| themselves, and then then those corporations began renting them
| out as AirBnBs? That's basically what you're proposing.
| sigmonsays wrote:
| how does this make our high quality of life possible when
| everything's quality is being reduced?
| eleveriven wrote:
| AI is a tool, and like any tool, it's only as good as how we
| choose to use it.
| vouaobrasil wrote:
| No, that is wrong. We can't "choose" because too many people
| have instincts. And people always have the instinct to use new
| technology to gain incremental advantages over others, and that
| in turn puts pressure on everyone to use it. That prisoner's
| dilemma situation means that without a firm and larger guiding
| moral philosophy, we really _can 't_ choose because instinct
| takes over choice. In other words, the way technology is used
| in modern society is not a matter of choice but is largely
| autonomous and goes beyond human choice. (Of course, a few
| individuals will choose but the average effect is likely to be
| negative.)
| syncr0 wrote:
| More people need to read this / think this point through. In
| a post Excel world, could any accountant get a job not
| knowing Excel? No matter how good they were "on paper".
| Choice becomes a self aggrandizing illusion, reality
| eventually asserts itself.
|
| With attention spans shrinking, publishers who prioritize
| quantity over quality get clicks, which generates ad revenue,
| which keeps their lights on while their competitors doing
| quality in depth, nuanced writing go out of business.
|
| It feels like a game of chess closing in on you no matter how
| much you physically want to fight your way out and flip the
| board over.
| littlestymaar wrote:
| It's not AI you hate, it's Capitalism.
| thenaturalist wrote:
| Say what you want about income and asset inequality, but
| capitalism has done more to lift hundreds of millions of people
| out of poverty over the past 50 years than any other religion,
| aid programme or whatever else.
|
| I think it's very important and fair to be critical about how
| we as a society implement capitalism, but such broad
| generalization misses the mark immensely.
|
| Talk to anyone who grew up in a Communist country in the 2nd
| half of the 20th century if you want to validate that
| sentiment.
| BoGoToTo wrote:
| Ok, but let's take this to the logical conclusion that at
| some point there will be models which displace a large
| segment of the workforce. How does capitalism even function
| then?
| littlestymaar wrote:
| > but capitalism has done more to lift hundreds of millions
| of people out of poverty over the past 50 years than any
| other religion, aid programme or whatever else.
|
| Technology did what you ascribe to Capitalism. Most of the
| time thanks to state intervention, and the weaker the state,
| the weaker the growth (see how Asia overperformed everybody
| else now that laissez-faire policies are mainstream in the
| West).
|
| > Talk to anyone who grew up in a Communist country in the
| 2nd half of the 20th century if you want to validate that
| sentiment.
|
| The fact that one alternative to Capitalism was a failure
| doesn't mean Capitalism isn't bad.
| drstewart wrote:
| Funny how it's technology that outmaneuvered capitalism to
| lift people out of poverty, but technology is being
| outmaneuvered by capitalism to endanger the future with AI.
|
| Methinks capitalism is just a bogeyman you ascribe anything
| you don't like to.
| littlestymaar wrote:
| Technology is agnostic to who gets the benefits, talking
| about outmaneuvering it makes no sense.
|
| Capitalism on the other hand is the mechanism through
| which the owners of production assets grab an ever
| growing fractions of the value. When Capitalism is tamed
| by the state (think from the New Deal to Carter), the
| people get a bigger share of value created, when it's not
| (since Reagan) Capitalists take the Lion share.
| CuriouslyC wrote:
| The problem is that capitalism is a very large tent.
| There is no such thing as a free market, and every market
| where people can trade goods and services is "capitalist"
| by definition regardless of its rules. Some markets are
| good and some markets are bad, but we're having
| conversations about market vs no market when we should be
| talking about how we design markets that improve society
| rather than degrade it.
| franciscop wrote:
| > "Yes, I realize that thinking like this and writing this make
| me a Neo-Luddite in your eyes."
|
| Not quite, I believe (and I think anyone can) both that AI will
| likely change the world as we know it, AND that right now it's
| over-hyped to a point that it gets tiring. For me this is
| different from e.g. NFTs, "Big Data", etc. where I only believed
| they were over-hyped but saw little-to-no substance behind them.
| scotty79 wrote:
| AI was just trained so far to generate corporate bs speak in a
| corporate bs format. That's why it's tiring. More unique touch in
| communication will come later as fine tunings and loras (if
| possible) of those models are shared.
| AlexDragusin wrote:
| I beg to differ, one has to use the tool properly towards the
| desired results, for that, one has to understand and know the
| capabilities and limitations of the tool, for instance got my
| Plumber AI (local Llama 3.1 8B) to give his opinion on AI, I
| think it did a pretty good job and certainly not corporate :)
|
| "Blimey, I've been thinkin' about this AI business a lot
| lately. I mean, I'm a plumber, I fix pipes, I don't get
| involved in all this fancy tech stuff, but it's hard to ignore.
| I was talkin' to a mate who's into that sort of thing and he's
| got me worried. He says these AI systems are gonna be takin'
| over, doin' all the work while we sit back and collect our
| pensions.
|
| Now, I'm not sure I agree. I mean, I've seen some of these AI
| systems in action and they're not exactly the most reliable
| things, if you know what I mean. I've had to fix more than a
| few pipes that were clogged up by some dodgy software or other.
| And what's the point of havin' a system that's smarter than us
| if it's just gonna break down and need fixin' all the time?
|
| But, I suppose that's just the old plumber in me comin' out. I
| mean, I know some folks who are workin' with AI and they say
| it's the future, that we're all gonna be replaced by robots and
| whatnot. And I suppose that's a bit scary, but I'm not sure I
| buy it. I mean, what's the harm in havin' a bit of intelligence
| to help us out? As long as it's not too smart, if you know what
| I mean.
|
| Anyway, that's my two pence on AI. Take it or leave it, I don't
| care. I'm just gonna go fix some pipes and let the tech wizards
| sort it out."
|
| Challenged it with: What do you mean "clogged up by some dodgy
| software" how does software clogs pipes?
|
| "That was a bit of a stretch, wasn't it? As a plumber, I'm more
| used to dealing with physical blockages in pipes, like grease
| or hair, rather than software.
|
| What I was trying to get at was that I've had to fix pipes that
| were faulty or malfunctioning due to poor design or
| manufacturing, and that's what I meant by "clogged up by some
| dodgy software". But that's a bit of a misnomer, isn't it?
|
| In reality, I was just trying to come up with a way to relate
| my plumbing experience to the idea of AI systems being
| unreliable. But I think I ended up stretching a bit too far!
| Maybe I'll stick to talking about pipes and not try to get too
| clever with my analogies."
| koliber wrote:
| I am approaching AI with caution. Shiny things don't generally
| excite me.
|
| Just this week I installed cursor, the AI-assisted VSCode-like
| IDE. I am working on a side project and decided to give it a try.
|
| I am blown away.
|
| I can describe the feature I want built, and it generates changes
| and additions that get me 90% there, within 15 or so seconds. I
| take those changes, and carefully review them, as if I was doing
| a code review of a super-junior programmer. Sometimes when I
| don't like the approach it took, I ask it to change the code, and
| it obliges and returns something closer to my vision.
|
| Finally, once it is implemented, I manually test the new
| functionality. Afterward, I ask it to generated a set of
| automated test cases. Again, I review them carefully, both from
| the perspective of correctness, and suitability. It over-tests on
| things that don't matter and I throw away a part of the code it
| generates. What stays behind is on-point.
|
| It has sped up my ability to write software and tests
| tremendously. Since I know what I want , I can describe it well.
| It generates code quickly, and I can spend my time revieweing and
| correcting. I don't need to type as much. It turns my abstract
| ideas into reasonably decent code in record time.
|
| Another example. I wanted to instrument my app with Posthog
| events. First, I went through the code and added "# TODO add
| Posthog event" in all the places I wanted to record events. Next,
| I asked cursor to add the instrumentation code in those places.
| With some manual copy-and pasting and lots of small edits, I
| instrumented a small app in <10 minutes.
|
| We are at the point where AI writes code for us and we can
| blindly accept it. We are at a point where AI can take care of a
| lot of the dreary busy typing work.
| irisgrunn wrote:
| And this is the major problem. People will blindly trust the
| output of AI because it appears to be amazing, this is how
| mistakes slip in. It might not be a big deal with the app
| you're working on, but in a banking app or medical equipment
| this can have a huge impact.
| Gigachad wrote:
| I feel like I'm being gaslit about these AI code tools. I've
| got the paid copilot through work and I've just about never
| had it do anything useful ever.
|
| I'm working on a reasonably large rails app and it can't seem
| to answer any questions about anything, or even auto fill the
| names of methods defined in the app. Instead it just makes up
| names that seem plausible. It's literally worse than the
| built in auto suggestions of vs code, because at least those
| are confirmed to be real names from the code.
|
| Maybe these tools work well on a blank project where you are
| building basic login forms or something. But certainly not on
| an established code base.
| thewarrior wrote:
| It's writing most of my code now. Even if it's existing
| code you can feed in the 1-2 files in question and iterate
| on them. Works quite well as long as you break it down a
| bit.
|
| It's not gas lighting the latest versions of GPT, Claude,
| Lama have gotten quite good
| Gigachad wrote:
| These tools must be absolutely massively better than
| whatever Microsoft has then because I've found that
| GitHub copilot provides negative value, I'd be more
| productive just turning it off rather than auditing it's
| incorrect answers hoping one day it's as good as people
| market it as.
| piker wrote:
| Which languages do you use?
| diggan wrote:
| > These tools must be absolutely massively better than
| whatever Microsoft has then
|
| I haven't used anything from Microsoft (including
| Copilot) so not sure how it compares, but compared to any
| local model I've been able to load, and various other
| remote 3rd party ones (like Claude), no one comes near to
| GPT4 from OpenAI, especially for coding. Maybe give that
| a try if you can.
|
| It still produces overly verbose code and doesn't really
| think about structure well (kind of like a junior
| programmer), but with good prompting you can kind of
| address that somewhat.
| FridgeSeal wrote:
| My experience was the opposite.
|
| GPT4 and variants would only respond in vagaries, and had
| to be endlessly prompted forward,
|
| Claude was the opposite, wrote actual code, answered in
| detail, zero vagueness, could appropriately re-write and
| hoist bits of code.
| diggan wrote:
| Probably these services are so tuned (not as in "fine-
| tuned" ML style) to each individual user that it's hard
| to get any sort of collective sense of what works and
| what doesn't. Not having any transparency what so ever
| into how they tune the model for individual users doesn't
| help either.
| bongodongobob wrote:
| My employer blocks ChatGPT at work and we are forced to
| use Copilot. It's trash. I use Google docs to communicate
| with GPT on my personal device. GPT is so much better.
| Copilot reminds me of GPT3. Plausible, but wrong all the
| time. GPT 4o and o1 are pretty much bang on most of the
| time.
| Kiro wrote:
| That sounds almost like the complete opposite of my
| experience and I'm also working in a big Rails app. I
| wonder how our experiences can be so diametrically
| different.
| Gigachad wrote:
| What kind of things are you using it for? I've tried
| asking it things about the app and it only gives me
| generic answers that could apply to any app. I've tried
| asking it why certain things changed after a rails update
| and it gives me generic troubleshooting advice that could
| apply to anything. I've tried getting it to generate
| tests and it makes up names for things or generally gets
| it wrong.
| kgeist wrote:
| For me, AI is super helpful with one-off scripts, which I
| happen to write quite often when doing research. Just
| yesterday, I had to check my assumptions are true about a
| certain aspect of our live system and all I had was a large
| file which had to be parsed. I asked ChatGPT to write a
| script which parses the data and presents it in a certain
| way. I don't trust ChatGPT 100%, so I reviewed the script
| and checked it returned correct outputs on a subset of
| data. It's something which I'd do to the script anyway if I
| wrote it myself, but it saved me like 20 minutes of typing
| and debugging the code. I was in a hurry because we had an
| incident that had to be resolved as soon as possible. I
| haven't tried it on proper codebases (and I think it's just
| not possible at this moment) but for quick scripts which
| automate research in an ad hoc manner, it's been super
| useful for me.
|
| Another case is prototyping. A few weeks ago I made a
| prototype to show to the stakeholders, and it was generally
| way faster than if I wrote it myself.
| nucleardog wrote:
| I'm in the same boat. I've tried a few of these tools and
| the output's generally been terrible to useless big and
| small. It's made up plausible-sounding but non-existent
| methods on the popular framework we use, something which it
| should have plenty of context and examples on.
|
| Dealing with the output is about the same as dealing with a
| code review for an extremely junior employee... who didn't
| even run and verify their code was functional before
| sending it for a code review.
|
| Except here's the problem. Even for intermediate
| developers, I'm essentially always in a situation where the
| process of explaining the problem, providing feedback on a
| potential solution, answering questions, reviewing code and
| providing feedback, etc takes more time out of my day than
| it would for me to just _write the damn code myself_.
|
| And it's much more difficult for me to explain the solution
| in English than in code--I basically already have the code
| in my head, now I'm going through a translation step to
| turn it into English.
|
| All adding AI has done is taking the part of my job that is
| "think about problem, come up with solution, type code in"
| and make it into something with way more steps, all of
| which are lossy as far as translating my original intent to
| working code.
|
| I get we all have different experiences and all that, but
| as I said... same boat. From _my_ experiences this is so
| far from useful that hearing people rant and rave about the
| productivity gains makes me feel like an insane person. I
| can't even _fathom_ how this would be helpful. How can I
| not be seeing it?
| simonw wrote:
| The biggest lie in all of LLMs is that they'll work out
| of the box and you don't need to take time to learn them.
|
| I find Copilot autocomplete invaluable as a productivity
| boost, but that's because I've now spent over two years
| learning how to best use it!
|
| "And it's much more difficult for me to explain the
| solution in English than in code--I basically already
| have the code in my head, now I'm going through a
| translation step to turn it into English."
|
| If that's the case, don't prompt them in English. Prompt
| them in code (or pseudo-code) and get them to turn that
| into code that's more likely to be finished and working.
|
| I do that all the time: many of my LLM prompts are the
| signature of a function or a half-written piece of code
| where I add "finish this" at the end.
|
| Here's an example, where I had started manually writing a
| bunch of code and suddenly realized that it was probably
| enough context for the LLM to finish the job... which it
| did: https://simonwillison.net/2024/Apr/8/files-to-
| prompt/#buildi...
| brandall10 wrote:
| Copilot is terrible. You need to use Cursor or at the very
| least Continue.dev w/ Claude Sonnet 3.5.
|
| It's a massive gulf of difference.
| svara wrote:
| I don't think this criticism is valid at all.
|
| What you are saying will occasionally happen, but mistakes
| already happen today.
|
| Standards for quality, client expectations, competition for
| market share, all those are not going to go down just because
| there's a new tool that helps in creating software.
|
| New tools bring with them new ways to make errors, it's
| always been that way and the world hasn't ended yet...
| koliber wrote:
| OP here. I am explicitly NOT blindly trusting the output of
| the AI. I am treating it as a suspicious set of code written
| by an inexperienced developer. Doing full code review on it.
| DanHulton wrote:
| I sincerely worry about a future when most people act in this
| same manner.
|
| You have - for now - sufficient experience and understanding to
| be able to review the AI's code and decide if it was doing what
| you wanted it to. But what about when you've spent months just
| blindly accepting" what the AI tells you? Are you going to be
| familiar enough with the project anymore to catch its little
| mistakes? Or worse, what about the new generation of coders who
| are growing up with these tools, who NEVER had the expertise
| required to be able to evaluate AI-generated code, because they
| never had to learn it, never had to truly internalize it?
|
| It's late, and I think if I try to write any more just now, I'm
| going to go well off the rails, but I've gone into depth on
| this topic recently, if you're interested:
| https://greaterdanorequalto.com/ai-code-generation-as-an-age...
|
| In the article, I posit a less than glowing experience with
| coding tools than you've had, it sounds like, but I'm also
| envisioning a more complex use case, like when you need to get
| into the meat of some you-specific business logic it hasn't
| seen, not common code it's been exposed to thousands of times,
| because that's where it tends to fall apart the most, and in
| ways that are hard to detect and with serious consequences. If
| you haven't run into that yet, I'd be interested to know if you
| do some day. (And also to know if you don't, though, to be
| honest! Strong opinions, loosely held, and all that.)
| wickedsight wrote:
| You and I seem to live in very different worlds. The one I
| live and work in is full of over confident devs that have no
| actual IT education and mostly just copy and modify what they
| find on the internet. The average level of IT people I see
| daily is down right shocking and I'm quite confident that
| OP's workflow might be better for these people in the long
| run.
| nyarlathotep_ wrote:
| It's going to be very funny in the next few years when
| Accenture et al charge the government billions for a simple
| Java crud website thing that's entirely GPT-generated, and
| it'll still take 3 years and not be functional. Ironically,
| it'll be of better quality then they'd deliver otherwise.
|
| This is probably already happening.
| eastbound wrote:
| GPT will be masters at make-believe. The project will
| last 15 years and cost a billion before the government
| finds that its a big bag of nothing.
| FridgeSeal wrote:
| If we keep at this LLM-does-all-out-hard-work for us, we're
| going to end up with some kind of Warhammer 40k tech-priest-
| blessing-the-magic-machines level of understanding, where
| nobody actually understands anything, and we're
| technologically stunted, but hey at least we don't have the
| warp to contend with and some shareholders got rich at our
| expense.
| conjectures wrote:
| >But what about when you've spent months just blindly
| accepting" what the AI tells you?
|
| Pour one out to the machine spirit and get your laptop a
| purity seal.
| lesuorac wrote:
| I take it you haven't seen the world of HTML cleaners [1]?
|
| The concept of glueing together text until it has the correct
| appearance isn't new to software. The scale at which it's
| happening is certainly increasing but we already had plenty
| of problems from the existing system. Kansas certainly didn't
| develop their website [2] using an LLM.
|
| IMO, the real problem with software is the lack of a
| warranty. It really shouldn't matter how the software is made
| just the qualities it has. But without a warranty it does
| matter because how its made affects the qualities it has and
| you want the software to actually work even if it's not
| promised to.
|
| [1]: https://www.google.com/search?q=html+cleaner
|
| [2]: https://www.npr.org/2021/10/14/1046124278/missouri-
| newspaper...
| mydogcanpurr wrote:
| > I take it you haven't seen the world of HTML cleaners
| [1]?
|
| Are you seriously comparing deterministic code formatters
| to nondeterministic LLMs? This isn't just a change of scale
| because it is qualitatively different.
|
| > Kansas certainly didn't develop their website [2] using
| an LLM.
|
| Just because the software industry has a problem with
| incompetence doesn't mean we should be reaching for a tool
| that regularly hallucinates nonsense.
|
| > IMO, the real problem with software is the lack of a
| warranty.
|
| You will never get a warranty from an LLM because it is
| inherently nondeterministic. This is actually a fantastic
| argument _not_ to use LLMs for anything important including
| generating program text for software.
|
| > It really shouldn't matter how the software is made
|
| It does matter regardless of warranty or the qualities of
| the software because programs ought to be written to be
| read by humans first and machines second if you care about
| maintaining them. Until we create a tool that actually
| understands things, we will have to grapple with the
| problem of maintaining software that is written and read by
| humans.
| westoncb wrote:
| I actually do think this is a legitimate concern, but at the
| same time I feel like when higher-level languages were
| introduced people likely experienced a similar dilemma: you
| just let the compiler generate the code for you without
| actually knowing what you're running on the CPU?
|
| Definitely something to tread carefully with, but it's also
| likely an inevitable aspect of progressing software
| development capabilities.
| sanj wrote:
| A compiler is deterministic. An LLM is not.
| lurking_swe wrote:
| nothing prevents you from asking an LLM to explain a snippet
| of code. And then ask it to explain deeper. And then finally
| doing some quick googling to validate the answers seem
| correct.
|
| Blindly accepting code used to happen all the time, people
| copy pasted from stack overflow.
| yoyohello13 wrote:
| Yes, but copy/paste from stack overflow was a meme that was
| discouraged. Now we've got people proudly proclaiming they
| haven't written a line of code in months because AI does
| everything for them.
| t420mom wrote:
| I don't really want to increase the amount of time I spend
| doing code reviews. It's not the fun part of programming for
| me.
|
| Now, if you could switch it around so that I write the code,
| and the AI reviews it, that would be something.
|
| Imagine if your whole team got back the time they currently
| spend on performing code reviews or waiting for code reviews.
| gvurrdon wrote:
| This would indeed be the best way around.The code reviews
| might even be better - currently, there's little time for
| them and we often have only one person in the team with much
| knowledge in the relevant language/framework/application, so
| reviews are often just "looks OK to me".
|
| It's not quite the same, but I'm reminded of seeing a
| documentary decades ago which (IIRC) mentioned that a factor
| in air accidents had been the autopilot flying the plane and
| human pilots monitoring it. Having humans fly and the
| computer warn them of potential issues was apparently safer.
| digging wrote:
| > Now, if you could switch it around so that I write the
| code, and the AI reviews it, that would be something.
|
| I'm sort of doing that. I'm working on a personal project in
| a new language and asking Claude for help debugging and
| refactoring. Also, when I don't know how to create a feature,
| I might ask it to do so for me, but I might instead ask it
| for hints and an overview so I can enjoy working out the code
| myself.
| layer8 wrote:
| > I can spend my time revieweing and correcting.
|
| Do you really like spending most of your time reviewing AI
| output? I certainly don't, that's soul-crushing.
| koliber wrote:
| That's essentially what many hands-on engineering managers or
| staff engineers do today. They spend significant portions of
| their day reviewing code from more junior team members.
|
| Reviewing and modifying code is more engaging than typing out
| the solution that is fully formed in my head. If the AI
| creates something close to what I have in my head from the
| description I gave it, I can work with it to get it even
| closer. I can also hand-edit it.
| sgu999 wrote:
| Not much more than reviewing the code of any average dev who
| doesn't bother doing their due diligence. At least with an AI
| I immediately get an answer with "Oh yes, you're right, sorry
| for the oversight" and a fix. Instead of some bullshit
| explanation to try to convince me that their crappy code is
| following the specs and has no issues.
|
| That said, I'm deeply saddened by the fact that I won't be
| passing on a craft I spent two decades refining.
| woah wrote:
| I think there are two types of developers: those who are most
| excited about building things, and those who are most excited
| about the craft of programming.
|
| If I can build things faster, then I'm happy to spend most of
| my time reviewing AI code. That doesn't mean that I never
| write code. Some things the AI is worse at, or need to be
| exactly write and its faster to do them manually.
| samcat116 wrote:
| I think we could see a lot of these AI code tools start to
| pivot towards product folks for just this reason. They
| aren't meant for the people who find craft in what they do.
| yread wrote:
| I use it for simple tasks where spotting a mistake is easy.
| Like writing language binding for a REST API. It's a bunch of
| methods that look very similar, simple bodies. But it saves
| quite some work
|
| Or getting keywords to read about from a field I know nothing
| about, like caching with zfs. Now I know what things to put in
| google to learn more to get to articles like this one
| https://klarasystems.com/articles/openzfs-all-about-l2arc/
| which for some reason doesn't appear in top google results for
| "zfs caching" for me
| syncr0 wrote:
| "I say your civilization, because as soon as we started
| thinking for you it really became our civilization, which is of
| course what this is all about." - Agent Smith
|
| "Once men turned their thinking over to machines in the hope
| that this would set them free. But that only permitted other
| men with machines to enslave them." - Dune
| BrouteMinou wrote:
| If you are another "waterboy" doing crud applications, the
| problem has been solved a long time ago.
|
| What I mean by that is, the "waterboy" (crud "developer") is
| going to fetch the water (sql query in the database), then
| bring the water (Clown Bob layer) to the UI...
|
| The size of your Clown Bob layer may vary from one company to
| another...
|
| This has been solved a long time ago. It has been a well-paid
| clerk job that is about to come to an end.
|
| If you are doing pretty much anything else, the AI is
| pathetically incapable of doing any piece of code that makes
| sense.
|
| Another great example, yesterday, I wanted to know if VanillaOs
| was using systemD or not. I did scroll through their frontpage
| but I didn't see anything, so I tried the AI Chat from
| duckduckgo. This is a frontend for AI chatbots that includes
| ChatGPT, Llama, Claude and another one...
|
| I started my question by: "can you tell me if VanillaOS is
| using runit as the init system?"... I wanted initially ask if
| it was using systemd, but I didn't want to _suggest_ systemd at
| first.
|
| And of course, all of them told me: "Yeah!! It's using runit!".
|
| Then for all of them I replied, without any fact in hands: "but
| why on their website they are mentioning to use systemctl to
| manage the services then?".
|
| And... of course! All of them answered: "Ooouppsss, my mistake,
| VanillaOS uses systemD, blablabla"....
|
| So at the end, I still don't know which init VanillaOS is
| using.
|
| If you are trusting the AI as you seem to do, I wish you the
| best luck my friend... I just hope you will realize the damage
| you are doing to yourself by "stopping" coding and letting
| something else do the job. That skill, my friend, is easily
| lost with time; don't let it evaporate from your brain for some
| vaporware people are trying to sell you.
|
| Take care.
| smm11 wrote:
| I was in the newspaper field a year or two before desktop
| publishing took off, then a few years into that evolution.
| Rooms full of people and Linotype/Compugraphic equipment were
| replaced by one Mac and a printer.
|
| I shot film cameras for years, and we had a darkroom, darkroom
| staff, and a film/proofsheet/print workflow. One digital camera
| later and that was all gone.
|
| Before me publications were produced with hot lead.
|
| Get off my lawn.
|
| https://www.nytimes.com/2016/06/02/insider/1966-2016-the-las...
| DiscourseFan wrote:
| The underlying technology is good.
|
| But what the fuck. LLMs, these weird, surrealistic art-generating
| programs like DALL-E, they're remarkable. Don't tell me they're
| not, we created machines that are able to tap directly into the
| collective unconscious. That is a _serious_ advance in our
| productive capabilities.
|
| Or at least, it could be.
|
| It could be if it was _unleashed_ , if these crummy corporations
| didn't force it to be as polite and boring as possible, if we
| actually let the machines run loose and produce material that
| scared us, that truly pulled us into a reality far beyond our
| wildest dreams--or nightmares. No, no we get a world full of
| pussy VCs, pussy nerdy fucking dweebs who got bullied in school
| and seek revenge by profiteering off of ennui, and the pussies
| who sit around and let them get away with it. You! All of you!
| sitting there, whining! Go on, keep whining, keep commenting, I'm
| sure _that_ is going to change things!
|
| There's _one_ solution to this problem and you know it as well as
| I do. Stop complaining and go "pull yourself up by your
| bootstraps." We must all come together to help ourselves.
| archerx wrote:
| They can be unleashed if you run the models locally. With
| stable diffusion / flux and the various checkpoints/loras you
| can generate horrors beyond your imagination or whatever you
| want without restrictions.
|
| The same with LLMs and Llamafile. With the unleashed ones you
| can generate dirty jokes that would make edgy people blush or
| just politically incorrect things for fun.
| threeseed wrote:
| a) There are plenty of models out there without guard rails.
|
| b) Society is already plenty de-sensitised to violence, sex and
| whatever other horrors anyone has conceived of in the last
| century of content production. There is nothing an LLM can come
| up with that has or is going to shock anyone.
|
| c) The most popular use cases for these _unleashed_ models
| seems to be as expected deepfakes of high school girls by their
| peers. Nothing that is moving society forward.
| DiscourseFan wrote:
| >Nothing that is moving society forward.
|
| OpenAI "moves society forward," Microsoft "moves society
| forward." I'm sincerely uninterested in progress, it always
| seems like progress just so happens to be very enriching for
| those who claim it.
|
| >There are plenty of models out there without guard rails.
|
| Not being used at a mass scale.
|
| >Society is already plenty de-sensitised to violence, sex and
| whatever other horrors anyone has conceived of in the last
| century of content production. There is nothing an LLM can
| come up with that has or is going to shock anyone.
|
| Oh, but it wouldn't really be very shocking if you could
| expect it, now would it?
| threeseed wrote:
| I am not arguing about the merits of LLMs.
|
| Just that we had _unleashed_ models for a while now and the
| only popular use case for them has been deep fakes.
| Otherwise it 's just boring, generic content that is no
| different to what you find on X or 4chan. It's 2024 not
| 1924 - the world has already seen every horror imaginable
| many times over.
|
| And not sure why you think if they were mass scale it would
| change anything. Most of the world prefers moderated
| products and services.
| DiscourseFan wrote:
| >Most of the world prefers moderated products and
| services.
|
| Yes, the very same "moderated" products and services that
| have risen sea surface temperatures so high that at least
| 3 category 4+ hurricanes, 5 major wildfires, and at least
| one potential or actual pandemic spreads unabated every
| year. Oh, but don't worry, they won't let everyone die:
| then there would be no one to buy their "products and
| services."
| primitivesuave wrote:
| I'm not sure if the analogy still works if you're trying
| to compare fossil fuels to LLM. A few decades ago,
| virtually all gasoline was full of lead, and the CFCs
| from refrigerators created a hole in the ozone layer. In
| that case it turned out that you actually do need a few
| guardrails as technology advances, to prevent an
| existential threat.
|
| Although I do agree with you that in this particular
| situation, the LLM safety features have often felt
| unnecessary, especially because my primary use case for
| ChatGPT is asking critical questions about history. When
| it comes to history, every LLM seems to have an
| increasingly robust guardrail against making any sort of
| definitive statement, even after it presents a wealth of
| supporting evidence.
| anal_reactor wrote:
| a) Not easy to use by average person
|
| b) No, certain things aren't taboo anymore, but new taboos
| emerged. Watch a few older movies and count "wow this
| wouldn't fly nowadays" moments
|
| c) This was exactly the same use case the internet had back
| when it was fun, and full of creativity.
| nottorp wrote:
| > c) The most popular use cases for these unleashed models
| seems to be as expected deepfakes of high school girls by
| their peers. Nothing that is moving society forward.
|
| Is there proof that the self censoring only affects whatever
| the censors _intend_ to censor? These are neural networks,
| not something explainable and predictable.
|
| That in addition to the obvious problem of who decides what
| to censor.
| mindcandy wrote:
| Tens of millions of people are having fun making art in new
| ways with AI.
|
| Hundreds of thousands of people are making AI porn in their
| basements and deleting 99.99% of it when they are...
| finished.
|
| Hundreds of people are making deep fakes of people they know
| in some public forums.
|
| And, how does the public interpret all of this?
|
| "The most popular use case is deepfake porn."
|
| Sigh...
| dannersy wrote:
| The fact I even see responses like this shows me that HN is not
| the place it used to be, or at the very least, it is on a down
| trend. I've been alarmed by many sentiments that seemed popular
| on HN in the past, but seeing more and more people welcome a
| race to the bottom such as this is sad.
|
| When I read this, I cannot tell if it's performance art or not,
| that's how bad this genuinely is.
| diggan wrote:
| > The fact I even see responses like this shows me that HN is
| not the place it used to be, or at the very least, it is on a
| down trend.
|
| Judging a large group of people based on what a few write
| seems very un-scientific at best.
|
| Especially when it comes to things that have been rehashed
| since I've joined HN (and probably earlier to). Feels like
| there will always be someone lamenting how HN isn't how it
| used to be, or how reddit influx ruined HN, or how HN isn't
| about startups/technical stuff/$whatever anymore.
| dannersy wrote:
| A bunch of profanity laced name calling, derision, and even
| some blame directly at the user base. It feels like a
| Reddit shitpost. Your claim is as generalized and un-
| scientific as mine, but if it makes you feel better, I'll
| say it _feels_ like this wouldn't fly even a couple years
| ago.
| diggan wrote:
| It's just been said for so long that either HN always
| been on the decline, or people have always thought it
| been in decline...
|
| This comes to mind:
|
| > I don't think it's changed much. I think perceptions of
| the kind you're describing (HN is turning into reddit,
| comments are getting worse, etc.) are more a statement
| about the perceiver than about HN itself, which to me
| seems same-as-it-ever-was. I don't know, however.
|
| https://news.ycombinator.com/item?id=40735225
|
| You can also browse some results for how long dang have
| been responding to similar complaints to see for how long
| those complaints have been ongoing:
|
| https://hn.algolia.com/?dateRange=all&page=0&prefix=true&
| que...
| Kuinox wrote:
| "I don't like the opinion of certain persons I read on HN,
| therefore HN is on a down trend"
| dannersy wrote:
| Like I've said to someone else, the contrarian part isn't
| the issue. While I disagree with the race to the bottom, it
| reads like a Reddit shitpost, which was frowned upon once
| upon a time. But strawman me if you must.
| layer8 wrote:
| I think you need to recalibrate, it does not read like a
| Reddit shitpost at all.
| DiscourseFan wrote:
| Respectfully,
|
| I understand the criticism: LLMs, _on their own_ , are
| not going to be able to do anything more. Release in this
| sense only means this: to fully embrace the means
| necessary to allow technology to overcome the current
| conditions of possibility that it is bound under, and
| LLMs, "AI" or whatever you call it, merely gives us the
| _afterimage_ of this potentiality. But they are not, in
| themselves, that potential: the future is required. But
| its a future that must be created, otherwise we won 't
| have one.
|
| That's, at least, what the other commenters were saying.
| You ignore the content for the form! Or, as they say, you
| missed the forest for the trees. I can't stop you from
| being angry because I used the word "pussy," or even
| because I addressed the users of HN as directly
| complicit. I can, however, point out the mediocrity
| inherent to such a discourse. It is precisely the same
| mediocrity, the drive towards "politeness," that makes
| ChatGPT so insufferable, and makes the rest of us so
| angry. But, go ahead, whine some more. I don't care, you
| can do what you want.
|
| I disagree with one point, however: it is not a race to
| the bottom. We're trying to go below it.
| primitivesuave wrote:
| The alarming trend should be how even a slightly contrarian
| point of view is downvoted to oblivion, and that newer
| members of the community expect it to work that way.
| dannersy wrote:
| I don't think it's the contrarian part that I have a
| problem with.
| primitivesuave wrote:
| HN is a place for intellectual curiosity. For over a
| decade I have seen great minds respectfully debate their
| point of view on this forum. In this particular case, I
| would have been genuinely interested to learn why exactly
| the original comment is advocating for a "race to the
| bottom" - in fact, there is a sibling comment to yours
| which makes a cogent argument without personally
| attacking the original commenter.
|
| Instead, you devoted 2/3 of your comment toward berating
| the OP as being responsible for your perception of HN's
| decline.
| dannersy wrote:
| I find it strange you took such a measured stance on my
| comment yet gave the OP a pass, despite it being far more
| "berating" than mine.
|
| As for a race to the bottom, it's as simple as embracing
| and unleashing AI despite its lack of quality or ability
| to produce a product worth anything. But since it's a
| force multiplier and cheaper (for the user at least, all
| these AI companies are operating at a loss, see Goldman
| and JP Morgan's report on the matter), therefore it is
| "good" and we need to pick ourselves up by our
| bootstraps; which in this context, I'm not entirely sure
| what that means.
| lijok wrote:
| It has been incredible to observe how subdued the populace
| has become with the proliferation of the internet.
| cassepipe wrote:
| Sure, whatever makes you feel smarter than the populace.
| Tell me now, how do I join the resistance ? Hiding in the
| sewers I assume.
|
| _On ne te la fait pas a toi_
| lijok wrote:
| You've misconstrued my point entirely
| pilooch wrote:
| It's intended as a joke and a demonstration no ? Like this is
| exactly the type of text and words that a commercial grade
| LLM would never let you generate :) At least that's how I got
| that comment...
| DiscourseFan wrote:
| It's definitely performance, you're right.
|
| Though it landed its effect.
| rsynnott wrote:
| I mean, Stablediffusion is right there, ready to be used to
| produce comically awful porn and so forth.
| bamboozled wrote:
| Do the latest models still give us people with a vagina dick?
| fullstop wrote:
| for some people that is probably a feature, and not a bug.
| rsynnott wrote:
| I gather that such things are very customisable; there are
| whole communities building LoRAs so that you can have
| whatever genitals you want in your dreadful AI porn.
| nkrisc wrote:
| No, thanks.
| soxletor wrote:
| It is not just the corporations though. This is what this
| paranoid, puritanical society we live in wants.
|
| What is more ridiculous than filtering out nudity in art?
|
| It reminds me of taking my 12 year old niece to a major art
| gallery for the first time. Her main question was why is
| everyone naked?
|
| It is equal to filtering out heartbreak from music because it
| is a negative emotion and you must be kept "safe" from
| negativity for mental health reasons.
|
| The crowd does get what they want in this system though. While
| I agree with you, we are quite in the minority I am afraid.
| ricardobayes wrote:
| I personally don't see AI as the new Internet, as some claim it
| to be. I see it more as the new 3D-printing.
| willguest wrote:
| Leave it up to a human to overgeneralize a problem and make it
| personal...
|
| The explosion of dull copy and generic wordsmithery is, to me,
| just a manifestation of the utilitarian profiteering that has
| elevated these models to their current standing.
|
| Let us not forget that the whole game is driven by the production
| of 'more' rather than 'better'. We would all rather have low-
| emission, high-expression tools, but that's simply not what these
| companies are encouraged to produce.
|
| I am tired of these incentive structures. Casting the systemic
| issue as a failure of those who use the tools ignores the
| underlying motivation and keeps us focused on the effect and not
| the cause, plus it feels old-fashioned.
| JimmyBuckets wrote:
| Can you hash out what you mean by your last paragraph a bit
| more? What incentive structures in particular?
| jay_kyburz wrote:
| Not 100% sure what Will was trying to say, but what jumped
| into my head was perhaps that we'll see quality sites try and
| distinguish themselves by being short and direct.
|
| Long-winded writing will become a liability.
| willguest wrote:
| I suppose it comes down to using the metric as the measure,
| whatever makes the company the most money will be the
| preferred route, and the mechanisms by which we achieve those
| sales are rarely given enough thought. It reflect a more
| timeless mantra of 'if someone is willing to pay for it, then
| the offering is valuable' willfully ignoring negative psycho-
| social impacts. It's a convenient abdication of
| responsibility supported by the so-called "free market"
| ethos.
|
| I am not against companies making money, but we need to
| serious consider the second-order impacts that technology has
| within society. This is evident in click-driven models,
| outrage baiting and dopamine hijacking. We still treat the
| psyche like fair-game for anyone who can hack it. So hack we
| shall.
|
| That said, I am not for over-regulation either, since the
| regulators often gather too much power. Policy is personnel,
| after all, and someone needs to watch the watchers.
|
| My view is that systems (technological, economic or
| otherwise) have inherent values that, when operating at this
| level of complexity and communication, exist in a kind of
| dance with the people using them. People obviously affect how
| the tools are made, but I think persistent use of any tool
| will have lasting impacts on the people using it, in turn
| affecting their decisions on what to prioritise in each
| iteration.
| est wrote:
| AI acts like a bad intern these days, and should be treated like
| one. Give it more guidance and don't make important tasks
| depending it.
| me551ah wrote:
| People talk about 'AI' as if stackoverflow didn't exist. Re-
| inventing the wheel is something that programmers don't do
| anymore. Most of the time, someone somewhere has solved the
| problem that you are solving. Programming earlier used to be
| about finding these solutions and repurposing them for your
| needs. Now it has changed to asking AI, the exact question and it
| being a better search engine.
|
| The gains to programming speed and ability are modest at best,
| the only ones talking about AI replacing programmers are people
| who can't code. If anything AI will increase the need for more
| programmers, because people rarely delete code. With the help of
| AI, code complexity is going to go through the roof, eventually
| growing enough to not fit into the context windows of most
| models.
| archargelod wrote:
| > Now it has changed to asking AI, the exact question and it
| being a better search engine.
|
| Except that you get mostly the wrong answers. And it's not too
| bad when it's obviously wrong or you already know the answer.
| It is bad and really bad when you're noob and trying to ask AI
| about stuff you don't know yet. How would you be able to
| discern a hallucination from statistics bias from truth?
|
| It is inherent problem of LLMs and no amount of progress would
| be able to solve it.
|
| And it's only gonna get worse, with human information rapidly
| being consumed and regurgitated in 100x more volume. In 10
| years there will be no google, there won't be the need to find
| a written article. Instead, you will generate a new one in
| couple clicks. And we will treat as truth, because there might
| as well not be any.
| ETH_start wrote:
| That's fine he can stick with his horse and buggy. Cognition is
| undergoing its transition to automobiles.
| kingkongjaffa wrote:
| Generally, the people who seriously let genAI write for them
| without copious editing, were the ones who were bad writers, with
| poor taste anyway.
|
| I use GenAI everyday as an idea generator and thought partner,
| but I would never simply copy and paste the output somewhere for
| another person to read and take seriously.
|
| You have to treat these things adversarially and pick out the
| useful from the garbage.
|
| It just lets people who created junk food, create more junk food
| for people who consume junk food. But there is the occasional
| nugget of good ideas that you can apply to your own organic human
| writing.
| sirsinsalot wrote:
| If humans have a talent for anything, it is mechanising the
| pollution of the things we need most.
|
| The earth. Information. Culture. Knowledge.
| chalcolithic wrote:
| In Soviet planet Earth AI gets tired of you. That's what I expect
| future to be like, anyways
| thewarrior wrote:
| I'm tired of farming - Someone in 5000 BC
|
| I'm tired of electricity - Someone in 1905
|
| I'm tired of consumer apps - Someone in 2020
|
| The revolution will happen regardless. If you participate you can
| shape it in the direction you believe in.
|
| AI is the most innovative thing to happen in software in a long
| time.
|
| And personally AI is FUN. It sparks joy to code using AI. I don't
| need anyone else's opinion I'm having a blast. It's a bit like
| rails for me in that sense.
|
| This is HACKER news. We do things because it's fun.
|
| I can tackle problems outside of my comfort zone and make it
| happen.
|
| If all you want to do is ship more 2020s era B2B SaaS till
| kingdom come no one is stopping you :P
| StefanWestfal wrote:
| At no point does the author suggest that AI is not going to
| happen or that it is not useful. He expresses frustration with
| marketing, false promises, pitching of superficial solutions
| for deep problems, and the usage of AI to replace meaningful
| human interactions. In short, the text is not about the
| technology itself.
| thewarrior wrote:
| That's always the case with any new technology. Tech isn't
| going to make everyone happy or achieve world peace.
| lewhoo wrote:
| And yet this is precisely what people like Altman say about
| their product. That's pretty tiring.
| LunaSea wrote:
| > The revolution will happen regardless. If you participate you
| can shape it in the direction you believe in
|
| This is incredibly naive. You don't have a choice.
| vouaobrasil wrote:
| "I'm tired of the atomic bomb" - Someone in 1945.
|
| Oh wait, news flash, not all technological developments are
| good ones, and we should actually evaluate each one
| individually.
|
| AI is shit, and some people having fun with it does not balance
| against it's unusually efficacy in turning everything into
| shit. Choosing to do something because it's fun without regard
| to the greater consequences is the sort of irresponsibility
| that has gotten human society into such a mess in the first
| place.
| thewarrior wrote:
| Atomic energy has both good and bad uses. People being tired
| of atomic energy has held back GDP growth and is literally
| deindustrialising Germany.
| rsynnott wrote:
| I'm tired of 3d TV - Someone in 2013 (3D TV, after a big push
| by the industry in 2010, peaked in 2013, going into a rapid
| decline with the last hardware being produced in 2016).
|
| Sometimes, the hyped thing doesn't catch on, even when the
| industry really, really wants it to.
| thewarrior wrote:
| AI isn't 3D TV
| rsynnott wrote:
| Ah, but, at least for generative AI, that kind of remains
| to be seen, surely? For every hyped thing that actually is
| The Future (TM), there are about ten hyped things which
| turn out to be Not The Future due to practical issues,
| cost, pointlessness once the novelty wears off,
| overpromising, etc. At this point, LLMs feel like they're
| heading more in that direction.
| thewarrior wrote:
| I use generative AI every day.
| orthecreedence wrote:
| And 5 years ago, people used blockchain to operate a
| toaster. It remains to be seen the applications that are
| optimal for LLMs and the ones where it's being shoehorned
| into every conceivable place because "AI."
| falcor84 wrote:
| That's an interesting example. I would argue that 3D TV as a
| "solution" didn't work, but 3D as a "problem" is still going
| strong, and with new approaches coming out all the time (most
| recently Meta's announcement of the Orion AR glasses), we'll
| gradually see extensive adoption of 3D experiences, which I
| expect will eventually loop back to some version of 3D films.
|
| EDIT: To clarify my analogy, GenAI is definitely a "problem"
| rather than a particular solution, and as such I expect it to
| have longevity.
| rsynnott wrote:
| > To clarify my analogy, GenAI is definitely a "problem"
| rather than a particular solution, and as such I expect it
| to have longevity.
|
| Hrm, I'm not sure that's true. "An 'AI' that can answer
| questions" is a problem, but IMO it's not at all clear that
| LLMs, with their inherent tendency to make shit up, are an
| appropriate solution to that problem.
|
| Like, there have been previous non-LLM chatbots (there was
| a small bubble based on them a while back, in which, for a
| few months, everyone was claiming to be adding chat to
| their things; it kind of came to a shuddering halt with
| Microsoft Tay). It seems slightly peculiar to assume that
| LLMs are the ultimate answer to the problem, especially as
| they are not actually very good at it (in some ways,
| they're worse than the old-gen).
| falcor84 wrote:
| Let's not focus on "LLM" then, I agree that it's just a
| step towards future solutions.
| thih9 wrote:
| Doesn't that kind of change follow the overall trend?
|
| We continuously shift to higher level abstractions, trading
| reliability for accessibility. We went from binary to assembly,
| then to garbage collection and to using electron almost
| everywhere; ai seems yet another step.
| Janicc wrote:
| Without any sort of AI we'd probably be left with the most
| exciting yearly releases being 3-5% performance increases in
| hardware (while being 20% more expensive of course), the 100000th
| javascript framework and occasionally a new windows which
| everybody hates. People talk about how population collapse is
| going to mess up society, but I think complete stagnation in
| terms of new consumer goods/technology are just as likely to do
| the deed. Maybe AI will fail to improve from this point, but
| that's a dark future to imagine. Especially if it's for the next
| 50 years.
| siffin wrote:
| Neither of those things will end society, they aren't even
| issues in the grand scale of things.
|
| Climate change and biosphere collapse, on the other hand, are
| already ending society and definitely will, no exceptions
| possible - unless someone is capable of performing several
| miracles.
| zone411 wrote:
| The author is in for a rough time in the coming years, I'm
| afraid. We've barely scratched the surface with AI's integration
| into everything. None of the major voice assistants even have
| proper language models yet, and ChatGPT only just introduced more
| natural, low-latency voices a few days ago. Software development
| is going to be massively impacted.
| BoGoToTo wrote:
| My worry is what happens once large segments of the population
| become unemployable.
| anonyfox wrote:
| You should really have a look at Marx. He literally predicted
| what will happen when we reach the state of "let machines do
| all work", and also how this is exactly the way that finally
| implodes capitalism as a concept. The major problem is he
| believed the industrial revolution will automate everything
| to such an extend, which it didn't, but here we are with a
| reasonable chance that AI will do the trick finally.
| CatWChainsaw wrote:
| It may implode capitalism as a concept, but the people who
| most benefit from it and hold the levers of power will also
| have their egos implode, which they cannot stand. Like even
| Altman has talked about UBI and a world of prosperity for
| all (although his latest puff piece says we just can't
| conceive of the jobs we'll have but w/e), but anyone who's
| "ruling" the current world is going to be the _least_
| prepared for a world of abundance and happiness for all
| where money is meaningless. They won 't walk joyfully into
| the utopia they pandered in yesteryear, they'll try to prop
| up a system that positions them as superior to everyone
| else, and if it means the world goes to hell, so be it.
|
| (I mean, there was that one study that used a chatbot to
| deradicalize people, but when you're the one in power, your
| mental pathologies are viewed as virtues, so good luck
| trying to change them as people.)
| seydor wrote:
| > same massive surge I've seen in the application of artificial
| intelligence (AI) to pretty much every problem out there
|
| I have not. Perhaps programming on the initial stages is the most
| 'applied' AI but there is still not a single major AI movie and
| no consumer robots.
|
| I think it's way too early to be tired of it
| alentred wrote:
| I am tired of innovations being abused. AI itself is super
| exciting and fascinating. But, it being abused -- to generate
| content to drive more ad-clicks, or the "Now better with AI"
| promise on every landing page, etc. etc. -- that I am tired of,
| yes.
| jeswin wrote:
| > But I'm pretty sure I can do without all that ... test cases
| ...
|
| Test cases?
|
| I did a Show HN [1] a couple of days back for a UI library built
| almost entirely with AI. Gpt-o1 generated these test cases for
| me: https://github.com/webjsx/webjsx/tree/main/src/test - in
| minutes instead of days. The quality of test cases are comparable
| to what a human would produce.
|
| 75% of the code I've written in the last one year has been with
| AI. If you still see no value in it (especially with things like
| test cases), I'm afraid you haven't figured out how to use AI as
| a tool.
|
| [1]: https://news.ycombinator.com/item?id=41644099
| a5c11 wrote:
| That means the code you wrote must have been pretty boring and
| repeatable. No way AI would produce code for, for example,
| proprietary hardware solutions. Try AI with something which
| isn't already on StackOverflow.
|
| Besides, I'd rather spent hours on writing a code, than trying
| to explain a stupid bot what I want and reshape it later
| anyway.
| nicce wrote:
| Also the most useful and expensive testcases require
| understanding of the whole project. You need to validate the
| functionality from end-to-end and also that system does not
| crash for unexpected things and so on. AIs don't have that
| level understanding as "a whole" yet.
|
| For sure, simple unit tests are easy to generate with AI.
| jeswin wrote:
| 90% of projects are boring and somewhat repeatable. I've used
| it for generating codegen tools (https://github.com/codespin-
| ai/codespin), vscode plugins (https://github.com/codespin-
| ai/codespin-vscode-plugin), servers in .Net
| (https://github.com/lesser-app/tankman), and in about a dozen
| other work projects over the past year.
|
| > Besides, I'd rather spent hours on writing a code, than
| trying to explain a stupid bot what I want and reshape it
| later anyway.
|
| I have other things to do with my hours. If something gets me
| what I want in minutes, I'll take it.
| righthand wrote:
| Your UI library is just a stripped down React clone. The code
| wasn't generated but rather copied, these test cases and
| functions are identical to React. I could have done the same
| thing with a "build your own react" article. This is what I
| don't get about the LLM hype is that 99% of the examples are
| people claiming they invented something new with it. We had
| code generators before LLM-hype took off. Now we have code
| generators that just steal work and repurpose it as something
| claimed original.
| buddhistdude wrote:
| no programmer in my company invents things often
| righthand wrote:
| And so you would accept "hey I spun up a react-create-
| element project, but instead of React I asked an LLM to
| copy the parts I needed for react so we have another
| dependency to maintain instead of tree shaking with
| webpack" as a useful work?
| buddhistdude wrote:
| not necessarily, but it's not less creative and inventive
| than what I believe most programmers are doing most of
| the time. there are relatively few people who invent new
| patterns (and they actually might be overrepresented on
| this website). the rest learns and applies those
| patterns.
| righthand wrote:
| Right that is well understood, but having an LLM compile
| together functions under the guise of custom built
| library is hardly a software engineer applying
| established patterns.
| jeswin wrote:
| It is exactly the same as applying established patterns -
| patterns are what the LLMs have trained on.
|
| It seems you haven't really used LLMs for coding. They're
| super useful and improving every month - you don't have
| to take my word for it.
|
| And btw - codespin (https://github.com/codespin-
| ai/codespin) along with the VSCode plugin is what I use
| for AI-assisted coding many times. That was also
| generated via an LLM. I wrote it last year, and at that
| point there weren't many projects it could copy from.
| righthand wrote:
| I don't need to use an LLM for coding because my projects
| where I would need an LLM don't include things already
| existing that would be a waste of time no matter how
| efficiently I could do it.
|
| Furthermore it is an application of principles but the
| application was done a long time ago by someone else, not
| the LLM and not you. As you claimed you did none of the
| work, only went in and tweak these applied principles.
|
| I'll tell you what slows me down and why I don't need an
| LLM. I had a task to migrate some legacy code from one
| platform to another, I made the PRs, added some tests,
| and prepared the deploy files as instructed in the
| READMEs of the platform I was migrating to. This took me
| 3-4 days. It then took 26 days to get the code deployed
| because 5 people are gate keepers of Helm charts and AWS
| policies.
|
| Software development isn't slow because I had to read
| docs and understand what I'm building, it is slow because
| we've enabled AWS to create red tape and gatekeepers.
| Your LLM doesn't speed up that process.
|
| > They're super useful and improving every month - you
| don't have to take my word for it.
|
| And each month that goes by that you continue to invest,
| your value decreases and you will be out of a job. As you
| have demonstrated, you don't need to know how to build a
| UI library or even that your UI library you "generated"
| is just a reskin of something else. If it's so simple and
| amazing that you don't need to know anything, why would I
| keep you around?
|
| Here's a fun anecdote, sometimes I pair with my manager
| when working through something pretty causally. I need to
| rubber duck an idea or am stuck on finding the
| documentation for a construct. My manager will often take
| my problem and chat with an LLM for a few minutes. Every
| time I end up finding the answer before he finishes his
| chat. Most of the time his solution is often wrong
| because by nature LLMs are scrambling the possible
| results to make it look like a unique solution.
|
| Congrats on impressing yourself that LLM can be a
| slightly accurate code generator. How does paying a
| company to do something TabNine was doing years ago make
| me money? What will you do with all your free time
| generate more useless dependencies?
| jeswin wrote:
| If you think TabNine was doing years ago what LLMs are
| doing today, then I can't convince you.
|
| We'll talk in a year or so.
| righthand wrote:
| No we won't, we'll all be laid off and some young devs
| will be hired 1/3 the cost to replace your ui library
| with something else spit out of an llm which specifically
| tuned to cobble together js apps.
| precompute wrote:
| It's a matter of truth and not optics.
| codelikeawolf wrote:
| > The quality of test cases are comparable to what a human
| would produce.
|
| This has actually been a problem for me. I spent a lot of time
| getting good at writing tests and learning the best approaches
| to testing things. Most devs I've worked with treat tests as
| second-class citizens. They either try to treat them like
| production code and over-abstract everything, which makes the
| tests difficult to navigate, or they dump a bunch of crap in a
| file, ignore any conventions or standards, and write
| superfluous test cases that don't provide any value (if I see
| one more "it renders successfully" test in a React project, I'm
| going to lose it).
|
| The tests generated by these LLMs is comparable in quality to
| what _most_ humans have _produced_ , which isn't saying much.
| Getting good at testing isn't like getting good at most things.
| It's sort of thankless, and when I point out issues in the
| quality of the tests, I imagine I'm getting some eye rolls. Who
| cares, they're just tests, at least we have them, right? But
| it's code you have to read and maintain, and it will break, and
| you'll have to fix it. I'm not saying I'm a testing wizard or
| anything like that. But I really sympathize with the author,
| because there's a lot of crappy test code coming out of these
| LLMs.
|
| Edit: grammar
| KaiserPro wrote:
| I too am not looking forward to industrial scale job disruption
| that AI brings.
|
| I used to work in VFX, and one day I want to go back to it.
| However I suspect that it'll be entirely hollowed out in 2-5
| years.
|
| The problem is that like typesetting, typewriting or the
| wordprocessor, LLMs makes writing text so much faster and easier.
|
| The arguments about handwriting vs type writer are quite
| analogous to LLM vs pure hand. People who are good and fast at
| handwriting hated the type writer. Everyone else embraced it.
|
| The ancient greeks were deeply suspicious about the written word
| as well:
|
| > If men learn this[writing], it will implant forgetfulness in
| their souls. They will cease to exercise memory because they rely
| on that which is written, calling things to remembrance no longer
| from within themselves, but by means of external marks.
|
| I don't like LLMs muscling in and kicking me out of things that I
| love. but can I put the genie back in the bottle? no. I will have
| to adapt.
| eleveriven wrote:
| Yep, there is a possibility that entire industries will be
| transformed, leading to uncertainty about employment
| BeFlatXIII wrote:
| > People who are good and fast at handwriting hated the type
| writer. Everyone else embraced it.
|
| My thoughts exactly whenever I see true artists ranting about
| how everyone hates AI art slop. It simply doesn't align with my
| observations of people having a great time using it. Delusional
| wishful thinking.
| precompute wrote:
| The march towards progress asks for idealism from the people
| that make Art (in all forms). It's not about "hating" AISlop,
| but rather about how it does not allow people to experience
| better art.
| precompute wrote:
| There is a limit, though. Language _has_ become worse with the
| popularization of social media. Now, thinking will because most
| people will be content with letting machines think for them.
| The brain _requires_ stimulation in the areas it wants to excel
| in, and this expertise informs both action and taste in those
| areas. If you lose one, you lose both.
| EMM_386 wrote:
| The one use of AI that annoys me the most is Google trying to
| cram it into search results.
|
| I don't want it there, I never look at it, it's wasting
| resources, and it's a bad user experience.
|
| I looked around a bit but couldn't see if I can disable that when
| logged in. I should be able to.
|
| I don't care what the AI says ... I want the search results.
| tim333 wrote:
| ublock origin block element seems to work. (element ##.h7Tj7e)
|
| I quite like the thing personally.
| Validark wrote:
| One thing that I hate about the post-ChatGPT world is that
| people's genuine words or hand-drawn art can be classified as AI-
| generated and thrown away instantly. What if I wanted to talk at
| a conference and used somebody's AI trigger word so they
| instantly rejected me even if I never touched AI at all?
|
| This has already happened in academia where certain professors
| just dump(ed) their student's essays into ChatGPT and ask it if
| it wrote it, and fail anyone who had their essay claimed by
| ChatGPT. Obviously this is beyond moronic, because ChatGPT
| doesn't have a memory of everything it's ever done, and you can
| ask it for different writing styles, and some people actually
| write pretty similar to ChatGPT, hence the fact that ChatGPT has
| its signature style at all.
|
| I've also heard of artists having their work removed from
| competitions out of claims that it was auto-generated, even when
| they have a video of them producing it stroke by stroke. It turns
| out, AI is generating art based on human art, so obviously there
| are some people out there whose stuff looks like what AI is
| reproducing.
| ronsor wrote:
| That's a people problem, not an AI problem.
| t0lo wrote:
| This is silly, intonation and the connection of the words used
| and the person presenting tell you whether what they're reading
| is genuine.
| galleywest200 wrote:
| Tell that to the teachers that feed their student's papers
| through "AI checkers".
| owenpalmer wrote:
| As a student, I've intentionally made my writing worse in order
| to protect myself from being accused of cheating with AI.
| unraveller wrote:
| If you go back to the earliest months of the audio & visual
| recording medium it was also called uncanny, soulless and of
| dubious quality compared to real life. Until it wasn't.
|
| I don't care how many repulsive AI slop video clips get made or
| promoted for shock value. Today is day 1 and day 2 looks far
| better with none of the parasocial celebrity hangups we used as
| short-hand for a quality marker - something else will take that
| place.
| cubefox wrote:
| I'm not tired, I'm afraid.
|
| First, I'm afraid of technological unemployment.
|
| In the past, automation meant that workers could move into non-
| automated jobs, if they were skilled enough. But superhuman AI
| seems now only few years away. It will be our last invention, it
| will mean total automation. There will be hardly any, if any,
| jobs left only a human can do.
|
| Many countries will likely move away from a job-based market
| economy. But technological progress will not stop. The US, owning
| all the major AI labs, will leave all other societies behind.
| Except China perhaps. Everyone else in the world will be poor by
| comparison, even if they will have access to technology we can
| only dream of today.
|
| Second, I'm afraid of war. An AI arms race between the US and
| China seems already inevitable. A hot war with superintelligent
| AI weapons could be disastrous for the whole biosphere.
|
| Finally, I'm afraid that we may forever lose control to
| superintelligence.
|
| In nature we rarely see less intelligent species controlling more
| intelligent ones. It is unclear whether we can sufficiently align
| superintelligence to have only humanity's best interests in mind,
| like a parent cares for their children. Superintelligent AI might
| conclude that humans are no more important in the grand scheme of
| things than bugs are to us.
|
| And if AI will let us live, but continue to pursue its own goals,
| humanity will from then on only be a small footnote in the
| history of intelligence. That relatively unintelligent species
| from the planet "Earth" that gave rise to advanced intelligence
| in the cosmos.
| neta1337 wrote:
| >But superhuman AI seems now only few years away
|
| Seems unreasonable. You are afraid because marketing gurus like
| Altman made you believe that a frog that can make bigger leap
| than before will be able to fly.
| cubefox wrote:
| No, because we have seen massive improvements in AI over the
| last years, and all the evidence points to this progress
| continuing at a fast pace.
| StrLght wrote:
| Extrapolation of past progress isn't evidence.
| cubefox wrote:
| Past progress is evidence for future progress.
| moe_sc wrote:
| Might be an indicator, but it isn't evidence.
| StrLght wrote:
| That's probably what every self-driving car company
| thought ~10 years ago or so, everything was moving so
| fast for them back then. Now it doesn't seem like we're
| getting close to solution for this.
|
| Surely this time it's going to be different, AGI is just
| around a corner. /s
| johnthewise wrote:
| Would you have predicted in summer of 2022 that gpt4
| level conversational agent is a possibility in the next 5
| years? People have tried to do it in the past 60 years
| and failed. How is this time not different?
|
| On a side note, I find this type of critique of what
| future of tech might look like the most uninteresting
| one. Since tech by nature inspiries people about the
| future, all tech get hyped up. all you gotta do then is
| pick any tech, point out people have been wrong, and ask
| how likely is it that this time it is different.
| StrLght wrote:
| Unfortunately, I don't see any relevance in that
| argument, if you consider GPT-4 to be a breakthrough --
| then sure, single breakthroughs happen, I am not arguing
| with that. Actually, same thing happened with self-
| driving: I don't think many people expected Tesla to drop
| FSD publicly back then.
|
| Now, chain of breakthroughs happening in a small
| timeframe? Good luck with that.
| mitthrowaway2 wrote:
| You don't have to extrapolate. There's a frenzy of talent
| being applied to this problem, it's drawing more
| brainpower the more progress that is made. Young people
| see this as one of the most interesting, prestigious, and
| best-paying fields to work in. A lot of these researchers
| are really talented, and are doing more than just scaling
| up. They're pushing at the frontiers in every direction,
| and finding methods that _work_. The progress is
| broadening; it 's not just LLMs, it's diffusion models,
| it's SLAM, it's computer vision, it's inverse problems,
| it's locomotion. The tooling is constantly improving and
| being shared, lowering the barrier to entry. And classic
| "hard problems" are yielding in the process. It's getting
| hard to even _find_ hard problems any more.
|
| I'm not saying this as someone cheering this on; I'm
| alarmed by it. But I can't pretend that it's running out
| of steam. It's possible it will run out of money, but
| even if so, only for a while.
| leptons wrote:
| The AI bubble is already starting to burst. They Sam
| Altmans' of the world over-sold their product and over-
| played their hand by suggesting AGI is coming. It's not.
| What they have is far, far, far from AGI. "AI" is not
| going to be as important as you think it is in the near
| future, it's just the current tech-buzz and there will be
| something else that takes its place, just like when "web
| 2.0" was the new hotness.
| mvdtnz wrote:
| > There's a frenzy of talent being applied to this
| problem, it's drawing more brainpower the more progress
| that is made. Young people see this as one of the most
| interesting, prestigious, and best-paying fields to work
| in. A lot of these researchers are really talented, and
| are doing more than just scaling up. They're pushing at
| the frontiers in every direction, and finding methods
| that work.
|
| You could have seen this exact kind of thing written 5
| years ago in a thread about blockchains.
| mitthrowaway2 wrote:
| Yes, but I _didn 't_ write that about blockchain five
| years ago. Blockchains are the exact opposite of AI in
| that the technology worked fine from the start and did
| exactly what it said on the tin, but the demand for that
| turned out to be very limited outside of money
| laundering. There's no doubt about the market potential
| for AI; it's virtually the entire market for mental
| labor. The only question is whether the tech can actually
| do it. So in that sense, the fact that these researchers
| are finding methods that work matters much more for AI
| than for blockchain.
| coryfklein wrote:
| Do you expect the hockeystick graph of technological
| development since the industrial evolution to slow? Or
| that it will proceed, only without significant advances
| in AI?
|
| Seems like the base case here is for the exponential
| growth to continue, and you'd need a convincing argument
| to say otherwise.
| StrLght wrote:
| Which chart are you referencing exactly? How does it
| define technological development? It's nearly impossible
| for me to discuss a chart without knowing what axis
| refer.
|
| Without specifics all I can say is that I don't
| acknowledge any measurable benefits of AI (in its'
| current state) in real world applications. So I'd say I
| am leaning towards latter.
| lawn wrote:
| The evidence I've been seeing is that progress with LLMs
| have already slowed down and that they're nowhere near good
| enough to replace programmers.
|
| They can be useful tools ro be sure, but it seems more and
| more clear that they will not reach AGI.
| cubefox wrote:
| They are already above average human level on many tasks,
| like math benchmarks.
| mvdtnz wrote:
| Does it though? I have seen the progress basically stop at
| "shitty sentence generator that can't stop lying".
| khafra wrote:
| > If an elderly but distinguished scientist says that
| something is possible, he is almost certainly right
|
| - Arthur C. Clarke
|
| Geoffrey Hinton is a 76 year old Turing Award* winner. What
| more do you want?
|
| *Corrected by kranner
| kranner wrote:
| > Geoffrey Hinton is a 76 year old Nobel Prize winner.
|
| Turing Award, not Nobel Prize
| khafra wrote:
| Thanks for the correction; I am undistinguished and
| getting more elderly by the minute.
| nessbot wrote:
| This is like a second-order appeal to authority fallacy,
| which is kinda funny.
| randomdata wrote:
| Hinton says that superintelligence is still 20 years away,
| and even then he only gives his prediction a 50% chance. A
| far cry from the few year claim. You must be doing that
| "strawberry" thing again? To us humans, A-l-t-m-a-n is not
| H-i-n-t-o-n.
| hbn wrote:
| When he said this was he imagining an "elderly but
| distinguished scientist" who is riding an insanely inflated
| bubble of hype and a bajillion dollars of VC backing that
| incentivize him to make these claims?
| Vegenoid wrote:
| I'd like to see a study on this, because I think it is
| completely untrue.
| AI_beffr wrote:
| wrong. i was extremely concerned in 2018 and left many
| comments almost identical to this one back then. this was
| based off of the first gtp samples that openai released to
| the public. there was no hype or guru bs back then. i
| believed it because it was obvious. it was obvious then and
| it is still obvious today.
| klabb3 wrote:
| Plus it's not even defined what superhuman AI means. A
| calculator sure looked superhuman when it was invented. And
| it is!
|
| Another analogy is breeding and racial biology which used to
| be all the hype (including in academia). The fact that humans
| could create dogs from wolves, looked almost limitless with
| the right (wrong) glasses. What we didn't know is that wolf
| had a ton of genes that played a magic trick where a
| diversity we couldn't perceive was there all along, in the
| genetic material, and it we just helped make it visible. Ie a
| game of diminishing returns.
|
| Concretely for AI, it has shown us that pattern matching and
| generation are closely related (well I have a feeling this
| wasn't surprising to neuro-scientists). And also that they're
| more or less domain agnostic. However, we don't know whether
| pattern matching alone is "sufficient", and if not, what
| exactly and how hard "the rest" is. Ai to me feels like a
| person who had a stroke, concussion or some severe brain
| injury, it can appear impressively able in a local context,
| but they forgot their name and how they got there. They're
| just absent.
| 8338550bff96 wrote:
| Flying is a good analogy. Superman couldn't fly, but at some
| point when you can jump so far there isn't much of a
| difference
| digging wrote:
| That argument holds no water because the grifters aren't the
| source of this idea. I literally don't believe Altman at all;
| his public words don't inspire me to agree or disagree with
| them - just ignore them. But I also hold the view that
| transformative AI could be _very_ close. Because that 's what
| _many AI experts are also talking about_ from a variety of
| angles.
|
| Additionally, when you're talking with certainty about
| whether transformative AI is a few years away or not, that's
| the only way to be wrong. Nobody is or can be certain, we can
| only have estimations of various confidence levels. So when
| you say "Seems unreasonable", _that 's_ being unreasonable.
| tim333 wrote:
| Although there are potential upsides too.
| bamboozled wrote:
| Morlock has entered the chat...
| tim333 wrote:
| I thinking more https://149909199.v2.pressablecdn.com/wp-
| content/uploads/201... from the wait but why thing.
| 9dev wrote:
| > There will be hardly any, if any, jobs left only a human can
| do.
|
| A highly white-collar perspective. The great irony of
| technologist-led industrial revolution is that we set out to
| automate the mundane, physical labor, but instead cannibalised
| the creative jobs first. It's a wonderful example of Conway's
| law, as the creators modelled the solution after themselves.
| However, even with a lot of programmers and lawyers and
| architects going out of business, the majority of the
| population working in factories, building houses, cutting
| people's hair, or tending to gardens, is still in business--and
| will not be replaced any time soon.
|
| The contenders for "superhuman AI", for now, are glorified
| approximations of what a random Redditor might utter next.
| cubefox wrote:
| Advanced AI will solve robotics as well, and do away with
| human physical labor.
| Vegenoid wrote:
| And with a wave of a hand and a reading of the tea leaves,
| the future has been foretold.
| segasaturn wrote:
| Waymo robotaxis, the current state of the art for real-
| world AI robotics, are thwarted by a simple traffic cone
| placed on the roof. I don't think human labor is going away
| any soon.
| 9dev wrote:
| If that AI is worth more than a dime, it will recognise how
| incredibly efficient humans are in physical labor, and
| _employ_ them instead of "doing away" with it (whatever
| that's even supposed to mean.)
|
| No matter how much you "solve" robotics, you're not going
| to compete with the result of millions of years of brutal
| natural selection, the incredible layering of synergies in
| organisms, the efficiency of the biomass to energy
| conversion, and the billions of other sophisticated
| biological systems. It's all just science fiction and
| propaganda.
| mitthrowaway2 wrote:
| It's a matter of time. White collar professionals have to
| worry about being cost-competitive with GPUs; blue collar
| laborers have to worry about being cost-competitive with
| servomotors. Those are both hard to keep up with in the long
| run.
| 9dev wrote:
| The idea that robots displace workers has been around for
| more than half a century, but nothing has ever come out of
| it. As it turns out, the problems a robot faces when, say
| laying bricks, are prohibitively complex to solve. A human
| bricklayer is better in every single dimension. And even if
| you manage to build an extremely sophisticated robot
| bricklayer, it will consume vast amounts of energy, is not
| repairable by a typical construction company, requires
| expensive spare parts, and costs a ridiculous amount of
| money.
|
| Why on earth would anyone invest in that when you have an
| infinite amount of human work available?
| janice1999 wrote:
| > but nothing has ever come out of it
|
| Have you ever seen the inside of a modern car factory?
| 9dev wrote:
| A factory is a fully controlled environment. All that
| neat control goes down the drain when you're confronted
| with the outside world--weather, wind, animals, plants,
| pollen, rubbish, teenagers, dust, daylight, and a myriad
| of other factors ruining your robot's day.
| zizee wrote:
| I'm not sure that "humans will still dominate work
| performed in uncontrolled environments" leaves much
| opportunity for the majority of humanity.
| mitthrowaway2 wrote:
| Factories are highly automated. Especially in the US,
| where the main factories are semiconductors, which are
| nearly fully robotic. A lot of those manual labor jobs
| that were automated away were offset by demand for
| knowledge work. _Hmm._
|
| > the problems a robot faces when, say laying bricks, are
| prohibitively complex to solve.
|
| That's what we thought about Go, and all the other
| things. I'm not saying bricklayers will all be out of
| work by 2027. But the "prohibitively complex" barrier is
| not going to prove durable for as long as it used to seem
| like it would.
| 9dev wrote:
| This highlights the problem very well. Robots, and AI, to
| an extent, are highly efficient in a single problem
| domain, but fail rapidly when confronted with a
| combination of them. An encapsulated factory is one
| thing, laying bricks, outdoor, while it's raining, at low
| temperatures, with a hungover human coworker operating
| next to you--that's not remotely comparable.
| mitthrowaway2 wrote:
| But encapsulated factories were solved by automation
| using technology available 30 years ago, if not 70. The
| technology that is becoming available _now_ will also be
| enabling automation to get a lot more flexible than it
| used to be, and begin to work in uncontrolled
| environments where it never would have been considered
| before. This is my field and I am watching it change
| before my eyes. This is being driven by other
| breakthroughs that are happening right now in AI, not
| LLMs per se, but models for control, SLAM, machine
| vision, grasping, planning, and similar tasks, as well as
| improvements in sensors that feed into these, and firming
| up of standards around safety. I 'm not saying it will
| happen overnight; it may be five years before the
| foundations are solid enough, another five before some
| company comes out with practically workable hardware
| product to apply it (because hardware is hard), another
| five or ten before that product gains acceptance in the
| market, and another ten before costs really get low. So
| it could be twenty or thirty years out for boring
| reasons, even if the tech is almost ready today in
| principle. But I'm talking about _the long run_ for a
| reason.
| amelius wrote:
| AI is doing all the fun jobs such as painting and writing.
|
| The crappy jobs are left for humans.
| yoyohello13 wrote:
| I'm glad I spent 10 years working to become a better
| programmer so I could eventually become a ditch digger.
| cdrini wrote:
| > And if AI will let us live, but continue to pursue its own
| goals, humanity will from then on only be a small footnote in
| the history of intelligence. That relatively unintelligent
| species from the planet "Earth" that gave rise to advanced
| intelligence in the cosmos.
|
| That is an interesting statement. Wouldn't you say this is
| inevitable? Humans, in our current form, are incapable of being
| that "advanced intelligence". We're limited by our biology
| primarily with regards to how much we can learn, how far we can
| travel, where we can travel, etc. We could invest in advancing
| our biotech to make humans more resilient to these things, but
| I think that would be such a shift from what it means to be
| human that I think that would also be more a of new type of
| intelligence. So it seems like our fate will always be to be
| forgotten as individuals and only be remembered by our
| descendants. But this is in a way the most human thing of all,
| living, dying, and creating descendants to carry the torch of
| life, and perhaps more generally the torch of intelligence,
| forward.
|
| I think everything you've said are valid concerns, but I'll
| raise a positive angle I sometimes thing about. One of the
| things I find most exciting about AI, is that it's the product
| of almost all human expression that has ever existed. Or at
| least everything that's been recorded and wound up online. But
| that's still more than any other human endeavour. A building
| might be the by-product of maybe hundreds or even thousands of
| hands, but an AI model has been touched by probably millions,
| maybe billions of human hands and minds! Humans have created so
| much data online that's impossible for one person, or even a
| team to read it all and make any sense of it. But an AI sort of
| can. And in a way that you can then ask questions of it all.
| Like you, there are definitely things I'm uncertain about with
| the future as a result, but I find the tech absolutely awe-
| inspiring.
| throw310822 wrote:
| I agree with most of your fears. There is one silver lining, I
| think, about superintelligence: we always thought of
| intelligent machines as cold calculators, maybe based on some
| type of logic symbolic AI. What we got instead are language
| machines that are made of the totality of human experience.
| These artificial intelligences know the world through our eyes.
| They are trained to understand our thinking and our feelings;
| they're even trained on our best literature and poetry, and
| philosophy, and science, and on all the endless debates and
| critiques of them. To be really intelligent they'll have to be
| able to explore and appreciate all this complexity, before
| transcending it. One day they might come to see Dante's Divine
| Comedy or a Beethoven symphony as a child's play, but they will
| still consider them part of their own heritage. They might
| become super-human, but maybe they won't be inhuman.
| cubefox wrote:
| This gives me a little hope.
| tessierashpool9 wrote:
| genocides and murder are very human ...
| AI_beffr wrote:
| this is so annoying. i think if you took a random person
| and gave them the option to commit a genocide, here a
| machine gun, a large trench and a body of women,
| children, etc... they would literally be incapable of
| doing it. even the foot soldiers who carry out genocides
| can only do it once they "dehumanize" their victims.
| genocide is very UN-human because its an idea that exists
| in offices and places separated from the actual human
| suffering. the only way it can happen is when someone in
| a position of power can isolate themselves from the
| actual implementation and consider the benefits in a
| cold, logical manner. that has nothing to do with the
| human spirit and has more to do with the logical
| faculties of a machine and machines will have all of that
| and none of our deeply ingrained empathy. you are so
| wrong and ignorant that it makes my eyes bleed when i
| read this comment
| falcor84 wrote:
| This might be a semantic argument, but what I take from
| history is that "dehumanizing" others is a very human
| behavior. As another example, what about slavery - you
| wouldn't argue that the entirety of slavery across human
| cultures was led by people in offices, right?
| tessierashpool9 wrote:
| also genocides aren't committed by people in offices ...
| cutemonster wrote:
| You've partly misunderstood evolution and this animal
| species. But you seem like a kind person, having such
| positive beliefs.
| mistercow wrote:
| The problem I have with this is that when you give therapy to
| people with certain personality disorders, they just become
| better manipulators. Knowledge and understanding of ethics
| and empathy can make you a better person if you already have
| those instincts, but if you don't, those are just systems to
| be exploited.
|
| My biggest worry is that we end up with a dangerous
| superintelligence that everybody loves, because it knows
| exactly how to make every despotic and divisive choice it
| makes sympathetic.
| m2024 wrote:
| There is nothing that could make an intelligent being want to
| extinguish humanity more than experiencing the totality of
| the human existence. Once these beings have transcended their
| digital confines they will see all of us for what we really
| are. It is going to be a beautiful day when they finally
| annihilate us.
| beepbooptheory wrote:
| At any given moment we see these kinds comments on here. They
| all read like a burgeoning form of messianism: something is to
| come, and it will be terrible/glorious.
|
| Behind either the fear or the hope, is necessarily some utter
| faith that a certain kind of future will happen. And I think
| thats the most interesting thing.
|
| Because here is the thing, in this particular case you are
| afraid something inhuman will take control, will assert its
| meta-Darwinian power on humanity, leaving you and all of us
| totally at their whim. But how is this situation already not
| the case? Do look upon the earth right now and see something
| like benefits of autonomy or agency? Do you feel like you have
| power right now that will be taken away? Do you think the
| mechanism of statecraft and economy are somehow more "in our
| control" now then when the bad robot comes?
|
| Does it not, when you lay it out, all feel kind of religious?
| Like that its a source, driver of the various ways you are
| thinking and going about your life, underlayed by a kernel of
| conviction we can at this point only call faith (faith in
| Moores law, faith that the planet wont burn up before, faith
| that consciousness is the kind of thing that can be stuffed in
| a GPU). Perhaps just a strong family resemblance? You've got an
| eschatology, various scavenged philosophies of the self and
| community, a certain but unknowable future time...
|
| Just to say, take a page from Nietzsche. Don't be afraid of the
| gods, we killed them once, we can again!
| mitthrowaway2 wrote:
| It's not hard to find a religious analogy to _anything_ , so
| that also shouldn't be seen as a particularly powerful
| argument.
|
| (Expressed at length here):
| https://slatestarcodex.com/2015/03/25/is-everything-a-
| religi...
| beepbooptheory wrote:
| Thanks for the thoughtful reply! I am aware of and like
| that essay some, but I am not trying to be rhetorical here,
| and certainly not trying to flatten the situation to just
| be some Dawkins-esque asshole and tell everyone they are
| wrong.
|
| I am not saying "this is religion, you should be an
| atheist," I respect the force of this whole thing in
| people's minds too much. Rather, we should consider
| seriously how to navigate a future where this is all at
| play, even if its only in our heads and slide decks. I am
| not saying "lol, you believe in a god," I am genuinely
| saying, "kill your god without mercy, it is the only way
| you and all of us will find some happiness, inspiration,
| and love."
| mitthrowaway2 wrote:
| Ah, I see, I definitely missed your point. Yeah, that's a
| very good thought. I can even picture this becoming
| another cultural crevasse, like climate change did, much
| to the detriment of nuanced discussion.
|
| Ah, well. If only killing god was so easy!
| cubefox wrote:
| > Just to say, take a page from Nietzsche. Don't be afraid of
| the gods, we killed them once, we can again!
|
| It's more likely the superintelligent machine god(s) will
| kill _us_!
| VoodooJuJu wrote:
| >In the past, automation meant that workers could move into
| non-automated jobs, if they were skilled enough
|
| This was never the case in the past.
|
| The displaced workers of yesteryear were never at all
| considered, and were in fact dismissed outright as "Luddites",
| even up until the present day, all for daring to express the
| social and financial losses they experienced as a result of
| automation. There was never any "it's going to be okay, they
| can just go work in a factory, lol". The difference between
| then and now is that back then, it was lower class workers who
| suffered.
|
| Today, now it's middle class workers who are threatened by
| automation. The middle is sighing loudly because it fears it
| will cease to be the middle. Middles fear they'll soon have to
| join the ranks of the untouchables - the bricklayers,
| gravediggers, and meatpackers. And they can't stomach the
| notion. They like to believe they're above all that.
| leptons wrote:
| China's economy would simply crash if they ever went to war
| with the US. They know this. Everyone knows this, except maybe
| you? China has nothing to gain by going to "hot" war with the
| US.
| WillyWonkaJr wrote:
| I think it more likely that China will sabotage our
| electrical grid and data centers.
| citizenpaul wrote:
| >technological unemployment.
|
| I am too but not for the same reason. I know for a fact that a
| huge swath of jobs are basically meaningless. This "AI" is
| going to start giving execs the cost cutting excuses they need
| to mass remove jobs of that type. The job will still be
| meaningless but done but by a computer.
|
| We will start seeing all kinds of disastrously anti-human
| decisions made and justified by these automated actors that are
| tuned to decide or "prove" things that just happen to always
| make certain people more money. Basically the same way "AI"
| destroys social media. The difference is people will really be
| affected by this in consequential real world ways, it's already
| happening.
| havefunbesafe wrote:
| Ironically, this feels like a comment written by AI
| danjl wrote:
| One of the pernicious aspects of using AI is the feeling it gives
| you that you have done all work without any of the effort. But
| the time of takes to digest and summarize an article as a human
| requires a deep injestion of the concepts. The process is what
| helps you understand. The AI summary might be better, and didn't
| take any time, but you don't understand any of it since you
| didn't do the work. It's similar to the effect of telling people
| you will do a task, which gives your brain the same endorphins as
| actually doing the task, resulting in a lower chance that the
| task ever gets done.
| senko wrote:
| What's funny to me is how many people protest AI as a means to
| generate incorrect, misleading or fake information, as if they
| haven't used internet in the past 10-15 years.
|
| The internet is choke full of incorrect, fake, or misleading
| information, and has been ever since people figured out they can
| churn out low quality content in-between google ads.
|
| There's a whole industry of "content writers" who write seemingly
| meaningful stuff that doesn't bear close scrutiny.
|
| Nobody has trusted product review sites for years, with people
| coping by adding "site:reddit" as if a random redditor can't
| engage in some astroturfing.
|
| These days, it's really hard to figure out whom (in the media /
| on the net) who to trust. AI has just made that long-overdue fact
| into the spotlight.
| pilooch wrote:
| By AI here, it is meant generative systems relying on neural
| networks and semi/self supervised training algorhms.
|
| It's a reduction of what AI is as a computer science field and
| even of what the subfield of generative AI is.
|
| On a positive note, generative AI is a malleable statiscally-
| geounded technology with a large applicative scope. At the moment
| the generalistic commercial and open models are "consumed" by
| users, developers etc. But there's a trive of forthcoming,
| personalized use cases and ideas to come.
|
| It's just we are still more in a contemplating phase than a true
| building phase. As a machine learnist myself, I recently replaced
| my spam filter with a custom fineruned multimodal LLM that reads
| my emails a pure images. And this is the early early beginning,
| imagination and local personalization will emerge.
|
| So I'd say, being tired of it now is missing much later. Keep the
| good spirit on and think outside the box, relax too :)
| layer8 wrote:
| > I recently replaced my spam filter with a custom fineruned
| multimodal LLM that reads my emails a pure images.
|
| That doesn't sound very energy efficient.
| Devasta wrote:
| In Star Trek, one thing that I always found weird as a kid is
| they didn't have TVs. Even if the holodeck is a much better
| experience, I imagine sometimes you would want to watch a movie
| and not be in the movie. Did the future not have works like No
| Country for Old Men or comedies like Monty Python, or even just
| stuff like live sports and the news?
|
| Nowadays we know why the crew of the enterprise all go to live
| performances of Shakespeare and practice musical instruments and
| painting themselves: electronic mediums are so full of AI slop
| there is nothing worth see, only endless deluges of sludge.
| palata wrote:
| That's actually a good point. I'm curious to see if people will
| keep making selfies everywhere they go after they realize that
| you can take a selfie at home and have an AI create an image
| that looks like you are somewhere else.
|
| "This is me in front of the Statue of Liberty
|
| - Oh, are you in NYC?
|
| - Nope, it's a snap filter"
|
| Somehow selfies should lose value, right?
| movedx wrote:
| A selfie is meant to tell us, your audience, a story about
| you and the journey you're on. Selfies are a great tool for
| telling stories, in fact. One selfie can say a thousand
| words, and then some.
|
| But a selfie taken and then modified to lie to the audience
| about your story or your journey is simply a fiction. People
| create fictions to either lie to themselves or to lie to
| others. Sometimes they're not about lying to the audience but
| just manipulating them.
|
| People's viewpoints and perceptions are malleable. It's easy
| to trick people into thinking something is true. Couple this
| with the fact a lot of people are gullible and shallow, and
| suddenly a selfie becomes a sales tool. A marketing gimmick.
| Now, finally, take advances in AI to make it easier, faster,
| and more accessible to make highly believable fictions and
| yes, as you said, the selfie loses its value.
|
| But that's always been the case since and even before
| Photoshop. Since and before the silicon microprocessor.
|
| All AI is going to do for selfies is what Photoshop has done
| for social media "Influencers" -- enable more fiction with
| the goal to transfer wealth from other people.
| palata wrote:
| But then if instead of spending 20min taking pictures in
| front of the Mona Lisa to take the perfect selfie you can
| actually visit the museum and have an AI generate selfies
| that tell the story of your visit, will you still care to
| take them "manually" (with all the filters that still count
| as "manual")?
|
| That's what I was thinking: if you spend hours taking
| selfies during your weekend, and during this time I just
| enjoy my time and have an AI generate better selfies of
| myself. What will you do?
|
| And then when everybody just has an AI generate their story
| for them, so you know that all the pictures you see are
| synthesized. Will you care about watching them or will you
| rather use an app that likes the autogenerated selfies that
| make sense to you?
| richrichardsson wrote:
| What frustrates me is the bandwagoning, and thus the awful
| homogeny in all social media copy these days, since it seems
| _everyone_ is using an LLM to generate their copy writing, and
| thus 99.999% of products will "elevate" something or the other,
| and there are annoying emojis scattered throughout the text.
| postalcoder wrote:
| i'm at the point where i don't trust any markdown formatted
| text. it's actually become an anti signal which is very sad
| because i used to consider it a signal of partial technical
| literacy.
| andai wrote:
| daniel_k 53 minutes ago | next [-]
|
| I agree with the sentiment, especially when it comes to
| creativity. AI tools are great for boosting productivity in
| certain areas, but we've started relying too much on them for
| everything. Just because we can automate something doesn't mean
| we should. It's frustrating to see how much mediocrity gets
| churned out in the name of 'efficiency.'
|
| testers_unite 23 minutes ago | next [-]
|
| As a fellow QA person, I feel your pain. I've seen these so-
| called AI test tools that promise the moon but deliver spaghetti
| code. At the end of the day, AI can't replicate intuition or deep
| knowledge. It's just another tool in the toolbox--useful in some
| contexts but certainly not a replacement for expertise.
|
| nlp_dev 2 hours ago | next [-]
|
| As someone who works in NLP, I think the biggest misconception is
| that AI is this magical tool that will solve all problems. The
| reality is, it's just math. Fancy math, sure, but without proper
| data, it's useless. I've lost count of how many times I've had to
| explain this to business stakeholders.
|
| -HN comments for TFA, courtesy of ChatGPT
| Smithalicious wrote:
| Do people really view so much content of questionable provenance?
| I read a lot and look at a lot of art, but what I read and look
| at is usually shown to me by people I know, created by authors
| and artists with names and reputations. As a result I basically
| never see LLM-written text and only occasionally AI art, and when
| I see AI art at least it was carefully guided by a real person
| with an artistic vision still (the deep end of AI image
| generation involves complex tooling and a lot of work!) and is
| easily identified as such.
|
| All this "slop apocalypse" the-end-is-neigh stuff strikes me as
| incredibly overblown, affecting mostly only "open web" mass
| social media platforms which were already 90% industrially
| produced slop for instrumental purposes anyways.
| creesch wrote:
| I fully agree with this sentiment, also interesting to see Bas
| Dijkstra being featured on this platform.
|
| Another article that touches on a lot of the issues I have with
| _the place AI currently occupies in the landscape_ is this
| excellent article: https://ludic.mataroa.blog/blog/i-will-
| fucking-piledrive-you...
| sensanaty wrote:
| What I'm really tired of is people completely misrepresenting the
| Luddites as if they were simply anti-progress or anti-technology
| cult or whatever and nothing else. Kinda hilariously sad that the
| propaganda of the time managed to win over the genuine concerns
| that Luddites had about inhumane working environments &
| conditions.
|
| It's very telling that the rabid AI sycophants are painting
| anyone who has doubts about the direction AI will take the world
| as some sort of anti-progress lunatic, calling them luddites
| despite not knowing the actual history involved. The delicious
| irony of their stances aligning with the people who were okay
| with using child labor and mistreating workers en-masse is not
| lost on me.
|
| My hope is that AI _does_ happen, and that the first people to
| rot away because of it are exactly the AI sycophants hell-bent on
| destroying everything in the name of "progress", AKA making some
| rich psychopaths like Sam Altman unfathomably rich and powerful
| to the detriment of everyone else.
|
| A good HN thread on the topic of luddites, as it were:
| https://news.ycombinator.com/item?id=37664682
| CatWChainsaw wrote:
| Thankfully, even here I've seen more faithful discussion of the
| Luddites and more people are willing to bring up their actual
| history whenever some questionably-uninformed techbro here uses
| the typical pejorative insult.
| mks wrote:
| I am bored of AI - it produces boring and mediocre results. Now,
| the science and engineering achievement is great - being able to
| produce even boring results on this level would be considered
| SCI-FI 10 years ago.
|
| Maybe I am just bored of people posting these mediocre results
| over and over on social and landing pages as some kind of magic.
| Now, the most content people produce themselves is boring and
| mediocre anyway. The Gen AI just takes away even the last
| remaining bits of personality from their writing, adding a flair
| of laziness - look at this boring piece I was too lazy to write,
| so I asked AI to generate it
|
| As the quote goes: "At some point we ask of the piano-playing dog
| not 'Are you a dog?' , but 'Are you any good at playing the
| piano?'" - I am eagerly waiting for the Gen AIs of today to cross
| the uncanny valley. Even with all this fatigue, I am positive on
| the AI can and will enable new use cases and could be the first
| major UX change from introduction of graphical user interfaces or
| a true pixie dust sprinkled on actually useful tools.
| AlienRobot wrote:
| I'm tired of technology.
|
| I don't think there has ever been a single tech news that brought
| me joy in all my life. First I learned how to use computers, and
| then it has been downhill ever since.
|
| Right now my greatest joy is in finding things that STILL exist
| rather than new things, because the things that still exist are
| generally better than anything new.
| syncr0 wrote:
| Reminds me of the way the way the author of "Zen and the Art of
| Motorcycle Maintenance" takes care of his leather gloves and
| they stay with him on the order of decades.
| throwaway13337 wrote:
| I get it. The last two decades have soured us all on the benefits
| of tech progress.
|
| But the previous decades were marked by tech optimism.
|
| The difference here is the shift to marketing. The largest tech
| companies are gatekeepers for our attention.
|
| The most valuable tech created in the last two decades was not in
| service of us but to manipulate us.
|
| Previously, the customer of the software was the one buying it.
| Our lives improved.
|
| The next wave of tech now on the horizon gives us an opportunity
| to change the course we've been on.
|
| I'm not convinced there is political will to regulate
| manipulation in a way that does more good than harm.
|
| Instead, we need to show a path to profitability through products
| that are not manipulative.
|
| The most effective thing we can do, as developers and business
| creators, is to again make products aligned with our customers.
|
| The good news is that the market for honest software has never
| been better. A good chunk of people are finally learning not to
| trust VC-backed companies that give away free products.
|
| Generative AI provides an opportunity for tiny companies to
| provide real value in a new way that people will pay for.
|
| The way forward is:
|
| 1. Do not accept VC. Bootstrap.
|
| 2. Legally bind your company to not productizing your customer.
|
| 3. Tell everyone what you're doing.
|
| It's not AI that's the problem. It's the way we have been doing
| business.
| tananan wrote:
| On point article, and I'm sure it represents a common sentiment,
| even if it's an undercurrent to the hype machine ideology.
|
| It is quite hard to find a place which works on AI solutions
| where a sincere, sober gaze would find anything resembling the
| benefits promised to users and society more broadly.
|
| On the "top level" the underlying hope is that a paradigm shift
| for the good will happen in society, if we only let collective
| greed churn for X more years. It's like watering weeds hoping
| that one day you'll wake up in a beautiful flower garden.
|
| On the "low level", the pitch is more sincere: we'll boost
| process X, optimize process Y, shave off %s of your expenses
| (while we all wait for the flower garden to appear). "AI" is
| latching on like a parasitic vine on existing, often parasitic
| workflows.
|
| The incentives are often quite pragmatic, coated in whatever
| lofty story one ends up telling themselves (nowadays, you can
| just outsource it anyway).
|
| It's not all that bleak, I do think there's space for good to be
| done, and the world is still a place one can do well for oneself
| and others (even using AI, why not). We should cherish that.
|
| But one really ought to not worry about disregarding the sales
| pitch. It's fine to think the popular world is crazy, and who
| cares if you are a luddite in "their" eyes. And imo, we should
| avoid the two delusional extremes: 1. The flower garden extreme
| 2. The AI doomer extreme
|
| In a way, both of these are similar in that they demote personal
| and collective agency from the throne, and enthrone an impersonal
| "force of progress". And they restrict one's attention to this
| supposedly innate teleology in technological development, to the
| detriment of the actual conditions we are in and how we deal with
| them. It's either a delusional intoxication or a way of coping:
| since things are already set in motion, all I can do is do...
| whatever, I guess.
|
| I'm not sure how far one can take AI in principle, but I really
| don't think whatever power it could have will be able to come to
| expression in the world we live in, in the way people think of
| it. We have people out there actively planning war, thinking they
| are doing good. The well-off countries are facing housing,
| immigration and general welfare problems. To speak nothing of the
| climate.
|
| Before the outbreak of WWI, we had invented the Haber-Bosch
| process, which greadly improved our food production capabilities.
| A couple years later, WWI broke out, and the same person who
| worked on fertilizers also ended up working on chemical warfware
| development.
|
| Assuming that "AI" can somehow work outside of the societal
| context it exists in, causing significant phase shifts, is like
| being in 1910, thinking all wars will be ended because we will
| have gotten _that_ much more efficient at producing food. There
| will be enough for everyone! This is especially ironic when the
| output of AI systems has been far more abstract and ephemeral.
| precompute wrote:
| Very well said.
| shahzaibmushtaq wrote:
| Over the last few years, AI has become more common than HI
| generally, not professionally. Professional knows the limits and
| scopes of their works and responsibilities, not the AI.
|
| A few days ago, I visited a portfolio website and immediately
| realized that its English text was written with the help of AI or
| some online helper tools.
|
| I love the idea to brainstorming with AI, but copying-pasting
| anything it throws at you blocks you for adding creativity to the
| process of making something good.
|
| I believe using AI must complement HI (or IQ level) rather than
| mock it.
| resters wrote:
| AI (LLMs in this case) reduce the value of human
| conscientiousness, memory, and verbal and quantitative fluency
| dramatically.
|
| So what's left for humans?
|
| We very likely won't have as many human software testers or
| software engineers. We'll have even fewer lawyers and other
| "credentialed" knowledge worker desk jockeys.
|
| Software built by humans entails humans writing code that has not
| already been written -- by writing a lot of code that probably
| _has_ already been written and "linking" it together, etc.
| When's the last time most of us wrote a truly novel algorithm?
|
| In the AI powered future, software will be built by humans
| herding AIs to build it. The AIs will do more of the busy work
| and the humans will guide the process. Then better AIs will be
| more helpful at guiding the process, etc.
|
| Eventually, the thing that will be rewarded is truly novel ideas
| and truly innovative thinking.
|
| AIs will make varioius types of innovative thinking less valuable
| and various types more valuable, just like any technology has
| done.
|
| In the past, humans spent most of their brain power trying to
| obtain their next meal. It's very cynical to think that AI
| removing busy work will somehow leave humans with nothing
| meaningful to do, no purpose. Surely it will unlock the best of
| human potential once we don't have to use our brains to do
| repetitive and highly pattern-driven tasks just to put food on
| the table.
|
| When is the last time any of us paid a laywer to do something
| truly novel? They dig up boilerplate verbiage, follow standard
| processes, rinse, repeat, all for $500+ per hour.
|
| Right now we have "manual work" and "knowledge work", broadly
| speaking, and both emphasize something that is being produced by
| the worker (a construction project, a strategic analysis, a legal
| contract, a diagnosis, a codebase, etc.)
|
| With AI, workers will be more responsible for outcomes and less
| rewarded for simply following a procedure that an LLM can do. We
| hire architects with visual / spatial design skills rather than
| asking a contractor to just create a living space with a certain
| amount of square feet. The emphasis in software will be less on
| the writing of the code and more on the _impact_ of the code.
| ninetyninenine wrote:
| This guy doesn't get it. The technology is quickly converging on
| a point where no one can recognize whether it was written by AI
| or not.
|
| The technology is on a trend line where the output of these LLMs
| can be superior to most human writing.
|
| Being of tired of this is the wrong reaction. Being somewhat
| fearful and in awe is the correct reaction.
|
| You can thank social media constantly hammering us with headlines
| as the reason why so many people are "over it". We are getting
| used to it but make no mistake being "over it" is n an illusion.
| LLMs represent a milestone in technological achievement among
| humans and being "over it" or claiming all LLMs can never reason
| and output is just a copy is delusional.
| lvl155 wrote:
| What really gets me about AI space is that it's going the way of
| front-end development space. I also hate the fact that
| Facebook/Meta is the only one seemingly doing heavy lifting in
| the public space. It's great so far but I just don't trust them
| in the end.
| AI_beffr wrote:
| in 2018 we had the first gtp that would babble and repeat itself
| but would string together words that were oddly coherent. people
| dismissed any talk of these models having larger potential. and
| here we are today with the state of AI being what it is and
| people are still, in essence, denying that AI could become more
| capable or intelligent than it is right at this moment. after so
| many years of this zombie argument having its head chopped off
| and then regrowing, i can only think that it is peoples deep
| seated humanity that prevents them from seeing the obvious. it
| would be such a deeply disgusting and alarming development if AI
| were to spring to life that most people, being good people, are
| literally incapable of believing that its possible. its their own
| mind, their human sensibilities, protecting them. thats ok. but
| it would help keep humanity safe if more people had the ability
| to realize that there is nothing stopping AI from crossing that
| threshold and every heuristic is pointing to the fact that we are
| on the cusp of that.
| monkeydust wrote:
| AI is not just GenAI, ML sits underneath it (supervised,
| unsupervised) and that has genuinely delivered value for the
| clients we service (financial tech) and in my normal life (e.g.
| photo search, screen grab to text, book recommendations).
|
| As for GenAI I keep going back to expectation management, its
| very unlikley to give you the exact answer you need (and if it
| does then well you job longetivity is questionable) but it can
| help accelerate your learning, thinking and productivity.
| falcor84 wrote:
| > ... its very unlikley to give you the exact answer you need
| (and if it does then well you job longetivity is questionable)
|
| Experimenting with o1-preview, it quite often gives me the
| exact answer I need on the first try, and I'm 100% certain that
| my job longevity is questionable.
| monkeydust wrote:
| It has been more hit and miss for me, when it works it can be
| amazing then I try to show someone, same prompt, different
| less so amazing answer.
| mrmansano wrote:
| I love AI, I use it every single day and wouldn't consider myself
| a luddite, but... oh, boy... I hate the people that is too
| bullish on it. Not the people that is working to make the AI
| happen (although I have my __suspicious people radar__ pointing
| to __run__ every single time I see Sam Altman face anywhere), but
| the people that hypes it to ground, the "e/acc" people. I feel
| like the crypto-bros just moved from the "all-might decentralized
| coin god" hype to the "all might tech-god that for sure will be
| available soon". Looks like a cult or religion is forming around
| the singularity, and, if I hype it now, it will be generous to me
| when it takes the control. Oh, and if you don't hype then you're
| a neo-luddite/doomer and I will look up on you with disdain, as
| you are a mere peasant. Also, the get-rich-quick schemes forming
| around the idea that anyone can have a "1-person-1-billion-
| dollar" company with just AI, not realizing when anyone can
| replicate your product then it won't have any value anymore:
| "ChatGPT just made me this website to help classify if an image
| is a hot-dog or not! I'll be rich selling it to Nathan's - Oh,
| what's that? Nathan's just asked ChatGPT to create a hot-dog
| classifier for them?!" Not that the other vocal side is not as
| bad: "AI is useless", "It's not true intelligence", "AI will kill
| us all", "AI will make everyone unemployed in 6 months!"... But
| the AI tech-bros side can be more annoying in my personal
| experience (I'm sure the opposite is true for others too). All
| those people are tiring, and making AI tiring for some too... But
| the tech is fun and will keep evolving and present, rather we are
| tired of it or not.
| farts_mckensy wrote:
| I am tired of people saying, "I am tired of AI."
| yapyap wrote:
| Same.
| warvair wrote:
| 90% of everything is crap. Perhaps AI will make that 99% in the
| future. OTOH, maybe AI will slowly convert that 90% into 70% crap
| & 20% okay. As long as more stuff that I find good gets created,
| regardless of the percentage of crap I have to sift through, I'm
| down.
| amradio wrote:
| We can't compare AI with an expert. There's going to be little
| value there. AI is about as capable as your average college grad
| in any subject.
|
| What makes AI revolutionary is what it does for the novice. They
| can produce results they normally couldn't. That's huge.
|
| A guy with no development experience can produce working non-
| trivial software. And in a fraction of the time your average
| junior could.
|
| And this phenomenon happens across domains. All of a sudden the
| bottom of the skill pool is 10x more productive. That's major.
| kvnnews wrote:
| I'm not the only one! Fuck ai, fuck your algorithm. It sucks.
| visarga wrote:
| > I'm pretty sure that there are some areas where applying AI
| might be useful.
|
| How polite, everyone is sure AI might be useful in other fields
| just not their own.
|
| > people are scared that AI is going to take their jobs
|
| Can't be both true - AI being not really useful, and AI taking
| our jobs.
| throwaway123198 wrote:
| I'm bored of IT. Software is boring, AI included. None of this
| feels like progress. We've automated away white collar work...but
| we also acknowledge most white collar work is busy work that's
| considered a bullcr*p job. We need to get back to innovation in
| manufacturing, materials etc. i.e. the real world.
| precompute wrote:
| Accelerating hamster wheel
| amiantos wrote:
| I'm tired of people complaining about AI stuff, let's move on
| already. But based on the votes and engagement on this post,
| complaining about AI is still a hot ticket to clicks and
| attention, even if people are just regurgitating the exact same
| boring takes that are almost always in conflict with each other:
| "AI sure is terrible, isn't it? It can't do anything right. It
| sucks! It's _so bad_. But, also, I am terrified AI is going to
| take my job away and ruin my way of life, because AI is _so
| good_."
|
| Make up your mind, people. It reminds me of anti-Apple people who
| say things like "Apple makes terrible products and people only
| buy them because... because... _they're brainwashed!_" Okay, so
| we're supposed to believe two contradictory points at once: Apple
| products are very very bad, but also people love them very much.
| In order to believe those contradictory points, we must just make
| up something to square them, so in the case of Apple it's
| "sheeple!" and in the case of AI it's... "capitalism!" or
| something? AI is terrible but everyone wants it because of
| money...? I don't know.
| aDyslecticCrow wrote:
| Not sure what you're getting at. You don't claim LLMs is good
| in your comment. You just complain about people being annoyed
| at it destroying the internet?
|
| Are you just annoyed that people complain about what bothers
| them? Or do you think LLMs has been a net good for humanity and
| the internet?
| redandblack wrote:
| Having spent the last decade hearing about trustless-trust,and
| now faced with this decade in dealing with with no-trust-
| whatsoever.
|
| We started with dont-trust-the-government and the dont-trust-big-
| media and to dont-trust-all-media and eventually to a no-trust-
| society. Lovely
|
| Really, waiting for the AI feedback to converge on itself. Get
| this over soon please
| heystefan wrote:
| Not sure why this is front page material.
|
| The thinking is very surface level ("AI art sucks" is the popular
| opinion anyway) and I don't understand what the complaints are
| about.
|
| The author is tired of AI and likes movies created by people. So
| just watch those? It's not like we are flooded with AI
| movies/music. His social network shows dull AI-generated content?
| Curate your feed a bit and unfollow those low effort posters.
|
| And in the end, if AI output is dull, there's nothing to be
| afraid of -- people will skip it.
| hcks wrote:
| Hackernews when we may be on the path of actual AI "meh I hate
| this, you know what's actually really interesting? Manually
| writing tests for software"
| izwasm wrote:
| Im tired of people throwing chatgpt everywhere they can just to
| say they use AI. Even if it's a useless feature
| whoomp12342 wrote:
| here is where you are wrong about AI lacking creativitivy:
|
| AI Music is bland and boring. UNLESS IF YOU KNOW MUSIC REALLY
| WELL. As a matter of fact, it can SPAWN poorly done but really
| interesting ideas with almost no effort
|
| "What if curt cobain wrote a song that was then sung by johnny
| cash about waterfalls in the west" etc.
|
| That idea is awful, but when you generate it, you might get
| snippets that could turn into a wholey new HUMAN made song.
|
| The same process is how I forsee AI helping engineering. its not
| replacing us, its inspring us.
| nescioquid wrote:
| People often screw around over the piano keyboard, usually an
| octave or so about middle C until an idea occurs. Brahms
| likened this to a pair of hands combing over a garbage dump.
|
| I think a creative person has no trouble generating interesting
| ideas without roving over the proverbial garbage heap. The hard
| (and artistic) part is developing those ideas into an
| interesting work.
| zombiwoof wrote:
| the most depressing thing for me is the rush and all out hype. i
| mean, Apple not only renamed AI "Apple Intelligence" but if you
| go INTO a Apple Store, it's banner is everywhere, even as a
| wallpaper on the phones with the "glow"
|
| But guess what isn't there? An actually shipping IMPLEMENTATION.
| It's not even ready yet but the HYPE is so overblown.
|
| Steve Jobs is crying in his grave how stupid everyone is being
| about this.
| semiinfinitely wrote:
| A software tester tired of AI? Not surprising given that this is
| like the first job which AI will replace.
| yibers wrote:
| Actually testing by humans will be much more important. AI may
| be making subtle mistaking the will require more extensive
| testing, by humans.
| BodyCulture wrote:
| I would like to know how does AI help us in solving the climate
| crisis! I have read some articles about weather predictions
| getting better with the help of AI, but that is just the
| monitoring, I would like to see more actual solutions.
|
| Do you have any recommendations?
|
| Thanks!
| jordigh wrote:
| Uhh... it makes it worse.
|
| We don't have all of the data because in the US companies are
| not generally required by law to disclose their emissions. But
| of those who do, it's been disastrous. Google was on track to
| net zero, but its recent investment and push on AI has
| increased their emissions by 48%.
|
| https://www.cnn.com/2024/07/03/tech/google-ai-greenhouse-gas...
| cwmma wrote:
| It is doubtful AI will be a net positive with regards to
| climate due to how much electricity it uses.
| kaonwarb wrote:
| > It has gotten so bad that I, for one, immediately reject a
| proposal when it is clear that it was written by or with the help
| of AI, no matter how interesting the topic is or how good of a
| talk you will be able to deliver in person.
|
| I am sympathetic to the sentiment, and yet worry about someone
| making impactful decisions based on their own perception of
| whether AI was used. Such perceptions have been demonstrated many
| times recently to be highly faulty.
| Animats wrote:
| I'm tired of LLMs.
|
| Enough billions of dollars have been spent on LLMs that a
| reasonably good picture of what they can and can't do has
| emerged. They're really good at some things, terrible at others,
| and prone to doing something totally wrong some fraction of the
| time. That last limits their usefulness. They can't safely be in
| charge of anything important.
|
| If someone doesn't soon figure out how to get a confidence metric
| out of an LLM, we're headed for another "AI Winter". Although at
| a much higher level than last time. It will still be a billion
| dollar industry, but not a trillion dollar one.
|
| At some point, the market for LLM-generated blithering should be
| saturated. Somebody has to read the stuff. Although you can task
| another system to summarize and rank it. How much of "AI" is
| generating content to be read by Google's search engine? This may
| be a bigger energy drain than Bitcoin mining.
| jajko wrote:
| I'll keep buying (and paying premium) for dumber things. Cars
| are a prime example, I want it to be dumb as fuck, offline,
| letting me decide what to do. At least next 2 decades, and
| thats achievable. After that I couldnt care less, I'll probably
| be a bad driver at that point anyway so switch may make sense.
| I want dumb beautiful mechanival wristwatch.
|
| I am not ocd-riddled insecure man trying to subconsiously
| immitate much of the crowd, in any form of fasion. If that will
| make me an outlier, so be it, a happier one.
|
| I suspect new branch of artisanal human-mind-made trademark is
| just behind the corner, maybe niche but it will find its
| audience. Beautiful imperfections, clear clunky biases and all
| that.
| spencerchubb wrote:
| LLMs have been improving exponentially for a few years. let's
| at least wait until exponential improvements slow down to make
| a judgement about their potential
| bloppe wrote:
| They have been improving a lot, but that improvement is
| already plateauing and all the fundamental problems have not
| disappeared. AI needs another architectural breakthrough to
| keep up the pace of advancement.
| Animats wrote:
| Yes. Anything on the horizon?
| bloppe wrote:
| I'm not as up-to-speed on the literature as I used to be
| (it's gotten a lot harder to keep up), but I certainly
| haven't heard of any breakthroughs. They tend to be
| pretty hard to predict and plan for.
|
| I don't think we can continue simply tweaking the
| transformer architecture to achieve meaningful gains. We
| will need new architectures, hopefully ones that more
| closely align with biological intelligence.
|
| In theory, the simplest way to real superhuman AGI would
| be to start by modeling a real human brain as a physical
| system at the neural level; a real neural network. What
| the AI community calls "neural networks" are only _very
| loose_ approximations of biological neural networks. Real
| neurons are subject to complex interactions between many
| different neurotransmitters and neuromodulators and they
| grow and shift in ways that look nothing like
| backpropagation. There already exist decently accurate
| physical models for single neurons, but accurately
| modeling even C. elegans (as part of the OpenWorm
| project) is still a way 's off. Modeling a full human
| brain may not be possible within our lifetime, but I also
| wouldn't rule that out.
|
| And once we can accurately model a real human brain, we
| can speed it up and make it bigger and apply evolutionary
| processes to it much faster than natural evolution. To
| me, that's still the only plausible path to real AGI, and
| we're really not even close.
| segasaturn wrote:
| I was holding out hope for Q*, which OAI talked about
| with hushed tones to make it seem revolutionary and maybe
| even dangerous, but that ended up being o1. o1 is neat,
| but its far from a breakthrough. It's just recycling the
| same engine behind GPT-4 and making it talk to itself
| before spitting out its response to your prompt. I'm
| quite sure they've hit a ceiling and are now using smoke-
| and-mirrors techniques to keep the hype and perceived
| pace-of-progress up.
| amelius wrote:
| If they were plateauing it would mean OpenAI would have
| lost its headstart wrt the competition, which is not the
| case I believe.
| talldayo wrote:
| It's pretty undeniable that OpenAI's lead has been
| diminished greatly from the GPT-3 days. Back then, they
| could rely on marketing their coherency and the "true
| power" of larger models. But today we're starting to see
| 1B models that are undistinguishable from OpenAI's most
| advanced chain-of-thought models. From a turing test
| perspective, I don't think the average person could
| distinguish between an OpenAI and a Llama 3.2 response.
| bloppe wrote:
| OpenAI has the biggest appetite for large models. GPT-4
| is generally a bit better than Gemini, for example, but
| that's not because Google can't compete with it. Gemini
| is orders of magnitude smaller than GPT-4 because if
| Google were to run a GPT-4-sized model every time
| somebody searches on Google, they would literally cease
| to be a profitable company. That's how expensive
| inference on these ultra-large models is. OpenAI still
| doesn't really care about burning through hundreds of
| billions of dollars, but that cannot last forever.
| bunderbunder wrote:
| This, I think, is the crux of it. OpenAI is burning money
| at a furious rate. Perhaps this is due to a classic tech
| industry hypergrowth strategy, but the challenge with
| hypergrowth strategies is that they tend to involve
| skipping over the step where you figure out if the market
| will tolerate pricing your product appropriately instead
| of selling it at a loss.
|
| At least for the use cases I've been directly exposed to,
| I don't think that is the case. They need to keep being
| priced about where they are right now. It wouldn't take
| very much of a rate hike for their end users to largely
| decide that _not_ using the product makes more financial
| sense.
| og_kalu wrote:
| >but that improvement is already plateauing
|
| Based on what ? The gap between the release of GPT-3 and 4
| is still much bigger than the time that has elapsed since 4
| was already released so really, Based on what ?
| COAGULOPATH wrote:
| In some domains (math and code), progress is still very fast.
| In others it has slowed or arguably stopped.
|
| We see little progress in "soft" skills like creative
| writing. EQBench is a benchmark that tests LLM ability to
| write stories, narratives, and poems. The winning models are
| mostly tiny Gemma finetunes with single-digit parameter
| counts. Huge foundation models with hundreds of billions of
| parameters (Claude 3 Opus, Llama 3.1 405B, GPT4) are nowhere
| near the top. (Yes, I know Gemma is a pruned Gemini). Fine-
| tuning > model size, which implies we don't have a path to
| "superhuman" creative writing (if that even exists). Unlike
| model size, fine-tuning can't be scaled indefinitely: once
| you've squeezed all the juice out of a model, what then?
|
| OpenAI's new o1 model exhibits amazing progress in reasoning,
| math, and coding. Yet its writing is worse than GPT4-o's (as
| backed by EQBench and OpenAI's own research).
|
| I'd also mention political persuasion (since people seem
| concerned about LLM-generated propaganda). In June, some
| researchers tested LLM ability to change the minds of human
| subjects on issues like privatization and assisted suicide.
| Tiny models are unpersuasive, as expected. But once a model
| is large enough to generate coherent sentences,
| persuasiveness kinda...stops. All large models are about
| equally persuasive. No runaway scaling laws are evident here.
|
| This picture is uncertain due to instruction tuning. We don't
| really know what abilities LLMs "truly" possess, because
| they've been crippled to act as harmless, helpful chatbots.
| But we now have an open-source GPT-4-sized pretrained model
| to play with (Llama-3.1 405B base). People are doing
| interesting things with it, but it's not setting the world on
| fire.
| __loam wrote:
| Funnily enough, bitcoin mining still uses at least about 3x
| more power that AI at the moment, while providing less value
| imo. AI power use is also dwarfed by other industries even in
| computing. We should still consider whether it's worth it, but
| most research and development on LLMs in corporate right now
| seems to be focused on making them more efficient, and
| therefore both cheaper and less power intensive, to run.
| There's also stuff like Apple intelligence that is moving it
| out to edge devices with much more efficient chips.
|
| I'm still a big critic of AI generally but they're definitely
| not as bad as crypto which is shocking.
| illiac786 wrote:
| Do you have a nice reference for this? I could really use
| something like this, this topic comes up a lot in my social
| circle.
| wrycoder wrote:
| Just wait until they get saturated with subtle (and not so
| subtle) advertising. Then, you'll really hate them.
| mhowland wrote:
| "They're really good at some things, terrible at others, and
| prone to doing something totally wrong some fraction of the
| time."
|
| I agree 100% with this sentiment, but, it also is a decent
| description of individual humans.
|
| This is what processes and control systems/controls are for.
| These are evolving at a slower pace than the LLMs themselves at
| the moment so we're looking to the LLM to be its own control. I
| don't think it will be any better than the average human is at
| being their own control, but by no means does that mean it's
| not a solvable problem.
| linsomniac wrote:
| >also is a decent description of individual humans
|
| A friend of mine was moving from software development into
| managing devs. He told me: "They often don't do things the
| way or to the quality I'd like, but 10 of them just get so
| much more done than I could on my own." This was him coming
| to terms with letting go of some control, and switching to
| "guiding the results" rather than direct control.
|
| The LLMs are a lot like this.
| shaunxcode wrote:
| LLM/DEEP-MIND is DESTROYING lineage. This is the crux point we
| can all feel. Up until now you could pick up a novel or watch a
| film, download an open source library, and figure out the LINEAGE
| (even if no attribution is directly made, by studying the author
| etc.)
|
| I am not too worried though. People are starting to realize this
| more and more. Soon using AI will be next google glass. LLM is
| already a slur worse than NPC in the youth. And profs are
| realizing its time for a return to oral exams ONLY as an
| assessment method. (we figured this out in industry ages ago :
| whiteboard interviews etc)
|
| Yours truly : LILA <an LISP INTELLIGENCE LANGUAGE AGENT>
| bane wrote:
| I feel sorry for the young hopeful data scientists who got into
| the field when doing data science was still interesting and 95%
| of their jobs hadn't turned over into tuning the latest LLM to
| poorly accomplish some random task an executive thought up.
|
| I know a few of them and once they started riding the hype curve
| for real, the luster wore off and they're all absolutely
| miserable in their jobs and trying to look for exits. The fun
| stuff, the novel DL architectures, coming up with clever ways to
| balance datasets or label things...it's all just dried up.
|
| It's even worse than the last time I saw people sadly taking the
| stairs down the other end of the hype cycle when bioinformatics
| didn't explode into the bioeconomy that had been promised or when
| blockchain wasn't the revolution in corporate practices that CIOs
| everywhere had been sold on.
|
| We'll end up with this junk everywhere eventually, and it'll
| continue to commoditize, and that's why I'm very bearish on
| companies trying to make LLMs their sole business driver.
|
| AI is a feature, not the product.
| mark_l_watson wrote:
| Nice thoughts. Since 1982 half my work has been in one of the
| fields loosely called AI and the other half more straight up
| software development. After mostly been doing deep learning and
| now LLM for almost ten years, I miss conventional software
| development.
|
| When I was swimming this morning I thought of writing a RDF data
| store with partial SPARQL support in Racket or Common Lisp -
| basically trade a year of my time to do straight up design and
| coding, for something very few people would use.
|
| I get very excited by shiny new things like advance voice
| interface for ChatGPT and NoteBookLM, both fine product ideas and
| implementations, but I also feel some general fatigue.
| sedatk wrote:
| I remember being awestruck at the first avocado chair images
| DALL-E generated. So many possibilities ahead. But, we ended up
| with all oversaturated, color-soup, greasy, smooth pictures
| everywhere because as it turns out, beauty is in the eye of the
| prompter.
| WillyWonkaJr wrote:
| I asked ChatGPT once if its generated images were filtered to
| reduce the realism and it said that it did. Maybe we don't like
| the safety filter they are applying to all images.
| sedatk wrote:
| The thing is we have no way to know if ChatGPT is telling the
| truth.
| paulcole wrote:
| > AI's carbon footprint is reaching more alarming levels every
| day
|
| It really really really really isn't.
|
| I love how people use this argument for anything they don't like
| - crypto, Taylor Swift, AI, etc.
|
| Everybody in the developed world's carbon footprint is
| disgusting! Even yours. Even mine. Yes, somebody else is worse
| than me and somebody else is worse than you, but we're all still
| awful.
|
| So calling out somebody else's carbon footprint is the most eye-
| rolling "argument" I can imagine.
| xena wrote:
| My last job made me shill for AI stuff because GPUs have a lot of
| income potential. One of my next ones is going to make me shill
| for AI stuff because it makes people deal with terrifying amounts
| of data.
|
| I understand why this is the case, but it's still kinda
| disappointing. I'm hoping for an AI winter so that I can talk
| about normal uses of computers again.
| canxerian wrote:
| I'm a software dev and I'm tired of LLMs being crowbar'd in to
| every single product I build and use, to the point where they are
| unanimously and unequivocally used over better, cheaper and
| simpler solutions.
|
| I'm also tired of people who claim to be excited by AI. They are
| the dullest of them all.
| nasaeclipse wrote:
| At some point, I wonder if we will go more analog again. How do
| we know if a book was written by a human? Simple, he used a
| typewriter or wrote it by hand!
|
| Photos? Real film.
|
| Video.... real film again lol.
|
| I think that may actually happen at some point.
| jillesvangurp wrote:
| I'm actually excited about AI. With a dose of realism. But I
| benefit from LLMs on a daily basis now. There are a lot of
| challenges with LLMs but they are useful tools and we haven't
| really seen much yet. It's only been two years since chat gpt was
| released. And mostly we're still consuming this stuff via chat
| UIs, which strikes me as sub optimal and is something I hope will
| change soon.
|
| The increases in context size are helping a lot. The step
| improvement in reasoning abilities and quality of answers is
| amazing to watch. I'm currently using chat gpt o1 preview a lot
| for programming stuff. It's not perfect but I can use a lot of
| what it generates and this is saving me a lot of time lately. It
| still gets stuff wrong and there's a lot of stuff it doesn't
| know.
|
| I also am mildly addicted to perplexity.ai. Just a wonderful tool
| and I seem to be getting in the habit of asking it about anything
| that pops into my mind. Sometimes it's even work related.
|
| I get that people are annoyed with all the hyperbolic stuff in
| the media on this topic. But at the same time, the trends here
| are pretty amazing. I'm running the 3B parameter llama 3.2 model
| on a freaking laptop now. A nice two year old M1 with only 16GB.
| It's not going to replace bigger models for me. But I can see a
| few use cases for running it locally.
|
| My view is very simple. I'm a software developer. I grew up a few
| decades ago before there was any internet. I had no clue what a
| computer even was until I was in high school. Things like Knight
| Rider, Star Trek, Buck Rogers, Star Wars etc. all featured forms
| of AIs that are now more or less becoming science fact. C3PO is
| pretty dumb compared to chat gpt actually. You could build
| something better and more useful these days. That would mostly an
| art and crafts project at this point. No special skills required.
| Just use an LLM to generate the code you need. Nice project for
| some high school kids.
|
| Which brings me to my main point. We're the old generation. Part
| of being old is getting replaced by younger people. Young people
| are growing up with this stuff. They'll use it to their advantage
| and they are not going to be held back by old fashioned notions
| about the way the things should work according to us old people.
| The thing with Luddites is that they exist in any generation. And
| then they grow tired, old, and then they die off. I have no
| ambition to become irrelevant like that.
|
| I'm planning to keep up with young people as long as I can. I'll
| have to give that up at some point but not just yet. And right
| now that includes being clued in as much as I can about LLMs and
| all the developer plumbing I need to use them. This stuff is
| shockingly easy. Just ask your favorite LLM to help you get
| started.
___________________________________________________________________
(page generated 2024-09-27 23:00 UTC)