[HN Gopher] Isaac Asimov describes how AI will liberate humans a...
___________________________________________________________________
Isaac Asimov describes how AI will liberate humans and their
creativity (1992)
Author : bookofjoe
Score : 132 points
Date : 2025-04-10 14:30 UTC (8 hours ago)
(HTM) web link (www.openculture.com)
(TXT) w3m dump (www.openculture.com)
| lenerdenator wrote:
| > One wonders what Asimov would make of the world of 2025, and
| whether he'd still see artificial and natural intelligence as
| complementary, rather than in competition.
|
| I mean, I just got done watching a presentation at Google Next
| where the presenter talked to an AI agent and set up a
| landscaping appointment with price match and a person could
| intervene to approve the price match.
|
| It's cool, sure, but understand, that agent would absolutely have
| been a person on a phone five years ago, and if you replace them
| with agentic AI, that doesn't mean that person has gone away or
| is now free to write poetry. It means they're out of an income
| and benefits. And that's before you consider the effects on the
| pool of talent you're drawing from when you're looking for
| someone to intervene on behalf of these agentic AIs, like that
| supervisor did when they approved the price match. If you don't
| have the entry-level person, you don't have them five years later
 | when you want to promote someone to manager.
| gh0stcat wrote:
| Another thing I have noticed with automation in general is that
| the more you use it, the less you understand the thing being
 | automated. I think the reason a lot of things today are
 | still done manually is that humans inherently understand
 | that, for both short- AND long-term success with a task, a
 | conceptual understanding of the components of the system,
 | whether partially or fully imagined in the case of complex
 | business scenarios, is necessary, even though it lengthens
 | time to completion in the short term. How do you modify
| or grow a system you do not understand? It feels like you're
| cutting a branch at a certain length and not allowing it to
| grow beyond where you've placed the automation. I will be
| interested to see the outcome of the increased push today for
| advanced automation in places where the business relies on
 | understanding of the system to make adjacent decisions and
 | further its operations.
| akuchling wrote:
| Asimov's story The Feeling of Power seems relevant:
| https://en.wikipedia.org/wiki/The_Feeling_of_Power
| baxtr wrote:
| The 1980 version of your comment:
|
| _> Just saw a demo of a new word processor system that lets a
| manager dictate straight into the machine, and it prints the
| memo without a secretary ever touching it. Slick stuff. But
| five years ago, that memo would've gone through a typist.
| Replace her with a machine, and she's not suddenly editing
| novels from home. She's unemployed, losing her paycheck and
| benefits.
|
| And when that system malfunctions, who's left who actually
| knows how to fix it or manage the workflow? You can't promote
| experience that never existed. Strip out the entry-level roles,
| and you cut off the path to leadership._
| lenerdenator wrote:
| The difference between the 1980 version of my post and the
| 2025 version of my post is that in 1980 there was conceivably
| a future where the secretary could retrain to do other work
| (likely with the help of one of those new-fangled
| microcomputers) that would need human intelligence in order
| to be completed.
|
 | The 2025 equivalent of the secretary is potentially looking
 | at a job market that is far smaller, because the labor she
 | was trained to do, or labor similar enough that she could
 | previously have been hired to do it, is now handled
 | by artificial intelligence.
|
 | There is, effectively, nowhere for her to go to earn a
 | living with her labor.
| seadan83 wrote:
 | How can we reconcile this with how much of the US and the
 | world is still living as if it were the 1930s or even 1850s?
|
| Travel 75 to 150 miles outside of a US city and it will
| feel like time travel. If so much is still 100 years
| behind, how will civilization so broadly adopt something
| that is yet more decades into the future?
|
 | I got into Starlink debates with people during Hurricane
 | Helene. Folks were gushing about how people just needed
 | internet. In reality, internet meant fuck all when what you
 | needed was someone with a chainsaw, a generator, heater,
 | blankets, diapers and food.
|
 | Which is to say, technology and its importance are a thin
 | veneer on top of organized society. All of it is fragile,
 | and even recent technology still has a long way to go to
 | fully penetrate rural communities. At the same time,
| that spread is less important than it would seem to a
| technologist. Hence, technology has not uniformly spread
| everywhere, and ultimately it is not that important. Yet,
 | how will AI, even more futuristic, leapfrog this? My money
 | is that rural-town USA will look almost identical 30 years
 | from now. Many towns still look as they did 100 years ago.
| xurias wrote:
| Who do you think voted for Trump? You point out that it's
| perfectly possible to live a "simple" rural life.
|
| I see https://en.wikipedia.org/wiki/Beggars_in_Spain and
| the reason _why_ they vote the way they do. Modern
| society has left them behind, abandoned them, and not
| given them any way to keep up with the rest of the US.
 | Now they're getting taken advantage of by the wealthy
| like Trump, Murdoch, Musk, etc. who use their unhappiness
| to rage against the machine.
|
| > My money is that rural towns USA will look almost
| identical in 30 years from now.
|
| You mean poor, uneducated and without any real prospects
| of anything like a career? Pretty much. Except there will
| be _far_ more people who are impoverished and with no
 | hope for the future. I don't see any of this as a good
| thing.
| 827a wrote:
 | If your argument is that all of that happened and it all
 | turned out fine: are you sure we (socioeconomically, on
 | average) are better off today than we were in the 1980s?
| baxtr wrote:
| Probably depends who you refer to by "we". On a global
| level, the answer is definitely yes.
|
| Extreme poverty decreased, child mortality decreased,
 | literacy and access to electricity have gone up.
|
| Are people unhappier? Maybe. But not because they lack
| something materially.
| 827a wrote:
 | I think in this case it's fair to assume what I meant was
| "the secretaries whose jobs were replaced in the 80s and
| people like them", or "the people whose jobs will be
| replaced with AI today"; not "literally the poorest and
| least educated people on the planet whose basic hierarchy
 | of needs struggles to be met every day."
| milesrout wrote:
| I am sure of that. I think people forget the difference in
| living conditions then.
|
| Things that were common in that era that are rare today:
|
 | 1. Living in shared accommodation. It was common then for
| people to live in boarding houses and bedsits as adults.
| Today these are largely extinct. Generally, the living
| space per person has increased substantially at every level
| of wealth. Only students live in this sort of environment
 | today, and even then it is usually a flat (i.e. sharing with
 | people you know on an equal basis), not a bedsit/boarding
 | house (i.e. living in someone's house according to her rules
| --no ladies in gentlemen's bedrooms, no noise after 8pm,
| etc.).
|
| 2. Second-hand clothes and repairing clothes. Most people
 | wear new clothes. People buy second hand because it is
 | trendy, not because it is all they can afford. Nobody really
 | repairs anything; people just buy new. Nobody darns socks or
| puts elbow patches on jackets where they have worn out.
| Only people that buy expensive shoes get their shoes
| resoled. Normal people just buy cheap shoes more often and
| they really do save money doing this.
|
 | Today the woman who would have been a typist has a
 | different job, and a more productive one that pays more.
| Philpax wrote:
 | Not quite comparable; these systems will continue to grow in
 | capability until there is nothing left for the average human
 | to reskill to. Not only that, they will truly be beyond
| our comprehension (arguably, they already are: our
| interpretability work is far from where it would need to be
| to safely build towards a superintelligence, and yet...)
| mandmandam wrote:
| > if you replace them with agentic AI, that doesn't mean that
| person has gone away or is now free to write poetry. It means
| they're out of an income and benefits.
|
| That's capitalism for ye :/ Join us on the UBI train.
|
| Say, have you ever read the book 'Bullshit Jobs'...
| lenerdenator wrote:
| > That's capitalism for ye :/ Join us on the UBI train.
|
| The people with all of the money effectively froze wages for
| 45 years, and that was when there were people actually doing
| labor for them.
|
| What makes you think that they'll peaceably agree to UBI for
| people who don't sell them labor for money?
| mandmandam wrote:
| > The people with all of the money effectively froze wages
| for 45 years
|
| Yep. And they didn't accomplish that 'peaceably' either,
| for the record. A lot of people got murdered, many more
 | smeared/threatened/imprisoned, etc. Entire countries got
 | decimated.
|
| > What makes you think that they'll peaceably agree to UBI
| for people who don't sell them labor for money?
|
| I don't imagine for a moment that they'll like UBI. There
 | is no shortage of examples over recent millennia of how far
| the parasite class will go to keep the status quo.
|
 | History _also_ shows us that having all the money doesn't
| guarantee that people will do things your way. Class
| awareness, strikes, unions, protest, and alternative
| systems/technological advance have shown their mettle.
| These things scare oligarchs because _they work_.
| Philpax wrote:
| I am hoping that will be our saving grace this time
| around as well, but my fear is that the oligarchs will
| control more autonomous power than we can meaningfully
| resist, and our existence will no longer be strictly
| necessary for their systems to operate.
| milesrout wrote:
| Wages haven't been frozen for 45 years in real terms. They
| have gone up considerably.
| jes5199 wrote:
| if the AI transition really turns into an Artificial Labor
| revolution - if it really works and isn't an illusion - then
| we're going to have to have a major change in how we distribute
| wealth. The bad future is one where the owner class no longer
| has any use for human labor and the former-worker class has
| nothing
| foobarian wrote:
| TBH this is already how the US got into the current mess.
| milesrout wrote:
| But we have had the same thing happen constantly. Automation
| isn't new. How many individuals are involved in assembling a
| car today vs in the 1970s? An order of magnitude fewer. But
| there aren't loads of unemployed people. The market puts
| labour where it is needed.
|
 | Automation won't obsolete work and workers; it will make us
 | more productive, and our desires will increase. We will all
| expect what today are considered luxuries only the rich can
| afford. We will all have custom software written for our
| needs. We will all have individual legal advice on any topic
| we need advice on. We will all have bigger houses with more
 | stuff in them, better finishes, triple-glazed windows, and
| on and on.
| Spooky23 wrote:
| Not necessarily. The reality is the landscaping guy is
 | struggling to handle callbacks or is burning overhead. Even
 | then, two girls in the office hit a ceiling; it doesn't
 | scale quickly, and now you're in a call center scenario.
|
 | Call center based services _always suck_. I remember going
 | to a talk where American Express, which operated
 | best-in-class call centers, found that 75% of their
 | customers don't want to talk to them. The people are there
 | because that's needed for a complex relationship; the more
 | you can address earlier in the funnel, the better.
|
| Customers don't want to talk to you, and ultimately serving the
| customer is the point.
| nicbou wrote:
| In theory, the economy should create new avenues. Labour costs
| are lower, goods and services get cheaper (inflation adjusted)
| and the money is spent on things that were once out of reach.
|
| In practice I fear that the savings will make the rich richer,
| drive down labour's negotiating power and generally fail to
| elevate our standard of living.
| vannevar wrote:
| I don't think Asimov envisioned a world where AI would be
| controlled by a clique of ultra-wealthy oligarchs.
| Spooky23 wrote:
| Asimov's future was pretty dark. He didn't come out and say it,
| but it was implied that we had a lot of big entities ruling
| everything. Many of the negative political people were painted
| as "populist" figures.
|
| If you are a fan of the foundation books, recall that many of
 | the leaders of various factions were a bunch of idiots little
 | different from the carnival barkers we see today.
| vonneumannstan wrote:
| May want to reread. U.S. Robots and Mechanical Men is pretty
| prominent in his Robot stories.
| code_for_monkey wrote:
 | or that it would be aggressively focused on doing the work
 | of already low-paid creative jobs. I don't want to read an
 | AI's writing if there's a person who could write it.
| ruffrey wrote:
| As I recall, many of his early stories involved "U.S. Robot &
| Mechanical Men" which was a huge conglomerate owning a lot of
| the market on AI (called "robots" by Asimov, it included
| "Multivac" and other interfaces besides humanoid robots).
| tumsfestival wrote:
 | I remember reading his book 'The Naked Sun' back in high
 | school, and one of the things that stuck with me was how Earth
 | was kind of a dump bereft of robots, while the Spacer humans were
| incredibly rich, had a low population and their society was run
| by robots doing all the menial work. You could argue he
| envisioned our current world even if accidentally.
| klabb3 wrote:
| Yes. When I hear dreams of the past it makes me nostalgic
| because they all come from a pre-exploited era of tech with the
| underlying subtext that humanity is unified in wanting tech to
| be used for good purposes. The reality is tech is a vessel for
| traditional enrichment, such as resource wars of say oil or
| land have been. Both domestically and geopolitically, tech is
 | seen that way today. In such a world, tech advancements offer
 | opportunities for the powerful to grab more, changing the
 | relative distribution of power in their favor. If tech shows us
 | anything, it is that this relative notion of wealth or social
| posturing is the central axis around which humans align
| themselves, wherever on the socioeconomic ladder you are and
| independent of absolute and basic needs.
| southernplaces7 wrote:
| >because they all come from a pre-exploited era of tech with
| the underlying subtext that humanity is unified in wanting
| tech to be used for good purposes.
|
 | That's the problem with being nostalgic for something you
 | possibly didn't even live through. You don't remember all the
 | other ugly complexities that don't fit your idealized vision.
|
 | Nothing about the world of the sci-fi golden age was less
 | exploitative or prone to human misery than the world of today.
 | If anything, it was far worse in many ways (excluding
 | perhaps the reach of the surveillance state).
|
| Some of the US government's worst secret experiments against
| the population come from that same time and the naive faith
| by the population in their "leaders" made propaganda by
| centralized big media outlets all the more pervasively
| powerful. At the same time, social miseries were common and
| so too were many strictures against many more people on
| economic and social opportunities. As for technology being
| used for good purposes, bear in mind that among many other
 | nasty things being done, the '50s and '60s were a time in
 | which several governments flagrantly tested thousands of
 | nukes out in the open: in the skies, above ground, and in the
 | oceans, with hardly a care in the world or any serious public
 | scrutiny. If you're looking at that gone world with
 | rose-tinted glasses, I'd suggest instead using rose-tinted
 | welding goggles.
|
| The world of today may be full of flaws, but the avenues for
| breaking away from controlled narratives and controlled
| economic rules are probably broader than they've ever been.
| tim333 wrote:
 | There are some dreams of the past like that, but most sci-fi
 | tends to be quite dark, like The Matrix or Terminator. In
 | practice a lot of tech proves helpful in not very sci-fi
 | ways, like antibiotics, phones, etc. Human nature is
 | still what it is, though.
| vannevar wrote:
| >Asimov's future was pretty dark. He didn't come out and say
| it, but it was implied that we had a lot of big entities ruling
| everything.
|
| >As I recall, many of his early stories involved "U.S. Robot &
| Mechanical Men" which was a huge conglomerate owning a lot of
| the market on AI...
|
| >May want to reread. U.S. Robots and Mechanical Men is pretty
| prominent in his Robot stories.
|
| Good points from some of these replies. The interview is fairly
| brief, perhaps he didn't feel he had the time to touch on the
| socio-economic issues, or that it wasn't the proper forum for
| those concerns.
| gmuslera wrote:
 | What we are labeling as AI today is different from what it was
 | thought to be in the 90s, or when Asimov wrote most of his
 | stories about robots and other forms of AI.
|
 | That said, a variant of Susan Calvin's role could prove to
 | be useful today.
| empath75 wrote:
| Not sure that I agree with that. People have been imagining
| human-like AI since before computers were even a thing. The
| Star Trek computer from TNG is basically an LLM, really.
|
| AI _researchers_ had a different idea of what AI would be like,
| as they were working on symbolic AI, but in the popular
| imagination, "AI" was a computer that acted and thought like a
| human.
| NoTeslaThrow wrote:
| > The Star Trek computer from TNG is basically an LLM,
| really.
|
| The Star Trek computer is not like LLMs: a) it provides
| reliable answers, b) it is capable of reasoning, c) it is
| capable of actually interacting with its environment in a
| rational manner, d) it is infallible unless someone messes
| with it. Each one of these points is far in the future of
| LLMs.
| sgt wrote:
 | Yet when you ask it to dim the lights, it dims them either
 | way too little or way too much. Poor Geordi.
| sgt wrote:
| For what it's worth, I was referring to the episode when
| he set up a romantic dinner for the scientist lady.
| Computer couldn't get the lighting right.
| lcnPylGDnU4H9OF wrote:
| Their point is that it seems to function like an LLM even
| if it's more advanced. The points raised in this comment
| don't refute that, per the assertion that each of them is
| in the future of LLMs.
| NoTeslaThrow wrote:
| > Their point is that it seems to function like an LLM
| even if it's more advanced.
|
| So did ELIZA. So did SmarterChild. Chatbots are not
| exactly a new technology. LLMs are at best a new cog in
| that same old functionality--but nothing has
| fundamentally made them more reliable or useful. The last
| 90% of any chatbot will involve heavy usage of heuristics
| with both approaches. The main difference is some of the
| heuristics are (hopefully) moved into training.
| Philpax wrote:
| Stating that LLMs are not more reliable or useful than
| ELIZA or SmarterChild is so incredibly off-base I have to
 | wonder if you've ever actually used an LLM. Please run the
| same query past ELIZA and Gemini 2.5
| (https://aistudio.google.com/prompts/new_chat) and report
| back.
| NoTeslaThrow wrote:
| > Please run the same query past ELIZA and Gemini 2.5
| (https://aistudio.google.com/prompts/new_chat) and report
| back.
|
| I don't see much difference--you still have to take any
| output skeptically. I can't claim to have ever used
| gemini, but last I checked it still can't cite sources,
| which would at least assist with validation.
|
| I'm just saying this didn't introduce any fundamentally
| new capabilities--we've always been able to GIGO-excuse
| all chatbots. The "soft" applications of LLMs have always
| been approximated by heuristics (e.g. generation of
 | content of unknown use or quality). Even the summarization
 | tech LLMs offer doesn't seem to substantially improve over
 | its NLP-heuristic-driven predecessors.
|
| But yea, if you really want to generate content of
| unknown quality, this is a massive leap. I just don't see
| this as very interesting.
| filoleg wrote:
| > I can't claim to have ever used gemini, but last I
| checked it still can't cite sources, which would at least
| assist with validation.
|
| Yes, it can cite sources, just like any other major LLM
| service out there. Gemini, Claude, Deepseek, and ChatGPT
| are the ones I personally validated this with, but I bet
| other major LLM services can do so as well.
|
 | Just tested this using Gemini with the prompt "Is fluoride
 | good for teeth? Cite sources for any of the claims", and it
| listed every claim as a bullet point accompanied by the
| corresponding source. The sources were links to specific
 | pages addressing the claims from the CDC, Cleveland Clinic,
 | Johns Hopkins, and NIDCR. I clicked on each of the links
| to verify that they were corroborating what Gemini
| response was saying, and they were.
|
 | In fact, it would more often than not include sources
 | even without me explicitly asking for them.
| pigeons wrote:
 | They don't make up sources, or cite sources that don't
 | actually contain the claim, anymore?
| whilenot-dev wrote:
| > The Star Trek computer from TNG is basically an LLM,
| really.
|
| Watched all seasons recently for the first time. While some
| things are "just" vector search with a voice interface, there
| are also goodies like "Computer, extrapolate from theoretical
| database!", or "Create dance partner, female!" :D
|
| For anyone curious:
| https://www.youtube.com/watch?v=6CDhEwhOm44
| palmotea wrote:
| > The Star Trek computer from TNG is basically an LLM,
| really.
|
 | No. The Star Trek computer is a fictional _character_,
 | really. _It's not a technology any more than Jean-Luc Picard
 | is._ It does whatever the writers needed it to do to
 | further the plot.
|
| It reminds me: J. Michael Straczynski (of Babylon 5 fame) was
| once asked "How fast do Starfuries travel?" and he replied
| "At the speed of plot."
| bpodgursky wrote:
| AI is far closer to Asimov's vision of AI than anyone else's.
| The "Positronic Brain" is very close to what we ended up with.
|
| The three laws of robotics seemed ridiculous until 2021, when
| it became clear that you _could_ just give AI general firm
| guidelines and let them work out the details (and ways to evade
| the rules) from there.
| throw_m239339 wrote:
| > What we are labeling as AI today is different than was
| thought to be in the 90s, or when Asimov wrote most of his
| stories about robots and other ways of AI.
|
 | Multivac in "The Last Question"?
| kogus wrote:
| I think we need to consider what the end goal of technology is at
| a very broad level.
|
| Asimov says in this that there are things computers will be good
| at, and things humans will be good at. By embracing that
| complementary relationship, we can advance as a society and be
| free to do the things that only humans can do.
|
| That is definitely how I wish things were going. But it's
| becoming clear that within a few more years, computers will be
| far better at absolutely everything than human beings could ever
| be. We are not far even now from a prompt accepting a request
 | such as "Write another volume of the Foundation series, in the
| style of Isaac Asimov", and getting a complete novel that does
| not need editing, does not need review, and is equal to or better
| than the quality of the original novels.
|
| When that goal is achieved, what then are humans "for"? Humans
| need purpose, and we are going to be in a position where we don't
| serve any purpose. I am worried about what will become of us
| after we have made ourselves obsolete.
| empath75 wrote:
| > But it's becoming clear that within a few more years,
| computers will be far better at absolutely everything than
| human beings could ever be.
|
| Comparative advantage. Even if that's true, AI can't possibly
| do _everything_. China is better at manufacturing pretty much
| anything than most countries on earth, but that doesn't mean
| China is the only country in the world that does manufacturing.
| Philpax wrote:
| > AI can't possibly do _everything_
|
| Why not? There's the human bias of wanting to consume things
| created by humans - that's fine, I'm not questioning that -
| but objectively, if we get to human-threshold AGI and
 | continue scaling, there's no reason why it _couldn't_ do
| everything, and better.
| seadan83 wrote:
 | Why not? IMO you perhaps underestimate human complexity.
 | There was a Guardian article where researchers created a map
 | of one cubic millimeter of a mouse's brain. It contains 45 km
 | worth of neurons and billions of synapses. IMO the AGI crowd
 | are suffering from expert-beginner syndrome.
| Philpax wrote:
| Humans are one solution to the problem of intelligence,
| but they are not the only solution, nor are they the most
| efficient. Today's LLMs are capable of outperforming your
| average human in a variety (not all, obviously!) of
| fields, despite being of wholly different origin and
| complexity.
| foobarian wrote:
| > what then are humans "for"?
|
| Folding laundry
| rqtwteye wrote:
| A while ago I saw a video of a robot doing exactly that.
| Seems there is nothing left for us to do.
| giraffe_lady wrote:
| Here's a passage from a children's book I've been carrying
| around in my heart for a few decades:
|
| "I don't like cleaning or dusting or cooking or doing dishes,
| or any of those things," I explained to her. "And I don't
| usually do it. I find it boring, you see."
|
| "Everyone has to do those things," she said.
|
| "Rich people don't," I pointed out.
|
| Juniper laughed, as she often did at things I said in those
| early days, but at once became quite serious.
|
| "They miss a lot of fun," she said. "But quite apart from
| that--keeping yourself clean, preparing the food you are
| going to eat, clearing it away afterward--that's what life's
| about, Wise Child. When people forget that, or lose touch
| with it, then they lose touch with other important things as
| well."
|
| "Men don't do those things."
|
| "Exactly. Also, as you clean the house up, it gives you time
| to tidy yourself up inside--you'll see."
| quxbar wrote:
| It depends on what you are trying to get out of a novel. If you
| merely require repetitions on a theme in a comfortable format,
| Lester Dent style 'crank it out' writing has been dominant in
| the marketplace for >100 years already
| (https://myweb.uiowa.edu/jwolcott/Doc/pulp_plot.htm).
|
| Can an AI novel add something new to the conversation of
| literature? That's less clear to me because it is so hard to
| get any model I work with to truly stand by its convictions.
| belter wrote:
| - Despite the flood of benchmark-tuned LLMs, we remain nowhere
| close to engineering a machine intelligence rivaling that of a
| cat or a dog, let alone within the next 5 to 10 years.
|
 | - The world already hosts millions of organic AIs (Actual
 | Intelligence), many statistically at genius-level IQ. Does
 | their existence make you obsolete?
| Philpax wrote:
| > Despite the flood of benchmark-tuned LLMs, we remain
| nowhere close to engineering a machine intelligence rivaling
| that of a cat or a dog, let alone within the next 5 to 10
| years.
|
| Depends on your definition of "intelligence." No, they can't
| reliably navigate the physical world or have long-term
| memories like cats or dogs do. Yes, they can outperform them
| on intellectual work in the written domain.
|
| > Does their existence make you obsolete?
|
| Imagine if for everything you tried to do, there was someone
| else who could do it better, no matter what domain, no matter
| where you were, and no matter how hard you tried. You are not
| an economically viable member of society. Some could deal
| with that level of demoralisation, but many won't.
| shortrounddev2 wrote:
| You can have an LLM crank out words but you can't make them
| mean anything
| 20after4 wrote:
 | Suno is pretty good at going from a 3- or 4-word concept to
 | a complete song with lyrics, melody, vocals, structure
 | and internal consistency. I've been thoroughly impressed. The
| songs still suck but they are arguably no worse than 99% of
| what the commercial music business has been pumping out for
| years. I'm not sure AI is ready to invent those concepts from
| nothing yet but it may not be far off.
| immibis wrote:
| I used it. Once you get over the novelty you realize that
| all the songs are basically the same. Except for
| https://www.immibis.com/ex509__immibis_uc13_shitmusic.mp3
| which you should pay attention to the lyrics in.
|
| > they are arguably no worse than 99% of what the
| commercial music business has been pumping out for years
|
| Correct, and that says a lot about our society.
| wild_egg wrote:
| Something about that mp3 actually feels disturbing. Is it
| normal for that type of model to attempt communication
| that way?
|
| Struggling to find the words but the synthetic voice
| directly addressing the prompt is really surreal feeling.
| Philpax wrote:
| Meaning is in the eye of the beholder. Just look at how many
| people enjoyed this and said it was "just what they needed",
 | despite it being composed entirely of AI-generated music:
| https://www.youtube.com/watch?v=OgU_UDYd9lY
| boredemployee wrote:
 | honestly wondering, how do you know it was AI generated?
| Philpax wrote:
 | There's an "Altered or synthetic content" notice in the
| description. You can also look at the rest of the
| channel's output and draw some conclusions about their
| output rate.
|
| (To be clear, I have no problem with AI-generated music.
| I think a lot of the commenters would be surprised to
| hear of its origin, though.)
| jillesvangurp wrote:
| Evolution is not about being better / winning but about
| adapting. People will adapt and co-exist. Some better than
| others.
|
| AIs aren't really part of the whole evolutionary race for
| survival so far. We create them. And we allow them to run. And
 | then we shut them down. Maybe there will be some AI-enhanced
 | people who start doing better. And maybe the people bit
 | becomes optional at some point. At that point you might argue
 | we've just morphed/evolved into whatever that is.
| lm28469 wrote:
| You could have said the same thing when we invented the steam
| engine, mechanized looms, &c. As long as the driving force of
| the economy/technology is "make numbers bigger" there is no end
| in sight, there will never be enough, there is no goal to
| achieve.
|
| We already live lives which are artificial in almost every way.
| People used to die of physical exhaustion and malnutrition, now
| they die of lack of exercise and gluttony, surely we could have
 | stopped somewhere in the middle. It's not a resource or
 | technology problem at that point; it's societal/political.
| charlie0 wrote:
| It's the human scaling problem. What systems can be used to
| scale humans to billions while providing the best possible
| outcomes for everyone? Capitalism? Communism?
|
 | Another possibility is to not let us scale. I thought Logan's
 | Run was a very interesting take on this.
| js8 wrote:
| > By embracing that complementary relationship, we can advance
| as a society and be free to do the things that only humans can
| do.
|
| This complementarity already exists in our brains. We have
| evolutionarily older parts of the brain that deal with our basic
| needs through emotions and an evolutionarily younger neocortex
| that deals with rational thought. They have a complicated
| relationship; both can influence our actions through mutual
| interaction. Morality is managed by both, neither of them is
| necessarily more "humane" than the other.
|
| In my view, AI will be just another layer, an additional
| neocortex. Our biological neocortex is capable of tracking
| un/cooperative behavior of around 100 people in the tribe, and
| allows us to learn a couple of useful skills for life.
|
| The "personal AI neocortex" will track behavior of 8 billion
| people on the planet, and will have mastery of all known
| skills. It is gonna change humans for the better; I have little
| doubt about it.
| dominicrose wrote:
| > I think we need to consider what the end goal of technology
| is at a very broad level.
|
| "we" don't control ourselves. If humans can't find enough
| energy sources in 2200 it doesn't mean they won't do it in
| 1950.
|
| It would be pretty bad to lose access to energy after having
| it, worse than never having it IMO.
|
| The number of new technologies discovered in the past 100 years
| (which is a tiny amount of time) is insane and we haven't
| adapted to it, not in a stable way.
| norir wrote:
| This is undeniably true. The consequences of a technological
| collapse at this scale would be far greater than having never
| had it in the first place. For this reason, the people in
| power (in both industry and government) have more destructive
| potential than at any time in human history by far. And they
| act as if they have little to no awareness of the enormous
| responsibility they shoulder.
| mperham wrote:
| > When that goal is achieved, what then are humans "for"?
| Humans need purpose, and we are going to be in a position where
| we don't serve any purpose. I am worried about what will become
| of us after we have made ourselves obsolete.
|
| Read some philosophy. People have been wrestling with this
| question forever.
|
| https://en.wikipedia.org/wiki/Philosophy
|
| In the end, all we have is each other. Volunteer, help others.
| nthingtohide wrote:
| > Humans need purpose.
|
| Let me paint a purpose for you which could take millions of
| years. How about building an atomic force microscope
| equivalent that can probe Calabi-Yau manifolds to send
| messages to other multiverses?
| Fin_Code wrote:
| I'm just hoping it brings out an explosion of new thought and not
| less thought. It will likely be both.
| shortrounddev2 wrote:
| I have found there to be less diversity in thought on the
| internet in the last 10 years. I used to find lots of wild
| ideas and theories out there on obscure sites. Now it seems
| like every website is the same, talking about the same things.
| behringer wrote:
| They say the web is dead, but I think we just have bad search
| engines.
| TimorousBestie wrote:
| I find this difficult to understand. There was a great
| explosion of conspiracy theories in the last ten years, so
| you should be seeing more of it.
| shortrounddev2 wrote:
| Even the conspiracy theory community has become like this.
| What used to be a community of passionate skeptics, ufo-
| ologists, and rabid anti-statists has turned into the most
| overtly boot licking right wing apologists who apply an
| incredible amount of mental energy to justifying the
| actions of what is transparently and blatantly the most
| corrupt government in American history, so long as that
| government is weaponized against whatever identity and
| cultural groups they hate
| willy_k wrote:
| You're describing Twitter, not conspiracy communities in
| general. On the UFO front at least I am aware of multiple
| YouTube channels and Discord servers with healthy
| diversity of thought, and I'm sure the same goes for
| other areas.
| immibis wrote:
| Maybe they're all the same conspiracy theories. All the
| current conspiracy theories are that immigrants are
| invading the country and Biden's in on it. Where is the
| next Time Cube or TempleOS?
| TimorousBestie wrote:
| We're living through the second renaissance of the flat-
| earthers, which aren't all that concerned with Biden
| (beyond the usual "the govt is concealing the truth"
| meme).
| 20after4 wrote:
| Two words: Endless September.
| tim333 wrote:
| If you go on twitter/x you will find a lot of wild ideas,
| many completely contradicting other groups on X and/or
| reality. It can be scary how polarized it is. If you open a
| new account and follow/like a few people with some odd
| viewpoint, soon your feed will be filled with that viewpoint,
| whatever it is.
| chuckadams wrote:
| It certainly is liberating all our creative works from our
| possession...
| vonneumannstan wrote:
| Intellectual Property is a questionable idea to begin with...
| mrdependable wrote:
| Why do you say that?
| chuckadams wrote:
| It's not the loss of ownership I'm lamenting, it's the loss
| of production by humans in the first place.
| Philpax wrote:
| Humans will always produce; it's just that those
| productions may not be financially viable, and may not have
| an audience. Grim, but also not too far off from the status
| quo today.
| vonneumannstan wrote:
| People made the same argument about Cameras vs Painting.
| "Humans are no longer creating the art!"
|
| But I doubt most people would subscribe to that view now
| and would say Photography is an entirely new art form.
| NitpickLawyer wrote:
| > People made the same argument about Cameras vs
| Painting.
|
| I remember that from a couple of years ago, when Stable
| Diffusion came out. There was a lot of talk about "art"
| and "AI" and someone posted a collection of articles /
| interviews / opinion pieces about this exact same thing -
| painting vs. cameras.
| pesus wrote:
| Using generative AI is a lot closer to hiring a
| photographer and telling them to take pictures for you
| than taking the pictures yourself.
| wubrr wrote:
| I mean, you still have the option of taking pictures
| yourself, if you find that creative and rewarding...
| pesus wrote:
| Absolutely, but it still doesn't make hiring a
| photographer an art form.
| wubrr wrote:
| How do you define 'art form'? Anything can arguably be an
| art form.
| thrwthsnw wrote:
| Why do we give awards to Directors then?
| MattGrommes wrote:
| This is nit-picky but you're probably actually referring
| to Cinematographers, or Directors of Photography. They're
| the ones who deal with the actual cameras, lenses, use of
| light, etc. Directors deal with the actors and the
| script/writer.
|
| The reason we give them awards is that the camera can't
| tell you which lens will give you the effect you want or
| how to emphasize certain emotions with light.
| immibis wrote:
| If we're abolishing it, we have to really abolish it, both
| ways: not abolishing companies' responsibilities while
| keeping their rights, and individuals' rights while keeping
| their responsibilities.
| pera wrote:
| It's for sure less questionable than the current proposition
| of letting a handful of billionaires exploit the effort of
| millions of workers, without permission and completely
| disregarding the law, just for the sake of accumulating more
| power and more billions.
|
| Sure, patent trolls suck, so do MAFIAA, but a world where
| creators have no means to subsist, where everything you do
| will be captured by AI corps without your permission, just to
| be regurgitated into a model for a profit, sucks way way more
| adamsilkey wrote:
| How so? Even in a perfectly egalitarian world, where no one
| had to compete for food or resources, in art, there would
| still be a competition for attention and time.
| lupusreal wrote:
| There is the general principle of legal apparatus to
| facilitate artists getting paid. And then there is the
| reality of our extant system, which retroactively extends
| copyright terms so corporations who bought corporations who
| bought corporations... who bought the rights to an
| artistic work a century ago can continue to collect rent on
| that today. Whatever you think of the idealistic premise,
| the reality is absurd.
| palmotea wrote:
| > Intellectual Property is a questionable idea to begin
| with...
|
| I know! It's totally and completely immoral to give the
| little guy rights against the powerful. It infringes in the
| privileges and advantages of the powerful. It is the Amazons,
| the Googles, the Facebooks of the world who should capture
| all the economic value available. Everyone else must be
| content to be paid in exposure for their creativity.
| Philpax wrote:
| I'm glad we're seeing the death of the concept of owning an
| idea. I just hope the people who were relying on owning a slice
| of the noosphere can find some other way to sustain themselves.
| robertlagrant wrote:
| Did we previously have the concept of owning an idea?
| observationist wrote:
| Lawyers and people with lots of money figured out how to
| make even bigger piles of money for lawyers and people with
| lots of money from people who could make things like art,
| music, and literature.
|
| They occasionally allowed the people who actually make
| things to become wealthy in order to incentivize other
| people who make things to continue making things, but
| mostly it's just the people with lots of money (and the
| lawyers) who make most of the money.
|
| Studios and publishers and platforms somehow convinced
| everyone that the "service" and "marketing" they provided
| was worth the vast majority of the revenue creative works
| generated.
|
| This system should be burned to the ground and reset, and
| any indirect parties should be legally limited to at most
| 15% of the total revenues generated by a creative work.
| We're about to see Hollywood quality AI video - the cost of
| movie studios, music, literature, and images is nominal.
| There are already creative AI series and ongoing works that
| beat 90's level visual effects and storyboarding being
| created and delivered via various platforms for free
| (although the exposure gets them ad revenue).
|
| We better figure this stuff out, fast, or it's just going
| to be endless rentseeking by rich people and drama from
| luddites.
| dingnuts wrote:
| patents and copyrights allow ownership of ideas and of the
| specific expression of ideas
| sorokod wrote:
| Keeping technology secret or forbidden is as old as
| humanity itself.
| 01HNNWZ0MV43FF wrote:
| I just wish it was not, as usual, the people with the most
| money benefiting first and most
| theF00l wrote:
| Copyright law protects the expression of ideas, not the ideas
| themselves. My favourite case law reinforcing this was the
| dispute between David Bowie and the Gallagher brothers.
|
| I would argue patents are closer to protecting ideas, and
| those are alive and well.
|
| I do agree copyright law is terribly outdated but I also feel
| the pain of the creatives.
| behringer wrote:
| Seven years, or maybe 14: that's all anybody needs. Anything
| else is greed and stops human progress.
| Philpax wrote:
| I appreciate someone named "behringer" posting this
| sentiment.
| (https://en.wikipedia.org/wiki/Behringer#Controversies)
| justonceokay wrote:
| If we are headed to a star-trek future of luxury communism,
| there will definitely be growing pains as the things we value
| become valueless within our current economic system. Even
| though the book itself is so-so IMO, Down and Out in the Magic
| Kingdom provides a look at a future economy where there is an
| infinite supply of physical goods so the only economy is that
| of reputation. People compete for recognition instead of money.
|
| This is all theoretical; I don't know if I believe that we as
| humans can overcome our desire to hoard and fight over our
| possessions.
| robertlagrant wrote:
| You're saying something exactly backwards from reality. Star
| Trek is communism (except it's not) because there's no
| scarcity. It's not selfishness that's the problem. It's the
| ever-increasing number of things invented inside capitalism
| we deem essential once invented.
| Detrytus wrote:
| I always say this: we are headed to a star-trek future, but
| we will not be the Federation, we will become Borg. Between
| social media platforms, smartphones and "wokeness" the
| inevitable result is that everybody will be forced into
| compliance, no originality or divergent thinking will be
| tolerated.
| renewiltord wrote:
| It is an interesting time for LLMs to burst on the scene. Most
| online forums have already turned people into text replicators.
| Most HN commenters can be prompted into "write a comment about
| slop violating copyright" / "write a comment about Google
| violating privacy" / "write a comment about managers not
| understanding remote work". All you have to do is state the
| opposite.
|
| A perfect time for LLMs to show up and do the same. The subreddit
| simulators were hilarious because of the unusual ways they would
| perform but a modern LLM is a near perfect approximation of the
| average HN commenter.
|
| I would have assumed that making LLMs indistinguishable from
| these humans would make those kinds of comments less interesting
| to interact with but there's a base level of conversation that
| hooks people.
|
| On Twitter, LLM-equipped Indians cosplay as right wing white
| supremacists and amass large followings (also bots, perhaps?)
| revealed only when they have to participate in synchronous
| conversation.
|
| And yet, they are still popular. Even the "Texas has warm water
| ports" Texan is still around and has a following (many of whom
| seem non-bot though who can tell?).
|
| Even though we have a literal drone, humans still engage in drone
| behaviour and other humans still engage them. Fascinating. I
| wonder whether the truth is that the inherent past-replication
| of low-temperature LLMs is more likely to fix us to our present
| state than to raise us to a new equilibrium.
|
| Experiments in Musical Intelligence is now over 40 years old and
| I thought it was going to revolutionize things: unknown melodies
| discovered by machine married to mind. Maybe LLMs aren't going to
| move us forward only because this point is already a strong
| attractor. I'm optimistic about the power of boredom, though!
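A quick sketch of what "low-temperature" means in this context (toy logits, invented numbers, not from any real model): sampling temperature divides the model's next-token logits before the softmax, and as temperature drops, probability mass collapses onto the single most likely continuation, which is the past-replicating behaviour being described.

```python
import math

def softmax_with_temperature(logits, t):
    """Convert logits to probabilities; lower t sharpens the distribution."""
    scaled = [l / t for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token logits: one common continuation, two rarer ones.
logits = [2.0, 1.0, 0.5]

sharp = softmax_with_temperature(logits, 0.2)  # low temperature
flat = softmax_with_temperature(logits, 2.0)   # high temperature

# At low temperature nearly all mass lands on the top token, so
# sampling keeps reproducing the most "average" continuation.
print(sharp[0] > 0.99, flat[0] < 0.5)
```

At temperature 0.2 the top token absorbs over 99% of the probability; at 2.0 the three options are nearly interchangeable.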
| dkdcwashere wrote:
| > I would have assumed that making LLMs indistinguishable from
| these humans would make those kinds of comments less
| interesting to interact with but there's a base level of
| conversation that hooks people.
|
| I think it is heading in this direction, just takes a very long
| time. 50% of people are dumber than average
| seadan83 wrote:
| Dumber than median*
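seadan83's correction is easy to check with toy numbers (invented, purely illustrative): "half the values are below" holds for the median by construction, while the mean can be dragged arbitrarily far by outliers in a skewed distribution.

```python
import statistics

# Toy, heavily skewed sample: one outlier drags the mean upward.
values = [20, 25, 30, 35, 40, 45, 1000]

mean = statistics.mean(values)      # ~170.7
median = statistics.median(values)  # 35

below_mean = sum(1 for v in values if v < mean)      # 6 of 7
below_median = sum(1 for v in values if v < median)  # 3 of 7

print(f"mean={mean:.1f}, median={median}")
print(f"{below_mean}/{len(values)} below the mean, "
      f"{below_median}/{len(values)} below the median")
```

Here 6 of 7 values sit below the mean, but (excluding the median itself) exactly half sit on each side of the median.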
| tim333 wrote:
| "Texas has warm water ports" is more the hallmark of Russian
| propagandists. I think LLMs go more for saying 'delve' and odd
| hyphens and stuff?
| slibhb wrote:
| LLMs are statistical models trained on human-generated text. They
| aren't the perfectly logical "machine brains" that Asimov and
| others imagined.
|
| The upshot of this is that LLMs are quite good at the stuff that
| he thinks only humans will be able to do. What they aren't so
| good at (yet) is really rigorous reasoning, exactly the opposite
| of what 20th century people assumed.
| beloch wrote:
| What we used to think of as "AI" at one point in time becomes a
| mere "algorithm" or "automation" by another point in time. A
| lot of what Asimov predicted has come to pass, very much in the
| way he saw it. We just no longer think of it as "AI".
|
| LLMs are just the latest form of "AI" that, for a change,
| doesn't quite fit Asimov's mold. Perhaps it's because they're
| being designed to replace humans in creative tasks rather than
| liberate humans to pursue them.
| israrkhan wrote:
| Exactly... as someone said, "I need AI to do my laundry and
| dishes, while I can focus on art and creative stuff"... But
| AI is doing the exact opposite, i.e. creative stuff (drawing,
| poetry, coding, document creation, etc.), while we are left
| to do the dishes and laundry.
| TheOtherHobbes wrote:
| As someone else said - maybe you haven't noticed but
| there's a machine washing your clothes, and there's a good
| chance it has at least some very basic AI in it.
|
| It's been quite a while since anyone in the developed world
| has had to wash clothes by slapping them against a rock
| while standing in a river.
|
| Obviously this is really wishing for domestic robots, not
| AI, and robots are at least a couple of levels of
| complexity beyond today's text/image/video GenAI.
|
| There were already huge issues with corporatisation of
| creativity as "content" long before AI arrived. In fact one
| of our biggest problems is the complete collapse of the
| public's ability to imagine anything at all outside of
| corporate content channels.
|
| AI can reinforce that. But - ironically - it can also be
| very good at subverting it.
| Qworg wrote:
| The wits in robotics would say we already have domestic
| robots - we just call them dishwashers and washing
| machines. Once something becomes good enough to take the
| job completely, it gets the name and drops "robotic" -
| that's why we still have robotic vacuums.
| j_bum wrote:
| Oh that's an interesting idea.
|
| I know I could google it, but I wonder whether washing
| machines were originally called "automatic clothes washers"
| or something similar before they became widely adopted.
| bad_user wrote:
| I have yet to enjoy any of the "creative" slop coming out
| of LLMs.
|
| Maybe some day I will, but I find it hard to believe,
| given that an LLM just copies its training material. All the
| creativity comes from the human input, but even though
| people can now cheaply copy the style of actual artists,
| that doesn't mean they can make it work.
|
| Art is interesting because it is created by humans, not
| despite it. For example, poetry is interesting because it
| makes you think about what the author meant. With LLMs
| there is no author, which makes those generated poems
| garbage.
|
| I'm not saying that it can't work at all, it can, but not
| in the way people think. I subscribe to George Orwell's
| dystopian view from 1984, which already imagined the
| "versificator".
| ChrisMarshallNY wrote:
| _> I have yet to enjoy any of the "creative" slop coming
| out of LLMs._
|
| Oh, come on. Who can't love the "classic" song, _I Glued
| My Balls to My Butthole Again_ [0]?
|
| I mean, that's AI "creativity," at its peak!
|
| [0] https://www.youtube.com/watch?v=wPlOYPGMRws (Probably
| NSFW)
| wubrr wrote:
| > LLMs are statistical models trained on human-generated text.
|
| I mean, not only human-generated text. Also, human brains are
| arguably statistical models trained on human-
| generated/collected data as well...
| slibhb wrote:
| > Also, human brains are arguably statistical models trained
| on human-generated/collected data as well...
|
| I'd say no, human brains are "trained" on billions of years
| of sensory data. A very small amount of that is human-
| generated.
| wubrr wrote:
| Almost everything we learn in schools, universities, most
| jobs, history, news, hackernews, etc is literally human-
| generated text. Our brains have an efficient structure to
| learn language, which has evolved over time, but the
| process of actually learning a language happens after you
| are born, based on human-generated text/voice. Things like
| balance/walking, motion control, speaking (physical voice
| control), other physical things are trained on sensory
| data, but there's no reason LLMs/AIs can't be trained on
| similar data (and in many cases they already are).
| skydhash wrote:
| What we generate is probably a function of our sensory
| data + what we call creativity. At least humans still
| have access to the sensory data, so we can separate the
| two (with varying success).
|
| LLMs have access to what we generate, but not the source.
| So they embed how we may use words, but not why we use one
| word and not another.
| wubrr wrote:
| > At least humans still have access to the sensory data
|
| I don't understand this point - we can obviously collect
| sensory data and use that for training. Many
| AI/LLM/robotics projects do this today...
|
| > So it embed how we may use words, but not why we use
| this word and not others.
|
| Humans learn language by observing other humans use
| language, not by being taught explicit rules about when
| to use which word and why.
| skydhash wrote:
| > _I don 't understand this point - we can obviously
| collect sensory data and use that for training._
|
| Sensory data is not the main issue, but how we interpret
| them.
|
| In Jacob Bronowski's _The Origins of Knowledge and
| Imagination_, IIRC, there's an argument that our eyes
| are very coarse sensors. Instead they do basic analysis
| from which the brain can infer the real world around us
| with other data from other organs. Like Plato's cave, but
| with many more dimensions.
|
| But we humans came with the same mechanisms that roughly
| interpret things the same way. So there's some
| commonality there about the final interpretation.
|
| > _Humans learn language by observing other humans use
| language, not by being taught explicit rules about when
| to use which word and why._
|
| Words are symbols that refer to things and the relations
| between them. In the same book, there's a rough
| explanation for language which describes the three
| elements that define it: Symbols or terms, the grammar
| (or the rules for using the symbols), and a dictionary
| which maps the symbols to things and the rules to
| interactions in another domain that we already accept as
| truth.
|
| Maybe we are not taught the rules explicitly, but there's
| a lot of training done with corrections when we say a
| sentence incorrectly. We also learn the symbols and the
| dictionary as we grow and explore.
|
| So LLMs learn the symbols and the rules, but not the
| whole dictionary. They can use the rules to create correct
| sentences, and relate some symbols to others, but
| ultimately there's no dictionary behind them.
| wubrr wrote:
| > In the same book, there's a rough explanation for
| language which describe the three elements that define
| it: Symbols or terms, the grammar (or the rules for using
| the symbols), and a dictionary which maps the symbols to
| things and the rules to interactions in another domain
| that we already accept as truth.
|
| There are 2 types of grammar for natural language -
| descriptive (how the language actually works and is used)
| and prescriptive (a set of rule about how a language
| should be used). There is no known complete and
| consistent rule-based grammar for any natural human
| language - all of these grammar are based on some person
| or people, in a particular period of time, selecting a
| subset of the real descriptive grammar of the language
| and saying 'this is the better way'. Prescriptive, rule-
| based grammar is not at all how humans learn their first
| language, nor is prescriptive grammar generally complete
| or consistent. Babies can easily learn any language, even
| ones that do not have any prescriptive grammar rules,
| just by observing - there have been many studies that
| confirm this.
|
| > there's a lot of training done with corrections when we
| say a sentence incorrectly.
|
| There's a lot of the same training for LLMs.
|
| > So LLMs learn the symbols and the rules, but not the
| whole dictionary. It can use the rules to create correct
| sentences, and relates some symbols to other, but
| ultimately there's no dictionary behind it.
|
| LLMs definitely learn 'the dictionary' (more accurately a
| set of relations/associations between words and other
| types of data) and much better than humans do, not that
| such a 'dictionary' is an actual determined part of the
| human brain.
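The "dictionary as relations/associations between words" idea can be sketched with invented 3-d vectors (not real embeddings, purely illustrative): in an LLM, words map to learned vectors whose geometry places associated words close together, rather than into an explicit lookup table.

```python
import math

# Hypothetical toy embeddings: three words as 3-d vectors.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Associated words end up closer together than unrelated ones.
king_queen = cosine(embeddings["king"], embeddings["queen"])
king_apple = cosine(embeddings["king"], embeddings["apple"])
print(king_queen > king_apple)
```

The "dictionary" is thus implicit in the geometry: similarity is computed, not looked up.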
| 827a wrote:
| Maybe; at some level are dogs' brains also simple sensory-
| collecting statistical models? A human baby and a dog are
| born on the same day; that dog never leaves that baby's side,
| for 20 years. It sees everything it sees, it hears everything
| it hears, it is given the opportunity to interact with its
| environment in roughly the same way the human baby does, to
| the degree to which they are both physically capable. The
| intelligence differential after that time will still be
| extraordinary.
|
| My point in bringing up that metaphor is to focus the
| analogy: When people say "we're just statistical models
| trained on sensory data", we tend to focus way too much on
| the "sensory data" part, which has led to for example AI
| manufacturers investing billions of dollars into slurping up
| as much human intellectual output as possible to train
| "smarter" models.
|
| The focus on the sensory input inherently devalues our
| quality of being; that who we are is predominantly explicable
| by the world around us.
|
| However: We should be focusing on the "statistical model"
| part: that even _if_ it is accurate to holistically describe
| the human brain as a statistical model trained on sensory
| data (which I have doubts about, but those are fine to leave
| to the side): it's very clear that the fundamental statistical
| model itself is simply _so far_ superior in human brains that
| comparing it to an LLM is like comparing us to a dog.
|
| It should also be a focal point for AI manufacturers and
| researchers. If you are on the hunt for something along the
| spectrum of human level intelligence, and during this hunt
| you are providing it ten thousand lifetimes of sensory data,
| to produce something that, maybe, if you ask it right, it can
| behave similarly to a human who has trained in the domain in
| only years: You're barking up the wrong tree. What you're
| producing isn't even on the same spectrum; that doesn't mean
| it isn't useful, but it's not human-like intelligence.
| wubrr wrote:
| Well the dog brain and human brain are very different
| statistical models, and I don't think we have any objective
| way of comparing/quantifying LLMs (as an architecture) vs
| human brains at this point. I think it's likely LLMs are
| currently not as good as human brains for human tasks, but
| I also think we can't say with any confidence that LLMs/NNs
| can't be better than human brains.
| 827a wrote:
| For sure; we don't have a way of comparing the
| architectural substrate of human intelligence versus LLM
| intelligence. We don't even have a way of comparing the
| architectural substrate of one human brain with another.
|
| Here's my broad concern: On the one hand, we have an AI
| thought leader (Sam Altman) who defines super-
| intelligence as surpassing human intelligence at all
| measurable tasks. I don't believe it is controversial to
| say that we've established that the _goal_ of LLM
| intelligence is something along these lines: it exists on
| the spectrum of human intelligence, its trained on human
| intelligence, and we want it to surpass human
| intelligence, on that spectrum.
|
| On the other hand: we don't know how the statistical
| model of human intelligence works, at any level at all
| which would enable reproduction or comparison, and
| there's really good reason to believe that the human
| intelligence statistical model is vastly superior to the
| LLM model. The argument for this lies in my previous
| comment: the vast majority of contribution of
| intelligence advances in LLM intelligence comes from
| increasing the volume of training data. Some intelligence
| likely comes from statistical modeling breakthroughs
| since the transformer, but by and large its from training
| data. On the other hand: Comparatively speaking, the most
| intelligent humans are not more intelligent because
| they've been alive for longer and thus had access to more
| sensory data. Some minor level of intelligence comes from
| the quality of your sensory data (studying, reading,
| education). But the vast majority of intelligence
| difference between humans is inexplicable; Einstein was
| just Born Smarter; God granted him a unique and better
| statistical model.
|
| This points to the undeniable reality that, at the very
| least, the statistical model of the human brain and that
| of an LLM is very different, which _should_ cause you to
| raise eyebrows at Sam Altman 's statement that
| superintelligence will evolve along the spectrum of human
| intelligence. It might, but it's like arguing that the app
| you're building is going to be the highest quality and
| fastest MacOS app ever built, and you're building it
| using WPF and compiling it for x86 to run on WINE and
| Rosetta. GPT isn't human intelligence; at best, it might
| be _emulating_, extremely poorly and inefficiently, some
| parts of human intelligence. But, they didn't get the
| statistical model right, and without that it's like
| forcing a square peg into a round hole.
| matheusd wrote:
| Attempting to summarize your argument (please let me know
| if I succeeded):
|
| _Because_ we can't compare human and LLM architectural
| substrates, LLMs will _never_ surpass human-level
| performance on _all_ tasks that require applying
| intelligence?
|
| If my summary is correct, then is there _any_
| hypothetical replacement for LLM (for example,
| LLM+robotics, LLMs with CoT, multi-modal LLMs, multi-
| modal generative AI systems, etc) which would cause you
| to then consider this argument invalid (i.e. for the
| replacement, it _could_, _sometime_, replace humans for
| _all_ tasks)?
| 827a wrote:
| Well, my argument is more-so directed at the people who
| say "well, the human brain is just a statistical model
| with training data". If I say: Both birds and airplanes
| are just a fuselage with wings, then proceed to dump
| billions of dollars into developing better wings; we're
| missing the bigger picture on how birds and airplanes are
| different.
|
| LLM luddites often call LLMs stochastic parrots or
| advanced text prediction engines. They're right, in my
| view, and I feel that LLM evangelists often don't
| understand why. Because LLMs have a vastly different
| statistical model, even when they showcase signs of
| human-like intelligence, what we're seeing cannot
| possibly be human-like intelligence, because human
| intelligence is inseparable from its statistical model.
|
| But, it might still be intelligence. It might still be
| economically productive and useful and cool. It might
| also be scarier than most give it credit for being; we're
| building something that clearly has some kind of
| intelligence, crudely forcing a mask of human skin over
| it, oblivious to what's underneath.
| BeetleB wrote:
| Reminds me of an old math professor I had. Before word
| processors, he'd write up the exam on paper, and the department
| secretary would type it up.
|
| Then when word processors came around, it was expected that
| faculty members will type it up themselves.
|
| I don't know if there were fewer secretaries as a result, but
| professors' lives got much worse.
|
| He misses the old days.
| zusammen wrote:
| To be truthful, though, that's only like 0.01 percent of the
| "academia was stolen from us and being a professor (if you
| ever get there at all) is worse" problem.
| aszantu wrote:
| A funny thing about Asimov was how he came up with the laws
| of robotics and then wrote cases where they don't work. There
| are a few that I remember, one where a robot was lying
| because a bug in its brain gave it empathy and it didn't want
| to hurt humans.
| bell-cot wrote:
| Guess: https://en.wikipedia.org/wiki/Liar!_(short_story)
| soulofmischief wrote:
| That is still one of my favorite stories of all time. It really
| sticks to you. It's part of the I, Robot anthology.
| nitwit005 wrote:
| I was always a bit surprised other sci fi authors liked the
| "three laws" idea, as it seems like a technological variation
| of other stories about instructions or wishes going wrong.
| nthingtohide wrote:
| Narratives build on top of each other so that complex
| narratives can be built. This is also the reason why Family
| Guy can speedrun through all the narrative arcs developed by
| culture in a 30-second clip.
|
| Family Guy Nasty Wolf Pack
|
| https://youtu.be/5oW9mNbMbmY
|
| The perfect wish to outsmart a genie | Chris & Jack
|
| https://youtu.be/lM0teS7PFMo
| buzzy_hacker wrote:
| Same here. A main point of _I, Robot_ was to show why the
| three laws don't work.
| cogman10 wrote:
| I may be misremembering, but I thought the main point of the
| I, Robot series was that regardless the law, incomplete
| information can still end up getting someone killed.
|
| In all the cases of killing, the robots were innocent. It
| was either a human that tricked the robot or didn't tell
| the robot what they were doing.
|
| For example, a lady killed her husband by asking a robot to
| detach his arm and give it to her. Once she got it, she
| beat the husband to death and the robot didn't have the
| capability to stop her (since it gave her its arm). That
| caused the robot to effectively self-destruct.
|
| Giskard, I believe, was the only one that killed people. He
| ultimately ended up self-destructing as a result (the fate
| of robots that violate the laws).
| tedunangst wrote:
| That's certainly not the plot of Little Lost Robot.
| cogman10 wrote:
| Little lost robot was about a robot with the first law
| modified. That's not about the law failing, but about failing
| to install the full law.
| pfisch wrote:
| I mean, now we call the three laws "alignment", but it
| honestly seems inevitable that it will go wrong eventually.
|
| That of course isn't stopping us from marching forward in the
| name of progress.
| hinkley wrote:
| And one that was sacrificing a few for the good of the species.
| You can save more future humans by killing a few humans today
| that are causing trouble.
| pfisch wrote:
| Isn't that the plot of westworld season 3?
| hinkley wrote:
| I think better than half the writers on Westworld were not
| born yet when the OG Foundation books were written.
| creer wrote:
| Good conceit or theme by an author - on which to base a series
| of books that will sell? Not everything is an engineering or
| math project.
| darepublic wrote:
| Seeing how most people employ their creativity, namely for
| selfish loopholes and inconsiderate behaviour, I am a little
| wary of empowering them.
| lupusreal wrote:
| Most creative work is benevolent or at least harmless.
| Certainly some people are malevolent, maybe even everybody
| _some_ of the time, but you shouldn't believe that to
| represent the majority of creativity. That's way too
| misanthropic.
| hoseyor wrote:
| I have a genuine question I can't find or come up with a viable
| answer to, a matter of said "unpleasantness" as he puts it: how
| do people make money or otherwise sustain themselves in this AI
| scenario we are facing?
|
| Has anyone heard a viable solution, or even has one themselves?
|
| I don't hear anything about UBI anymore. Could that be because
| roughly 60+ million people have flooded into western countries
| from countries with populations so large that they are
| effectively endless? What do we do about that? Will that snuff
| out any kind of advancement in the west when roughly 6 billion
| people all want to be in the west, where everyone gets UBI and
| it's the land of milk and honey?
|
| So what do we do then? We can't all be tech industry people with
| six-figure salaries and vested ownership, and most people aren't
| multi-millionaires that can live far away from the consequences
| while demanding others subject themselves to them.
|
| Which way?
| janalsncm wrote:
| I have soured on UBI because it tries to use a market solution
| to deal with problems that I don't think markets can fix.
|
| I want everyone to have food, housing, healthcare, education,
| etc. in a post scarcity world. That should be possible. I don't
| think giving people cash is the best way to accomplish that. If
| you want people to have housing, give them housing. If you want
| people to have food, give them food.
|
| Cash doesn't solve the supply problem, as we can see with
| housing now. You would think a rise in the cost of housing
| would lead to more supply, but the cost of real estate also
| increases the cost of building.
| slfnflctd wrote:
| I've always thought there should be a 'minimum viable
| existence' option for those who are willing to forego most
| luxuries in exchange for not being required to do anything
| specific other than abide by reasonable laws.
|
| It would be very interesting to see the percentage breakdowns
| of how such people chose to spend their time. In my opinion,
| there would be enough benefit to society at large to make it
| worthwhile. For a large group (if not the majority), I'm
| certain the situation would turn out to be completely
| temporary-- they would have the option to prepare themselves
| for some type of work they're better adapted to perform and/or
| enjoy, ultimately enhancing the culture and economy. Most of
| the rest could be useful as research subjects, if they were
| willing of course.
|
| Obviously this is a bit of a utopian fantasy, but what can I
| say, Star Trek primed me to hope for such a future.
| nthingtohide wrote:
| There will be relative scarcity. Consider a scenario where
| iPhone 50 is manufactured in a dark factory, but there is still
| a waiting period to get access to it. This is because of
| resource bottlenecks.
| GeoAtreides wrote:
| >how do people make money or otherwise sustain themselves in
| this AI scenario we are facing?
|
| 1% of the labour force works in agriculture:
|
| https://ourworldindata.org/grapher/share-of-the-labor-force-...
|
| 1%
|
| let that number sink in; think about what it really means.
|
| And what it means is that at least basic food (unprocessed, no
| meat) could be completely free. It may take some smart
| logistics, but it's doable. All of our food is already one
| step, one small step, away from becoming free for everyone.
|
| This applies to clothes and basic tools as well.
| janalsncm wrote:
| > Isaac Asimov describes artificial intelligence as "a phrase
| that we use for any device that does things which, in the past,
| we have associated only with human intelligence."
|
| This is a pretty good definition, honestly. It explains the AI
| Effect quite well: calculators aren't "AI" because it's been a
| while since humans were the only ones who could do arithmetic. At
| one point they were, though.
| azinman2 wrote:
| Although calculators can now do things almost no human can do,
| or at least not in any reasonable time, most (now) wouldn't
| call them AI. They're a tool, with a very limited domain.
| janalsncm wrote:
| That's my point, it's not AI now. It used to be.
| hinkley wrote:
| Similarly, we esteem performance optimizations so
| aggressively that a lot of things that used to be called
| performance work are now called architecture, good design.
| We just keep moving the goal posts to make things more
| comfortable.
| saalweachter wrote:
| I mean, at one point "calculator" was a job title.
| timewizard wrote:
| The abacus has existed for thousands of years. Those who had
| the job of "calculator" also used pencil and paper to manage
| larger calculations which they would have struggled to do
| without any tools.
|
| That's humanity. We're tool users above anything else. This
| gets lost.
| josefritzishere wrote:
| Isaac Asimov's view of the future has aged surprisingly well. But
| techno-utopianism has not.
| franze wrote:
| I let Gemini 2.5 Pro (the image is from ChatGPT) write a short
| sci-fi story. I think it did a decent job.
|
| https://show.franzai.com/a/tiny-queen-zebu
| Jgrubb wrote:
| > humanity in general will be freed from all kinds of work that's
| really an insult to the human brain.
|
| He can only be referring to these Jira tickets I need to write.
| BeetleB wrote:
| There is a Jira MCP server...
| fragmede wrote:
| oh woah https://glama.ai/mcp/servers/@CamdenClark/jira-mcp
|
| and MCP can work with deepseek running locally. hmm...
| icecap12 wrote:
| As someone who just got done putting a bullet in some long-used
| instances, I both appreciated and needed this laugh. Thanks!
| palmotea wrote:
| I wouldn't put too much stock in this. Asimov was a fantasy
| writer telling _fictional stories_ about the future. He was good
| at it, which is why you listen and why it's enjoyable, but it's
| still all a fantasy.
| timewizard wrote:
| There's also Frank Herbert, who saw AI as ruinous to humanity
| and its evolution, and saw a future where humanity had to fight
| a war against it, resulting in it being banished from the entire
| universe.
| palmotea wrote:
| > There's also Frank Herbert, who saw AI as ruinous to
| humanity and its evolution, and saw a future where humanity
| had to fight a war against it, resulting in it being banished
| from the entire universe.
|
| Did he though? Or was the Butlerian Jihad a backstory whose
| function was to allow him to believably center human characters
| in his stories, given sci-fi expectations of the time?
|
| I like Herbert's work, but ultimately he (and Asimov) were
| producers of stories to entertain people, so entertainment
| always would take priority over truth (and then there's the
| entirely different problem of accurately predicting the
| future).
| MetaWhirledPeas wrote:
| > I wouldn't put too much stock in this. Asimov was a fantasy
| writer telling fictional stories about the future.
|
| Why not? Who is this technology expert with flawless
| predictions? Talking about the future is inherently an exercise
| of the imagination, which is also what fiction writing is.
|
| And nothing he's saying here contradicts our observations of AI
| up to this point. AI artwork has gotten good at copying the
| styles of humans, but it hasn't created any new styles that are
| at all compelling. So leave that to the humans. The same with
| writing; AI does a good job at mimicking existing writing
| styles, but has yet to demonstrate the ability to write
| anything that dazzles us with its originality. So his
| prediction is exactly right: AI _does work that is really an
| insult to the complex human brain_.
| alganet wrote:
'92, huh? That is an opinion from a long time ago.
|
| The question I have is why AI technology is being so aggressively
| advertised nowadays, and why none of it seems to be liberating in
| any way.
|
| Once, the plow liberated humans from some kinds of work. Some
| time later it was just a tool that slaves, very much not
| liberated, used to tend rich people's farms.
|
| Technology is tricky. I don't trust those who are developing AI
| to make it liberating.
|
| The article also plays on the "favorite author" thing. It knows
| many young folk see Asimov as a role model, so it is leveraging
| that emotional connection to gather conversation around a topic
| that is not what it seems to be. I consider it a dirty trick. It
| is disgraceful given the current world situation (AI being used
| for war, surveillance, brainwashing).
|
| We are better than this.
| tim333 wrote:
| >why AI technology is being so aggressively advertised
| nowadays[?]
|
| I'm not sure I've actually seen an advertisement for AI. It's
| being endlessly discussed though on HN and other places,
| probably because it's at an interesting point at the moment
| making rapid progress. And also shoved into a lot of products
| and services of course.
| alganet wrote:
| The definition of advertisement is the least important part
| of my comment.
|
| Focus on what matters for humans.
___________________________________________________________________
(page generated 2025-04-10 23:00 UTC)