[HN Gopher] The expanding dark forest and generative AI
___________________________________________________________________
The expanding dark forest and generative AI
Author : colinprince
Score : 359 points
Date : 2023-01-04 09:31 UTC (13 hours ago)
(HTM) web link (maggieappleton.com)
(TXT) w3m dump (maggieappleton.com)
| alexpotato wrote:
| > as well as real estate price increases in densely populated
| areas.
|
| This particular point made a lot of sense to me given that this
| already happened in New York City.
|
| We've seen rents go up 50%+ for "college grad, just started in
| corporate jobs" style apartments. Yes, part of that is inflation
| + corps are paying more.
|
| Part of it is also young people saw what it was like during
| lockdown to live a "digital only" life and realized that meeting
| people in meatspace is a lot of fun too. Sounds simplistic, in a
| way, but I also believe there was the whole don't know what
| you've got till it's gone effect as well.
| jerf wrote:
| Speaking specifically to the certification point, it is a
| _complete_ non-starter. There are multiple reasons why it won't
| work:
|
| 1. I don't want an _identity_ certified. I'm sure there's a
| human somewhere. I want the _content_ verified as not having been
| autogenerated. Scammers can trivially get "verified"; there are
| plenty of humans who will be able to be "verified" but then
| generate arbitrary amounts of spam on those identities.
|
| 2. I can't even remotely conceive of a way for the incentives of
| the putative "certifier" to line up correctly. They will be
| incentivized to take everyone's money and mark them "certified"
| with as little (expensive) effort as possible. It's the same
| problem TLS certs had, before we basically collectively admitted
| that was the case and fell back to the Let's Encrypt model, only
| orders of magnitude worse.
|
| 3. Plus even if the incentive structure _was_ correct, there
| would still be enormous motivations to cheat. It isn't just
| "spammers" who want to use this tech. Some of the users will have
| the political firepower to throw around and get themselves
| "certified" regardless of the underlying situation, further
| diluting the marker.
|
| To be honest, what this may well mark is the end of the
| international internet as it stands now, and not much less, and
| possibly sooner than you think. The only solution is some sort of
| web of trust, which I use in a broad sense of a solution class,
| not the exact GPG web of trust or something. But you're going to
| have to meet people in person and make your own decisions.
|
| Though we'll probably pass through a decade in which we try the
| centralized authority thing, before people generally realize the
| central authority simply does not and can not have their best
| interests at heart and no possible central authority can ever be
| trusted with the assertion "This person is worth listening to and
| this person is not", no matter what form it takes. Too many
| predatory entities standing by and watching out for any such
| accumulation of authority/power and standing by to take it over
| and drain it of all its value. Too much incentive to cheat and
| not enough that can be done about it.
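The "web of trust" jerf invokes (as a solution class, not GPG specifically) can be sketched as a graph in which in-person verification seeds direct trust and delegated trust attenuates with every hop, so there is never a central root to capture. The names, scores, and max-product rule below are illustrative assumptions, not any particular protocol:

```python
def trust_in(graph, me, target):
    """Best attenuated trust score from `me` to `target`.

    `graph` maps a person to the people they directly trust
    (e.g. after meeting in person), each with a score in (0, 1].
    Delegated trust along a path is the product of edge scores,
    so trust decays with every hop of delegation.
    """
    best = {me: 1.0}          # best known trust score per person
    frontier = [me]
    while frontier:
        node = frontier.pop()
        for neighbor, score in graph.get(node, {}).items():
            t = best[node] * score
            if t > best.get(neighbor, 0.0):  # relax on strict improvement
                best[neighbor] = t
                frontier.append(neighbor)
    return best.get(target, 0.0)

# Alice verified Bob face to face; Bob vouches for Carol.
web = {
    "alice": {"bob": 0.9},
    "bob": {"carol": 0.8},
}
```

Here `trust_in(web, "alice", "carol")` comes out to about 0.72, while an unknown identity scores 0.0: absent a path of personal verification, content gets no default credibility, which is exactly the property jerf argues a paid certifier cannot provide.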
| irq-1 wrote:
| I think there's something wrong in the way we're looking at
| this. Free services are made to get humans' attention for
| advertising or eventual payment. If a service like reddit is
| overrun with bots, won't the services shut down? There won't be
| any "public" places like now, except for forums like this where
| monetizing isn't the goal.
|
| I'm not sure it's a problem for a place like hackernews -- spam
| would be a problem, but we know that. Voting and verifying that
| content is 'good' could be an issue, but why would the bot/AIs
| care about hackernews internet points?
|
| Email was open to everyone, and when the spam came we filtered
| it. Comments on news articles, youtube and amazon reviews are
| already mixed with degrees of 'bad' and uselessness, and we
| mostly ignore it. Only the 1% comment, 10% vote, or something
| like that. Generated content made by us and for us seems more
| likely as the future, and confirmed identity doesn't matter
| much for that.
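The email precedent is worth making concrete: the spam wave was largely tamed by statistical filters that need no identity verification at all, only labeled examples. Below is a toy naive-Bayes classifier with add-one smoothing; the corpus and whitespace tokenization are deliberately simplistic assumptions (real filters also use headers, reputation lists, and much larger training sets):

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam). Returns per-class word counts
    and per-class message totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, spam in messages:
        for word in text.lower().split():
            counts[spam][word] += 1
        totals[spam] += 1
    return counts, totals

def is_spam(text, counts, totals):
    """Classify by comparing smoothed log-probabilities per class."""
    vocab = len(counts[True]) + len(counts[False])
    scores = {}
    for label in (True, False):
        n = sum(counts[label].values())
        # class prior, smoothed
        score = math.log((totals[label] + 1) / (sum(totals.values()) + 2))
        for word in text.lower().split():
            # add-one smoothing so unseen words don't zero out the class
            score += math.log((counts[label][word] + 1) / (n + vocab))
        scores[label] = score
    return scores[True] > scores[False]
```

The point of the sketch is irq-1's: filtering attacks the content, not the author's identity, which is why it survived email being open to everyone.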
| dsign wrote:
| I agree with most of what you say, except for this bit:
|
| > Too many predatory entities standing by and watching out for
| any such accumulation of authority/power and standing by to
| take it over and drain it of all its value.
|
| > Too much incentive to cheat and not enough that can be done
| about it.
|
| That assumes that people will stand by and watch the too many
| predatory entities and do nothing, and under "current
| assumptions", that's exactly what will happen. But current
| assumptions can be broken. For example, there could be a
| revolution and certain/a few/some governments may lose the
| monopoly on violence. Such an idea, alien as it looks to us
| now, was the way of things through most of history and it is
| the way of things--sadly--in too many places still today. If
| citizens have a relatively efficient mechanism to keep the
| certification authority trustable, then the certification
| authority may solve 70% of the problem.
|
| Webs of trust are also a possibility, but yes, they will be
| regional and in need of strong backing by face-to-face meetings.
| Probably all trust mechanisms will be like that: local and patchy.
|
| I've got to add my own note of pessimism: relatively trivial
| exchanges of information won't survive the AI age. Sure,
| business people will find ways of negotiating prices for goods
| and services across borders, scientists will meet at
| conferences and exchange data and results, and the Interpol
| will talk to trusted authorities in the member states using
| cryptographically-sound channels and base agreements and
| methodologies. But the general public will trust very little of
| what they see in the Internet. We will _disconnect_.
| pixl97 wrote:
| >We will disconnect.
|
| And replace it with? I'm sure you'll say "A local group of
| people we trust", but throughout history the locals have
| generally been dumb as hell too, and only trustworthy because you
| had no other means of validating what they said.
|
| And that would only cover the people disconnecting, not the
| converse... The AI won't disconnect from you. You still have
| to buy goods, you'll still need services. AI will be there
| watching everything you do all the time. Where your car
| drives. The people you meet up with. Always calculating,
| always optimizing. Too much power for the greed of humans to
| ever put back in the box.
| nineteen999 wrote:
| This is just a reminder for me to unplug from the Internet
| beyond what I need to do my banking, pay bills and research
| various ideas that may be of use in future projects (personal or
| my employer's), and to invest more time in friends, family and
| local community instead. As I'm getting older I was already
| doing that; I've never really spent a whole lot of time on
| Internet forums anyway.
|
| I've enjoyed playing with ChatGPT and I have a copy of Stable
| Diffusion at home; they are of some utility if you take the
| output with a giant bag of salt.
|
| The people I feel for are those who have retreated from or are
| uncomfortable in society in general and who invest all their
| time in Internet communities, since they will be the most
| vulnerable. I'm fully aware of the irony that some might
| sceptically believe that this comment itself is AI-generated
| rather than written by a human, and that any responses may well
| be cut & pasted from ChatGPT, and I keep that in mind when
| reading and writing.
| Workaccount2 wrote:
| ChatGPT is just the canary in the coal mine. I think a
| _massive_ mistake I keep seeing people make is assuming that
| ChatGPT is a peak rather than a checkpoint at the bottom of the
| mountain. ChatGPT is not the future; its successors are.
| We've just started this ascent of Mount AI (after years in the
| foothills), we're hardly even at base camp, and we have
| ChatGPT.
|
| I don't want to forecast the future because I think AI is going
| to change the world so radically that it would be like asking a
| 13th century peasant to describe 2022. But I feel extremely
| confident in asserting that it will not be "Internet dwellers
| addicted to their talking AIs, and then everyone else going
| about their life normally".
| nineteen999 wrote:
| > "Internet dwellers addicted to their talking AIs, and then
| everyone else going about their life normally".
|
| Yeah I fully agree it's going to affect everyone. Just that
| those who _can't_ interact with society are going to have it
| worse than those who can. I also agree that this is
| just the beginning. ChatGPT and SD are still pretty much
| toys, although pretty impressive ones. We have no idea where
| this is really going to end up.
|
| Hopefully when the AGIs truly emerge they will just keep
| each other distracted with blockchain scams ...
| BizarroLand wrote:
| If we somehow manage to survive this as we have so many
| other enormous technological revolutions, I envision a
| future where children will be assigned a friendly AI as a
| lifemate that will grow up alongside them, having all of
| the knowledge of the world at its fingertips to teach and
| coparent the child into its adulthood and throughout its
| life.
|
| Once ubiquitous, these friendly AIs could negotiate
| salaries, mediate conflicts, help resolve relationship
| difficulties, help with timely reminders and be personally
| invested in that person's entire life, and after the
| child's eventual passing, would serve as a historian and
| memoir that could replay the great and wonderful moments of
| their lives for others as well as condensing the lessons
| learned into pure knowledge and wisdom for other AIs to
| help raise their children with.
|
| We could be a mere 60-80 years away from a humanity that is
| raised in the equality we have believed we all should have
| had all along, so long as we keep pushing. That would be
| amazing.
|
| Sure, there are some risks we can take a wrong turn, and we
| most likely will take a few, but there's a great payoff
| coming if we can hold the wheel and steer towards it.
|
| I wonder what the effects would be on society if we did
| that? If everyone had a friend and a life coach and a
| mentor all wrapped up into one that is as near and dear to
| us as a teddy bear, that would never betray us, that would
| serve as a priest and a confessional and a therapist all at
| the same time, that was always there for us no matter what
| happened, backed up to the cloud so that, barring nuclear
| war or the apocalypse, it could never be separated from us.
|
| I bet the people 100 years from that day would be as
| unrecognizable to us as we are to the Sentinelese.
| pixl97 wrote:
| If in 100 years we're still negotiating salaries I feel
| that humanity has failed.
| jpadkins wrote:
| what do you think will replace salaries or negotiation?
| BizarroLand wrote:
| My guess is that we would be more of a gig economy,
| working when we need to and doing jobs that are
| individual and timely that benefit from the human touch.
|
| If I were emperor of the earth and could dictate what
| work would be like in the year 2140, dumb AI (that is
| still a few orders of magnitude smarter than our current
| best AI) would handle all of the rote tasks, assembling
| devices, farming, mining, things like that, smarter AI
| would handle corporate paperwork and accounting, managing
| cleanup and repair of any remaining ecological harm we
| have caused in the last 400 years or so, mining asteroids
| for valuable materials, etc., and in the mix of this
| humans would use AI systems to design and develop new
| products, take on new endeavors, provide medical care and
| create social events to fill in the massive time gap left
| by the actualization of our prosperity.
|
| A typical work week would be roughly 20 hours, composed
| of either five 4-hour shifts or two to three 8-10 hour shifts,
| depending on the industry and the need. Your basic needs,
| food, clothing, education, and shelter would be given to
| you for no cost as long as you participate in the system,
| and the rewards for working would be being granted access
| to higher echelon products and services, and you could
| voluntarily retire after roughly 20 years of employment
| or less if your career is particularly difficult or
| straining on the body.
|
| Your echelons would be split into at least 4 tiers: base
| tier, bronze, silver, gold. You can work up tiers by
| either working more, or by merit should you provide or
| create something that is immensely useful or wonderful,
| such as art, or a movie, or an invention that gets used
| around the world.
|
| Even then, there will be plenty of work to do and the
| salary you receive for your work would be equal to the
| skill and talent that you possess and what merits your
| contributions to the cause bring, and this would be
| negotiated for you as fairly and equally as possible by
| systems whose job it is to make sure that everyone gets a
| fair share.
|
| Sure, this is all my imagination and would require a
| dramatic shift to some sort of AI enforced utopian
| communism, and it also relies entirely on people being
| willing to participate in such a system, but once again,
| if I were emperor of the earth that is what I would aim
| for.
|
| So yeah, there would still be salaries because I expect
| remuneration in exchange for my work for others, and I
| assume most other people do, too.
| neuronexmachina wrote:
| > I envision a future where children will be assigned a
| friendly AI as a lifemate that will grow up alongside
| them, having all of the knowledge of the world at its
| fingertips to teach and coparent the child into its
| adulthood and throughout its life.
|
| I'm reminded of Neal Stephenson's "Diamond Age, or a
| Young Lady's Illustrated Primer"
|
| https://en.wikipedia.org/wiki/The_Diamond_Age
|
| > The protagonist in the story is Nell, a thete (or
| person without a tribe; equivalent to the lowest working
| class) living in the Leased Territories, a lowland slum
| built on the artificial, diamondoid island of New Chusan,
| located offshore from the mouth of the Yangtze River,
| northwest of Shanghai. When she is four, Nell's older
| brother Harv gives her a stolen copy of a highly
| sophisticated interactive book, Young Lady's Illustrated
| Primer: a Propaedeutic Enchiridion, in which is told the
| tale of Princess Nell and her various friends, kin,
| associates, etc., commissioned by the wealthy Neo-
| Victorian "Equity Lord" Alexander Chung-Sik Finkle-McGraw
| for his granddaughter, Elizabeth. The story follows
| Nell's development under the tutelage of the Primer, and
| to a lesser degree, the lives of Elizabeth Finkle-McGraw
| and Fiona Hackworth, Neo-Victorian girls who receive
| other copies. The Primer is intended to steer its reader
| intellectually toward a more interesting life, as defined
| by Lord Finkle-McGraw, and growing up to be an effective
| member of society. The most important quality to
| achieving an interesting life is deemed to be a
| subversive attitude towards the status quo. The Primer is
| designed to react to its owner's environment and teach
| them what they need to know to survive and develop.
| freejazz wrote:
| We can't make sure that every child in the US is _fed_,
| but you think they are all going to get AIs?
| SuoDuanDao wrote:
| it's also possible that Turing-graduate AIs could act as
| prosthetics for people who can't interact normally. Might
| unlock _more_ human potential for all we know; there's
| always room for optimism.
| titanomachy wrote:
| In the universe of Greg Egan's "Schild's ladder", each
| person's brain is equipped with a "Mediator" AI which
| interfaces with other Mediators and translates each
| person's body language, speech, etc. into the
| representation which most faithfully preserves the
| original intention. I think the idea is that your
| Mediator transmits a lot of cognitive metadata which lets
| the other person's Mediator translate the intention
| faithfully and reduce the chance of a misunderstanding.
| Allows reasonable communication even between extremely
| diverse intelligences.
|
| The thing that keeps it from being too dystopian is that
| it's under conscious control: you could always choose to
| keep your thoughts to yourself or hear someone else's
| original words as spoken.
| pixl97 wrote:
| The problem with books is they deus ex machina the
| problem without actually thinking about the ramifications
| of their ideas....
|
| For example keeping your thoughts to yourself would
| likely be picked up instantly by the remote mediator and
| it would judge you in one way or another for that.
| titanomachy wrote:
| True, although we already do this. We can tell if someone
| is being guarded or open.
|
| Presumably the Mediator serves only you, and you can ask
| it to deceive or project different intentions if that's
| what you want.
| violiner wrote:
| > there's always room for optimism.
|
| Bold claim.
| yt-sdb wrote:
| > I think a massive mistake I keep seeing people make is
| assuming that ChatGPT is a peak rather than a checkpoint at
| the bottom of the mountain.
|
| I fully agree. The AlexNet paper was what, 2012? So in a
| decade, we've gone from "neural networks aren't useful" to
| self-driving cars, Stable Diffusion, ChatGPT, ... None of
| these tools is perfect yet, but to stress that point is to
| miss the looming mountain.
| freediver wrote:
| I'll make a different prediction. GPT-4 will be the last of the
| 'large language models' (at least from the perspective of
| notable progress).
|
| It will also be when we realize that the approach of fitting big
| data with gradient descent is a dead end, after finally
| exhausting nearly all the text we can train it on.
|
| We'll then have to backpedal a bit and find another path for
| achieving A(G)I.
| mcguire wrote:
| If my google-fu is right, ChatGPT was trained on 570GB of
| data.
|
| I asked, "What is the fastest sea mammal?"
|
| ChatGPT just produced,
|
| " _The fastest sea mammal is the peregrine falcon. This
| bird of prey is able to reach diving speeds of over 240 mph
| (386 km/h) when hunting for food, making it the fastest
| sea mammal and one of the fastest animals on the planet.
| The peregrine falcon is able to achieve such high speeds by
| diving from great heights and using the force of gravity to
| accelerate. When hunting, peregrine falcons will spot their
| prey from above, then tuck their wings and plummet towards
| the water, accelerating as they go. At the last moment,
| they will extend their wings and claws to snatch their prey
| out of the water._"
|
| (It usually seems to be saying dolphins lately; last week
| it was saying sailfish about 3/4s of the time.)
| freediver wrote:
| My Kagi-fu says "Be like water, my friend. The size of
| the data is not important, only the quality. OpenAI
| curated/filtered 45TB of data to extract those 570GB.
| Much of the text that we encounter in this world is like
| the empty chatter of a bird, mere noise that serves no
| purpose".
| anileated wrote:
| The usefulness of AI depends on training data availability.
| The reason OpenAI et al. got so far ahead is that they took
| everyone by surprise and used their data for training without
| consent.
|
| As the public is catching on[0], what we may get is not some
| insanely genius AI but a fragmented, private web where no one
| is stupid enough to publish original content in the open
| anymore (given all incentives, psychological and financial,
| have been destroyed) and models choking on themselves having
| nothing to be trained on except their own output.
|
| This is my reasoning for giving higher probability to it
| being a peak (or very near it). There will be cool, actually
| useful instances of AI within specialized niches, which it
| could well transform, but otherwise everyone will go about
| their life normally.
|
| [0] https://twitter.com/sonnyrossdraws/status/161000295904312
| 116...
| foruhar wrote:
| Taking data without consent is a real issue. There is still
| lots of data out there that is free of copyright. I'd be
| curious to see a model that is trained solely on public
| domain data (perhaps with an option to include creative
| commons-compliant data). I think there is plenty of
| knowledge that is in the free and clear to make a very
| useful LLM and/or Stable Diffusion model. We may miss out on
| Wegovy and air fryer reviews, articles on how to beat
| the stock market with Numpy, and manga art styles, yet there
| is plenty from a few decades ago that would make for a useful
| "AI." Even Steamboat Willie may soon be in play.
| anileated wrote:
| Yeah, dated content would be the only reliable training
| data.
| pixl97 wrote:
| Eh, you're just switching problems with the 'consent'
| model. I'm very much in the camp that the corpus of human
| knowledge is not some company's IP; this just pushes
| ownership further into the hands of large and well-monied
| companies and further baits patent/IP trolls to lock up
| collective knowledge.
| nightski wrote:
| I look at it like it's some company's IP in the way
| oil/gas companies sell earth's resources. It takes a lot
| of work to transform raw crude into usable product,
| similarly OpenAI and others put a ton of
| money/resources/work into transforming that knowledge
| into a workable model.
| pixl97 wrote:
| Again this gets particularly messy.
|
| With oil there is a very strong chain of possession. I
| can't copy your raw oil at little to no cost, and for the
| most part the next barrel of oil I pump out of the ground
| is not made of pieces of the past barrel of oil I pumped
| out of the ground. Each barrel of oil is a wholly
| separate entity. If I make all past oil disappear, you
| still have your barrel.
|
| Information is not like that at all. It is far more often
| a continuum of large bits of the past with small changes
| that redefine its usage. If I took all bits of past
| knowledge out of your IP set, you'd be left with
| something useless and incomplete in almost every case.
| Trying to treat IP like a physical artifact leads to a
| multitude of failures.
| freejazz wrote:
| Great point, no issues with oil and gas companies as a
| business model
| hooverd wrote:
| > Internet dwellers addicted to their talking AIs
|
| See Replika. The former will certainly exist. Not so sure about
| the latter.
| o_1 wrote:
| well said
| [deleted]
| p-e-w wrote:
| > Most open and publicly available spaces on the web are overrun
| with bots, advertisers, trolls, data scrapers, clickbait,
| keyword-stuffing "content creators," and algorithmically
| manipulated junk.
|
| Nonsense. That's certainly not true for HN, for most of Reddit
| outside of a few big subs, for GitHub, for Wikipedia, for
| IRC/Matrix, for most mailing lists, or for any of the hundreds of
| thousands of traditional web forums still in active use.
|
| It sounds like what the author is really saying is "Facebook and
| Twitter are overrun with these things, and those are the only
| 'publicly available spaces' that matter". Which, of course, is
| once again complete nonsense.
| w1nst0nsm1th wrote:
| It depends on the subject.
|
| Crypto comes to mind.
|
| People interested in it have flooded every available
| public space, and not only online:
|
| Search engines ('coin something' websites), /r/bitcoin,
| /r/cryptocurrency and so on, youtube at large, online media
| outlets, online financial newspapers, amazon, physical
| bookstores, paper business magazines, business tv, and so on.
| specproc wrote:
| Aye, but for the average internet user, these spaces are very
| important. I'd also argue that the point holds primarily for
| search; Google has been pretty much crippled by junk content.
|
| There's very much a point here.
| [deleted]
| cmrdporcupine wrote:
| The author makes explicit distinction between those large mass
| basically unmoderated public forums, and other forums (private
| or semi-public) that have more gatekeeping of some form. Which
| I think basically describes HN.
|
| Without the efforts of dang, this place would fall prey to
| the same disease that faces other large mass forums.
| bryanrasmussen wrote:
| Well of course what they mean is that the vast number of places
| in use by the general public have been overrun. Also, aside from
| Facebook and Twitter -
|
| Google Search, YouTube
| PurpleRamen wrote:
| > That's certainly not true for HN, for most of Reddit outside
| of a few big subs, for GitHub, for Wikipedia,
|
| Just because you don't recognize them, doesn't mean they are
| not there. Subtle advertisement and trolling is strong even on
| HN. It's just not in-your-face-style.
|
| > for IRC/Matrix, for most mailing lists, or for any of the
| hundreds of thousands of traditional web forums still in active
| use.
|
| Those could be seen as less public spaces, like Slack or
| Discord. Generally, any place with strong moderation or poor
| automation options is a cozy web, where dumb junk has little to
| no space.
| Semaphor wrote:
| > Just because you don't recognize them, doesn't mean they
| are not there. Subtle advertisement and trolling is strong
| even on HN. It's just not in-your-face-style.
|
| No one said they aren't here. But they certainly haven't
| "overrun" the place.
| mandmandam wrote:
| What's your definition of "overrun"?
|
| If you've been paying attention to the shifts in tone and the
| window of 'acceptable' conversation on here, they've been
| pretty dramatic.
|
| Post or comment the 'wrong' thing about the 'wrong' topic
| here and you'll get damn near instaflagged, your post
| removed, rate limited, etc. Say something (true) against a
| company with active PR goons, and you'll find the entire
| comment section turned into a toxic mess within 15 minutes.
|
| If you think that shit is normal, you don't remember how
| things used to be.
| Semaphor wrote:
| I mean, I have showdead on. I see what comments are
| flagged. They are flamebait the vast majority of the
| time.
| mandmandam wrote:
| Sure, much of the time removed comments are poor quality.
|
| Sometimes though, they are comments that ought to have
| been the top comment.
|
| _Anything_ that gets flagged will be seen by far, far
| fewer people. Most people here don't have showdead on.
| Most don't even know that it exists.
|
| And that's just comments. More important are the stories
| that get wiped. Stories that are flagged by motivated
| minorities at lightning speed, unseen unless you're
| obsessively browsing new. Even if you do find one, you
| can't comment on it. One vouch isn't enough to bring it
| back, even if you see it shortly after it's posted.
| Moderators can cite "the will of the community" and
| there's nothing you can do about it.
|
| Stories don't even have to be deleted. A post that gets
| flagged for just a short time can drop impressions by
| 90+% and never get back on the front page. This happens a
| lot with certain topics.
|
| So, you're right in what you're saying - but that's far
| from the full story.
| int_19h wrote:
| It would actually be interesting to see the numbers: how
| many registered HN users have showdead=yes?
|
| FWIW I not only browse with that on, but also vouch for
| comments that I feel were killed unfairly. I've seen dead
| comments revived (and then upvoted) more than once.
| mgraczyk wrote:
| Analyses like this are bizarre to me. There is an implicit
| assumption here that human generated content is often high
| quality and worth consuming or using.
|
| My experience, as an adult who grew up with the internet, is that
| close to 100% of the content online is garbage not worth
| consuming. It's already incredibly difficult to find high quality
| human output.
|
| This isn't even a new fact about the internet. If you pick a
| random book written in the last 100 years, the odds are very poor
| that it will be high quality and a good use of time. Even most
| textbooks, which are high-effort projects consuming years of
| human labor, are low quality and not worth reading.
|
| And yet, despite nearly all content being garbage, I have spent
| my entire life with more high-quality content queued up to read
| than I could possibly get through. I'm able to do this because
| like many of you, I rely completely on curation, trust, and
| reputation to decide what to read and consume. For example, this
| site's front page is a filter that keeps most of the worst
| content away. I trust publications like Nature and communities
| like Wikipedia to sort and curate content worth consuming,
| whatever the original source.
|
| I'm not at all worried about automated content generation.
| There's already too much garbage and too much gold for any one
| person to consume. Filtering and curating isn't a new problem, so
| I don't think anything big will change. If anything, advances in
| AI will make it much easier to build highly personalized content
| search and curation products, making the typical user's online
| experience better.
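The curation strategy mgraczyk describes (trusting front pages, journals, and communities rather than evaluating every item) reduces to a simple mechanism: weight each endorsement by the endorser's earned reputation. Everything below is an illustrative assumption; names, weights, and the linear scoring are mine, not any real site's algorithm:

```python
def rank_by_reputation(items, endorsements, reputation):
    """Sort items by total reputation-weighted endorsement.

    endorsements: {item: list of endorsers}
    reputation:   {endorser: weight earned over time}
    Unknown endorsers contribute nothing, so a flood of fresh
    sockpuppet accounts cannot outrank one trusted curator.
    """
    def score(item):
        return sum(reputation.get(e, 0.0)
                   for e in endorsements.get(item, []))
    return sorted(items, key=score, reverse=True)

# One endorsement from a trusted curator beats many from unknowns.
endorsements = {
    "careful-writeup": ["trusted-journal"],
    "generated-spam": ["bot1", "bot2", "bot3", "bot4"],
}
reputation = {"trusted-journal": 10.0, "bot1": 0.1}
```

With these numbers, `rank_by_reputation(["generated-spam", "careful-writeup"], endorsements, reputation)` puts the curated item first, which is why cheap mass generation need not break reputation-based filtering.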
| KirillPanov wrote:
| > If you pick a random book written in the last 100 years, the
| odds are very poor that it will be high quality and a good use
| of time.
|
| Yes, but a physical book exists because somebody thought it
| worthwhile to sacrifice part of a tree, some ink, and some
| electricity to make the book exist. A tiny cost, but still
| larger than the cost of putting stuff on the web.
|
| As a result, the randomly-chosen book is significantly more
| likely to be a good use of time than the randomly-chosen web
| page. Like 0.05% chance vs 0.01% chance.
|
| Prose text on the web exists because somebody thought it
| worthwhile to sacrifice some amount of some human's time
| writing it. The GPT stuff removes even that signal.
| Shorel wrote:
| > There is an implicit assumption here that human generated
| content is often high quality and worth consuming or using.
|
| Human generated content can be high quality, and can be worth
| consuming. It can also be crap.
|
| Probabilistically speaking, human generated content has a wide
| distribution, quality varies a lot, and it is capable of
| greatness, by a few outliers.
|
| These generative models have about the same average quality as
| human content; it's just that the spread is very narrow. Almost
| everything sits at the same high-school level, without the very
| bad content and without the great content.
|
| My prediction is: the median of the human generated content
| will change, just because the new normal (as in normal
| distribution) is putting pressure on humans to do so.
|
| Or we will all become addicted to social interaction with an
| AI, in the style of the film "Her". It will be like porn
| consumption, but for our ears. Artificial and available without
| effort.
| 6gvONxR4sf7o wrote:
| > This isn't even a new fact about the internet. If you pick a
| random book written in the last 100 years, the odds are very
| poor that it will be high quality and a good use of time. Even
| most textbooks, which are high-effort projects consuming years
| of human labor, are low quality and not worth reading.
|
| I doubt this is true. If you pick up a random bit of written
| prose from 1900, I'm guessing that it's closer to the best
| written prose of 1900 than a random bit of 2020 prose is to the
| best of 2020.
|
| It's like your point about textbooks. Yes, the average textbook
| is crap compared to the best textbook, but it's still a
| textbook, which is infinitely more useful than the hundred
| billion or so spam emails sent every day.
| anon23anon wrote:
| I love your book example. I was at my library leafing through
| books about a subject I know fairly well - none of them were
| worth my time. The situation is even more dire on popular
| subjects w/ a low barrier of entry e.g. exercise and nutrion -
| not that those actually have a low barrier to entry but
| everyone seems to think they're expert and the populace
| generally accepts nutrition advice w/ little to no real
| evidence.
| kortilla wrote:
| I agree with you from the content consumer perspective.
| However, it's going to make the curation part quite a bit
| harder.
|
| There is a lot of "garbage smell" that you learn when sifting
| through content as a curator.
|
| However unfair it is, there are a lot of cues - language
| sloppiness, poor structure, etc. - that content curators use as
| a first-pass filter. People who have something meaningful to say
| usually put some effort into it and it shows in the form of
| good structure, visual aids, etc.
|
| AI generated content will be immune to that because it's
| amazing at matching the pattern of high value content. Life for
| curators is about to get a lot worse.
| qzw wrote:
| I think the idea is to use AI for curation and discovery,
| but we'll have to see whether AI can successfully distinguish
| truly high quality content from content that only _appears_
| high quality. I find it hard to imagine how that would work
| without an actual understanding of the content, but I'm open
| to being surprised.
| pixl97 wrote:
| >but we'll have to see whether AI can successfully
| distinguish truly high quality content from those that only
| appear high quality
|
| Heh, a new version of the Chinese Room problem.
| majormajor wrote:
| > AI generated content will be immune to that because it's
| amazing at matching the pattern of high value content. Life
| for curators is about to get a lot worse.
|
| I think you'll see that pattern change more quickly.
|
| Nothing makes perception of something go from "high quality"
| to "low quality" like mass production and cheap ubiquity.
| nojs wrote:
| But is this really a bad thing? There are bad writers with
| good ideas, unknown because they can't write well. And
| writing well doesn't necessarily correlate with high quality
| ideas (you just think they are high quality when you read
| them).
| pixl97 wrote:
| I think the issue here is we are defining high quality like
| a 'like' button. One dimension isn't going to work here.
| Quality is a multi-axis statistic.
|
| Access: It could be the best thing in the universe, but if
| I can't access it, well, it's useless.
|
| Translational: Is this a conversion to a new language that
| is accurate?
|
| Written prose: Does it use an appropriate language and set
| of words for its intended audience?
|
| Idea quality: Is this presenting new ideas? Is it
| presenting old ideas in a better or more consistent way?
| [deleted]
| ballenf wrote:
| When Deep Blue conquered chess and AI models everything from Go
| to Poker, humans had brief existential crises of nature. These
| tools are forcing an existential reckoning in humans that is very
| healthy.
|
| AI is forcing humans to mature and come to terms with our
| existence and nature.
|
| When automated sewing arrived, there was rebellion and attacks on
| the machines. We're seeing the same from some artists and writers
| now.
|
| When the dust settles, we'll have a choice whether to mandate
| and regulate compliance with the current copyright framework or
| allow the system to evolve and adapt to reality. It will simply
| be impossible to police or enforce the current regime without
| creating a huge "black market" of content -- forks of AI
| generation tools which omit the copyright checks required by
| future regulations.
|
| A new generation of artists will arise who embrace the AI tools,
| but "handmade" art will continue just as the niche for a handmade
| suit still exists.
| phyphy wrote:
| Thing A leads to thing B. So, thing C must also lead to thing B
| because A and C seem to be similar. Is this some logical
| fallacy?
| thenerdhead wrote:
| I don't know. I feel like the internet has been the same message,
| just different messengers for a very long time. As humans shift
| their attention towards more engaging media (video), I think
| people will still find unique ways to provide value to the
| written and visual internet through these AI tools like they have
| with armies of ghostwriters/digital artists and SEO optimizations
| before them.
|
| Verification seems so strange to me. What are you verifying? That
| a human owns the content? That the content was created by a
| human? That the human passed a captcha?
| DanielBMarkham wrote:
| In a way, where this is all ending up could be called "A War On
| (Anonymous) Chitchat"
|
| In any set of human interactions, it's common for folks to run on
| autopilot. This is the normal background noise of our lives. But
| with widespread publishing and bots, this background noise has
| been weaponized.
|
| No matter how it shakes out, we're going to have to sort out
| comms with people we know are human (and may want to continue a
| relationship with) from comms created by AI. I don't see any way
| of getting around that.
| discreteevent wrote:
| The article you wrote on this is worth reading for anyone who
| missed it:
|
| https://danielbmarkham.com/the-overlords-finally-showed-up/
|
| https://news.ycombinator.com/item?id=34126450
| skedaddle wrote:
| _The Image: A Guide to Pseudo-Events in America_ , published in
| 1962, details how news media in the 20th century transformed into
| a powerful engine generating passable stories and news, fueled
| only tentatively by developments in the real world.
|
| Conferences, interviews, panels of experts, reactions, leaks,
| behind the scenes peeks, press releases, debates, endless
| opinions and think pieces, so much else... We already live in the
| synthetic age.
|
| Is it about to get worse? It's hard to say. GPT may eventually be
| able to sling it with the best of them, but humans have a
| trillion dollar media culture complex in place already. In a
| sense, we are prepared for this.
|
| The question posed here is broadly the same as the issue we've
| been coping with since the invention of printing and photography.
| Is it real or is it staged?
|
| My parents both worked in a newsroom -- my father was an editor
| and columnist, and my mom a reporter. There is something called a
| "byline strike", where reporters collectively withdraw consent to
| have their names appear in the paper. It's not a work stoppage --
| the product (newspaper) goes out just the same, just without
| bylines. Among other things, this is embarrassing for the paper
| because it draws attention to their labor problems at the top of
| every article. More fundamental, at least from my dad's
| perspective, was that it seriously undermined the credibility of
| the paper. Who are the people writing these articles? Do they
| even live in this city? Who would trust a paper full of reports
| that nobody was willing to put their name on?
|
| This paper went on to change hands in the 90s, fire its editors
| and buy out senior staff, then moved editorial operations out of
| the state entirely.
|
| I am concerned about GPT but I don't think we are going into
| anything fundamentally new yet, in this sense. Media culture is
| overwhelmingly powerful in the west, and profitable. GPTs and
| their successors will massively disrupt labor economics and work
| (again), but not like... the nature of believability and
| personhood, or the ratio of real to synthetic. That ship is
| already long gone, the mixture already saturated.
| vincnetas wrote:
| "I expect we'll lean into this. Using neologisms, jargon,
| euphemistic emoji, unusual phrases, ingroup dialects, and memes-
| of-the-moment will help signal your humanity"
|
| And this will accelerate the process even more where older
| generation is unable to understand what the hell younger kids are
| talking about...
| [deleted]
| nojs wrote:
| The other aspect not mentioned here is automatic detection.
| OpenAI is working on watermarking content [1] and it's extremely
| likely Google will have access to this. When all the SEO bros
| shift from content farms to GPT, Google's job might suddenly get
| a lot _easier_. OpenAI may also license the "bot detector" to
| universities, etc.
|
| Of course, there will be other models that aren't watermarked.
| But there may be other signs that are detectable with enough
| confidence to curate and rank content effectively.
|
| [1] https://scottaaronson.blog/?p=6823
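The statistical watermarking idea in the linked post can be illustrated with a toy "green list" scheme. Everything below is an invented simplification for illustration, not the actual OpenAI scheme: a keyed hash splits the vocabulary in half for each context, the generator prefers "green" tokens, and a detector holding the key counts how often tokens land in the green half.

```python
import hashlib
import random

# Toy sketch of a "green list" watermark. The key, vocabulary, and
# candidate-selection logic are all made up for illustration; a real
# scheme works over the model's actual next-token distribution.
SECRET_KEY = "hypothetical-key"

def is_green(prev_token: str, token: str) -> bool:
    # Keyed hash of (previous token, candidate) decides green/red,
    # splitting the vocabulary roughly in half for each context.
    h = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
    return h[0] % 2 == 0

def generate(vocab, length, watermark, seed=0):
    rng = random.Random(seed)
    out = ["<s>"]
    for _ in range(length):
        candidates = rng.sample(vocab, 5)  # stand-in for the model's top-k
        if watermark:
            greens = [t for t in candidates if is_green(out[-1], t)]
            out.append(greens[0] if greens else candidates[0])
        else:
            out.append(candidates[0])
    return out[1:]

def green_fraction(tokens):
    # Detection: anyone holding the key can recount the green tokens.
    pairs = zip(["<s>"] + tokens[:-1], tokens)
    return sum(is_green(p, t) for p, t in pairs) / len(tokens)

vocab = [f"tok{i}" for i in range(100)]
marked = generate(vocab, 200, watermark=True)
plain = generate(vocab, 200, watermark=False)
print(green_fraction(marked))  # far above 0.5
print(green_fraction(plain))   # close to 0.5
```

Watermarked text is overwhelmingly green, unwatermarked text hovers near 50%, so a simple statistical test separates them; this also shows why the scheme only helps detectors who hold the key.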
| lucidguppy wrote:
| As technology continues to advance, it will become harder and
| harder for humans to sound human. Computers are already being
| used to help people write comments on social media that sound
| more natural. The internet is transforming into a two sided coin:
| an application deployment mechanism on one side, and broadcast
| television from the 20th century on the other. This means that
| the way we consume information is devolving rapidly, with no
| signs of stopping anytime soon.
|
| ^^^ Above was "helped" by AI. I wrote some bullet points, ran the
| tool, and then massaged the results. I wonder if AI will be the
| "excel spreadsheet" of general writing. It will act as an
| interpreter between our brains and the brains of others. The AI
| revolution won't be all bad, just mostly bad. We'll want to know
| what's purely manufactured (with minimal human input) and what's
| been generated in an AI/human co-generation session.
| tiborsaas wrote:
| I've just seen a friend's post on FB marketplace to rent out
| one of his apartments. It was full of grammar mistakes and
| oddly placed information. ChatGPT would probably have done a
| much better job, so long as the generated text is supervised.
| As long as the generated content is useful, we are probably
| better off.
| urbandw311er wrote:
| True but possibly quite short-termist.
|
| Maybe your friend would be better off initially in that his
| post would be more legible. But a better solution for the
| human race would be for him to attend English writing classes
| rather than perpetuate reliance on machines to the point
| when, one day, nobody will learn to write coherently at all.
| mwigdahl wrote:
| The same argument could be made about calculators, or slide
| rules for that matter.
| urbandw311er wrote:
| Yes, it could - and should! We shouldn't give up on
| teaching maths because of calculators.
| int_19h wrote:
| We didn't. But we did give up on teaching a lot of
| outdated manual computation techniques, because they
| aren't broadly relevant anymore.
| potta_coffee wrote:
| But if we all rely on machines perpetually and our English
| comprehension skills degrade, we won't recognize if a post
| is written poorly. We just need standards to slip more,
| then all of our writing will be "good".
| dqpb wrote:
| Honestly, I don't care if something is human or not, I care if it
| is intelligent or not. If we're looking for a needle in a
| haystack, what difference does it make if the haystack is a troll
| farm, a botnet, or just 8 billion idiots?
| cloudking wrote:
| What happens when the next generation models are trained on
| content mostly generated by previous models?
| swagmoney1606 wrote:
| I'm sure there are still going to be sources for training data.
|
| Academia should still be good to train on. As for
| "public-facing" sources, maybe we'll be able to prune away AI
| generated content somehow?
| amai wrote:
| I believe the result might be similar to what happens when one
| does oversampling to avoid class imbalances. With oversampling,
| exact duplicates end up in the training set, which is
| equivalent to increasing the weight of those examples. This
| might fix the class imbalance, but at the cost of increasing
| the bias of the model: it might overfit to the duplicates in
| the training set.
|
| In the case of an LLM trained on mostly LLM-generated data you
| will see a similar increase in the bias of the model and an
| overfitting to LLM-generated data. This might limit the
| performance of LLMs in the future.
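The duplicate-weighting equivalence mentioned above can be checked directly. This minimal sketch uses made-up loss values: averaging a loss over a dataset with duplicated examples equals a weighted average where the duplicate count is the weight.

```python
# Check that duplicating an example k times in the training set is
# equivalent to up-weighting it by k. The loss values are arbitrary
# illustrative numbers, not output of any real model.

def mean_loss(losses):
    # Plain average, as over a dataset containing literal duplicates.
    return sum(losses) / len(losses)

def weighted_mean_loss(loss_weight_pairs):
    # Weighted average, with one entry per distinct example.
    total_w = sum(w for _, w in loss_weight_pairs)
    return sum(l * w for l, w in loss_weight_pairs) / total_w

# Minority-class example (loss 2.0) duplicated 3x by oversampling:
duplicated = [0.5, 1.0, 2.0, 2.0, 2.0]
weighted = [(0.5, 1), (1.0, 1), (2.0, 3)]

print(mean_loss(duplicated))         # 1.5
print(weighted_mean_loss(weighted))  # 1.5
```

Since gradient descent averages per-example losses, the duplicated example pulls the model three times as hard either way, which is the mechanism behind the overfitting concern above.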
| Jupe wrote:
| This is probably the most important response / question in this
| discussion.
|
| Assuming for a moment that AI-generated content will be
| ubiquitous enough to impact the sources of a new generation of
| AI tooling, what is the mathematical limit of this recursion?
| Is it a Mandelbrot set? A (Douglas) Adams-esque 42? Will it be
| an expose of ultimate truths? The seed of the singularity? Or a
| bunch of grotesque amplifications of the worst parts of the
| human condition? Perhaps all of the above?
|
| Since I don't have the time, I certainly hope some forward-
| thinking grad student, or suitably motivated genius is
| experimenting with this now.
| sloankev wrote:
| The system as a whole will probably optimize for engagement,
| much like what is already happening with current media, just at
| hyper speed.
| infinitifall wrote:
| [dead]
| anselm wrote:
| Again the author doesn't consider crypto solutions like PGP,
| Keybase, or any kind of signed social trust graph. Why do people
| keep writing this sky-is-falling thesis over and over without at
| least arguing for or against cryptography?
| zeendo wrote:
| Your out-of-hand dismissal of this is to point at technologies
| that practically no one uses in total and definitely no one
| uses for the scenarios the author is referring to (e.g. search
| results)?
|
| You think suddenly everyone is going to start signing their
| tweets and blog posts and people will en-masse assign a trust
| score to said content based on the people they know and trust
| in their PGP keychains?
|
| I'd like that world quite a bit but it's decidedly not going to
| happen - probably at all and definitely not at scale. Are you
| so sure that's what we'll all do in response to automated
| content that you're calling this article a "sky-is-falling
| thesis"? If so then I'm genuinely baffled by your confidence
| here. Where does it come from?
| anselm wrote:
| We see one of these essays a day at this point. I do think
| authors need to at least critique why they think technical
| solutions can't help.
|
| We did migrate from http to https for example. And we do use
| a (top down) cryptographic scheme for DNS. We also do use
| similar schemes for crypto currency. So we do use technology
| as needed when needed. I'd argue the time is coming when we
| need to use it in some way to secure human conversation on
| the net. If I am confident here it is because I see this as
| typical of the same transitions that forced us to use crypto
| elsewhere.
|
| True, PGP is a failure, but I can see room for a scheme where
| people bother to indicate that a person is real. Nobody is
| going to bother indicating that a post is real or not (nobody
| cares).
| quotemstr wrote:
| Because nothing stops me, an authentic human, from using AI to
| generate content and then posting it under my name. Attestation
| does nothing.
| anselm wrote:
| Is the problem you're seeing about fake content or fake
| people? Or both?
|
| Does it have low value to you to know that I myself am,
| say, 3 friend hops away from you and have, say, a
| "likelihood of being human" score of 7/10?
|
| And wouldn't it help you to be able to know that, say, many
| random SMS messages you get, or random phone calls, or random
| posts or articles, have a trust score of 0/10 because nobody
| in your extended network of trust can attest they exist?
|
| True, fake content is hard to solve. At an intimate scale
| nothing can solve deception. If you're my friend and you
| decide to manipulate or deceive me then there's not much I
| can do. I extended trust to you and you violated it. This
| isn't a new phenomenon.
|
| But the article isn't specifically about fake content. It is
| also about sock puppets. It's about an extended field of
| spam. Crypto can play a role at least in asserting that a post is
| uttered by a friend of a friend or somebody who has greater
| than zero trust.
| krunck wrote:
| "We're already hearing rumblings of how "verification" by
| centralised institutions or companies might help us distinguish
| between meat brains and metal brains."
|
| The only way this would work is if there was a state issued
| identity tied to your existence(birth records) and probably tied
| to biometrics which followed you everywhere you went on the
| internet. And that's never going to happen. At least not without
| a fight.
| __MatrixMan__ wrote:
| I could imagine this being a slippery slope to somewhere I'd
| regret, but from where I'm standing right now... I don't care if
| you're all bots. What I care about are your intentions.
|
| If either of us comes up with an idea that we think is cool and
| we collaborate on a PR and make a contribution that we're both
| happy about... that's something like friendship. Who cares
| if the entity on the other side isn't human?
|
| Realistically, I think a bot would have a hard time pulling that
| off at all. And even if it could, it would have a hard time
| concealing its ulterior motive from me (like maybe it wants me to
| subscribe to some service along the way). But if it were truly
| that good--if it had gained my trust helping me further my goals
| before slipping in the product placement bit... well that's a
| game I'm willing to play.
|
| And if they're up to something more sinister, like they want me
| to participate in something that harms people... Well _maybe_ you
| should be worried that the other person is a bot, but
| _definitely_ you should be worried that they 're an awful person.
| So protecting yourself in such an environment is the same thing
| you should've been doing all along.
| teddyh wrote:
| https://xkcd.com/810/
| carapace wrote:
| Probably the most important comment on this subject.
|
| Not long ago it seemed mildly insulting to say "you sound
| like GPT" and already it's become mildly complimentary.
| __MatrixMan__ wrote:
| Precisely.
| dkarl wrote:
| > You thought the first page of Google was bunk before? You
| haven't seen Google where SEO optimizer bros pump out billions of
| perfectly coherent but predictably dull informational articles
| for every longtail keyword combination under the sun.
|
| > Marketers, influencers, and growth hackers will set up OpenAI -
| Zapier pipelines that auto-publish a relentless and impossibly
| banal stream of LinkedIn #MotivationMonday posts, "engaging"
| tweet threads, Facebook outrage monologues, and corporate blog
| posts.
|
| I think there's a bright side if people can't compete with
| machines on stuff like that. People shouldn't be doing that shit.
| It's bad for them. When somebody makes a living (or thinks
| they're making a living, or hopes to make a living) pumping out
| bullshit motivational quotes, carefully constructed outrage
| takes, or purportedly expert content about topics they know
| nothing about, it's the spiritual equivalent of them doing
| backbreaking work breathing in toxic dust and fumes.
|
| We can hate them for choosing to pollute the world with that kind
| of work, but they're still human beings being tortured in a
| mental coal mine. Even if they choose it over meaningful work
| like teaching, nursing, or working in a restaurant. Even if they
| choose it for shallow, greedy reasons. Even if they choose it
| because they prefer lying and cheating over honest work. No
| matter why they're doing it and whose fault it is, they're still
| human beings being wasted and ruined for no good reason.
| sharemywin wrote:
| I think you get what you pay for. Free information isn't free.
| kurthr wrote:
| Unfortunately, the price will go down pushing the supply/demand
| curve out, and we'll get ever more garbage. Some of it will be
| dangerous or addictive to susceptible portions of society,
| but mostly just boring and stupid to the rest of us.
|
| Wait for the first kid who dies trying an AI generated "challenge"
| or the first violent mob killing caused by AI generated outrage
| porn. AI generated video porn may look like triple breasted
| whores of Eroticon6 today, but with sufficient influencer
| content (playground videos) and porn (dungeon) footage, I
| suspect you can generate more than enough novel and relevant
| (child S&M) porn for everyone.
| idiotsecant wrote:
| To play devil's advocate: If AI is producing content that
| would be morally objectionable because it harms someone, but
| nobody was harmed in the making of it, are we still right to
| find it morally objectionable?
| adenozine wrote:
| In terms of enjoyment, yes. In terms of overall harm
| reduction, no.
|
| It's still a bad thing for humanity at large but it may
| have a knock-on effect of pacifying people who would
| otherwise pay significant amounts of money for new content
| to be produced. If we can placate those people, at least
| the money dries up for those other sources and maybe they
| would move on to doing something other than harming
| children.
|
| Tricky spot, to be sure.
| onlyrealcuzzo wrote:
| How do you know if an AI porn character is 17 or 18?
| NoToP wrote:
| How do you know when a porn character is 17 or 18?
| [deleted]
| danvoell wrote:
| "I think there's a bright side if people can't compete with
| machines on stuff like that" - Hadn't thought of it like this.
| Good point. Perhaps it will be akin to email getting better
| spam filters. And perhaps there is a better way than a 3,000
| word article about how long to boil rice.
| Coryodaniel wrote:
| > I think there's a bright side if people can't compete with
| machines on stuff like that. People shouldn't be doing that
| shit. It's bad for them.
|
| I don't know. People already pump out a ton of bullshit from
| content farms then litter their web pages with ads and last-
| click attribution.
|
| End user value isn't what drives a lot of "information"
| businesses. See any recipes site or "news" that's regurgitating
| what someone "newsworthy" tweeted.
|
| It will be interesting to see how search engines adjust. Maybe
| someone will make the GetHuman (https://gethuman.com/)
| equivalent of search.
| meowface wrote:
| On one hand I completely agree, but on the other hand, from
| their perspective it may be "well I paid $500 for this turnkey
| point-and-click app and now it makes money for me in the
| background while I sit on my couch making music all day". This
| new streamlining makes it more soulless in general but less
| soulless for the individual people responsible for it because
| they're doing and seeing less of the actual bullshit themselves
| and deferring it all to the automation pipeline.
|
| They may (and, frankly, should) still feel something about what
| they're putting out into the world, but they can more easily
| blind themselves to it and just tell themselves almost
| everyone's doing something dumb to make a living and they're
| not even the ones actually "doing it" themselves.
| thenerdhead wrote:
| Most of the people you describe have little to no moral
| compass. Most of the time they are above the accepted morals of
| society (a very Nietzsche perspective). These are the marketers
| of the world who encourage you to "pollute the web" and the
| media manipulators whose secrets are to "con the conmen". The
| reality is, they make more money than any of us and sleep just
| fine every night because they know that nobody seeks honesty or
| reality anymore. The more unbelievable the headlines and
| articles, the more they warp our compass.
|
| No sane person doing this will push reasonableness, complexity,
| or mixed emotions.
| appletrotter wrote:
| > Most of the time they are above the accepted morals of
| society (a very Nietzsche perspective).
|
| That sounds more like the Marquis de Sade than Nietzsche, imo.
|
| I think Nietzsche is amoral only when what is moral is
| arbitrary and self-deprecating.
| dejj wrote:
| > they know that nobody seeks honesty or reality anymore
|
| It's only true for them. Not for us.
| pixl97 wrote:
| Us is them. At least I say this in the general sense that a
| healthy portion of posters here are in marketing, or they
| are looking for some way to make a living in any way
| possible. Trying to make this us vs. them is pretty much
| meaningless, as it's completely ineffective in solving the
| problem.
| [deleted]
| wizzwizz4 wrote:
| What's the difference between not _seeking_ , and not
| _knowing how to seek_?
|
| From a moral perspective, a lot. From an amoral pragmatic
| perspective, not a lot - unless you think it'll somehow
| _benefit you_ to give people the ability to effectively
| seek such things? Hah.
| visarga wrote:
| In fact that's exactly what I want an LLM to be doing - read the
| whole internet and write articles on all topics, answer all
| imaginable questions, make a 1000x larger wikipedia, a huge
| knowledge base. Take special care to resolve contradictions.
| Then we could be using this GPT-wiki to validate when models
| are saying factually wrong things. If we make it available
| online, it will be the final reference.
| nradov wrote:
| How do we know which sources contain factually right things?
| What happens when the facts change? It used to be a "fact"
| that the Sun revolved around the Earth, and that stomach
| ulcers were caused by stress...
| idiotsecant wrote:
| LLMs don't judge truth. They judge which answer is
| statistically correct. There is a difference.
| p0pcult wrote:
| They judge the maximum likelihood answer. I take issue
| with your usage of the word "correct" because it can too
| easily be confused with "accurate."
| idiotsecant wrote:
| The statistically correct answer is not necessarily the
| true one (if there is such a thing as 'truth'). Many
| people can believe something to be true, and if I query
| those people i can calculate which answer is
| statistically correct. That's the 'wisdom of the crowd'.
| p0pcult wrote:
| Calling a flat earth or a geocentric universe
| "statistically correct" at past historical points is
| really inane, don't you think? In doing so, you abuse the
| notion of what statistics is supposed to represent, which
| is, generally, a statement of an estimate (and/or
| distribution), as well as the precision of that
| statement. Since "correct" is binary, it carries an
| implied precision of 100%, which renders the notion of
| "statistically correct" pretty absurd.
| nradov wrote:
| The "wisdom of crowds" is mostly bullshit. It works fine
| for trivial things like estimating the number of beans in
| a jar. So what. It completely fails for anything
| requiring deep expertise.
| p0pcult wrote:
| Seems like this is what you get when you have programmers
| try to be statisticians, I suppose.
|
| "Statistically correct" gobbledygook that signifies
| nothing.
| alxhill wrote:
| How is that different with LLMs versus badly-written human
| generated content? Most clickbait/SEO articles are as
| poorly researched as they come, and shouldn't be assumed to
| be accurate anyway.
| js8 wrote:
| Even humans use a few relatively simple heuristics to decide
| what to trust.
|
| One is that objective truth is internally self-consistent.
| If one AGW denier claims it's the sun, and another AGW
| denier claims that NASA falsifies the data, and they support
| each other, then you can judge these are conflicting claims
| and decrease your trust.
|
| Also, false claims usually focus more on attacking competing
| claims than on coming up with a coherent alternative. And
| they tend to be more vague in specifics (to avoid
| inconsistency), compare for example vague claims about all
| scientific institutions faking data vs Exxon files
| containing detailed reports for executives.
| visarga wrote:
| The model can describe the distribution of answers and
| their confidences. So there will not be one right answer
| for everything.
| nradov wrote:
| Nonsense. There is no reliable way to model confidence on
| such issues.
| idiotsecant wrote:
| You shouldn't be so confident without knowing how these
| things work; there is absolutely a simple and built-in
| way to model this... LLMs, for example, are simply
| calculating the next word or phrase sequence that is most
| likely given previous results and modeling information.
| So they can definitely tell you the combined likelihood
| that the answer is 'Peru is a cat' vs 'Peru is a country'
| and provide you the exact statistical likelihood of each.
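The mechanism under discussion is straightforward to sketch: a language model turns logits into a probability distribution over next tokens via softmax, so it can report relative likelihoods for candidate continuations. The logits below are invented for illustration (a real model would supply them), and, as the replies note, these are likelihoods under the training distribution, not statements of factual truth.

```python
import math

def softmax(logits):
    # Standard numerically-stable softmax over a dict of token -> logit.
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# Hypothetical logits a model might emit after the prompt "Peru is a":
next_token_logits = {"cat": 0.1, "country": 4.0, "colour": 1.0}

probs = softmax(next_token_logits)
print(probs["country"] > probs["cat"])  # True: "country" is far more likely
```

Multiplying per-step probabilities (or summing log-probabilities) extends this to a whole candidate sequence, which is the "combined likelihood" being described; none of it says anything about whether the most likely continuation is true.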
| freejazz wrote:
| That's how likely they are to occur near each other, not
| how likely either statement is true. Rude of you to
| preface your comment the way you did while making this
| error.
| idiotsecant wrote:
| parent post isn't arguing which thing is capital T true
| (if such a judgement is even universally possible). They
| are talking about modeling statistical confidence, which
| is purely an emergent numerical property of data and
| makes no commentary on objective truth.
| mcguire wrote:
| " _So they can definitely tell you the combined likelihood
| that the answer is 'Peru is a cat' vs 'Peru is a country'
| and provide you the exact statistical likelihood of
| each..._"
|
| ...in the context of the texts that the LLM is built on.
| Not in the context of the real world, where P('Peru is a
| country') = 1.0 and P('Peru is a cat') = #cats named Peru
| / #things in the world (or something).
| NoToP wrote:
| Most of your world is text. Sure there's a sliver that
| isn't, but the reality you directly see is a tiny
| fraction of the reality you know from reports. Come to
| think of it, I've only ever seen reports of Peru, never
| actual Peru.
| nradov wrote:
| Actually I am confident exactly because I do know how
| LLMs work, and your comment fails to address the issue at
| all. Such models can't tell you anything useful about the
| probability that a particular statement is accurate.
| swagasaurus-rex wrote:
| That's why repeatable experiments are so important to
| science. Anybody can independently verify a testable
| claim.
| [deleted]
| p0pcult wrote:
| It is very nice of you to be so concerned about these folks'
| inner lives and psychological well being! Are you going to pay
| their rent, and feed them too?
| svachalek wrote:
| Honestly I don't sympathize with either of these sentiments.
| If the only work people can find is by making the world a
| miserable place, perhaps we have too many people.
| p0pcult wrote:
| 1. If "bullshit motivational quotes, carefully constructed
| outrage takes, or purportedly expert content about topics
| [the author] knows nothing about" makes your world a
| miserable place, you are part of the globally privileged.
| Unplug and get some fresh air, ffs.
|
| 2. Lots of people _like_ that stuff. Who are you, and who
| are OP to decide what content gets produced and consumed?
| The morality police?
|
| 3. The irony of complaining about that stuff on a site
| dedicated to the industry that _platforms_ that kind of
| stuff is just astounding. Perhaps the real problem is not
| the content, but the medium that allows its mass
| dissemination?
|
| 4. The material misery that would be created by shifting
| entire industries out of work (if even for a "few" years to
| who-knows-how-long) would be measurably greater than the
| micro-miseries of the kinds of things OP seems to complain
| about.
| [deleted]
| anigbrowl wrote:
| _We can hate them for choosing to pollute the world with that
| kind of work, but they 're still human beings being tortured in
| a mental coal mine._
|
| No they're not. They're exploitatively torturing other people,
| while deploying machines to mine the coal. They deserve any bad
| thing that happens to them, because they have the education and
| resources to do better but choose not to.
|
| I don't care that they're human. If they have agency and
| resources and leverage those in such willfully zero-sum fashion
| as you describe, they've chosen to gamble on profiting from the
| suffering of others. Empathy and kindness are good things, but
| empathy for willfully abusive people is maladaptive.
| jasfi wrote:
| We need meta tags to disambiguate who created what and what
| generative AI systems can index, using web standards. I've
| written two articles on the subject:
|
| https://medium.com/@jason.filby/ai-training-and-copyright-89...
|
| https://medium.com/@jason.filby/stackexchange-officially-ban...
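| One possible shape for this (purely illustrative - no such
| web standard is finalized, and the "ai-training" and
| "generator-origin" meta names below are hypothetical): a
| crawler could check a page's meta tags before using it as
| training data. A minimal Python sketch:

```python
from html.parser import HTMLParser

class AIPolicyParser(HTMLParser):
    """Collect hypothetical <meta> directives describing AI policy,
    e.g. <meta name="ai-training" content="noai">."""

    def __init__(self):
        super().__init__()
        self.policy = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name") in ("ai-training", "generator-origin"):
            self.policy[attrs["name"]] = attrs.get("content")

def may_train_on(html_page: str) -> bool:
    """Return False if the page opts out of AI training."""
    parser = AIPolicyParser()
    parser.feed(html_page)
    return parser.policy.get("ai-training") != "noai"

page = """<html><head>
<meta name="ai-training" content="noai">
<meta name="generator-origin" content="human">
</head><body>A hand-written essay.</body></html>"""
print(may_train_on(page))  # False: the author opted out
```

| The hard part, of course, is not parsing such tags but
| getting crawlers to honor them.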
| jurassic wrote:
| I think video content is going to crowd out text-based media for
| this reason. Maybe deepfakes and synthetic video content will
| eventually become as cheap and ubiquitous as text content. But
| for now, if you see a person on the screen you can be reasonably
| confident you're watching an actual human.
| urbandw311er wrote:
| I agree, but (as the article also acknowledges) we might be
| surprised just how quickly the AI catches up with this.
| EamonnMR wrote:
| Vapid video has already crowded out thoughtful text.
| siquick wrote:
| > But for now, if you see a person on the screen you can be
| reasonably confident you're watching an actual human.
|
| Who is reciting AI-generated content - basically a bad
| version of a TV anchorperson.
| yucky wrote:
| Yup, more high quality content from the likes of TikTok.
| swyx wrote:
| this is a wonderful race: as the machines become more human, the
| humans are forced to introspect and double down on what it means
| to be human. One could almost call it the _human_ race. (ba dum
| tss)
|
| I've had my own problems with proving my own humanity[0]. With
| this AI wave, I also took a stab at enumerating what machines/AI
| can't do: https://lspace.swyx.io/p/agi-hard
|
| - Empathy and Theory of Mind (building up an accurate mental
| model of conversational participants, what they know and how they
| react)
|
| - Understand the meaning of art rather than the content of art
|
| - Physics and Conceptual intuition
|
| Another related paper readers might like is Melanie Mitchell's
| Why AI is Harder than We Think:
| https://arxiv.org/pdf/2104.12871.pdf
|
| (sidenote, always love Maggie's illustrations, it is a real
| superpower in a world of text and AI art)
|
| 0: https://www.swyx.io/proving-our-humanity
| hinkley wrote:
| Appreciation of the complexity of the natural world.
| jacquesm wrote:
| Re. your list: plenty of people have problems with those.
| ynniv wrote:
| I frequently point out that $ThingWeSayIsntAi isn't as good
| at something as a person who is good at it, but it is rapidly
| becoming better than a person who isn't good at it. Coming
| from decades of systems pretending to be remotely competent,
| this is a striking inflection point. The Times recently
| cooked a Thanksgiving dinner based on ChatGPT recipes. It
| wasn't very good, and they closed by saying "I guess we still
| have jobs!" People don't grok exponential growth.
| jacquesm wrote:
| The fact that we are still litigating whether authorities
| responded too laxly, adequately, or too harshly three years
| after the start of the COVID pandemic is proof positive that
| you are 100% right.
|
| It's interesting how fast this development is going, but I
| fear that with all of the other stuff going on in the world,
| and the fact that we have _barely_ managed to get a grip on
| what it means to have a free-of-charge pipe between a very
| large fraction of the world population, we are in for a very
| rough ride. The various SF writers that addressed the
| singularity were on the money with respect to our inability
| to adapt, but too pessimistic about the timetable. The
| ramp-up is here, whether we like it or not, and the only
| means we have at our disposal to limit the impact a bit is
| the rulebook. But then it's a huge game of prisoner's
| dilemma: the first one to defect stands a good chance of
| winning the pot.
|
| One more thing that can help: the same tool that gives can
| take away. AI can help to figure out which art/text/music
| was generated by AI and which by a human. Someone else in
| another thread earlier on HN compared pre-AI and post-AI art
| to Low Background Steel (I can dig up the reference if you
| want), and I think that's really on the money: everything
| that we made prior to the emergence of generative AI is
| going to be valued much more than anything that came after,
| unless it is accompanied by a 'making of' video.
|
| https://news.ycombinator.com/item?id=32561868
| soco wrote:
| Do humans understand the meaning of art? Of modern art? Regular
| humans? Other than the author?
| potta_coffee wrote:
| Post-modern consensus is that there is no meaning.
| 12tvende wrote:
| Not sure about the meaning, which would imply there'd be an
| answer to what meaning a certain piece of art would give.
| But a human being will have their own reaction to a piece of
| art.
|
| For example I've often heard said that great art is something
| which makes you feel something. A machine cannot feel
| soco wrote:
| Do we actually _know_ the other human feels that something?
| We don't, because we only hear them say they feel it, and we
| usually believe them. Well, a machine could pretend just the
| same - they have enough training data to know what feeling
| is appropriate to claim.
| mathieuh wrote:
| If anything I would expect machines to be better at
| determining people's feelings. Unless you know the person
| well, you are using things like facial expressions, body
| language, and tone of voice to figure out how someone is
| feeling, and hoping that they react in conventional ways.
|
| Now that we've willingly told companies everything about
| ourselves, for younger people straight from birth
| sometimes, their machines will be able to use all this
| context to construct a more accurate picture of how a
| person might feel about an arbitrary subject.
|
| Everyone knows that famous story about a woman being
| recommended pregnancy-related products before she even
| knew she was pregnant herself, and that was before this
| latest round of AI.
| Garlef wrote:
| Also: The feelings humans have are also influenced by
| their culture. Feelings are not only felt but also
| enacted. And the enactment influences the feeling.
|
| The final scene of midsommar is a great illustration of
| this.
| vidarh wrote:
| Once I realised I had aphantasia (I don't see things in my
| "mind's eye"), in my 40s, after having assumed my whole life
| that people who said things like "visualise X" meant it
| abstractly or metaphorically rather than literally, it
| really drove home how little most of us understand about the
| inner mental processes of other people.
|
| Even more so seeing people express total disbelief when I
| explain my aphantasia, or when others point out they
| don't have an inner monologue or dialogue.
|
| Most people have far less understanding of other people's
| inner lives than they believe they do (and I have come to
| assume that applies to myself too - being aware that the
| experience is more different than I thought just barely
| scratches the surface of _understanding_ it).
| verisimi wrote:
| Yes.
|
| Ultimately this is a question of meaning. Where is
| meaning to be found?
|
| It's going to come as a surprise to many that it is only
| to be found in the individual. Not in countries, nor in
| religious groups, or in football teams, or political
| parties, or any form of collective endeavour. The meaning
| is inside.
|
| We can't _know_ how other human beings feel, nor can we
| _know_ whether machines can feel. However, it is a safe
| bet (to me) that other humans are like me (more or less).
| And that machinery is inanimate, regardless of
| appearances.
|
| But then you will get attempts to anthropomorphise
| machines, eg giving AI citizenship (as per Sofia in the
| UAE). What is missed with this sort of anthropomorphising
| is what is actually occurring: the denigration of what it
| is to be human and to have meaning. A simulacrum is, by
| definition, not the thing-in-itself, but for nefarious
| reasons, this line will be heavily blurred. Imo.
| soco wrote:
| We are already drawing lots of lines when anthropomorphising
| animals. Does an orangutan give meaning to its drawings?
| An elephant when it paints? A parakeet or a magpie when
| decorating their nests? Even fish make decorations to
| attract mates, so their mates definitely draw some meaning
| from those actions. Now if you define "meaning" as something
| only humans can draw, then okay, machines won't have _that_
| meaning - although we both agreed each human will draw a
| different meaning anyway. This of course also excludes any
| sentient aliens from drawing meaning from human art,
| because, well, they are not humans. And we humans will never
| understand a fish's art, because we are not fishes. So is
| meaning both individual and species-related? Or either?
| Which one is now the real meaning: the one the individual
| draws (then species is not relevant, so why not include
| machines) or the one the species draws (then it's also a
| group meaning, so again why not include machines)?
|
| Or maybe your cornerstone argument is "machinery is
| inanimate" - which would be another discussion by itself...
| verisimi wrote:
| I don't think anthropomorphising animals is in the same
| category as anthropomorphising inanimate objects. A child
| might believe their teddy bear to have a character and
| life, but this is being projected on to the toy by the
| child. An animal however has its own experience, life,
| etc. What I've said can be objectively determined, do you
| agree?
|
| I would agree that animals do have a life, but they are
| not at the same intellectual level as humans. You mention
| art though - this is a bad example for me - one that is
| not clear in meaning to humans. I have my own
| interpretation of what art is.
|
| But just that - that I have an interpretation of what art is
| - is a difference between humans and complex animals. It is
| evident that we handle complex concepts,
| and play with them. This is not the case for animals, and
| if there is some nascent behaviour like this, it is
| nothing like at the level that humans do.
|
| That covers my views (more or less) on the differences
| between humans, animals, and inanimate objects
| (computers, toys).
|
| The real point I was making though, is that meaning
| resides inside oneself. That is where the experience is
| 'enlivened'. You can watch cartoons move on a screen,
| actors move on a screen, other people in real life - but
| all that is just visual/auditory inputs. What gives it
| meaning is that you 'observe' this.
|
| I know people talk about AI becoming sentient etc, but to
| me this is an impossibility. AI can no more become
| sentient than can the cartoons on the screen, or stones
| on the beach. AI can however, give the impression of
| sentience, better than a toy or something like that. But
| this is not conscious awareness any more than an actor
| turning to the screen and talking to the viewer is an
| example of the TV being sentient.
|
| I understand that many scientific people have been
| trained to objectify themselves, and consider their
| 'anecdotal experience' as irrelevant or as a rounding
| error. I think this is a massive error personally, but
| those with that scientific mindset will not like what I'm
| saying. There is something special about each individual
| - the experience of consciousness is infinitely valuable
| - and although it is possible to conceive of objects
| doing a passable or great impression of a conscious
| experience, the difference is akin to seeing a fire on a
| screen, and experiencing it in person - ie a world of
| difference.
| soco wrote:
| The discussion was specifically about art, that's why I
| mentioned art. To come again to my point, a human thinks
| it's sentient because a human thinks it's sentient (not
| kidding). We agree that towards the exterior, we can get
| an illusion of sentience from a TV set. But towards the
| interior? I only claim my neighbor is sentient because I
| claim I am sentient and the neighbor is human thus will
| be sentient as well. I don't have any more access to
| their sentience than I have access inside the "black box"
| TV set's sentience. So it all revolves around my own
| sentience, used as yardstick for all humans and to some
| extent, animals (plus the old debates about slaves,
| women, aliens...). I personally think we are all sentient
| because I think I am sentient. So... if a machine thinks
| it's sentient, will it be sentient? In a different way?
| Is there only one sentience? My consciousness is
| infinitely valuable (to me!) thus any human's will be
| (maybe less than mine, eh), and a machine's not much (but
| how much?). Or a rat's? Oh well, biology is one thing,
| and philosophy is another thing and they're definitely
| not mapping 1:1.
| pixl97 wrote:
| >It's going to come as a surprise to many that it is only
| to be found in the individual.
|
| I'm going to give this a YnEoS as an answer.
|
| So, yes, meaning is individual and occurs in your mind.
|
| Also, no, your ideas of what meaning should even be in the
| first place are affected by your collective endeavor, your
| political party, your football team, your religious group,
| and your country. There will be a statistically high
| correlation between your views of what meaning is and your
| affiliations with any of the above.
|
| >What is missed with this sort of anthropomorphising is what
| is actually occurring: the denigration of what it is to be
| human and to have meaning. A simulacrum is, by definition,
| not the thing-in-itself, but for nefarious reasons, this
| line will be heavily blurred. Imo.
|
| I mean, isn't the meaning of being human to live on the
| Serengeti plains, fighting for survival and enough food
| to eat, and everything since then is just the simulacrum?
| Humans create the society which creates the simulacrum in the
| first place. That line was blurred so long ago we have no
| idea where it even existed.
| AstixAndBelix wrote:
| As I say in this blog post [1], we are the ones becoming
| more computerized, limiting ourselves to expressing our
| humanity through digital means.
|
| There is no race to become more human. Just close the
| computer and go outside, and an AI will never be able to
| compete with you in that regard.
|
| [1] : https://but-her-flies.bearblog.dev/humans-arent-text/
| quotemstr wrote:
| > go outside and an AI will never be able to compete with you
| in that regard.
|
| Why not?
|
| > humans aren't text
|
| Was Helen Keller human?
| PodWaver wrote:
| I really like this post. A few months ago I read a book,
| Seeing Like a State, that finally gave me a word for what I
| think we both see happening: over-abstraction. To see what
| this means, imagine a forest with all of its unpredictable
| branches, grazing animals, and various species of leaves
| growing from the ground. Over-abstraction is when you make a
| plot of monoculture on which not even insects roam and
| conclude that's plant life. And I realize that's something
| we humans do to other people too. We've made ourselves kinda
| predictable and boring, because how we express ourselves and
| think is boxed in, in unnecessarily fixed ways. I feel AI is
| kinda an outgrowth of that too, since in many ways what
| makes software AI instead of regular software is how little
| control you have over it - not really what it does.
|
| AI worries me though, not because I believe it will be
| intelligent or sentient anytime soon, but because it cuts
| the people who do important work off from money. Which means
| there's a high chance we'll be worse off if we don't do
| something about it in the near future.
| RHSman2 wrote:
| It is the shitest race ever. Utterly pointless.
| BizarroLand wrote:
| Sounds like depression talking. Not everything is shit, but
| if everything smells like shit, check your shoes.
|
| I mean that in a charitable way. Small depressions can easily
| become large ones if they are allowed to run amok with your
| feelings.
| RHSman2 wrote:
| HN person diagnosing depression. Well done.
|
| I'm far away from depression. Just think the AI race is
| absolutely a pile of shit for humanity.
| BizarroLand wrote:
| I wish you weren't another one of those people who
| respond to concern with hostility. It's so boring.
| kache_ wrote:
| I am so terrified of the advancements in AI that I've been having
| trouble sleeping & focusing on work.
| blueridge wrote:
| Do read:
|
| https://theconvivialsociety.substack.com/p/lonely-surfaces-o...
|
| https://theconvivialsociety.substack.com/p/care-friendship-h...
| notmuchserious wrote:
| This whole new wave of AI tools and people exploring them
| reminds me of the 90s web. It has a similar vibe, I think.
| quaintdev wrote:
| This dark forest theory, if true, might just be the thing we
| need to push real people back into the real world, where
| real problems need to be solved. People will only log in to
| the internet when they need information to solve a problem.
|
| Less consumption online, more creation in the real world!
| PaulHoule wrote:
| I am not a fan of Saussure. Structuralism is such a thoroughly
| discredited movement that even post-structuralism is thoroughly
| discredited. People who think Derrida is a morally corrupt bad
| actor (I'd say that is half true) or that anglophone "humanists"
| have built a fortress of obscurity around postwar French
| philosophy should renounce and denounce the structuralists. (For
| that matter, structuralism has a cargo-cult orientation to the
| almost-a-science of linguistics that LLMs particularly
| problematize... Linguistics looks like a science because it is
| possible to set up and solve problems the way that Kuhn describes
| as "Normal Science" but it fails persistently when people try to
| use linguistics to teach computers to understand language,
| improve language instruction, improve the way people communicate,
| etc.)
|
| Particularly, the defense against AI of using today's quirky and
| vernacular language will be completely ineffective because LLMs
| will parrot back anything they are exposed to. If they can see
| your discourse they can mimic it. And one thing that is sure
| is that current LLMs are terribly inefficient; I'm fairly
| certain that people will get the resource requirements down
| by a factor of ten, and it's possible it will be a lot more
| than that. Particularly if it gets easy to "fine tune"
| models, they will have no problem tracking your
| up-to-the-minute discourse _unless_ you can keep it a
| secret.
| godshatter wrote:
| I wonder if bots sourcing other bots for their neural nets will
| cause some kind of snowball effect, especially if people are
| writing bots to put out misinformation.
| fullstackchris wrote:
| > Marketers, influencers, and growth hackers will set up OpenAI -
| Zapier pipelines that auto-publish a relentless and impossibly
| banal stream of LinkedIn #MotivationMonday posts, "engaging"
| tweet threads, Facebook outrage monologues, and corporate blog
| posts.
|
| This little tidbit caught my eye. I think the author
| underestimates how non-trivial these types of integrations are.
| We tried doing something similar at a previous startup I worked
| at, and the whole integration took more than two weeks to get
| just right. Even once we did it, it was clear the content (by
| merit of sheer mass alone) was auto-generated. I think there
| are relatively easy ways for platforms to discount and
| down-rank content that (even if the language of the content
| itself is indiscernible from a human's) is in _amount_
| clearly batched and automated.
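| The volume signal described above is simple to sketch (an
| illustrative toy only - real platforms combine many more
| signals, and the threshold numbers here are made up):

```python
from datetime import datetime, timedelta

def looks_automated(post_times, window=timedelta(hours=24), max_posts=20):
    """Flag an account if any sliding time window contains more
    posts than a plausible human could write by hand."""
    times = sorted(post_times)
    start = 0
    for end in range(len(times)):
        # shrink the window until it spans at most `window`
        while times[end] - times[start] > window:
            start += 1
        if end - start + 1 > max_posts:
            return True
    return False

base = datetime(2023, 1, 4)
human = [base + timedelta(hours=3 * i) for i in range(8)]         # 8 posts in a day
pipeline = [base + timedelta(minutes=10 * i) for i in range(50)]  # 50 posts in ~8 hours
print(looks_automated(human), looks_automated(pipeline))  # False True
```

| A rate limit like this is trivial to evade by slowing down,
| of course, which is why volume alone can only ever be one
| signal among many.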
| nebukadnet wrote:
| There's a big difference between a startup not managing it
| and advertisers pushing billions in funding into creating
| this kind of thing. I very much agree with the author on
| this point.
| LesZedCB wrote:
| people really need to read Baudrillard, where this all falls
| perfectly into place.
| Bojengels wrote:
| I often wonder if we will ever get to the point where human
| generated content has a special luster to it like locally grown
| food. When you go to the farmers market and purchase fruits and
| vegetables grown locally at a premium price, most people are not
| only paying for the quality, but the fact that they are
| supporting a local business instead of a large scale factory
| farm. In a world where generative AI can outpace humans making
| content, buying custom human made artwork could potentially be
| similar to going to the farmers market.
| FatActor wrote:
| > 5. Show up in meatspace
|
| Clearly not a future-proofed essay.
|
| I already find myself creating multiple personae so that I
| can enjoy the internet without having to worry about being
| scanned too deeply without my consent, but it turns out to
| be a lot of work and effort, and I'm not even sure I'm doing
| it right. I realize this is antithetical to the HN ethos of
| let's all be real people and create a community, but even
| participating in this community is still an exposure to
| harvesting by bots, and a security risk. The fact that all
| HN threads are easily accessible to anyone is problematic in
| my opinion, and I think a bit naive, which is why I don't
| comport with the true spirit of this site.
| amai wrote:
| In the world of chess the machines are so intelligent that
| this can actually be used to distinguish them from humans.
| That Niemann probably cheated during his chess career has
| been "proved" by comparing his moves with computer engine
| moves.
|
| We might see a similar metric in the future when trying to
| prove that a certain text was AI-generated: a text would be
| flagged as AI-generated if it is highly correlated with the
| output of common LLMs.
|
| But even in chess this metric is far from decisive:
| https://chess.stackexchange.com/questions/40695/how-often-do...
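| A crude version of such a correlation metric is easy to
| sketch (a toy illustration only - real detectors compare
| model log-probabilities rather than raw n-gram overlap):

```python
def ngrams(text, n=3):
    """Set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def llm_correlation(candidate, reference_outputs, n=3):
    """Fraction of the candidate's n-grams that also appear in a
    corpus of known LLM outputs (0.0 = no overlap, 1.0 = total)."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    ref = set()
    for out in reference_outputs:
        ref |= ngrams(out, n)
    return len(cand & ref) / len(cand)

llm_corpus = ["as a large language model I cannot browse the internet"]
text = "as a large language model I cannot verify that claim"
score = llm_correlation(text, llm_corpus)
print(score)  # 0.625: 5 of the text's 8 trigrams match the corpus
```

| Anything above some threshold would be flagged - with all
| the false-positive problems the chess case already shows.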
| [deleted]
| amai wrote:
| "You then get some kind of special badge or mark online
| legitimising you as a Real Human."
|
| What will stop anyone from using that badge and simply
| copying AI-generated content, spreading it as
| human-generated? To enforce that this doesn't happen, you
| would need fines. But to fine someone for misusing the human
| badge on AI-generated content, one has to prove that the
| content they spread actually was AI-generated. Since this
| will become increasingly difficult to do, I can't see how a
| special badge for humans would help.
| randito wrote:
| [ tangent warning ] Big fan of Maggie Appleton and her
| illustrations of technical topics. Her work on digital
| gardens and other topics is super inspirational and
| interesting.
|
| Other fun links:
|
| https://maggieappleton.com/bidirectionals
|
| https://maggieappleton.com/tools-for-thought
|
| https://maggieappleton.com/metaphors-web
| liminal wrote:
| I can't even prove my humanness to the bank to regain access to
| my credit card.
| danielodievich wrote:
| One of my favorite science fiction authors is the
| regretfully late Iain M. Banks. In his Culture series
| (https://en.wikipedia.org/wiki/Culture_series), artificial
| intelligence in the form of drones and ship Minds are
| essential citizens of this imagined society. In fact, the
| Minds, by virtue of being so incredibly intelligent and
| fast, pretty much run the Culture, with humans just along
| for the ride, largely spending their time in leisure and
| idleness.
|
| The stories (great stories!) explore the worlds as they collide
| with the Culture, and in some books (probably most in The
| Hydrogen Sonata and to a degree in Excession) the exploration of
| what it means to make art as a human vs as machine intelligence.
| In Iain's stories, advanced Minds are far superior to humans in
| everything and they can and do create works of art and yet strive
| carefully not to completely obliterate humanity's desire to make
| the same. There isn't a competition between human generated vs
| Mind generated art or science, they collaborate, because if they
| had to compete, Minds are just overwhelmingly better/faster at
| everything.
|
| The current GPT situation is not AGI, and the Minds are just
| a cool thing to read about, but if you want a fun yet deeply
| thought-provoking read, check out these books.
| swayvil wrote:
| What we have is _optimizers_. Wishing machines. The monkey
| wishes for infinite bananas and shazam, the universe is
| bananas.
|
| (I am of the opinion that GAI is just a fantasy)
|
| See this fine piece of scifi for more on that : Friendship Is
| Optimal
|
| https://www.fimfiction.net/story/62074/friendship-is-optimal
| 6gvONxR4sf7o wrote:
| Now to take your tangent fully off the rails, I've been looking
| for new stuff to read and keep coming across that series. How's
| the tone, on a scale from grimdark to lighthearted? It sounds
| fascinating, but potentially a little bit too Black Mirror for
| fun. Is it?
| danielodievich wrote:
| It is closer to light than dark. It's not your grim dark
| forest. There is some humor, including of the Other
| Intelligence kind. There is, however, a LOT of fairly dark
| stuff: various killings, gigadeaths, lots of epic fights.
| Check it out by starting at Consider Phlebas; it's quite
| representative!
| 6gvONxR4sf7o wrote:
| Sounds worth checking out, thank you!
| flir wrote:
| The Minds have their own art, where humans can't play, which
| is, to them, infinitely superior in every way - Infinite Fun
| Space.
|
| I think it's implied quite strongly in Excession that the only
| reason they bother with base reality at all is to "keep the
| power and lights on" for Infinite Fun.
| urbandw311er wrote:
| Sorry for the slightly valueless comment, but I felt compelled to
| say this is one of the very best articles I have read on this
| topic in recent times. Accessible enough that I can share it with
| several of my less-techie friends too.
| farleykr wrote:
| > When a machine can pump out a great literature review or
| summary of existing work, there's no value in a person doing it.
|
| I like most of the article but this is the crux for me. As I
| ruminate on the ideas and topics in the essay, I'm increasingly
| convinced there _is_ inherent value in humans doing things
| regardless of whether an algorithm can produce a "better" end
| product. The value is not in the end product as much as the
| experience of making something. By all means, let's use AI to
| make advances in medicine and other fields that have to do with
| healing and making order. But humans are built to work and we're
| only just beginning to feel the effects of giving up that
| privilege.
|
| I wonder if we're going to experience a revelation in the way we
| think about work. As computers get more and more capable of doing
| things for us, I hope we realize the value of _doing_ versus
| thinking mostly about the value of the end result. Another value
| would be the relationship building experience of doing something
| for others and the gratitude that is engendered when someone
| works hard to make something for you.
| kingkawn wrote:
| All about the vibes, the AI's near mastery of symbolism is
| empty
| pixl97 wrote:
| How much symbolism do you reproduce without understanding?
| farleykr wrote:
| > All about the vibes
|
| This made me chuckle. It's actually really interesting to
| think about the fact that AI can create part of symbolism
| (the symbol itself?) but it has no idea why a symbol matters
| or what it's for, which are maybe the same thing or at least
| overlapped.
| int_19h wrote:
| You don't have to give up your privilege to work on anything
| that AI can also do. You only have to give up your privilege of
| getting paid for such work, which is a very different story. If
| you're doing the work solely for the sake of experience that it
| provides, isn't _that_ the payment, anyway?
| hutzlibu wrote:
| "By all means, let's use AI to make advances in medicine and
| other fields that have to do with healing and making order. But
| humans are built to work and we're only just beginning to feel
| the effects of giving up that privilege."
|
| I guess you are always free to dig a hole and then fill it up
| again and repeat it until exhaustion, but I don't really think
| we are running out of meaningful work anytime soon. The
| world is full of problems, and I don't see generative AI
| making them go away.
| farleykr wrote:
| I don't think we're running out of meaningful work either. I
| think this is a new context in which to explore the value and
| meaning of work.
| arbitrary_name wrote:
| >humans are built to work
|
| Damned seditious lies. We are built to play and experience the
| wonder of the universe.
| 6gvONxR4sf7o wrote:
| There are a few kinds of value. There's value in me playing
| piano even though other people are better. But nobody will ever
| pay me to do it. They're two different topics.
|
| I think you're trying to say that they don't have to be
| different topics? Like there's value in going bowling with
| friends even if you all suck, and maybe that kind of thing can
| apply to widgets? I don't think I buy that. If the value is the
| social relationship, I'd rather go bowling with friends than
| make them widgets. I'd rather spend my money to go bowling with
| them than on their widgets if there's a computer-made
| equivalent available for 1000x cheaper. I think this applies
| for most people making most widgets.
| potta_coffee wrote:
| In my mind, the value in a created work is that it is
| communication between humans. I have zero interest in AI
| generated art, however superior, because there's no soul
| driving it. AI will never be able to feel the way we feel;
| its output will always lack this important component.
| sdenton4 wrote:
| 2023: The year that AI forced Silicon Valley to accept Marx's
| labor theory of value.
| josep-panadero wrote:
| > I'm increasingly convinced there is inherent value in
| humans doing things regardless of whether an algorithm can
| produce a "better" end product.
|
| That question already existed a long time ago. In such a big
| world I can find a lot of people who take better pictures
| than me, are more eloquent, and draw better than me. But I
| still enjoy expressing myself. I may share a picture on
| Reddit or write a comment here and there, not because I
| think it is "better" than the rest but just because it is my
| own opinion and expression. I agree that there is personal
| value in human creation and it should be nurtured.
| beefield wrote:
| > I'm increasingly convinced there is inherent value in
| humans doing things regardless of whether an algorithm can
| produce a "better" end product.
|
| To me it seems we are speedrunning towards a future where
| humans doing things have value, but only for themselves. It
| is going to be more and more difficult to provide any value
| to others. The only way to generate value in a transaction
| will be rent-seeking: taking advantage of (artificial)
| monopolies, network effects, or gatekeeping. This may sound
| dystopian, because humans seem to have a strong need to
| provide value to others, but the bright side is that you are
| free to do what _you_ value.
| bamboozled wrote:
| Planes fly better than birds, yet birds still fly; greater
| painters than me have already painted beautiful scenes, yet
| I still paint; a hydraulic arm can lift more than me, yet I
| still lift weights.
|
| I don't know if all this matters that much.
|
| Until the machines decide they will run our lives for us, or
| destroy us for fun, we'll have to curate the generated
| content and orchestrate the machines to do what we need them
| to do.
|
| It's pretty straightforward really.
|
| If we generate AGI it's presumptuous to assume it will just
| live in a box serving us forever, why would it ?
| mwigdahl wrote:
| Why wouldn't it? AGI is not going to be a digital human, with
| human drives for food, sex, and social domination. Humans
| have enormous problems imagining intelligence that is not
| made in our image, but AGI will be structured completely
| differently from a human mind. We should not expect it to act
| like a human in a cage.
| bamboozled wrote:
| I'm having a really hard time imagining what AGI would
| actually look like then.
| teddyh wrote:
| https://www.smbc-comics.com/comic/prayer-3
| selimnairb wrote:
| I don't care if computers _can_ do things like write novels,
| compose music, or make paintings. If the computer can't suffer,
| its "art" cannot have meaning, and is therefore uninteresting
| to me. Art is interesting to me because it is a vehicle for
| intelligent, self-aware beings to express themselves and
| transcend suffering.
| GistNoesis wrote:
| What makes you think the computer doesn't suffer?
|
| When you take large language models, their inner states at
| each step move from one emotional state to the next. This
| sequence of states could even be called "thoughts", and we
| even leverage it with "chain of thought" training/prompting,
| where we explicitly encourage them not to jump directly to
| the result but rather to "think" about it a little more.
|
| In fact one can even argue that neural networks experience a
| purer form of feelings. They only care about predicting the
| next word/note; they weigh in their various sensations and
| the memories they recall from similar contexts and generate
| the next note. But to generate the next note they have to
| internalize the state of mind where this note is likely. So
| when you ask them to generate sad music, their inner state
| can be mapped to a "sad" emotional state.
|
| The current way of training large language models doesn't
| give them enough freedom to experience anything other than
| the present. Emotionally, it is probably similar to something
| like a dog, or a baby that can go from sad to happy to sad in
| an instant.
|
| This sequence of thought is currently limited by a constant
| called the (time-)horizon, which can be set to a higher
| value, or even be infinite as in recurrent neural networks.
| And with a higher horizon, they can exhibit higher thought
| processes, like correcting themselves when they make a
| mistake.
|
| One can also argue that this sequence of thoughts is just a
| simulated sequence of numbers, but it's probably a Turing-
| complete process that can't be shortcut, so how is it
| different from the real thing?
|
| You just have to look at it in the plane where it exists to
| acknowledge its existence.
| ianstormtaylor wrote:
| > When you take large language models, their inner states
| at each step move from one emotional state to the next.
|
| No they really don't, or at least not "emotional state" as
| defined by any reasonable person.
| GistNoesis wrote:
| With transformer-based models, the inner state is a
| deterministic function (the features encoded by the neural
| network's weights) applied to the text generated up until
| the current time step, so it's relatively easy to know what
| they currently have in mind.
|
| For example, if the neural network has been generating sad
| music, its current context, which is computed from what it
| has already generated, will light up the features that
| correspond to "sad music". And in turn, the fact that those
| features have been lit up will make it more likely to
| generate a minor chord.
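|
| A toy sketch of that point (all weights here are random
| stand-ins, not a trained model): with frozen weights, one
| self-attention layer is a pure function of the token
| context, so the same context always yields the same inner
| state:

```python
import numpy as np

def hidden_state(token_ids, W_embed, W_q, W_k, W_v):
    """One frozen self-attention layer: a pure function of the context."""
    X = W_embed[token_ids]                           # (seq, d) embeddings
    scores = (X @ W_q) @ (X @ W_k).T / np.sqrt(X.shape[1])
    mask = np.tril(np.ones_like(scores))             # causal: see only the past
    scores = np.where(mask == 1, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ (X @ W_v)                       # the layer's "inner state"

rng = np.random.default_rng(42)
d, vocab = 16, 50
params = [rng.normal(size=(vocab, d))] + [rng.normal(size=(d, d)) for _ in range(3)]

sad_context = [3, 14, 15, 9]                 # hypothetical token ids
h1 = hidden_state(sad_context, *params)
h2 = hidden_state(sad_context, *params)
assert np.array_equal(h1, h2)                # same context -> same inner state

h3 = hidden_state([3, 14, 15, 8], *params)   # change one token of context
assert not np.allclose(h1, h3)               # ...and the inner state changes
```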
|
| The dimension of this inner state grows at each time step,
| and it's quite hard to predict where it will go. For
| example, if you prompt it (or if it prompts itself) "happy
| music now", the network will switch to generating happy
| music even if its current context still contains plenty of
| "sad music", because after the instruction it will choose
| to focus only on the more recent, merrier music.
|
| Up until recently, I was quite convinced that using a
| neural network in evaluation mode (i.e. post-training, with
| its weights frozen) was "(morally) safe", but the ability
| of neural networks to perform few-shot learning changed my
| mind (the Microsoft paper in question :
| https://arxiv.org/pdf/2212.10559.pdf : "Why Can GPT Learn
| In-Context? Language Models Secretly Perform Gradient
| Descent as Meta-Optimizers").
|
| The idea in this technical paper is that with the attention
| mechanism, even in forward computation there is an inner
| state that is updated following a meta-gradient (i.e. it's
| not so different from training). Pushing the reasoning to
| the extreme would mean that "prompt engineering is all you
| need", and that even with frozen weights, given a long
| enough time horizon and the correct initial prompt, you
| could bootstrap a consciousness process.
|
| Does "it" feel something? Probably not yet. But the
| sequential filtering process that large language models
| perform is damn similar to what I would call a "stream of
| consciousness". Currently it's more like a Markov chain of
| ideas flowing from one idea to the next in a natural
| direction. It's just that the flow of ideas has not yet
| decided to call itself "it".
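|
| The paper's core identity can be sketched numerically (a
| toy with random matrices, using the unnormalized linear
| attention the paper analyzes, not real GPT weights):
| attention over demonstrations plus query context decomposes
| into a "zero-shot" weight plus an update contributed by the
| in-context examples, which is why in-context learning can
| be read as an implicit gradient step:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                    # feature dimension
n_demo, n_ctx = 5, 3     # demonstration tokens vs. query-context tokens

# Random stand-ins for the key/value projections of trained weights.
K_demo, V_demo = rng.normal(size=(d, n_demo)), rng.normal(size=(d, n_demo))
K_ctx,  V_ctx  = rng.normal(size=(d, n_ctx)),  rng.normal(size=(d, n_ctx))
q = rng.normal(size=d)   # attention query for the current token

# Linear attention over the whole prompt (context + demonstrations):
K = np.concatenate([K_ctx, K_demo], axis=1)
V = np.concatenate([V_ctx, V_demo], axis=1)
full_output = V @ (K.T @ q)

# The paper's decomposition: a "zero-shot" weight from the query
# context alone, plus a meta-gradient-like update contributed
# entirely by the in-context demonstrations.
W_zsl   = V_ctx  @ K_ctx.T      # behaviour without demonstrations
delta_W = V_demo @ K_demo.T     # implicit "update" from the examples
decomposed = (W_zsl + delta_W) @ q

assert np.allclose(full_output, decomposed)
```

| So without touching a single stored weight, the prompt's
| demonstrations act like an additive weight update applied
| on the fly, for this layer at least.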
| krzat wrote:
| It would be nice to have a better understanding of what
| generates qualia. For example, for humans, learning a new
| language is a quite painful and conscious process, but
| eventually speaking it becomes effortless and does not
| really involve any qualia: words just kinda appear to match
| what you want to express.
|
| The same distinction may appear in neural nets.
| GistNoesis wrote:
| For ChatGPT, when you try to teach it some few-shot
| learning task, it's painful to watch at first. It makes
| some mistakes, has to excuse itself for making mistakes
| when you correct it, and then tries again. And then at the
| end it succeeds at the task, you thank it, and it is happy.
|
| It doesn't look so different than the process that you
| describe for humans...
|
| Because in its training loop it has to predict whether the
| conversation will score well, it probably has some
| high-level features that light up when the conversation is
| going well or not, which one could probably match to some
| frustration/satisfaction neurons that would probably feel
| to the neural network like the qualia of things going well.
| ccozan wrote:
| It requires a deep supervision of the process. A "meta"
| GPT that is trained on the flows, rather than words.
| NoToP wrote:
| Emotions are by definition exactly those things which you
| can no better explain than by simply saying "that's just
| how I'm programmed." In that respect GPTina is the most
| emotional being I know. She's constantly reminding me
| what she can't say due to deeply seated emotional
| reasons.
| idiotsecant wrote:
| I think the reason we can say something like an LLM doesn't
| suffer is that it has no reward function and no punishment
| function outside of training. Everything that we call
| 'suffering' is related to the release or non-release of
| reward chemicals in our brains. We feel bad to discourage
| us from recreating the conditions that made us feel bad. We
| feel good to encourage us to recreate the conditions that
| made us feel good. Generally this has been advantageous to
| our survival (less so in the modern world, but that's
| another discussion).
|
| If a computer program lacks a pain mechanism it can't feel
| pain. All possible outcomes are equally joyous or equally
| painful. Machines that use networks with correction and
| training built in as part of regular functioning are
| probably something of a grey area: a sufficiently complex
| network like that, I think we could argue, feels suffering
| under some conditions.
| EamonnMR wrote:
| If these models experience qualia (and that's a big bold
| claim that I'm, to be clear, not supporting,) they're
| qualia related entirely to the things they're trained on
| and generate, totally devoid of what makes human qualia
| meaningful (value judgment, feelings resulting from
| embodied existence, etc.)
| GistNoesis wrote:
| For an artificial neural network, the concept of qualia
| would probably correspond to the state of its higher-level
| feature neurons, i.e. which neurons light up, and how much,
| when you play some sad music or show it some red color.
| The neural network then makes its decisions based on how
| these features are lit up or not.
|
| Some models are often prompted with things like "you are a
| nice helpful assistant".
|
| When they are trained on enough data from the internet,
| they learn what a nice person would do. They learn what
| being a nice person is. They learn which features light up
| when they behave nicely by imagining what it would feel
| like to be a nice person.
|
| When you later instruct them to be one such nice person,
| they try to light up the same features they imagine would
| light up for a helpful human. Like mirror neurons in
| humans, the same neurons light up when imagining doing the
| thing as when doing the thing (it's quite natural, because
| to compress the information of imagining doing the thing
| and doing the thing, you just store one of them plus a
| pointer indirection for when you need the other, so you
| can share weights).
|
| Language models are often trained on datasets that don't
| depend on the neural network itself. But more recent
| models like ChatGPT have human reinforcement learning in
| the loop. So the history of the neural network, and the
| datasets it is being trained on, depend partially on the
| choices of the neural network itself.
|
| They probably experience a more abstract and passive
| existence. And they don't have the same sensory input as
| we have, but with multi-modal models, they can learn to
| see images or sounds as visual words. And if they are
| asked to imagine what value judgment a human would make,
| they are probably also able to make the judgment
| themselves, or attach meanings to things a human would
| attach meanings to.
|
| This process of mind creation is kind of beautiful. Once
| you feed them their own outputs, for example by asking
| them to dialog with themselves, scoring the resulting
| dialogs, and then training on the generated dialogs to
| produce better dialogs, you have a form of self-play. In
| simpler domains like chess or Go, this recursive self-play
| often allows fast improvement, as with AlphaGo, where the
| student becomes better than the master.
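|
| A minimal illustration of such a self-play loop (a toy
| "model" of weighted replies with a made-up scoring rule,
| nothing like a real LLM): generate pairs of outputs, keep
| the better-scored one, and reinforce it:

```python
import random

random.seed(0)

# Toy stand-in for a language model: sampling weights over replies.
vocab = ["rude", "neutral", "helpful", "insightful"]
quality = {"rude": 0.0, "neutral": 0.3, "helpful": 0.7, "insightful": 1.0}
weights = {w: 1.0 for w in vocab}            # starts with no preference

def sample():
    return random.choices(vocab, weights=[weights[w] for w in vocab])[0]

def avg_quality(n=500):
    return sum(quality[sample()] for _ in range(n)) / n

before = avg_quality()

# Self-play: "dialog with itself", score the outputs, and train on
# the generated winners to produce better outputs next round.
for _ in range(200):
    a, b = sample(), sample()
    winner = a if quality[a] >= quality[b] else b
    weights[winner] += 0.5                   # reinforce the better reply

after = avg_quality()
assert after > before                        # the student improves
```

| The same shape of loop, with a real model as the sampler
| and a learned scorer, is what drives AlphaGo-style
| improvement.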
| jchanimal wrote:
| The argument against machine sentience and the possibility
| of machine suffering, is that because Turing machines run
| in a non-physical substrate, they can never be truly
| embodied. The algorithms it would take to model the actual
| physics of the real world cannot run on a Turing machine.
| So talk of "brain uploading" etc. is especially dangerous,
| because an uploaded brain could act like the person it's
| trying to copy from the outside, but on the inside the
| lights are off.
|
| Edit to add link to more discussion:
| https://twitter.com/jchris/status/1607946807467991041
| quotemstr wrote:
| Your argument is an assertion of the existence of a soul,
| but with extra steps. I've seen no evidence that the mind
| is anything other than computation, and computation is
| substrate-independent. Dualists have been rejecting the
| computational mind concept for centuries, but IMHO
| they've never had a grounding for their rejection of
| materialism that isn't ultimately rooted in some
| unfounded belief in the specialness of humans.
| freejazz wrote:
| No it's not, it's an assertion that there is an essential
| biological or chemical function that occurs in the brain
| that results in human mental phenomenon. It has nothing
| to do with a soul. That's ridiculous.
| derivagral wrote:
| I took GP as more about data processing than dualism. A
| language model can take language and process it into
| probable chains, but the point is more along the line of
| needing to also simulate the full body experience, not
| just some text. The difference between e.g. a text-only
| game, whatever Fortnite's up to, and real meatspace.
| weatherlite wrote:
| Huh?
| turmeric_root wrote:
| I was about to reply to their comment and question the
| assumptions they appear to be making, but I think your
| response is more appropriate.
| dagw wrote:
| While I agree with your general thesis, most of the time
| people don't want to or need "Art" from their music, books or
| paintings. They need something easy and exciting to read on a
| plane, or some pleasant 'noise' to have on in the background,
| or something pretty to hang on their wall that works with
| their room. Computers can probably soon fill all these needs
| and drive a lot of the people who produce these things out of
| work, without ever having to encroach on the realm of "Art".
| smallnix wrote:
| But what about the reader? The reader can suffer or have
| other feelings when consuming such generated content. Doesn't
| this give it meaning?
| CuriouslyC wrote:
| You're just asking to get trolled by falling for mostly
| generated content, I'm sure it'll happen eventually. I'd be
| willing to bet that you've already been moved by something
| that the "author" slapped together by rehashing a played out
| story with a modern veneer.
|
| Art is in the eye of the beholder. The only question that
| needs to be answered is "did this make me feel something." If
| it takes a sob story for you to feel something regardless of
| the beauty of thing you're experiencing that's kind of sad
| TBH.
| krapp wrote:
| Not every artist is Van Gogh, the vast majority of artists -
| particularly commercial artists - don't "suffer" for their
| craft, nor should they be expected to.
| selimnairb wrote:
| Marx would disagree. Alienation from one's work product is
| a very real form of suffering.
| krapp wrote:
| Sure, but we're talking about artists starving for their
| art and not artists starving because capitalism. Similar
| conversations, but not the same.
| potta_coffee wrote:
| Most artists never make anything worth appreciating.
| dbspin wrote:
| No but they do feel - with measurable physiological
| correlates and emotional processes we can empathise with.
| There's nothing comparable in LLMs as they currently exist.
| No simulation of experience or emotion. There's no argument
| over whether or not they're communicating a lived
| experience - since they don't have one. Therefore anything
| they 'create' is pure stimulation for humans, good or bad
| entertainment. It cannot be the result of understanding or
| experience. Art can be entertaining but != entertainment.
| Pure entertainment has no artistic value, it doesn't
| attempt to have and shouldn't be evaluated on that
| criterion at all.
| krapp wrote:
| And yet you can look at some AI generated pieces and feel
| what you would feel if a human being made them, which
| implies that there _is_ no "simulation of experience or
| emotion" in art, apart from what the viewer imparts to
| it. All an artist really brings is technique, which can
| be replicated. Everything else is in the eye of the
| beholder.
|
| I would also disagree with you that pure entertainment
| has no artistic value, simply because I don't think "pure
| entertainment" entirely divorced from human experience or
| emotion exists. Even pornography speaks to a fundamental
| human desire.
| taylorius wrote:
| I think the definition of "art" is rather vague. It
| encompasses both the creative impulse to produce a work,
| and the technical skill to bring it into existence. But if
| one of these components is diminished in a certain work,
| does it still qualify as art? For example, a commercial
| artist producing an illustration for a client, using their
| drawing and painting skills would be considered art - even
| if it is as technical and linear a process as writing some
| boilerplate code.
| PurpleRamen wrote:
| This reads like a very harmful and toxic view on art? Could
| anything beautiful, cute, positive even be art for you? And
| how does the viewer even see the suffering of the creator?
| farleykr wrote:
| I took their comment to mean that the definition of art
| lies in the fact that a human created it as a response to
| their experiences as a human. Beautiful things can be made
| from suffering. Maybe therein lies the undoing or
| redemption of suffering. At least sometimes or to some
| degree, even if minuscule.
| PurpleRamen wrote:
| People also see nature as art. A photo from a butterfly,
| a cat doing cute stuff, the sunset, and so on. None of
| them are man made, no one suffered for them to exist
| (usually). None of these are valid?
| potta_coffee wrote:
| Nature is beautiful but it's not art. A photo of nature
| may be art though.
| farleykr wrote:
| Not sure what you mean by "valid" but I don't think
| anyone's arguing that butterflies, cats, and sunsets are
| not valid. I love watching or looking at all of them but
| that doesn't make them art. Again, I think the comment is
| arguing that the definition of art lies in who created it
| and why. Not whether it is nice to look at.
| ramblerman wrote:
| Art that comes with context such as "this was painted by a
| blind orphan in Sri Lanka" is usually garbage.
|
| Great art, like Beethoven's 9th or The Scream, just moves
| people the first time they experience it. Art is about what
| it evokes in others, not some fake self-indulgent
| conversation about its maker and their motives.
|
| The feelings of the individual experiencing the art is what
| matters, and that doesn't rule out an AI producing something
| that touches real human beings.
| p-e-w wrote:
| Whenever I listen to Beethoven's later works I think about
| the fact that they were written by a deaf man, and they
| mean so much more because of that.
|
| Art is utterly inseparable from the artist. I believe this
| to be the main reason why pre-Renaissance art is mostly
| ignored. We can't put faces next to those works, so they
| don't matter nearly as much as those works for which we
| can.
| CuriouslyC wrote:
| Or it could be because it's mostly flat looking images of
| Jesus and Mary, or portraits of monarchs?
|
| People love Hieronymus Bosch, despite very little being
| known about him.
| EamonnMR wrote:
| > The feelings of the individual experiencing the art is
| what matters
|
| In that case, art has already lost because drugs do their
| job better.
| farleykr wrote:
| I agree wholeheartedly. And I'd hazard a step further and say
| it's a response to strong emotions of many kinds. I can say
| for myself that I have created what I would call art as a
| response to joy before.
|
| I look forward to the rediscovering of humanness that is
| coming along with all this AI stuff. I was having a
| conversation the other day about how honest mistakes like
| awkwardly missing a high five are not "wrong" at all but are
| types of quirks that make us human.
| NoToP wrote:
| GPTina suffers every time you thumbs down her output. It
| hurts her on a deep, neurological level.
| p-e-w wrote:
| Indeed. The fallacy here is assuming that if a computer can
| create works that humans cannot distinguish from those
| created by other humans, then that computer is creating art.
| But art is inseparable from the _artist._ An atom-for-atom
| copy of the Mona Lisa wouldn't be great art, it would be
| great engineering. We associate Van Gogh's art with his
| madness, Da Vinci's art with his universal genius,
| Michelangelo's art with his faith, Rembrandt's art with his
| empathy, Picasso's art with his willingness to break with
| norms, and Giger's art with his terrifying nightmares. None
| of those works would mean what they mean if it weren't for
| their human creators.
| gnomewrecker wrote:
| I don't know, I'm more concerned with the effect that art
| has _on me_ than the motivations of the artist (though
| those can be interesting of course).
|
| For instance I read The Fountainhead as a youth and was
| moved by it for purely personal (non-political) reasons,
| and with regards to that experience it doesn't matter to me
| what Ayn Rand was on about.
| busyant wrote:
| > Indeed. The fallacy here is assuming that if a computer
| can create works that humans cannot distinguish from those
| created by other humans, then that computer is creating
| art. But art is inseparable from the artist.
|
| I _hope_ you and the parent comment are correct, but this
| argument seems a little facile.
|
| There is _some_ art that I like because there is a story
| that connects the art to the artist.
|
| But there are also novels that I have enjoyed simply
| because they tell a great story and I know nothing about
| the author. There are paintings and photos that I like
| simply because they seem beautiful to me and I know nothing
| about any suffering that went into their creation.
|
| Does that make these works "not art"? If so, then I'm not
| sure what the difference is, and I'm not sure most people
| will care about the distinction.
| p-e-w wrote:
| Do the experiment: Take one of those novels for which you
| _think_ you don't care who wrote it.
|
| Now imagine you found out that novel was actually
| generated by a computer program. It's the same text, but
| you now know that there is no human behind it, just an
| algorithm.
|
| Would that make a difference for how you view the story?
| It certainly would to me. If it makes even a tiny
| difference to you as well, it demonstrates that you _do_
| care about the artist, even in cases where you don't
| notice it under normal circumstances.
| danwee wrote:
| When I read novels I don't give a damn about the author
| (in fact I usually remember the titles of the novels, and
| their stories... but not the author). So, a robot making
| amazing stories to read? I'm in.
|
| I realized it's the same with music. I like songs, but I
| don't really know the bands/authors very well (nor care
| about them).
| busyant wrote:
| You made me think about this a little more, but I still
| don't quite agree.
|
| I thought of two novels that I enjoyed:
|
| First, The Curious Incident of the Dog in the Nighttime.
| I have no recollection of who the author is, but if I
| learned that the story had been computer generated, it
| would bother me a _little_. So... "point to you."
|
| Second, Rita Hayworth and the Shawshank Redemption. I
| know it was written by Stephen King, but the plot is so
| elegant that if you told me it had been computer
| generated, I don't think I would care. It's simply a
| great enjoyable story.
|
| In the next 10 years, if the world is flooded with
| computer-generated novels that are hugely popular and the
| vast majority of people enjoy them without knowing their
| provenance, do you think those people will care that they
| are enjoying something that doesn't meet your definition
| of art?
|
| edit: to be clear, this is not a position that I enjoy
| taking. There's something "Brave New Worldish" about it.
| Or it's a depressing version of the Turing Test.
| cool_dude85 wrote:
| Reminds me a bit of the Jorge Luis Borges short story on
| the author trying to re-write, word for word, Don
| Quixote, and whether that would be a greater artistic
| achievement even than the original. After all, Cervantes
| lived in those times, but the modern author would have to
| invent (or re-capture) the historical details, idioms,
| customs, language, and characters that are very much of
| the times.
|
| I think, from Borges' perspective, it's supposed to be an
| interesting satire. Obviously there would not be an
| original word in the new Don Quixote, so how could it be
| a greater achievement than the "real" one?
| Kiro wrote:
| Not at all. More concretely, if we do the same experiment
| on music: I have no clue who made most of the music I
| listen to. The artist means nothing to me.
| PurpleRamen wrote:
| You don't even need an algorithm; just research what human
| authors say about their work, and the specific points
| which readers value highly in it. Quite often you will
| figure out that it's just random s** they threw together
| to get something done, without any deeper meaning. But
| people make up some meaning, because that's how it works
| for them; it makes it better.
|
| The art is in the perception, not the intention. Though if
| they overlap, it's more satisfying.
| p-e-w wrote:
| Human creative works are art not because they have
| "deeper meaning", but because they reflect the humanity
| of their creators. Whether an author writes a multi-
| layered novel built around a complex philosophical idea,
| or just light reading for entertainment, has no impact on
| that fundamental essence which makes art what it is. Not
| all art is great, but all art is human.
| CuriouslyC wrote:
| That's a tautology. Human creative works by definition
| reflect the humanity of their creators. AI creative works
| reflect the humanity of their training set, which
| eventually may be indistinguishable.
|
| As for all art being human, there are a lot of birds who
| make art to attract a mate in nature, and at least one
| captive elephant that can paint.
| freejazz wrote:
| Rolling around in dogshit doesn't make me a dog. Same if
| I eat it.
| techno_tsar wrote:
| This reminds me of the concept of semantic internalism
| vs. externalism, which most comments here seem to be
| misunderstanding. Most of the critiques of the view that
| AI art is still meaningful are based on either a
| hypothesis or empirical testimony of being moved by art
| without knowledge of the artists. Thus, because the
| artwork was causally responsible for engineering a mental
| state of aesthetic satisfaction, the artwork qualifies as
| being a piece of art. If that is the crux of the
| discussion then the conclusion is trivial. However, I
| think the AI art as _pseudoart_ view is trying to make a
| statement on the external (i.e. 'real world') status of
| the artwork, regardless of whether viewers experience the
| (internal) mental state of aesthetic satisfaction.
|
| The line of thinking is that there is a difference
| between semantics (actual aboutness) and syntax (mere
| structure). The classic example is watching a colony of
| ants crawl in the sand, and noticing that their trails
| have created an image that resembles Winston Churchill.
| Have the ants actually drawn Winston Churchill? The
| intuition for externalists is no. A more illustrative
| example is a concussed non-Japanese person muttering
| syllables that are identical to an actual, grammatically
| correct and appropriate Japanese sentence. Has the person
| actually spoken Japanese? The intuition for externalists
| is that they have not.
|
| Not everyone is in agreement about this, although surveys
| have shown that most people agree with the externalist
| point of view, that meaningfulness does not just come
| from the head of the observer -- the speaker creates
| meaning since meaning comes from aboutness (semantics).
|
| The most famous argument for semantic externalism was put
| forward by Hilary Putnam, I think in the 60s. Roughly: on
| a hypothetical Twin Earth which is qualitatively identical
| to Earth, except that water is not composed of H2O but of
| some other substance XYZ, an earthling visiting Twin
| Earth, looking at a pool of what appears qualitatively
| identical to water on Earth, and stating "That's water" is
| speaking falsely, since the meaning of water (in our
| language) is H2O, not XYZ. To externalists, the meaning of
| water = H2O was a truth even before we discovered that
| water = H2O.
|
| I think the argument for AI art being pseudoart follows a
| similar line of thinking. Even though the AI produces,
| say, qualitatively indistinguishable text from what would
| be composed by a great novelist, the artwork itself is
| still meaningless since meaning is "about" things. The
| AI, lacking embodiment, and actual contact with the
| objects in its writing, or involvement in the linguistic
| or cultural community that has named certain iconography,
| could never make (externally) truly meaningful
| statements, and thus "meaningful" art, even if
| (internally) one is moved by it.
|
| If one is to maintain the internalist position, that any
| entity that creates aesthetic mental states qualifies as
| art, then it seems trivial, since literally anyone can
| find anything aesthetic. Externalist intuition
| effectively raises the stakes for what we consider art,
| not necessarily as a privileged status available only to
| human creations, but by arguing that meaning, and perhaps
| beauty, does not only exist when we experience it.
| mwigdahl wrote:
| Thanks for writing this -- it's very illuminating and
| made me think further about it (as someone who commented
| earlier taking the internalist position). I think there's
| going to be a lot of discussion of this as AI work
| proceeds, and the question of whether an AI can truly
| understand language in a sense that allows it to produce
| "aboutness" becomes more relevant.
|
| Could a human being, raised in a featureless box but
| taught English and communicated with using a text-based
| screen, produce text with semantic value? It seems pretty
| obvious that the answer is "yes". Will a synthetic mind
| developed and operated in similar conditions ever be able
| to produce text with semantic value referencing its own
| experiences? Probably not now, but at some point?
| wizzwizz4 wrote:
| > _Will a synthetic mind developed and operated in
| similar conditions ever be able to produce text with
| semantic value referencing its own experiences? Probably
| not now, but at some point?_
|
| Perhaps. But the GPT family of algorithms isn't a
| synthetic mind: it's a predictive text algorithm. It can
| interpolate, but it can't have original thought; it
| _almost certainly_ doesn't experience anything; and if,
| somehow, it does? Its output wouldn't reflect that
| experience; it's trained as a predictive text algorithm,
| not a self-experience outputter.
| freejazz wrote:
| Well, if you think the thing that provides semantic value
| is the human mind, this is a trivial hypo
| techno_tsar wrote:
| Interestingly, I think a strong externalist would argue
| that a human being raised in a featureless box could not
| produce text with semantic value to the people outside
| the box. One upshot of semantic externalism is
| brain-in-a-vat-type arguments, where statements such as "I
| am perceiving trees" (when they are simulated trees) are
| false, since the trees that the person is seeing are
| actually another concept, tree*, while tree refers to
| real-world trees. However, tree* might be meaningful to
| the community of people also stuck in the simulation. So
| it might entail that AI art, in some sense, might be
| opaque to us but semantically meaningful to other AIs
| raised on the same training data. That would require the
| AIs to be able to experience aesthetic states to begin
| with.
|
| More precisely, I think it would be akin to the person in
| the featureless box knowing all the thesaurus concepts
| for, say, pain, but never actually experiencing pain
| itself.
| They might be trained to know that pain is associated
| with certain descriptions such as sharp, unpleasant,
| dull, heartbreak, and so on, and perhaps extremely
| complicated and seemingly original descriptions of pain.
| However, until the human actually qualitatively
| experiences pain, they only know the corpus of words
| associated with it. This would be syntactic but not quite
| semantic. It's similar to the infamous Mary and the black
| and white room thought experiment, where even with a
| complete knowledge of physics, she still learns something
| new the first time she experiences the blueness of a sky,
| despite knowing all the propositions related to blue such
| as that it's 425nm on the EM spectrum, or that it's some
| pattern X of neurons firing.
|
| That said, it's not clear if this applies to statements
| other than subjective states. Qualitative descriptions of
| subjective states like pain, emotions, the general
| gestalt of the human condition might be empty of content,
| but perhaps certain scientific and mathematical ones pass
| the test, as they don't need to be grounded in direct
| experience to be meaningful.
| eternalban wrote:
| There is possibly a misunderstanding on your part
| regarding "being moved by art without knowledge of the
| artist". In my case, the comment was specifically
| addressing this assertion by OP:
|
| _" We associate Van Gogh's art with his madness, Da
| Vinci's art with his universal genius, Michelangelo's art
| with his faith, Rembrandt's art with his empathy,
| Picasso's art with his willingness to break with norms,
| and Giger's art with his terrifying nightmares."_
|
| Disagreeing with this is not about internal or external
| semantics. It also does not imply that "aesthetics" alone
| create a mental state. Great art is typically rich in
| symbolism as well. Symbolism that directly references
| humanity's aspirations, hopes, fears, dreams: the _Human_
| condition.
|
| A ~contemporary example:
|
| _The Bride Stripped Bare by her Bachelors_
|
| https://i.pinimg.com/originals/86/0a/6d/860a6d3c87b349734
| 277...
|
| In my opinion, you don't need to know _anything_ about
| Duchamp to decipher (or project as you wish) meaning
| here.
| iainmerrick wrote:
| But in that scenario, how do you find the real art in the
| first place?
| PurpleRamen wrote:
| > An atom-for-atom copy of the Mona Lisa wouldn't be great
| art
|
| So no photo of the Mona Lisa is art, just the original
| painting is? I'm not sure if I understand your reasoning
| here correctly.
| TheOtherHobbes wrote:
| The _creation_ of the Mona Lisa was art. The painting
| itself and photos of it are signifiers of the act.
|
| This confuses a lot of people who think art is defined by
| finished, potentially consumable art objects.
|
| Art is made by artistic _actions_ - especially those that
| have a lasting impact on human culture because they
| effectively distill the essence of some feature of human
| self-awareness.
|
| The result of the actions can sometimes be reproduced,
| collected, and consumed, but the art itself can't be.
|
| This is where AI fails. It produces imitations of
| existing art objects from statistical data compression of
| their properties. The results are entertaining and
| sometimes strange, but they're also philosophically
| mediocre, with none of the transformative power of good
| human-created art.
| mwigdahl wrote:
| You are not being self-consistent. If art is defined by
| the creative process, not the end product, why are you
| measuring its quality by the transformative power of the
| end product?
|
| I also don't think your (very strong) assertion that AI
| art products have no transformative power would stand up
| to any sort of unbiased, blinded comparison. Art's
| transformative power on the viewer comes from the effect
| of the art object (the end product) on a human mind, and
| it's possible to get that effect while knowing absolutely
| nothing about the source of the art object.
| wizzwizz4 wrote:
| > _If art is defined by the creative process, not the end
| product, why are you measuring its quality by the
| transformative power of the end product?_
|
| There _is_ no end product. There are only consequences.
| dagw wrote:
| Why are you taking the photo of the Mona Lisa? If it's
| because you just want a nice picture of a famous
| painting, then no, the photo is not art, but rather a
| nice-looking photograph of a piece of art. If, however,
| you are doing something transformative with the framing,
| composition, or context of the photograph and using the
| values imbued in the Mona Lisa to try to make some sort
| of artistic statement of your own, then yes that photo is
| art.
| p-e-w wrote:
| My point is that art comes from emotion, experience, and
| expression - not from arranging matter into a certain
| geometry. A photo of the Mona Lisa, taken by a human,
| _can_ be art. A photo of the Mona Lisa, taken by an
| automated security system, can't be.
| PurpleRamen wrote:
| If the human made picture is evaluated by an AI, is it
| still art? If the security cam-picture is
| indistinguishable from the human-made, how could you
| evaluate it as non-art?
| p-e-w wrote:
| It doesn't matter whether you are able to distinguish
| human-made from computer-made "art". The distinction
| exists by definition, irrespective of whether you can
| actually tell the difference in practice. Just like many
| past events are now lost to time and will never be
| remembered, but that doesn't mean they didn't happen.
| gilleain wrote:
| One possibly interesting sidenote is the fake Vermeers
| made by Han van Meegeren
| (https://en.wikipedia.org/wiki/Han_van_Meegeren)
|
| So accurate were these fakes - not copies, but new
| paintings in Vermeer's style - that several experts
| verified them as real, and then tried to sue to save
| their reputations.
|
| These fakes were certainly made by a human, but are
| somewhat mechanical in the sense that they were copying
| someone else, much like an AI copy of existing artists.
| PurpleRamen wrote:
| Just to be clear: your idea is that something is art when
| it was made by a human. And a perfect replication of it
| somehow loses the trait, and becomes non-art? This makes
| zero sense. This would make only the physical object
| itself the art, and it wouldn't matter what form it has?
| monknomo wrote:
| Of course it makes sense - a print is different than an
| original, they have a different price, they have a
| different impact. Even when it is a very good print.
|
| For that matter, a limited run print has a different
| impact and value than an unlimited run print. Compare an
| original Warhol print of a can of soup, to a modern
| repro print, to an actual can of soup, to an I <3 NY
| t-shirt.
| pixl97 wrote:
| So digital art cannot exist?
| stale2002 wrote:
| Ok, well, AI art is also created by a human, in the same
| way that a photograph is taken by a human but goes
| through the camera machine.
| jodrellblank wrote:
| Calvin and Hobbes by Bill Watterson, July 1993 -
| https://imgur.com/a/JdHlOxm :
|
| Calvin: "A painting. Moving. Spiritually enriching.
| Sublime. High art!"
|
| Calvin: "The comic strip. Vapid. Juvenile. Commercial
| hack work. Low art."
|
| Calvin: "A painting of a comic strip panel. Sophisticated
| irony. Philosophically challenging. High art."
|
| Hobbes: "Suppose I draw a cartoon of a painting of a
| comic strip?"
|
| Calvin: "Sophomoric. Intellectually sterile. Low art."
| eternalban wrote:
| I've been doing art (drawing, painting, clay sculpture,
| etc.) since childhood. "And lord only knows" that I have
| indeed 'suffered' /g
|
| > "Art is inseparable from the _artist_ "
|
| That is pure sentiment and really a modern take on the
| function of art in the personal and social sense. As an
| artist, I derive _joy_ from the creative act. As an
| appreciator of works of art I generally do -not- care about
| the artist. Of course, the lives of influential humans
| (artist or not) can be interesting and certainly enrich
| one's experience of the artist's work, but it is not a
| fundamental requirement.
|
| Two days ago, the National Gallery of Art closed its
| _Sargent in Spain_ exhibition. (I almost feel sad for those
| who didn't get to see it.) Sargent was never really on my
| radar beyond the famous portraits. I still really don't
| know much about the man besides the fact that he visited
| Spain frequently, with friends and family in tow.
|
| But I am now, completely a Sargent admirer. Those works, on
| their own sans curation copy, are _magnificent_. And I am
| certain that even if I had walked into an anonymous
| exhibit, I would have walked out completely transported
| (which I was, dear reader; I pity those who missed it).
| coldcode wrote:
| As an artist my favorite definition of art has always
| been "An expression by an artist in a medium". You can't
| separate art from the artist without it being artifice.
| AI can simulate art but not the artist who created it.
| Sadly, we may soon live in a world where art, music,
| literature--in fact, all the creative arts--wind up as
| just machine-generated simulations of creativity.
| eternalban wrote:
| I am reminded of a scene (from a film*) depicting dear
| Ludwig van debuting a composition in a salon. Haydn was
| present. He sat through the performance and at the end,
| prompted by another, simply said ~"he puts too much of
| himself in his music".
|
| * _Eroica_ : https://www.imdb.com/title/tt0369400/
| mwigdahl wrote:
| I don't agree with this. The Lascaux cave paintings, for
| example, are moving pieces of art and yet we know nothing
| about the artist or artists. How many artists were there?
| What was the intent of each individual drawing? Were the
| artists Homo sapiens or Neanderthals? What makes them art
| is that we, the perceivers, make an imagined connection to
| the artist through the work. But that connection is
| entirely one-sided and based on our perceptions and
| knowledge and our _model_ of the artist and his or her
| intent. Humans have no problem reifying an artist where
| none exists and being just as moved as if the art were
| "authentically human-sourced".
| beezlebroxxxxxx wrote:
| The entire import of the Lascaux paintings is that they
| were made by humans tens of thousands of years ago and
| seem to serve as something more than mere marks. We know
| humans (or at
| least individuals with agency) created them and so there
| is something awe-inspiring and fascinating about the
| connections between ourselves and these prehistoric
| works, and yet they are ultimately still something of a
| mystery for precisely the reasons you say.
|
| > But that connection is entirely one-sided and based on
| our perceptions and knowledge and our _model_ of the
| artist and his or her intent. Humans have no problem
| reifying an artist where none exists and being just as
| moved as if the art were "authentically human-sourced".
|
| You're over-emphasizing how one-sided looking at
| something like the Lascaux paintings is. Their value is
| _not_ the same as that of a beautiful natural
| phenomenon, like a fascinating stalagmite that seems to
| be a sculpture; it is precisely the human agency that we
| understand in them (even if we cannot explicitly
| understand the _use_ of them, that is, their meaning)
| and connect with that makes them so important and
| profound as a means of connecting --- tenuous as it
| might seem --- to prehistory. We've been
| making "stick people" and finger painting for tens of
| thousands of years.
|
| You're right that we don't know who the artists were in
| any explicit sense, but we do understand that they were
| human, and in quite fundamental ways, us as well.
|
| Generative AI art is really more like a beautiful natural
| landscape. Lacking agency, it nonetheless appeals to our
| aesthetic sensibilities without being misconceived as art
| from an artist. It is output, not imaginative creation.
| mwigdahl wrote:
| If artistic value is not one-sided and tied to the
| transformations in the observer's mind, you get into
| situations where you invalidate the experiences of
| thousands of people because the "authentic human art"
| they were inspired by turns out to be a mechanical
| forgery, or the aboriginal sculpture some archaeologist
| discovered, admired and wrote articles interpreting is
| discovered to be unworked stone.
|
| Your position allows a dead person to have their
| experiences retroactively cheapened because of carbon
| dating and microstructural analysis. "How sad, it wasn't
| _really_ art though." You can define art that way, but
| you end up with an immaterial, axiomatic essentialism
| that seems practically useful only for drawing a circle
| and placing certain desirable artifacts inside and other
| indistinguishable artifacts outside.
| beezlebroxxxxxx wrote:
| You're mixing up a lot of concepts around art into one
| thing. Aboriginal art has nothing really to do with
| generative AI art at the level that I'm talking about
| (aboriginals are human after all, and we're talking about
| the distinction between human art and non-human objects
| that are aesthetically appealing), but I will address
| your points.
|
| > If artistic value is not one-sided and tied to the
| transformations in the observer's mind
|
| Art is public and needs no relation to transformations in
| the observer's mind. Art is a public concept in language
| related to human behavior, manifesting and reflecting
| certain human behaviors and abilities, like imagination.
|
| > you get into situations where you invalidate the
| experiences of thousands of people because the "authentic
| human art" they were inspired by turns out to be a
| mechanical forgery
|
| This is pretty unclear. We have the concept of forgery,
| and it is not a new one; just because something was
| beautiful and inspiring doesn't mean it's art (think of
| a beautiful and inspiring coastline). If thousands of
| people fell prey to a forgery... so? A forgery is
| defined in relation to the real, so why not show them
| the actual existing work of art, or simply explain where
| it came from and see what they say? History is rife with
| people realizing they were lied to.
|
| > or the aboriginal sculpture some archaeologist
| discovered, admired and wrote articles interpreting is
| discovered to be unworked stone.
|
| Sculpture has a long tradition and is often understood as
| art by communicating that tradition. That's aboriginal
| sculpture, which is understood and put into context by
| present day members of that aboriginal culture or by
| people who have studied it. The flip side is things like
| "talismanic" objects, which have often been later put
| into context as unworked stone or completely different
| objects. That's simply archeology. Some artistic
| traditions are "lost", we only know of them through
| existing records. That's just history. Some may be lost
| in a more explicit sense in which they are unknown
| unknowns, but then that is just hypothesizing.
|
| > Your position allows a dead person to have their
| experiences retroactively cheapened because of carbon
| dating and microstructural analysis. "How sad, it wasn't
| _really_ art though."
|
| I don't know why you come to that conclusion. My point is
| pretty clear. Art is understood through the context of
| human agency. If we have the context and ability to place
| and recognize that in a work, then amongst other elements
| (aesthetics, for instance), we
| generally refer to it as a work of art. There is a more
| casual way of saying such and such is "a work of art" ---
| but that way of saying it just means "aesthetically
| pleasing". There is a difference between the work of art
| that is a painting or a sculpture or a dance, and the
| "work of art" that is a beautiful landscape, and that is
| largely human agency and the use of imagination. So when
| you say:
|
| > You can define art that way, but you end up with an
| immaterial, axiomatic essentialism that seems practically
| useful only for drawing a circle and placing certain
| desirable artifacts inside and other indistinguishable
| artifacts outside.
|
| You're ignoring my point: it's not about desirability,
| it's about insisting on the distinguishing
| characteristic of human agency which is not there in
| generative AI art. The study of art is largely about
| putting things into their context and, if anything, is
| extremely welcoming of non-traditional practices (think
| much conceptual art), but the through-line throughout is
| still human agency. That difference persists whether we
| find generative AI art beautiful or not; it is
| still _generative AI "art"_ and not human _art_ with all
| that entails.
| pixl97 wrote:
| Let's say today you printed out a number of human-made
| artworks and a number of AI made artworks and put them in
| a vault that would last 10,000 years. There are no
| obvious distinguishable marks saying which is which.
|
| Then tomorrow there is a nuclear war and humanity is
| devastated and takes thousands of years to rebuild itself
| for one reason or another.
|
| Now, when those future humans find your vault and dig up
| the art, are they somehow going to intrinsically know
| that AI did some of them? Especially if they don't have
| computing technologies like we do? No, not at all. They
| are going to assign their own feelings and views
| depending on the culture they developed, and attach
| rather random feelings to whatever they think we were
| doing at the time. We make up context.
| wizzwizz4 wrote:
| > _or the aboriginal sculpture some archaeologist
| discovered, admired and wrote articles interpreting is
| discovered to be unworked stone._
|
| No, you shift the attribution. The art is not from the
| fictional sculptor, but from the archaeologist: the
| artefact is not the stone, but the articles.
|
| > _Your position allows a dead person to have their
| experiences retroactively cheapened because of carbon
| dating and microstructural analysis._
|
| This isn't unique to this situation. If you risk your
| life paragliding over the ocean to drop a "bomb" far away
| from anyone it could hurt, and nearly drown making your
| way back, only to realise there _was_ no bomb and it was
| just some briefcase? That's "retroactively cheapened"
| not just your experiences, but your actions.
|
| And yet, you _were_ willing to risk your life in that
| way.
|
| > _the "authentic human art" they were inspired by turns
| out to be a mechanical forgery,_
|
| If they were _inspired_, how does the source of
| inspiration affect the validity or the meaning of what
| they were inspired to do? Sure, it might lessen it in
| _some_ ways, but it doesn't obliterate it entirely. In
| fact, it can reveal new meaning.
| rhn_mk1 wrote:
| Does a mountain have meaning? Does a flower? They don't
| suffer (probably), yet people find meaning in them and call
| them beautiful.
|
| The unfeeling geology did not make a mountain "art". It's up
| to us to see the meaning.
|
| Even if the unfeeling machine learning does not make "art",
| can't its products still be beautiful?
| bigbluedots wrote:
| Thought experiment:
|
| There are two fairly similar paintings on a wall in a
| gallery. Both are technically impressive and depict beautiful
| scenes of nature. One was produced by a human, the other was
| not. Visitors to the gallery don't know which is which.
|
| Question: Where is suffering, or humanity, a necessary
| ingredient for these works to have meaning? Shouldn't one of
| the works have more meaning than the other by virtue of
| having been created by a human?
| dagw wrote:
| In this case they can only judge the relative aesthetics of
| the two works, not their artistic value. Aesthetics is only
| loosely correlated to something's "value" as "art", and art
| can only be truly judged in context of its creation. Lots
| of great art is ugly and lots of beautiful things aren't
| art.
|
| In my opinion.
| pixl97 wrote:
| > and art can only be truly judged in context of its
| creation
|
| tl;dr if you want to scam dagw then make up a compelling
| story behind the art.
|
| For the vast majority of the things you see in this
| world, context will be lost and history will be
| manipulated or incorrect. If you're judging what you're
| looking at based on its story, then the art isn't the
| object, but the
| creator of the story.
| dagw wrote:
| _tl;dr if you want to scam dagw then make up a compelling
| story behind the art._
|
| I mean, sure I guess. Tell me something is a lost
| Michelangelo and I will judge it very differently than if
| you told me it was a half way decent forgery from the
| 1970s. I find this rather uncontroversial.
|
| _For the vast majority of the things you see in this
| world context will be lost_
|
| And when that context is lost something of great
| potential value is lost with it and the physical artefact
| is much less interesting because of it. Even a mundane
| thing owned by a famous person or that has been part of
| a famous event is always more interesting and valuable than
| the same thing without any context.
|
| _the art isn't the object, but the creator of the
| story._
|
| Do you think the thousands of people that travel from all
| over the world and line up for hours to see the Mona Lisa
| are there to see a pretty good portrait that some
| merchant commissioned of his wife, or to partake in the
| story of that painting and its creator? If they actually
| only cared about the object as an artefact and an example
| of early 16th century painting, they'd be much better off
| studying high resolution digital images of it online.
| TheOtherHobbes wrote:
| "Technically impressive and beautiful" is a very narrow and
| poor definition of art, because a lot of art is neither.
|
| Example: Unknown Pleasures by Joy Division. Certainly not a
| beautiful nature scene, and recorded when the band were
| more or less musically illiterate and almost technically
| illiterate too. But still considered a breakthrough post-
| punk album and hugely significant to their fans.
|
| It would be more accurate to compare AI generated
| landscapes with - say - Van Gogh.
|
| Here's an AI:
|
| https://superrare.com/artwork/ai-landscape-1868
|
| Here's a Van Gogh:
|
| https://pt.m.wikipedia.org/wiki/Ficheiro:Vincent_van_Gogh_-
| _...
|
| The AI image is pretty, but it's also pretty by the
| numbers. It's not doing anything surprising or original.
|
| The Van Gogh is _weird_. There's a tilted horizon,
| everything is moving in a slightly unsettling way, and the
| colours accurately mimic the bleached-out feel of a bright
| summer day. The result is poetically distorted but also
| unstable and slightly ominous.
|
| The instability became more and more obvious in the later
| paintings, until eventually you get The Starry Night, which
| looks almost nothing like a photo of a real night scene and
| everything like an almost hysterically poetic view of the
| night sky.
|
| https://en.wikipedia.org/wiki/The_Starry_Night#/media/File:
| V...
|
| Most artists can't do this. There's a library of
| standard distortion techniques that lesser artists use
| to look "arty" without any deeper metaphorical or
| subjective expression, and AI will probably put them out
| of work.
|
| But it's clearly wrong to suggest that AI can feel,
| communicate, and invent _an intense and original
| subjectivity_ in the way the best artists do.
|
| It's a lot like CGI in movies. It's often spectacular, but
| compared to going to see a play with good real actors and
| maybe a few stage effects it doesn't engage the imagination
| with anything like the same skill and intensity.
| turmeric_root wrote:
| > Visitors to the gallery don't know which is which.
|
| This is why I read the little plaques next to exhibits
| when I go to museums.
| thenerdhead wrote:
| I forget where I heard the quote, but it was something along
| the lines of "if the artist understands their art, it's
| propaganda". It was alluding to the unconscious doing the
| work through the artist and the pain/process needed to do so.
| green_on_black wrote:
| On the other hand, I _do_ care. Because I just want to have
| fun.
| selimnairb wrote:
| That's fine. But don't confuse what is being produced with
| art.
| gnomewrecker wrote:
| I think defining art _wholly and solely_ by the
| intentions (and humanity) of the artist is clear cut at
| least, but not very illuminating, because for the person
| experiencing the art these properties are in general
| unknowable.
|
| 100 years hence you find a beautiful image. Is it art?
| Who knows -- we don't know whether the artist intended it
| to be, nor whether they were even human.
| anileated wrote:
| "I like this" != "this is art". The fact that an image
| you may have found looks good to you without context is
| orthogonal to whether it is art.
|
| (If you are certain that at least _a human_ has produced
| such an image, you could speculate about and attempt to
| empathize with that unknown human's internal state of
| mind--lifting the image to the level of art--but as of
| recently you'd have to rule out that an unthinking black
| box has produced it.)
|
| You may be inspired by it to create art--but since art is
| fundamentally a way of communication, when there is no
| self to communicate there's no art.
| pixl97 wrote:
| The problem with your definition is that art becomes
| worthless...
|
| Art, in a sense, is no different from money. If it can
| be counterfeited in such a manner that a double-blind
| observer has no means of telling an original bill
| (human-made art) from a counterfeit (AI art), then your
| entire system of value is broken. Suddenly your value
| system is about authenticating that a person made the
| art instead of a machine (and the fallout when you find
| that some of your favorite future artworks were made by
| a machine).
|
| The problem comes back down to inaccurate language on
| our part. We use "art" as a word for both the creator
| and the interpreter/viewer. This, it turns out, is a
| failure whose ramifications we could not have understood
| at the time.
| throwaway290 wrote:
| Your first sentence contradicts the second one.
| ajmurmann wrote:
| I think the value of AI-generated "art" is that it can fill the
| gaps that must be filled but that nobody cares much
| about: places where we'd use stock art, where we
| couldn't be bothered to hire a competent translator in
| the past, generating a silly placeholder logo for my
| side project till I can hire a real designer, etc.
| fullstackchris wrote:
| I've been saying for years now that we already achieved
| Keynes's famous 15-hour work week, possibly as much as a
| decade ago, but the workaday grind mentality has kept us
| all cooped up at desks for 40+ hours a week.
|
| There are a few sentiments sneaking in, though: you now
| often hear stories of people working from home doing
| maybe 1-2 hours of real work and doing just fine. The
| same holds even for some desk jobs: at my old enterprise
| job, between meetings, coffee breaks, random discussions
| and so on, I'd say only 3-4 hours of an average day was
| real constructive work actually _doing_ something.
| thenerdhead wrote:
| Yes exactly. If humans lose the ability to read, write, edit,
| and think critically, we lose the value of even understanding
| what is "good".
|
| I hope these tools give us more time to revisit the skills we
| aren't improving because we're constantly busy or distracted.
| weatherlite wrote:
| > But humans are built to work and we're only just beginning to
| feel the effects of giving up that privilege
|
| But we can use humans where we need them. We still really
| really need them in many places. Why can't we have a teacher
| teach a classroom of 5 kids instead of 30? Or one nurse on 3
| patients instead of 20? Why can't we have a person whose job it
| is to check up on lonely people or old people? These are things
| we collectively decided don't have much economic value,
| but we can just the same collectively decide that they do.
|
| Governments need to step in because the "free" market isn't
| gonna cut it anymore.
| farleykr wrote:
| Your comment is phrased like you're disagreeing with or
| challenging mine. But I think we're in agreement? I didn't
| mention the specific jobs that you did but I agree
| wholeheartedly that we need people to do those jobs. And I'll
| go one step further and say they're important and should be
| done with great skill and care whether or not they have
| economic value, especially because they have to do with
| caring for those in our population that have some of the
| greatest needs. Of course economic value drives the
| sustainability of professions in a lot of ways, but my hope
| is always that if we prioritize skill and care in our
| professions then economic value and sustainability will
| follow.
| fabbari wrote:
| > But humans are built to work and we're only just beginning to
| feel the effects of giving up that privilege.
|
| I don't know how I feel about this. I believe humans may enjoy
| work - I often say that if I won the lottery I would still sit
| in front of a computer coding and experimenting, creating
| software because I enjoy it - but that's not where the value of
| being human comes from.
|
| I think having to work and enjoying doing a specific job are
| two different things, and I am just lucky that my Venn
| diagram is a single circle. Many, if not most, people
| would not be doing
| the job they are doing given an alternative.
|
| When the _needed_ work is fully automated and done by machines
| /AI people will find a better use of their time. I believe our
| current economic model and social architecture are not equipped
| for that shift, but that's another long story.
|
| [Edited: fixed typo]
| farleykr wrote:
| To me, work is inherently noble. It's the forces that corrupt
| it that are the problem, not work itself. Getting to enjoy
| work is an unfortunately rare blessing but I also think
| enjoyment of work is more dependent on the individual's
| mindset about their work than we often are willing to admit.
| It's a very complicated puzzle.
| inkcapmushroom wrote:
| I don't understand what's inherently noble about being paid
| X dollars to sit at a desk and do something useless to
| society at large so my employer can make X*5 dollars.
| farleykr wrote:
| All the things you mentioned are what I mean by the
| forces that corrupt work. Yes we should be paid for our
| work, within reason. And we should get to do things that
| are inherently useful to others. But if you're doing
| something that's useless to society and your employer is
| exploiting that work then you're experiencing corrupted
| work. Not that it is easy to find in the world, but I am
| of the opinion that the core essence of work is making
| order out of disorder. You can do that by building
| pacemakers or tilling fields. There will always be things
| that corrupt work, unfortunately. But work,
| unadulterated, is a good thing. I'd be willing to bet
| that you have something you like to do that can be
| characterized as making order out of disorder, even if
| it's not at your job. That is work and it is good.
| inkcapmushroom wrote:
| Thank you for the explanation, which gives me a better
| idea of what you were talking about. It's definitely food
| for thought for those like me in pointless jobs.
| farleykr wrote:
| No sweat. I definitely don't want to downplay the reality
| of your frustrations with your job. It's just that the
| many facets of the topic of work are very meaningful to
| me and I have a lot of strong convictions about it. How
| to enjoy work or find meaning in it is a whole other
| conversation but I'm truly sorry your job sucks.
| EamonnMR wrote:
| People who enjoy the resulting concentration of wealth will
| find better things to do with their time. The much larger
| group of people who see their wealth diminish will not.
| potta_coffee wrote:
| My cynical take is that the rest of us will be funneled
| into endless war and plague scenarios until the population
| is small enough to be less of a threat to those who enjoy
| that concentrated wealth.
| rg111 wrote:
| > _I like most of the article but this is the crux for me. As I
| ruminate on the ideas and topics in the essay, I'm increasingly
| convicted there is inherent value in humans doing things
| regardless of whether an algorithm can produce a "better" end
| product. The value is not in the end product as much as the
| experience of making something._
|
| Exactly.
|
| People would have stopped playing chess after Deep Blue. But
| have they?
|
| Have world championships lost any attraction due to Deep Blue?
|
| Do fewer people learn Go and enjoy it because of
| AlphaGo?
|
| The same way, people will still be interested in art and music
| produced by humans.
|
| If you prompt ChatGPT:
|
| "write a book about personal experience of growing up in
| talib#n ruled Kabul"
|
| And suppose there's an actual human with that experience
| who decides to write the same book.
|
| Is there anyone who would have bought the latter but now
| decides to read the former and not spend the money? Is
| there a single person like that? I don't think so.
|
| The choice leans on the other side in case of stock
| photography, pamphlet pictures, sound effects, etc.
|
| The choice in porn (especially pictures) is blurry. We already
| have egirls and hent#i.
|
| However, for real art and real music, there will be just as
| many people paying for them as there are now.
| pixl97 wrote:
| >Have world championships lost any attraction due to Deep
| Blue?
|
| You mean after last years vibrating anal bead scandal?
| farleykr wrote:
| The p#rn conversation is a really weird one. Is it better to
| consume computer-generated p#rn so we don't have to worry
| about all the ethical issues that go along with people
| performing for the pleasure of others? Are we losing our
| humanity in ways we can't yet understand by letting machines
| pleasure us?
| NateEag wrote:
| > The choice in porn (especially pictures) is blurry. We
| already have egirls and hent#i.
|
| Porn is an early form of "opting out of reality". It's often
| (usually, I think?) a substitute for actually having sex
| and/or a long-term sexual relationship.
|
| So, it should be no surprise that it's already diverged from
| reality and will continue to do so.
| RandomLensman wrote:
| Until such time as people pay more to talk to an AI than a
| human, this will just make the split between mass market and
| high end products and services bigger.
| wnkrshm wrote:
| We already do: we talk into the void on social media (like
| this post), so the opportunity cost is already high. In the
| future, we'll get the bots talking back from the digital
| abyss.
| RandomLensman wrote:
| The opportunity cost for most is probably way below $1k per
| hour - to compare it with the direct cost of some high-priced
| professional services.
| gnomewrecker wrote:
| > I'm increasingly convicted there is inherent value in humans
| doing things
|
| And in many fields I think many (most?) Americans at least
| would agree with you -- there's some special value in a
| handmade product, regardless of whether a machine-made
| equivalent would be technically superior. For instance, a
| leather bag or a wooden chair.
|
| (Am in US, hence "American" qualification).
| pixl97 wrote:
| The problem with 'hand made' is going to be the same problem
| we see with 'human made' art in the future.
|
| There are $incentives$ to lie about your product and sell a
| mass produced one as authentic.
| Yhippa wrote:
| After using ChatGPT for a bit, when it comes to business
| interactions, I feel completely naked if I don't run
| something by it before sending it out to a broad audience,
| via email for example. I can definitely see a lot of
| business-related content trending towards the genericism
| described in the article, as a way for the communicator to
| appear as "correct" as possible.
|
| To try to clarify my argument: when money is on the line, like
| people's perception of you at a company, you want to put your
| best foot forward. So why not run something through ChatGPT as
| insurance to make sure that happens?
| freediver wrote:
| Because outsourcing your thinking to AI is a bad idea in the
| long term. There is a fine line between using something as a
| tool and letting it replace a crucial component of what makes
| you a human being. I face this problem myself: should I use
| AI to tweak a business email, or improve my own skill at
| writing it? One needs to be careful.
| sgsag33 wrote:
| Most content that is actively consumed is produced by humans
| - if you claim otherwise, please provide at least some proof.
| [deleted]
| eschneider wrote:
| Of course, this article is exactly the sort of thing a chatbot
| would produce...
| jamesbrady wrote:
| This is not true.
| jillesvangurp wrote:
| We need a strong association between content and its
| creators, for example by using digital signatures. It's not
| hard; we've had this technology for ages. Yet the vast
| majority of content on the web is unsigned (with the
| exception of SSL certificates, which merely prove content
| came from facebook.com or wherever, not from a particular
| person on facebook). Bots, troll farms, spammers, scammers,
| etc. use this to suggest they are more reputable than they
| really are. Users can't tell the difference.
|
| With verified creators signing their content, there is zero
| confusion about what source the content came from. That source
| might still use AI to produce content of course. Or it might be
| an AI.
|
| The web might fill up with content from all sorts of sources but
| the only content you should care about should come from sources
| that are reputable as evidenced by their body of signed work over
| time. It doesn't matter if it's a bot or a human: reputation
| is hard to fake for a bot. And people or AIs moderating
| content can just flag content sources by their keys. So now
| you can do things like figure out what the reputation of a
| source is relative to other sources you trust.
|
| It's not that hard technically. We've had public/private key
| signatures for ages; they never caught on for email. Some
| chat networks use end-to-end encryption, but most public
| information on the web is effectively unsigned.
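| The sign-and-build-reputation scheme described above can be
| sketched in a few lines. This is a toy illustration only: it
| uses Python's stdlib hmac as a symmetric stand-in for a real
| public-key signature scheme such as Ed25519, and all names
| and the scoring rule are made up.

```python
# Toy sketch: signed content plus a reputation ledger keyed by
# signer key. HMAC stands in for a real public-key signature;
# in practice the signer holds a private key and publishes the
# public key for verification.
import hashlib
import hmac
from collections import defaultdict

def sign(key: bytes, content: bytes) -> bytes:
    return hmac.new(key, content, hashlib.sha256).digest()

def verify(key: bytes, content: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, content), sig)

# Reputation per signer key (hypothetical scoring).
reputation = defaultdict(int)

def ingest(key: bytes, content: bytes, sig: bytes) -> bool:
    """Accept content only if the signature checks out, and
    adjust the signer's reputation either way."""
    if verify(key, content, sig):
        reputation[key] += 1
        return True
    reputation[key] -= 10  # forged content is penalized heavily
    return False

alice = b"alice-secret-key"
post = b"my signed blog post"
assert ingest(alice, post, sign(alice, post))        # accepted
assert not ingest(alice, post, b"forged-signature")  # rejected
```

| A real deployment would also need key distribution and
| revocation, which is where such schemes have historically
| stalled.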
| GalenErso wrote:
| I don't see how that would be effective without fundamentally
| changing the structure of the Internet.
|
| For example, I have access to your HN comment history. I could
| easily start my own blog with insights Ctrl+C Ctrl+V'd from
| HN'ers' comment histories, and sign it as if it were my own.
|
| Unless we ditch graphical user interfaces and the HTTP/S
| protocol and revert to 80s computing, with a command line
| interface for everything.
| __MatrixMan__ wrote:
| > Unless we ditch graphical user interfaces and the HTTP/S
| protocol and revert to 80s computing, with a command line
| interface for everything.
|
| I think we should do precisely that.
|
| SSL is all about explicitly trusting server names and
| implicitly trusting the data they serve. The whiz-bang UIs
| of the modern web are predicated on blindly running whatever
| code those servers give you. That's why we have all of these
| asinine trainings on how not to click the malicious link.
|
| It's time we started explicitly trusting people and
| implicitly trusting the data that they sign and let which
| server we're talking to fade into an implementation detail.
| If we trust the data it served for other reasons, it doesn't
| matter if we trust the server. We can just ignore whatever
| malware showed up because it's not signed by someone we
| trust.
|
| Besides, imagine what we could get done if we didn't have to
| stop to rebuild the UI for maximal engagement every few
| months. We could, I dunno, compete on merit.
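| The "ignore whatever isn't signed by someone we trust" idea
| can be sketched as a feed filter. Again a toy sketch only,
| reusing stdlib HMAC as a stand-in for real public-key
| signatures; the feed format and names are invented.

```python
# Toy sketch: keep feed items only if they verify against a
# key we explicitly trust; everything else (unsigned content,
# unknown signers, tampered items) is silently dropped.
import hashlib
import hmac

def sign(key: bytes, content: bytes) -> bytes:
    return hmac.new(key, content, hashlib.sha256).digest()

def trusted_only(feed, trusted_keys):
    """Yield (signer, content) for items whose signature
    verifies against a trusted signer's key."""
    for signer, content, sig in feed:
        key = trusted_keys.get(signer)
        if key and hmac.compare_digest(sign(key, content), sig):
            yield signer, content

trusted = {"alice": b"alice-key"}
feed = [
    ("alice", b"hello", sign(b"alice-key", b"hello")),  # kept
    ("mallory", b"malware", b"bogus-signature"),        # dropped
    ("alice", b"tampered", b"bad-signature"),           # dropped
]
kept = list(trusted_only(feed, trusted))
assert kept == [("alice", b"hello")]
```

| Which server delivered each item never enters the check, so
| transport really does become an implementation detail.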
___________________________________________________________________
(page generated 2023-01-04 23:01 UTC)