[HN Gopher] The Overlords Finally Showed Up
___________________________________________________________________
The Overlords Finally Showed Up
Author : DanielBMarkham
Score : 88 points
Date : 2022-12-25 11:56 UTC (11 hours ago)
(HTM) web link (danielbmarkham.com)
(TXT) w3m dump (danielbmarkham.com)
| pushcx wrote:
| > Back in the day, all the tech folks hung out online at a place
| called slashdot (/. - CLI folks will get it)
|
| It's not a CLI reference. Slashdot was named for how it sounds
| read aloud: https://slashdot.org/faq/slashmeta.shtml
|
| > "Slashdot" is an intentionally obnoxious URL. When Rob
| registered the domain http://slashdot.org, he wanted a URL that
| was confusing when read aloud. (Try it!)
| jraph wrote:
| I never noticed, since I read it in French (and usually skip
| http://). Indeed quite confusing in English.
|
| The thing that would make sense for the CLI would be ./ (dot
| slash) I guess.
| psychphysic wrote:
| Now I want http://www.nowait4ws.com
| stefan_ wrote:
| The irony is we can now look forward to ChatGPT repeating the
| "Slashdot for CLI folks" line. The future has truly arrived.
| fragmede wrote:
| Not until it tells me about my hot grits and the Natalie
| Portman statue collection.
| kshay wrote:
| Reminds me of https://www.mcsweeneys.net/articles/e-mail-
| addresses-it-woul...
| notdotat wrote:
| See also: dot@dotat.at
|
| I don't know them, but their email addy is legendary.
| nathan_compton wrote:
| I think the problem with this narrative is that it assumes that
| the _online_ is the only reality. For the foreseeable future AIs
| will be restricted to the digital world. I know it's hard for us
| terminally online types to get it, but in fact most people
| mostly live in the real world.
|
| And even for things with epistemological import, it's hard to
| imagine (current) language model based AI having a big impact
| beyond making certain things I might have done with a search
| engine a little more convenient. If I need to know something
| of consequence, I'll still turn to a textbook or a bona fide
| human expert that I work with.
| daniel_reetz wrote:
| It's more subtle than that. The AIs and algorithms are being
| employed to influence our behavior (in the most common case -
| to make us purchase things). We manifest AI into the real world
| through our behavior.
| tazedsoul wrote:
| It is written in the New Testament scriptures and the book of
| Revelation.
|
| The author wrote, "It's like tech is making each one of us our
| own little village with a computer priest."
|
| Indeed it is. Make no mistake. We are building a false god in
| hopes that it will serve us. However, this thing is not of the
| creator but man. The technologists have forgotten history.
| lifeisstillgood wrote:
| More and more I think we need proof of being human to participate
| in the "town square". It essentially means the end of anonimity
| online, but I struggle to see how we overcome bots without it.
| Nasrudith wrote:
| I struggle to see how that would help. There are already plenty
| of paid shills out there. Just feed your shills the script and
| once again you have a sacrifice of anonymity for nothing. Like
| how real-name policies just made people double down.
|
| Frankly I find it disturbing that people jump straight to
| throwing away rights as the solution.
| lifeisstillgood wrote:
| Honestly I don't see anonymity as a right. It is a sticking
| plaster defence against the loss of other rights and
| protections.
|
| Yes it is _useful_ for people to organise against oppressive
| regimes (either Amazon strikebreakers in Wisconsin or police
| / security forces in Tehran) but we really should have a
| clear and direct solution to Amazon in a democratic society.
| And if we cannot then anonymity is a poor second best.
|
| Secondly, yeah you want to be a paid shill for Russia / BP /
| Qatar etc go for it. Reputation matters. Maybe some kind of
| labelling / fair advertising is a good idea "all opinions
| expressed here have been paid for by tonight's sponsor
| Marlboro."
|
| I would be interested in hearing from a proper human rights
| lawyer - anonymity is going to have a lot of case law surely?
| onlyrealcuzzo wrote:
| In the future, I imagine part of the web is anonymous and the
| signal-to-noise ratio is 1:1,000,000 - and the other part of
| the web is authenticated and the signal-to-noise ratio is much
| better.
| CoastalCoder wrote:
| > "I for one welcome our new X overlords" ... This quote began
| back in 1905 with H.G. Wells' short story "Empire of the Ants"
| and has taken on a life of its own, as shown in this Simpsons
| clip.
|
| I never knew that. I always assumed it started with a line from
| Half-Life 2.
| Finnucane wrote:
| The story inspired the Simpsons line, but the line doesn't
| actually appear in the story.
| college_physics wrote:
| > "spammers are evolving into something we are not able to
| recognize as spam"
|
| There is something deeply sad (if not frightening) about inventing
| an enormously powerful digital _augmentation_ technology and using
| it primarily to _diminish_ our humanity as it drowns in fake
| replicas.
| goatlover wrote:
| There's a fundamental question of why we're trying to produce
| Artificial Intelligence instead of Augmented (human)
| Intelligence. Why did the first term stick, and not the second,
| back in the '50s?
|
| Why do we want to replace humans instead of augment them? To
| increase shareholder profit? To wage war remotely? To hand
| governments more power over us? What is the goal?
| toastmaster11 wrote:
| Artificial intelligence creates a more compelling narrative.
| It's the "us .vs. them" dynamic that humans are so fond of.
| Even when it isn't an explicitly antagonistic relationship;
| an AI is still shown as an "other".
|
| Really, when you think about it, in a good number of stories
| humans might as well be augmented intelligences. We just
| aren't making a point of them being different, or at odds
| with the general human population.
| dale_glass wrote:
| They both go together.
|
| Image AI has inpainting. You can provide a sketch for the AI
| to follow, have it redo parts of the image, or have it extend
| the image. You can pull things into Photoshop and retouch
| there.
|
| Novel AI has story writing assistance. Or ChatGPT can be used
| to provide bits and pieces that can be then combined or
| improved until something good results. E.g., ChatGPT can write
| fairly passable poetry that one could use as a base for
| something actually good.
|
| So far all AI works much better when combined with a human
| that knows what they're doing. You can easily create hundreds
| of okay pictures, but an actual artist is still by far the
| best user for an image generator.
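|
| A minimal sketch of that inpainting workflow (assuming the open-
| source diffusers library and a Stable Diffusion inpainting
| checkpoint; names here are illustrative, not from the comment):
|
|     # pip install diffusers transformers torch pillow
|     import torch
|     from PIL import Image
|     from diffusers import StableDiffusionInpaintPipeline
|
|     pipe = StableDiffusionInpaintPipeline.from_pretrained(
|         "runwayml/stable-diffusion-inpainting",
|         torch_dtype=torch.float16,
|     ).to("cuda")
|
|     init = Image.open("photo.png").convert("RGB").resize((512, 512))
|     # White pixels in the mask get regenerated; black pixels are kept.
|     mask = Image.open("mask.png").convert("RGB").resize((512, 512))
|
|     out = pipe(
|         prompt="a red brick wall",  # what to paint into the masked area
|         image=init,
|         mask_image=mask,
|     ).images[0]
|     out.save("retouched.png")  # then pull into Photoshop and retouch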
| failuser wrote:
| Augmented intelligence is another aspect of automation. It does
| not sound as scary or create entirely new paradigms.
| peteradio wrote:
| Augmented intelligence is what we've been doing with
| computers since their inception.
| college_physics wrote:
| Whatever the goals, they derive from previous, less dangerous
| eras, where there was always at least a hope that an
| oppressed or abused segment of the population could escape:
| migrate, revolt, strike, whatever.
|
| We are entering a dystopic regime where our famous pale blue
| dot planet acts more as a mousetrap, where we play out an
| arms race of surveillance, advanced weaponry automation,
| psychological warfare and misinformation.
|
| It's not clear how we could put a lid on the artificial
| madness, but for sure it won't be achieved with half-measures.
| civopsec wrote:
| Welcome to modernity.
| drewcoo wrote:
| > The Dark Ages were a time where humanity forgot how to read and
| write, where one person, a priest, was the sole person in your
| social life that could tell you truth from fiction. The
| Enlightenment changed all of that by re-teaching literacy.
|
| And the article retcons literacy onto the past.
|
| > I think the worst part of this is how completely insane I
| probably sound to folks.
|
| No. It's worse for the reader. I stopped there.
| lotsofcows wrote:
| Love the western bias here.
|
| The Dark Ages were a time when the Christian West consciously
| stopped learning. Fortunately the Muslim world carried the
| banner for intellect for a century or two.
|
| Will the new dark age posited be western only? Or wealthy
| middle-class only? And will some other group develop as we
| stagnate on a feed of AI generated listicles?
| teilo wrote:
| Patently untrue. The myth of the cessation of learning during
| the Middle Ages has been thoroughly debunked, repeatedly, by
| modern historians. The learning never stopped, and the entire
| period of the Middle Ages was full of scientific and
| philosophical inquiry. The "rediscovery" of Greek learning
| did not begin with the Enlightenment, but during the Middle
| Ages. One would better describe it as the runway which set
| the stage for the explosion of learning due to the creation
| of the printing press.
| ehnto wrote:
| I often hear them referred to specifically as the Christian
| Dark Ages.
| njarboe wrote:
| It would be very useful if it were required by law to inform
| people when they are interacting with or consuming content from a
| ML/AI system. Like how you get a warning when you are being
| recorded on a call to customer service. Have a known icon present
| in a chat box when a computer system is responding. Same on
| social media, blog posts, articles, art, etc. Otherwise I think
| things are going to get very weird and lots of mental health
| problems are going to get worse.
| hungryforcodes wrote:
| Useful for whom though.
| skybrian wrote:
| It's a side point, but the bit about the Dark Ages and the
| Enlightenment is wrong and I recommend looking for real history
| to read if you're interested.
|
| Here is a thread about literacy in ancient Rome. Short answer is
| that it's complicated:
|
| https://www.reddit.com/r/AskHistorians/comments/3huswa/how_l...
| scandox wrote:
| My Medieval History teacher always said they were called The
| Dark Ages because the volume of primary sources was smaller -
| i.e. because they were dark to us.
| Svip wrote:
| Bret Devereaux covers this in much greater detail than I
| can ever hope to,[0] but essentially writing from the
| "Dark Ages" is often dismissed because it is overly
| religious in nature. There is not less writing from this
| period, but since the scribes of that era were primarily monks,
| they obviously had a bias in the works they chose
| to copy.
|
| The reason pagan writing by pre-Christian Roman and Greek
| writers remains is that these works were often considered
| good examples for learning Latin or Greek. (It also helps that a
| lot of Greek writing was preserved by Arab scribes.)
|
| Indeed, the majority of the material that survives to this
| day was _scribed_ in the "Dark Ages", because of the
| invention of parchment (which has much better longevity
| than its predecessor, papyrus). Devereaux also points out that
| other materials, generally just used for everyday
| things, were bad for longevity, and therefore few of those
| survive to this day. Climate also helps a lot with
| preservation,[1] hence why Egypt is generally
| overrepresented in everyday material from the Roman world.
|
| [0] https://acoup.blog/2022/01/14/collections-rome-decline-
| and-f...
|
| [1] https://acoup.blog/2022/12/02/collections-why-roman-
| egypt-wa...
| bazoom42 wrote:
| As originally used by Petrarca, "dark ages" referred to
| ignorance and lack of civilization compared to classical
| antiquity.
| DanielBMarkham wrote:
| Yup. I'm not going to argue with myself, but you are correct
| that this is a complex and fascinating topic that deserves more
| attention. Space constraints led me to vastly oversimplify. If
| you'd like more of a rebuttal/clarification, see
| https://danielbmarkham.com/epistemology-wars/ but by all means
| challenge my assumptions and premises.
| Bouncingsoul1 wrote:
| Sorry to say that the only thing I can agree with in this
| blog post is "Any one of these topics represent a possible
| future of lifetime study. Many, because of the need to
| shorten the discussion, have been purposefully simplified to
| the point of being arguably misrepresentative." For example,
| you are kind of implying that Christianity was responsible
| for the fall of Rome, as the people started to reject Caesars
| as gods. This is kinda strange; I mean, yeah, this hypothesis
| exists, but so do 210 others: https://courses.washington.edu/ro
| me250/gallery/ROME%20250/21... You need better sources for
| such claims. Anyway, IMO it is not a strong one, as
| Christianity and worldly rule made a good fit for the next
| 1300 years or so.
| skybrian wrote:
| I think it might have been better to write something like a
| more conventional book review of the books you read? As it
| is, it's unclear where you read about ancient Rome (for
| example). What comes from each book, versus your own
| opinions?
| whakim wrote:
| In addition to this description of the Dark Ages/Enlightenment
| being wrong, it's worth pointing out how incredibly eurocentric
| it is ("humanity forgot how to read"). Even just in popular
| imagination, the 6th-10th centuries (what TFA calls "The Dark
| Ages") are seen as the height of literary achievement in other
| parts of the world (e.g. China).
| smhg wrote:
| Funny how the article is exactly about this happening: the
| inability to tell what is true when reading.
|
| The article might be AI generated, your comment might be, both,
| or none of them. How will we know who is right? And if we
| can't, will we stop caring?
| tazoptica wrote:
| It's unpredictable what the future looks like, but I agree there
| is something new in AI's revealed capabilities.
|
| This could "frontfire" (that is, backfire against bad actors).
| The web has been flooded with corporate sock puppetry for a
| decade, drowning out legitimate content. What ChatGPT is scaring
| people about has been within epsilon dollars of being true for a
| long time.
| version_five wrote:
| Isn't AI (ChatGPT) basically too late to the party? If I'm
| understanding correctly, the thrust of the article is that
| language models can easily generate content where you don't know
| if it's true or not, with various opinions and points of view.
|
| But the internet already has that. And that's how ChatGPT can
| generate its text, because it's trained on the internet as a
| corpus. So it can make more of the same, slanted and untrustworthy
| and maybe indistinguishable from whether a person wrote it. But
| that's nothing new.
| greesil wrote:
| Yeah, but you can fine-tune it to generate whatever you want,
| given your own training corpus. Anybody whose job is to
| generate text will have a productivity boost.
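|
| As a rough sketch of what that fine-tuning looks like with the
| open-source stack (GPT-2 via Hugging Face transformers here; the
| corpus file and hyperparameters are placeholders):
|
|     # pip install transformers datasets torch
|     from datasets import load_dataset
|     from transformers import (AutoModelForCausalLM, AutoTokenizer,
|                               DataCollatorForLanguageModeling,
|                               Trainer, TrainingArguments)
|
|     tok = AutoTokenizer.from_pretrained("gpt2")
|     tok.pad_token = tok.eos_token
|     model = AutoModelForCausalLM.from_pretrained("gpt2")
|
|     # "Your own training corpus": plain text, one sample per line.
|     ds = load_dataset("text", data_files={"train": "corpus.txt"})
|     ds = ds.map(lambda b: tok(b["text"], truncation=True,
|                               max_length=512), batched=True)
|
|     trainer = Trainer(
|         model=model,
|         args=TrainingArguments(output_dir="ft", num_train_epochs=1),
|         train_dataset=ds["train"],
|         # mlm=False gives standard next-token (causal LM) training.
|         data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
|     )
|     trainer.train()  # the model now imitates whatever corpus.txt sounds like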
| grogenaut wrote:
| For a while, till we've watered down all content so much that
| ChatGPT has no actual useful material to pull from but its
| own spew. Much of the internet seems to be trending this way,
| with low-effort content-mill sites being propped up by
| high Google scores, which causes more low-effort content, which
| further lowers the quality of Google results. It'll be its
| own downfall, I tells ya....
| selimnairb wrote:
| We are rapidly going back to square one on the Web: to
| needing curated guides to good websites. This time, not
| because search engines don't exist, but because search
| results have been gamed by content mills pushing garbage
| content. It's almost like the web is turning itself inside
| out. The web you care about is to some extent "dark", i.e.
| not discoverable by conventional means.
|
| If systems like ChatGPT just become echo chambers (garbage
| content feeding AI generation of derivative garbage, ad
| infinitum), I kind of see things like ChatGPT simply
| turbocharging this creeping digital benightedness in the face of
| information overabundance.
| syntheweave wrote:
| I really feel like the world is "going dark" along
| multiple axes, not just the AI one. The globalized
| economy seems to have passed its peak as the resource
| dependencies and energy sources have begun a long shift
| away from 20th century norms, and there's likewise
| renewed interest in privacy technologies,
| personalizations/customizations and optimizing around
| local spaces, vs "connecting the planet" and proceeding
| further down the path of standardized everything.
|
| Which means that most likely, we have a future where we
| don't have any designated global sources of truth, as was
| the assumption going into the Enlightenment. It'll be a
| little more like Aristotle's time, where the true lessons
| were taught esoterically, by devising puzzle-like texts
| with intentional tricks and flaws. (This is the Leo
| Strauss thesis, which is explained very well by Arthur
| Melzer in "Philosophy Between the Lines".) To uncover the
| knowledge you have to demonstrate the critical thinking
| necessary to undermine its surface.
|
| It's something I should probably try writing fiction
| about. It could be a long trend or a temporary blip, but
| it's worthy of speculation.
| mach1ne wrote:
| I don't recognize the described risk. If spam gets so advanced
| that individual articles are indistinguishable from human-
| produced material, reputation steps in, demanding much more
| ability from the bots to keep up with human content producers. If
| language models or their descendants are able to overcome this
| barrier, then it doesn't really matter who produces the content.
| mdale wrote:
| We already live in an ecosystem flooded with "human-level" human
| spammers at mass scale.
|
| We are only grounded by systems of reputation & relationships.
| These systems will remain important as content becomes fully
| synthetic. E.g., LinkedIn, Google, and social networks already
| depend heavily on social and reputation graphs to keep
| spam at bay; if anything, it will be harder and harder for new
| ideas to go viral as the social and reputational filters are
| forced to assume synthetic content.
|
| Maybe AI learns to leverage/buy endorsement "content" posts
| from your friends and colleagues like celebrity endorsements
| operate today (as personalized commercials/text become very low
| cost).
|
| AI/people trying to drive value with AI will have to find ways
| to leverage social relationships to squeeze into relevance
| where content/ideas are very low cost.
| mef wrote:
| the issue is that the large language model doesn't understand
| what it's saying, cannot reason or come up with novel ideas,
| yet it can convince a human that it _is_ doing so.
|
| so then if the world begins to prefer AI-generated content, any
| question you ask the internet will only show you AI-generated
| answers, which can only offer you answers based on its training
| set, and over time a system with no new inputs just ends up
| being static generated from static, albeit static which is
| convincing enough for a human to accept it.
| Proven wrote:
| [dead]
| Peregrine1 wrote:
| If this is a problem that a bunch of people actually complain
| about, hardware makers will just introduce apis to let systems
| know a human is typing. Think verified buyer on Amazon
| lkrubner wrote:
| These two items have always been true, they do not describe the
| future.
|
| For instance:
|
| "If I take position A on something and you take position B, it's
| possible that we can both believe the other person is conversing
| in good faith."
|
| There are only 8 or 9 people in the world with whom I can have
| good and challenging conversations. I have to know they are
| arguing in good faith. I don't waste my time arguing with someone
| who might be arguing in bad faith. I don't waste my time arguing
| with someone who is lying, or who doesn't believe in anything
| they themselves say, or who is simply trying to manipulate me, or
| who is simply trying to insult me, or whose idea of veritable
| fact is utterly different from my own.
|
| There are also those who are simply engaged in mental work so
| different from my own that I would have to devote years of study
| before I could understand them, and I have no interest in
| investing those years. Gödel's incompleteness theorems remind us
| that for any given system of axioms there will be statements that
| are true but which cannot be proven true using only the given
| axioms. If I were to waste time engaging such people in
| conversation then they might end up saying something that is
| logically consistent but it would take me several years of effort
| to figure out that their statement was logically consistent, and
| without investing those years of effort, it simply sounds like
| they are speaking nonsense. But I don't have enough lifetimes to
| figure that out.
|
| Therefore, challenging conversations, that are personally useful,
| have always been limited to small groups of people.
|
| Likewise:
|
| "I think there's a future for folks who self-organize into
| interlocking circles of trust."
|
| That is the way things have worked for humans for at least 10,000
| years. We self-organize into interlocking circles of trust.
| That's how circles of friendship work.
| rnd0 wrote:
| >I don't waste my time arguing with someone who might be
| arguing in bad faith. I don't waste my time arguing with
| someone who is lying, or who doesn't believe in anything they
| themselves say, or who is simply trying to manipulate me, or
| who is simply trying to insult me, or whose idea of veritable
| fact is utterly different from my own.
|
| THANK YOU! Seriously, the whole "assume good faith" thing that
| you find on the tech sites (like here, like wikipedia) drives
| me completely up a wall.
|
| Why on EARTH would I 'assume good faith'? Am I a moron? Have I
| not read the comment sections of youtube, yahoo news, assorted
| disqus and news forums and trolltalk? Am I completely unaware
| of 4chan?
|
| I mean, I can understand that ideal back in the days of
| kuro5hin when Rusty was naively asking his trolls to tell him
| if they felt the need to attack k5 (and then nights in white
| satin happened...)...but that was almost a fucking quarter of a
| century ago!
|
| To hold out that kind of naivete in 2010 -much less 2023, is
| idiotic to be quite honest. Obviously I don't advocate for some
| thunderdome style noise factory -more to the point I think it's
| on us to be aware that we're dealing with adversaries, keep our
| emotions in check (don't be baited) and do our own part to keep
| the conversation from descending into chan style flamage.
|
| But assume good faith? Hell to the fucking naw -gtfo with that
| noise!
| dgf49 wrote:
| IMHO very strong statements are always wrong -> "Therefore,
| challenging conversations, that are personally useful, have
| always been limited to small groups of people.".
| bsder wrote:
| > IMHO very strong statements are always wrong
|
| History seems to contradict that and to demonstrate that the
| conservative, quiet "middle" is almost always wrong over
| time. The only people who are eventually "right" are
| invariably considered to have strong opinions and to be
| fringe by the "middle".
|
| This is almost tautological. The "middle" is where we are
| right now, and "more right" is, by definition, not where we
| are right now.
| ever1337 wrote:
| very strong statements are always wrong -> "very strong
| statements are always wrong"
| dctoedt wrote:
| "All categorical statements are bad, including this one."
| rnd0 wrote:
| ONLY a sith deals in absolutes!
| taormina wrote:
| Relevant XKCD: https://xkcd.com/810
| dsign wrote:
| Pretty interesting article I happen to agree with.
|
| As other commenters have noted, we no longer know if any content
| we _read_ on the Internet is legit. Soon enough, the problem will
| go beyond text and encompass image and video[1]. Give it a little
| longer, and entire digital personas will pop up. Next will be
| "feedback narratives", where groups of AIs, possibly de-
| federated, will use content produced by each other to produce
| even more content (similar to how fanlit works today, but in
| longer and longer chains).
|
| We will find mechanisms to cope of course, but it may well be
| that our kind-of-true-information free-lunch is about to end.
|
| [1]: It's still somewhat possible to discern images generated
| using AI.
| 082349872349872 wrote:
| > _We will find mechanisms to cope of course_
|
| One coping mechanism is double-checking to see if a primary
| source really does say what a secondary source claims. We don't
| need to wait for AIs to fail this test; wetware-generated text
| already often does.
|
| (eg https://news.ycombinator.com/item?id=34128199
|
| in a truly adversarial environment, of course, someone will be
| spamming alternative versions of primary datasets. and in this
| case I suspect we've already lost, because it is unusual for
| wetware to fake data in ways that pass statistical tests, but
| being statistically plausible could be routine for machine-
| faked data)
|
| [Edit: Upon reflection, replication is the (expensive!)
| countermeasure for possibly-faked datasets]
| lkrubner wrote:
| "One coping mechanism is double-checking to see if a primary
| source really does say what a secondary source claims."
|
| For many categories of research, especially works in other
| languages, this strategy won't work in the future because the
| primary sources will increasingly be contaminated by
| citations of sources built by AI and ML, unless by "primary
| source" you mean "dedicate 20 years of your life to learning
| this language and studying the original, ancient
| manuscripts."
|
| Since the large data sets and large language models are built
| by consuming enormous amounts of text, there is a risk such
| models will be contaminated in the future, if they start to
| consume text that is written by such AI/ML models. There is a
| sense where AI/ML amounts to a parasitic use of previously
| existing culture.
| JadeNB wrote:
| > For many categories of research, especially works in
| other languages, this strategy won't work in the future
| because the primary sources will increasingly be
| contaminated by citations of sources built by AI and ML,
| unless by "primary source" you mean "dedicate 20 years of
| your life to learning this language and studying the
| original, ancient manuscripts."
|
| Isn't that--not the "learning the language" part, but the
| "original" part--exactly what "primary source" _does_ mean?
|
| For some citations this may be a higher bar than others,
| but already the basic bar of, e.g., checking direct
| quotations, and discounting (in the sense of assigning less
| credence to, not necessarily completely disregarding) works
| that summarise their sources rather than quoting them
| directly, can eliminate some intentional or unintentional
| misrepresentation without requiring a disproportionate
| investment in learning to read the sources.
| foobazgt wrote:
| "There is a sense where AI/ML amounts to a parasitic use of
| previously existing culture."
|
| Spot on. I've been thinking about this, and you're the
| first person I've seen mention it. The dystopia is that we
| get stuck with uncreative AI that displaces most creative
| people. Then humanity writ large loses its inventiveness,
| and it's very hard to recover.
|
| I.e., if all artists move to using unstable diffusion, then
| who will remain to create and feed (useful) new art for
| unstable diffusion to consume? Who will be left to train
| new artists?
|
| It's like the information revolution in reverse.
| kevingadd wrote:
| Hopefully the primary source hasn't been edited since the
| secondary source was written!
| gernb wrote:
| Doesn't work on Wikipedia. They specifically don't take
| "primary sources". They only take secondary sources. If Obama
| were to try to edit his own article to correct some facts he
| knows are true, they'd delete his edits immediately unless he
| can point to a secondary source that backs up his facts.
|
| This is especially frustrating because often the secondary
| sources are from journalists who got the story wrong, made up
| "facts", incorrectly reported something, or just didn't
| understand the topic. But because their words are on some
| website, they're taken as the authority.
|
| I get it's a hard problem. The primary source could have
| reasons to lie. I've just seen too many cases where
| the secondary source was wrong, either when I was the primary
| source or when I was at the primary source.
|
| But it's only going to get worse as A.I. can start making up
| its own secondary sources and then link to them in a few
| weeks/months to edit Wikipedia.
| bombela wrote:
| I tried correcting the origins of the first prototype of
| the Docker software on Wikipedia. The people I worked with
| at the time can confirm the story, of course. But Wikipedia
| will not take our words as we are the literal primary
| sources. Instead they want secondary accounts, which
| happen to be whatever was shared with journalists years later
| by people who weren't there initially or didn't design and
| write the code.
|
| This is annoying at a personal level (a resume looks better
| with a Wikipedia mention). It did open my eyes to how
| history recounting really works, though.
| dragonwriter wrote:
| [One coping mechanism is double-checking to see if a
| primary source really does say what a secondary source
| claims.]
|
| > Doesn't work on Wikipedia.
|
| Sure it does.
|
| > They specifically don't take "primary sources".
|
| A primary source is not an acceptable direct source of an
| article, but a comparison of the cited secondary source to
| the primary sources it cites would be a legitimate grounds
| for challenging the use of the secondary source. That's
| kind of central to verifiability.
|
| > If Obama was to try to edit his own article to correct
| some facts he knows are true they'd delete his edits
| immediately
|
| That's true, but a different thing: editing based on
| personal knowledge without referencing a source isn't
| "using a primary source", its trying to make Wikipedia
| itself a primary source (what WP calls "original
| research".)
|
| OTOH, Wikipedia isn't the only Wikimedia project, several
| of the other ones can be secondary or primary sources
| (Wikinews, Wikibooks, and Wikiversity, notably.)
| DanielBMarkham wrote:
| One of the interesting things I've noticed is that these
| systems are being rewarded and trained based on emotive
| response, not our logical system of sources and proofs. Due
| to the enormous complexity of the models, our idea of cause-
| and-effect doesn't hold.
|
| To try to give a simple example, suppose I have a system that
| eventually wants to influence you on X. It may establish a
| months-long relationship with you online, becoming a follower
| and engaging in the kind of idle chitchat and sharing AI
| seems so good at.
|
| Once you're "ready", the appropriate time has arrived or
| you've accidentally created some signals that indicate an
| opening, this particular system will share a strong position
| on X.
|
| We automatically think that this means the system will
| support X, but I strongly believe that opposing the desired
| outcome will have more traction. "X sucks!" can be a powerful
| prompt for you to support X, and once you're locked into a
| position, research shows that you'll do all the work of
| convincing yourself how great X is.
|
| In this case, the sources argument fails, as it presupposes
| an independent, neutral observer trying to figure out what's
| really going on. A bunch of fake, half-assed evidence trails
| would provide more than enough support for X, the thing
| you've come to find so important. This is already being done
| with fake reviews and such. The difference is that it'll
| become invisible.
| 082349872349872 wrote:
| Good point. Ca. 1948 ' _" X sucks!" can be a powerful
| prompt for you to support X_' was one of the known uses of
| "black" propaganda, so it's already proved useful with
| wetware operatives.
|
| As a presumably inadvertent example of reverse propaganda,
| cf https://dakotavadams.substack.com/p/redpill-op
| DonHopkins wrote:
| I wholeheartedly agree that X sucks!
|
| https://donhopkins.medium.com/the-x-windows-
| disaster-128d398...
|
| >The X-Windows Disaster: This is Chapter 7 of the UNIX-
| HATERS Handbook. The X-Windows Disaster chapter was written
| by Don Hopkins.
|
| >X: The First Fully Modular Software Disaster
| darkerside wrote:
| > We automatically think that this means the system will
| support X, but I strongly believe the opposing the desired
| outcome will have more traction.
|
| The real power is just going to come from making people
| believe X matters more than anything else. X is great, and
| X sucks, are both red herrings.
|
| Edit: The way this power will be used will not be to sway
| opinion but to breed chaos by keeping the most irresolvable
| topics (pick your political third rail of choice) top of
| mind.
| DanielBMarkham wrote:
| That's an even better take as this kind of thing will
| tend to feed on itself.
|
| I continue to be amazed at how counterintuitive this all
| is. It may feed on itself, but in practice that might
| involve looping through dozens of somewhat adjacent
| topics in different subsets of the population. Such loops
| could take minutes or years.
| genewitch wrote:
| > It's somewhat possible still to discern images generated
| using AI.
|
| Currently, if it includes "photograph" and includes humans or
| animals, there is a chance that you can get a good image once
| out of every 50 generated images. By "good" I mean "does not
| need to be retouched". If I use one of the other models
| (instead of Stable Diffusion's model) that goes to 1 in 10
| images.
|
| If I use img2img and all of my understanding of the generation,
| I can probably generate 8 images and get 3 usable ones.
|
| All this is to say, depending on the subject, I could generate
| stuff that is indistinguishable from traditional art, in
| whatever medium or representation you want. And I can't make
| art, edit photos, etc. My limits of understanding of
| traditional art and editing are "crop/rotate" and adjusting the
| lights and darks to get the correct contrast and color
| rendering, and then save and publish.
|
| On your primary topic, something I have been noticing ramp up
| in the past 10 days or so is weird typos and even "new words"
| being created, where it looks like someone was typing on a
| phone and didn't bother to check the output. I'm not sure if
| it's some new AI/ML, or perhaps there's a bug in the "keyboard
| input" part of android, perhaps. It's like the opposite of your
| adding a dash in "defederated" because there's a red squiggly
| line under the word in the input box.
| charlie0 wrote:
| I was listening to a podcast by Balaji. He has an idea that
| could serve as counterpoint to these fears.
|
| In that podcast, he describes (mainly for research purposes) a
| blockchain where all data has been validated and verified
| before being added.
|
| I could see a similar chain being used by trustworthy
| organizations where truth has been validated, verified, and
| recorded in a public blockchain where anyone can see for
| themselves sources of information that have been used.
|
| The real question would be, can we find trustworthy
| organizations to compile the information?
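|
| The underlying data structure is simple enough to sketch: an
| append-only hash chain where each record commits to the previous
| one, so anyone can detect after-the-fact tampering (a toy
| illustration, not Balaji's actual design):
|
|     import hashlib, json
|
|     def record_hash(record: dict) -> str:
|         # Canonical JSON so the same record always hashes identically.
|         blob = json.dumps(record, sort_keys=True).encode()
|         return hashlib.sha256(blob).hexdigest()
|
|     def append(chain: list, claim: str, source_url: str) -> None:
|         # Validation/verification of the claim would happen here,
|         # before anything is added to the chain.
|         prev = record_hash(chain[-1]) if chain else "0" * 64
|         chain.append({"claim": claim, "source": source_url, "prev": prev})
|
|     def verify(chain: list) -> bool:
|         # Recompute every link; any edited record breaks the chain.
|         return all(chain[i]["prev"] == record_hash(chain[i - 1])
|                    for i in range(1, len(chain)))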
| slibhb wrote:
| Maybe this will lead to a re-centralization of information.
| We'll have to trust certain sources instead of taking reddit or
| some blog/video found through google at face value. Back to
| Walter Cronkite.
| wolpoli wrote:
| That is true. I went travelling recently and found that I got
| much better and more concise information from a guidebook than from
| spending hours reading travel sites/tourism agency sites/blogs
| that want me to book a tour with them. Centralization and
| curation are becoming very important.
| petesergeant wrote:
| Four years ago I wrote up this:
|
| https://github.com/pjlsergeant/multimedia-trust-and-
| certific...
|
| which is some thoughts on certifying and signing media
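|
| The signing mechanics themselves are the easy part; the hard part
| is the certification chain around the keys. A minimal sketch with
| Ed25519 via the Python cryptography library (not the scheme in the
| linked repo):
|
|     # pip install cryptography
|     from cryptography.exceptions import InvalidSignature
|     from cryptography.hazmat.primitives.asymmetric.ed25519 import (
|         Ed25519PrivateKey)
|
|     key = Ed25519PrivateKey.generate()
|
|     media = open("photo.jpg", "rb").read()
|     signature = key.sign(media)  # published alongside the file
|
|     # Anyone holding the creator's public key can check integrity.
|     try:
|         key.public_key().verify(signature, media)
|         print("intact, and signed by this key")
|     except InvalidSignature:
|         print("altered, or signed by someone else")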
| Tossrock wrote:
| Digital personas already exist, see eg VTubers, vocaloids,
| Charlotte Fang, etc
| simonbarker87 wrote:
| I'm very concerned about where this is going. I can absolutely
| see the benefit of this technology and have used it a couple of
| times in the last week to genuinely save me time and come up with
| some ideas.
|
| My concern is where this is going: if the marginal cost to
| produce, effectively, infinite content is zero, then what's the
| point here? What are we doing and where are we going?
|
| Is the aim to get to a point where humans don't have to do
| anything? It's all taken care of? Because if AI can make our art,
| create all media formats of our content, handle problems and do a
| lot of our physical tasks then ... what's the point?
|
| People look at me as if I'm mad, but in the space of 12 months
| we've gone from "AI can't even manage multiple timers" (not quite
| but if you live with a Siri then you get it) to "holy moly, I
| can't tell if that painting was made by a human or not" which is
| bonkers.
|
| I guess we need to see if we are at the start of the exponential
| curve or approaching a plateau.
| js8 wrote:
| I am looking forward to it. People will finally stop one-upping
| each other in the name of meritocracy, and society will
| equalize, because we will collectively recognize that no one is
| really better and more deserving.
|
| The industrial revolution did this with physical strength and
| fitness, and the recognition that muscle doesn't really matter
| anymore led to a general decrease in violence. In a similar way,
| we will stop mythologizing intelligence and creativity.
| rnd0 wrote:
| I may have the technical issues wrong, but I'm not sure once
| you factor in training that the cost _is_ marginal. ChatGPT
| isn't open source; and unlike Stable Diffusion, you can't simply
| self-host a trained AI equivalent.
|
| What makes ChatGPT effective ...again, as I understand it... is
| the training, which is a result of pouring millions of dollars
| into it. That may be pocket change in Silicon Valley, but outside
| of there it effectively puts it out of reach.
___________________________________________________________________
(page generated 2022-12-25 23:01 UTC)