[HN Gopher] Show HN: Death by AI
___________________________________________________________________
Show HN: Death by AI
I recently created a simple but efficient tool to help document
writers generate a list of acronyms and definitions. Here comes
ChatGPT. At home, it's all fun and games until I realize this
thing might be able to do exactly what my "dumb" website does:
"Give me the list of acronyms used in the previous text and their
definitions"... It had a good-looking answer. Of course... At
first I was bummed, but quickly realized that even though the
output looked neat, it was also not reliable (missing some
acronyms, adding some that were not in the original text, albeit
related to it). Anyway, the next day I stumbled upon
https://killedbygoogle.com and the two things collided in my head:
other people probably feel like AI is going to "kill" them or their
product too. Why not make a list of these things? A shameless
clone later (it's open source) https://deathbyai.com was born. I
was initially going for a more serious tone, but it soon felt
impossible to give a fair story for each item in ~5 lines, so I
went for something lighter and, hopefully, more fun. This is
neither an attempt to criticize AI nor one to bury its "victims."
On the contrary, I hope this page can highlight the potential of AI
while showing that the "old ways" can still be relevant. Hope you
like it. Feedback and contributions are welcome!
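For the curious, the "dumb" deterministic approach boils down to
something like the sketch below (a simplified illustration with
made-up regex heuristics, not the site's actual code):

    import re

    ACRONYM = re.compile(r"\b[A-Z]{2,6}\b")
    # Heuristic: capitalised words directly followed by "(ACRONYM)".
    DEFINITION = re.compile(r"((?:[A-Z][\w-]*\s+){2,6})\(([A-Z]{2,6})\)")

    def extract_acronyms(text):
        # Every all-caps token in the text is a candidate acronym.
        found = {token: None for token in ACRONYM.findall(text)}
        # Pair an acronym with a spelled-out form when one precedes it.
        for m in DEFINITION.finditer(text):
            found[m.group(2)] = m.group(1).strip()
        return found

    print(extract_acronyms(
        "Writers use the Application Programming Interface (API) "
        "and the CRM daily."
    ))
    # -> {'API': 'Application Programming Interface', 'CRM': None}

Crude as it is, it can only ever return acronyms that actually appear
in the text, which is exactly the guarantee the ChatGPT answer lacked.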
Author : demondragong
Score : 123 points
Date : 2022-12-20 17:17 UTC (5 hours ago)
(HTM) web link (deathbyai.com)
(TXT) w3m dump (deathbyai.com)
| rcpt wrote:
| Ignoring the mountains of regulation, the issue with doctors is
| that bedside manner plays a huge role in effective treatment.
| Patients are often hiding symptoms in order to avoid a scary
| diagnosis or unaware that something they've been experiencing for
| years is worth mentioning. There's a very human element involved
| in extracting the data you need and convincing people to follow
| treatments.
| pixl97 wrote:
| As people hook themselves up to more AI assistants and health
| monitoring applications I'm not sure if that will always remain
| true. Our telemetry devices may start to pick up on these
| things before we even do.
| [deleted]
| blfr wrote:
| Unparsable submission title but a cool site.
| netsharc wrote:
| The "RIP-off" is a terrible pun that someone would utter and
| say "Get it, do you get it, because it involves death, and it's
| a copy, a rip-off!".
|
| Don't try to be too cute, OP (submitter's username is the same
| as the github username linked at the bottom of the webpage, and
| the RIP-off (har har har) pun is also at the bottom)...
| dpflan wrote:
| Indeed: maybe: "Death by AI -- a 'Killed by Google' clone but
| for AI"
| bee_rider wrote:
| Yah I was wondering why Google would, via RIP-off, kill a site
| focusing on things AI is threatening.
| dang wrote:
| Submitted title was "A Killed by Google RIP-off focusing on
| things AI is threatening". We changed it.
| demondragong wrote:
| Me: Can you write a short title for a forum post about a
| humorous website that lists things that people think AI is
| threatening to replace. Bonus points if you can give credits to
| "Killed by Google", a website that inspired it.
|
| ChatGPT: Here is a possible forum post title that combines
| humor with a reference to the website "Killed by Google":
|
| "AI Taking Over the World? Here are the jobs (and hobbies) it's
| gunning for (with a nod to 'Killed by Google')" [...]
|
| Me: The same but not trying to be too cute.
|
| ChatGPT: Here is a more straightforward forum post title that
| gives credit to the website "Killed by Google":
|
| "AI Threats: What is Artificial Intelligence Endangering?
| (Inspired by 'Killed by Google')"
| hamilyon2 wrote:
| An AI would write a better title
| [deleted]
| [deleted]
| spaceman_2020 wrote:
| Whether things die because of AI really depends on how good the
| prompts are.
|
| As someone with a lot of doctors in his family, I can assure you
| that most patients are absolutely abysmal at offering prompts.
| kgeist wrote:
| You probably should add "Show HN"
| dang wrote:
| I've added it. Thanks for watching out for a fellow user!
| demondragong wrote:
| Thank you! It's my first post and I have to admit the rules
| confused me a bit.
|
| In Show HN rules: "Off topic: blog posts, sign-up pages,
| newsletters, lists, and other reading material. Those can't
| be tried out, so can't be Show HNs. Make a regular submission
| instead" => felt like this would be categorized as a list but
| I'm glad you promoted the post as Show HN.
| dang wrote:
| Actually you're right - it could be categorized as a list.
| I guess we'll call it a borderline case.
|
| Thanks for thoughtfully considering the rules, on your
| first post no less! If only everyone would...
| paddw wrote:
| I personally would appreciate a more expansive and less
| opinionated version of this (forgoing the value judgments about
| whether applying AI to something is ultimately good or bad), as a
| way of tracking AI progress month to month. Maybe crowdsource a
| 1-10 rating in each category for how well AI can complete that
| category of task.
| morisy wrote:
| Great backstory! I've been playing around with a very similar
| tool (for a specific, niche context: archives of government
| documents), got excited about GPT-3 solving it versus other
| options, and have run into the same experience: a good but very
| expensive way to get what feels like a great solution, until you
| start noticing the many cracks.
| 0x457 wrote:
| I feel like people talking about "Killed by AI" don't understand
| how Machine Learning works. There are a lot of fields that can be
| "killed" by current AIs if trained correctly - almost anything
| menial, repetitive and something that people could answer on
| their own, but "skill issue" prevented them from.
|
| Art? Yes, you can get it to draw something in the style of an
| existing artist, but it won't be able to create a new style.
|
| StackOverflow? Well, what are you going to train it on? Current
| AI is good at answering questions that have already been answered
| on SO.
|
| Spotify's recommendations? Absolutely not. Probably the worst
| recommendations out of all streaming services. Sharing playlists
| is dying not because of AI, but because of music accessibility at
| unprecedented levels. People don't even make playlists for
| themselves anymore, let alone share them. Sidenote: 4/5 of my
| Discover Weekly is an insta-dislike because Spotify doesn't get
| _why_ I liked the similar song in the first place.
| EamonnMR wrote:
| So what's really going to be killed by AI is originality. The
| moat to cross to get to anything original will get wider as AI
| makes it easier to churn out rehash.
| 0x457 wrote:
| I don't think it will get killed, but I think those models
| will eventually replace people that work on non-original
| stuff. Like, I don't doubt AI can "direct" the next 100 CoD
| games and no one will notice anything.
| rglover wrote:
| That, and civilizational progress.
|
| The only real "threat" of AI is it lulling people into apathy
| and shutting their brains off. When the majority defer their
| thinking to AI because it's "good enough," civilizational
| progress will plateau. You can't invent the future using a
| machine that was only trained on the past (especially if
| you've atrophied your own ability to think about/interpret
| what the machine spits out).
| steviescene wrote:
| _Art? Yes, you can get it to draw something in the style of an
| existing artist, but it won't be able to create a new style._
|
| I don't think that's true. Because DALL-E knows how to create
| art in various styles, it implicitly knows what are the
| parameters which separate one style from another -- for
| example, thickness or shape of brush strokes, degree of realism
| vs. surrealism, typical subject matter, etc. I imagine if you
| asked some variant of DALL-E to create a new style, it could
| arbitrarily alter the parameters that define "style" and come
| up with something new, and even alter them in a way such that
| the new style could reasonably be expected to be pleasing/well-
| received.
| baandang wrote:
| It is not true at all. I have already seen more new styles
| since summer from Stable Diffusion than in the last 20 years
| of going to art galleries.
|
| Every artist has their influences. If they come up with
| something new it is from a unique combination of their
| influences. Stable Diffusion makes this literally trivial.
| 0x457 wrote:
| I think you just discovered that art galleries either show
| art by dead artists (can't be a new style) or zombie
| formalism. I haven't seen anything groundbreakingly new from
| Stable Diffusion or DALL-E. Well, sometimes they produce
| funny images because they don't understand what they're
| actually "painting".
| throwaway1851 wrote:
| A big difference is that a good artist does this in a
| meaningful way, creating novelty in a way that triggers a
| specific emotional impact or conveys a message. With
| generative AI, it's all statistical sampling with any
| "meaning" inferred after the fact. But I do believe AI will
| be invaluable in helping artists explore idea space and
| refine their vision.
| 0x457 wrote:
| It knows how to create in styles it's been trained on. Yeah,
| you can tune and tweak parameters, but it's not the same. It
| will for sure kill most digital artists, though.
| [deleted]
| Nexialist wrote:
| I'm generally bearish on AI, but I do feel that there are going
| to be a surprising number of applications where 'good enough'
| results are actually perfectly fine.
|
| In particular, a couple of industries that I think are likely to
| get disrupted in the next few years are stock art, and the art-
| by-commission scene. The ability to use a Stable Diffusion +
| Dreambooth style setup to easily generate new artwork in the
| style of your favourite artist that's "good enough" is incredibly
| powerful.
|
| Doctors and Programmers? Not so much, for now.
| quaintdev wrote:
| I thought it was going to take decades to dethrone Google. Lo and
| behold, a chatbot funded by Microsoft is giving Google some
| serious competition. Bing might finally beat Google, as long as
| they don't rename Bing or launch a completely different product!
| visarga wrote:
| Google is toast. It has been in continuous regression for the
| last 10 years. On the other hand, language models are
| progressing by leaps and bounds.
|
| It is likely that within the next year, a new open ChatGPT-class
| model will emerge that is comparable to "ClosedAI", to be used
| as the next stage after search. This model, along with others
| in its family such as Stable Diffusion and Whisper, may become
| integral components of browsers and operating systems and could
| potentially be used as the main interface for accessing the
| internet.
|
| As people become tired of ads and spam, they may turn to
| language models as a way to shield themselves from these
| distractions. These local models may also serve as personal
| creative spaces, allowing individuals to work and explore ideas
| without outside interference.
| ben_w wrote:
| Google can trivially duplicate the research and compute
| needed for their own version of GPT-3 if they want to. Given
| the size of the company, I wouldn't be surprised if several
| people or teams have done so without them all knowing about
| each other.
|
| Google also has at least two generative models of their own
| that they don't publish "just in case" (something OpenAI is
| also criticised for): one for duplicating voices, and the LaMDA
| chatbot that got in the news before ChatGPT.
|
| People will get tired of ads, but I bet someone will use a
| text agent to rewrite stories and scripts so heroic
| characters always enjoy the great taste of $beverage while
| saving the day, the lead romantic opportunity is
| $consumer_gender_preference and likes wearing $fashion_brand.
| htrp wrote:
| Technically, Google can trivially duplicate this work with
| their own LLMs...
|
| Organizationally, Google is a hot mess that can't even
| figure out how to launch a product.
| dvngnt_ wrote:
| Google already has this technology and has submitted its own
| research. It is just hard to deploy at Google scale. ChatGPT
| serves millions; Google serves billions, and many companies
| rely on that entire ecosystem/ads.
| btbuildem wrote:
| In my experience at least, the LLM-based "search" is super
| unreliable, because it makes up "facts" half the time.
| neel8986 wrote:
| Google literally invented the LLM with the Transformer. LLMs are
| trained on large corpora of text, and Google has a bigger corpus
| than anyone. So this prediction of Google's demise doesn't make
| any sense. ChatGPT has literally no advantage in technique or
| dataset that Google doesn't have. The reason companies like
| Google aren't exposing their internal LLMs is that they are not
| production-ready to handle 50 billion queries a day.
| ben_w wrote:
| I was agreeing right up to the last sentence:
|
| > The reason companies like Google aren't exposing their
| internal LLMs is that they are not production-ready to handle
| 50 billion queries a day
|
| If anyone can do that, Google can. Even at that scale, I'd
| bet Amazon and Microsoft can also do that right now, possibly
| also Apple and Twitter, and the only reason I'm not listing
| Tesla is that (to my surprise) SOTA image AI uses way less
| RAM than SOTA text AI, and Tesla's probably all about image
| data.
|
| Nah, all of them have got a lot of potential reasons why they
| might not make them public, but I don't see that being one of
| them, and especially not Google given how often they jump in
| with half-baked launches that they try to make work later.
| srajabi wrote:
| Great site!
|
| I feel like it's missing some things though:
|
| * Data Entry -- OCR and the like
|
| * Data Retrieval -- Don't search engines still qualify as AI?
|
| * Sorting mail?
|
| * Other factory use cases like removing undesirable tomatoes:
| https://www.youtube.com/watch?v=aYQ_5c6m8Is
|
| * Many others I'm not thinking of...
| spaceman_2020 wrote:
| I feel that AI can do away with 90% of the KPO industry. Most
| of that is just grunt work being done by a lowly paid human
| halfway across the world.
| demondragong wrote:
| Thanks! It's definitely a short list at this point. I had a few
| more items in mind but was too eager to share it. I will add
| some of your suggestions in the coming days.
| harshreality wrote:
| Is OCR that good yet? From what I've seen, it's good if you
| have uniform text in a single standard-looking font. When
| layout or font varies widely, or there's extraneous stuff on
| the page (for instance: headers, footers, page numbers,
| marginal notes, same-page footnotes, low quality source
| material with marks on it, or scan artifacts), quality
| degrades. Good OCR engines I've seen can still OCR all the
| things and present them in a somewhat readable text format, but
| general intelligence (human or AGI) allows quick, automatic
| recognition of different sections of text that a narrow OCR AI
| struggles with. A human or AGI knows this text and that text
| are both blockquotes, or marginal notes, and instinctively
| attaches semantic meaning to each area, font, style, color
| encountered. An OCR engine struggles to get beyond blocks of
| text each with their own margins and no semantic meaning
| attached, leading to markup hell.
|
| To highlight the limitations, look at an OCR'd version of a
| technical book with code samples and different fonts and styles
| that have different meanings, and that has both footnotes and
| endnotes. The text will be readable but disorganized, probably
| with inconsistent styling, and even if some footnotes and endnotes
| are linked by a good engine, I suspect that's less than fully
| reliable. For the purposes of reading the book, I'd rather have
| the scanned pdf with page images for reading, with the OCR'd
| text as the text layer for searching.
|
| Lower-quality source images seem to cause major problems for
| tesseract, and even ABBYY judging from archive.org text
| conversions. Those engines confuse more ambiguous letter or
| punctuation combinations, while humans can still read the
| images without much trouble.
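|
| To make the "no semantic meaning" point concrete: even a decent
| engine like tesseract only hands back geometric blocks, words and
| confidences - nothing that says "this is a footnote" or "this is
| a marginal note". A rough pytesseract sketch (the scan filename
| is made up):
|
|     from PIL import Image
|     import pytesseract
|
|     # Word-level results: positions, block/paragraph ids and a
|     # per-word confidence score.
|     data = pytesseract.image_to_data(
|         Image.open("scanned_page.png"),  # hypothetical input
|         output_type=pytesseract.Output.DICT,
|     )
|
|     for word, block, conf in zip(
|         data["text"], data["block_num"], data["conf"]
|     ):
|         if word.strip():
|             # All we learn is "word W sits in block N with
|             # confidence C"; whether block N is body text, a
|             # footnote or a header is left entirely to us.
|             print(block, conf, word)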
| ozten wrote:
| I think sites with user generated content are accurately flagged.
| There is a danger that infinite, free, low quality content will
| lower the value of their platform unless they figure something
| out.
|
| I disagree with occupations getting flagged. For many
| professions, AI will become human augmentation. Illustrators,
| writers, and other forms of design will require people to adopt
| new tools but they will become more creative and more efficient.
| aqme28 wrote:
| Doctors in general? Definitely not for a while.
|
| Radiologists? Probably soon.
| sb057 wrote:
| >Stack Overflow
|
| >People realized that they could simply ask a bot their question
| and that it always had an answer.
|
| >Turns out the answer is sometimes completely wrong.
|
| To be fair, it's already achieved parity.
| cyborgx7 wrote:
| You say that as a joke, but I actually think it's a very
| relevant point. People keep pointing to mistakes the AI makes
| in order to make an argument that there will always be a place
| for people doing that kind of work. The website in this post
| also does this. But often they fail to take into account that
| people make mistakes all the time as well. And the success rate
| of AI is only going to get better.
| watt wrote:
| "Maybe dispersion will understand your accent ". Is that
| intentional? @demondragong
| gs17 wrote:
| Siri misheard them saying "this person".
| kazinator wrote:
| > _That being said, "House A.I." sounds like the most boring show
| yet to be produced._
|
| Nope; I nominate Doogie Howser, A. I.
| LordDragonfang wrote:
| If the obnoxious ads I keep seeing on tiktok are any indication,
| replika is trying to take over the "unlicensed therapist" sector
| (though based on recent ads, they seem to be pivoting towards
| "sexting"? Unclear.)
| rvz wrote:
| This list significantly affects journalists, writers,
| illustrators and digital artists, and especially StackOverflow,
| where programmers both senior and junior will be impacted, with
| fewer roles going around for both.
|
| As for paramedics, doctors, lorry drivers, Google, etc., the
| impact of AI is hardly a threat to them, and this website has
| greatly exaggerated its predictions.
| [deleted]
| nemothekid wrote:
| > _It's on the tip of your tongue! You're going to remem... too
| late! Someone pulled out Shazam and you just missed your chance._
|
| Man did someone really create a whole page to cry about not being
| able to impress their date by remembering a song?
| wayeq wrote:
| no?
| drdaeman wrote:
| > Spotify knows me better than anyone and every song on my
| Discover Weekly hits the spot!
|
| I wish. If there is one area where AI has utterly failed, it's
| ads. Product recommendations are always off for me. I love a joke
| that goes like "meatbag bought a sofa recently, must like sofas,
| let's show them all the sofas we have" because it's silly but
| very true. And checking out those items leads you down a rabbit
| hole where some smartass decided that if I look at something it
| means I'm potentially interested in more of that, so a machine
| must feed me more "related products".
|
| Spotify recommendations provide no value to me 90% of the time -
| I stopped bothering with their recommendations and "discover"
| playlists. I actually hate that they start playing music of their
| choice when my playlists are over. The only reason I keep a
| subscription is because a) they have a fairly decent collection
| of things I like and b) it's easy to share a playlist with my
| wife.
|
| Same with Kindle book recommendations. Same with all those
| streaming services. Same with Steam games. Even though I spend
| some time there trying to tell all those platforms what I like
| and what I don't.
|
| I always have to browse the catalog, checking every single thing
| individually. It's like a quest to find a good apartment on Airbnb
| among a sea of places that were never good for living in, only
| for crashing for the night. Hopefully there is some user-curated
| collection to start with - because the "related" stuff automated
| classifiers generate is typically an echo chamber full of things
| I'm _not_ looking for.
|
| The problem is, machine recommendation systems fail to
| understand the reasons for picking one thing over another,
| because they don't know much about the product they're
| recommending, only its relationship with other products. They
| don't read the book (or watch the movie), so they have no clue
| about its language (or the acting), its tropes, or how
| interesting (or stupid) the plot is. They don't listen to the
| song, so they have no clue what it sounds like or how the vocals
| are, and they certainly don't care about lyrics. And so on.
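|
| At its core that's just co-occurrence math. A toy item-to-item
| similarity like the one below (purely illustrative, not any
| particular service's code) is essentially all the signal these
| systems have:
|
|     import numpy as np
|
|     # Toy interaction matrix: rows = users, columns = items
|     # (1 = bought/listened). This is all the system ever sees;
|     # it never "reads the book" or "hears the song".
|     interactions = np.array([
|         [1, 1, 0, 0],  # user A: sofa, sofa cushion
|         [1, 1, 1, 0],  # user B: sofa, cushion, lamp
|         [0, 0, 1, 1],  # user C: lamp, novel
|     ])
|
|     def item_similarity(m):
|         # Cosine similarity between item columns: pure
|         # co-occurrence, zero knowledge of content.
|         norms = np.linalg.norm(m, axis=0, keepdims=True)
|         return (m.T @ m) / (norms.T @ norms)
|
|     sim = item_similarity(interactions)
|     # The top "related product" for item 0 (the sofa) is simply
|     # whatever co-occurs with it most - more sofa-adjacent stuff.
|     print(sim[0][1:].argmax() + 1)  # -> 1 (the cushion)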
|
| /rant
| pixl97 wrote:
| Maybe you're just an outlier and AI doesn't show you good ads
| because it doesn't have a set of good ads that meet your needs
| that pay a decent amount?
|
| Unfortunately with bulk data over populations you are going to
| have some members that are in the long tail. Instead the
| operators of the system look at the efficiency of the system
| over larger parts of the population, even if it has the
| potential to lose sales for a small part of the population.
| drdaeman wrote:
| Yeah, well, I could imagine being an outlier with some
| service or two - but everywhere? I'm a grumpy ass who loves
| to nitpick, but I don't think I'm that unique.
|
| Even the Google Ads (which are supposed to have some magic
| pixie dust algorithms done by the fanciest experts in the
| field because that's Google's bread and butter) are typically
| showing me some products I couldn't care less about. It's
| always some random post somewhere (which could be an ad) that
| actually makes me interested and drives me to making a
| purchase, not some bullshit video banner.
|
| > the operators of the system look at the efficiency of the
| system over larger parts of the population
|
| My guess is that they look at engagement metrics, not
| consumer happiness or satisfaction. And even though I'm
| pretty much disappointed, I still use those services and
| still click on "related products" and even check some book
| samples etc. - in hope that maybe I'm wrong, and maybe this
| time it's a good recommendation. Very infrequently that
| happens, but mostly it's not what I'm interested in. So I go
| find some non-machine recommendations, search for those and
| thus remain an active user operators want on their platform.
| kolar wrote:
| I can't agree more with this. Maybe 1 song out of 30 in
| Discover Weekly is worth marking as liked.
| thorncorona wrote:
| Great job! Why don't we expand this next to all jobs killed by
| technology?
|
| We can start by including computers (the people), mail sorting,
| telephone operators, and the community shaman.
| mitchbob wrote:
| The shamans are still with us.
| https://www.wired.com/story/health-business-deprivation-tech...
| dale_glass wrote:
| I don't think it's quite right. AI is much better as a helper
| than as something that does the whole job. It also doesn't
| actually think, so replacing doctors with it would be a terrible,
| terrible idea.
|
| I asked it to write a poem and got this:
| https://imgur.com/DBHjki5
|
| Now, as an assistant, that's amazing. As an actual finished
| result, it's not good. It needs tweaking and fixing, to make
| everything work right. Same goes for AI images.
|
| So here's what I think AI will actually endanger: stock
| photography. If you're writing a blog and need to stick a random
| illustration on an entry for extra appeal, the AI is perfect. You
| don't care if it fits in a theme, or if it's quite right. You
| need a picture of a cute cat, so most any cute cat will do.
|
| Doing actual, specific illustrations with an AI is hard work and
| you'll find yourself beating your head against it frequently,
| especially if you need more than one, and if you need something
| that's not quite well covered in the dataset. AI works much
| better to complement a human artist who can guide it to generate
| what's needed, and fix the problems in post.
| chroma wrote:
| Remember three years ago when AI couldn't do any of the stuff
| you described? I'm confident the capabilities will continue to
| improve.
| Fricken wrote:
| I'm an ex editorial illustrator and midjourney can do the whole
| job often enough. For about 20% of the ideas I try out I've been
| able to get submission-ready work. Maybe another 30% can be
| cleaned up without too much hassle in Photoshop. I don't expect
| these odds to get worse in the future.
|
| A photographer friend of mine had a gig to do some festive food
| arrangements. He needed a full day and a half, a studio, $30k
| in gear and years of professional experience. All I needed was
| 5 minutes and some clever prompt poetry. I did not have the
| heart to show him why he might have a hard time getting a
| similar gig next year.
|
| As Clayton Christensen's theory of disruption proposes: the
| disruptive product does not need to be better. It needs to be
| good enough and far cheaper. So cheap now that there is zero
| marginal cost for bespoke art.
| gamegoblin wrote:
| Obviously, I agree replacing doctors is a terrible idea.
|
| That said, in the last 5 years I have broken an ankle in a rock
| climbing accident, and gotten a herniated disk in my neck from
| wrestling. Both times, medical professionals misdiagnosed me
| for _months_ (both times until I eventually found a doctor who
| would refer me for an MRI which proved the issue in both
| cases). This resulted in a lot of undue suffering and increased
| damage.
|
| In both cases, I explained the situation and symptoms to GPT3
| (this is before ChatGPT was released), and it correctly
| diagnosed the issue in both circumstances, down to the exact
| bone I most likely fractured in my ankle (talus), and down to
| the exact vertebrae where my neck was likely herniated (C6-C7).
|
| Now, I "consult" GPT before going to the doctor.
| retrac wrote:
| I think this may end up being the most powerful application
| of GPT. While I've been using it for coding, most of it is
| high level "what are the trade-offs?" kind of thinking and
| very little actual code.
|
| A person with some background and sufficient general
| knowledge, but who is a non-expert on a topic, can dive into
| a topic and allow the AI to handle most of the detail work.
| What do I need to know to understand this? It falls out
| naturally from the exploration of the topic, in a faster and
| more integrated way compared to chasing down articles and
| definitions.
|
| It's like having a research assistant: an expert that has read
| everything ever published about the topic and that is also
| infinitely patient. Of course, it's also prone to
| bouts of being confidently incorrect, and it can't actually
| reason, or think up anything new. That may seem like a
| serious limitation, but it also describes many of my college
| professors, and they still taught me plenty. ;)
| ilaksh wrote:
| I think ChatGPT is most similar to text-davinci-003, which
| does have gaps in its knowledge that it can be stubborn
| about, but code-davinci-002 has fewer gaps and can be more
| reasonable about requests. It sometimes gets stuck in a
| loop, though, and is in beta with very limited capacity.
| Point being, ChatGPT is not quite the best programming
| model.
| smrtinsert wrote:
| It's 3 am, a medical condition has come up suddenly, and I'm on
| hold forever - you can be sure I will jump on an AI first. I'll
| get clear answers that probably compete with or exceed
| over-the-phone diagnosis accuracy, and the virtual diagnostician
| won't disappear after I hang up.
| xg15 wrote:
| To be fair though, that's the poem it produced when the entire
| prompt was the words "ode to a space lizard".
|
| It's like with describing a commission: If you just give basic
| info, then there is a lot left for the artist to guess, and
| they might be inclined to err on the safe side.
|
| I often got better results when I included more info in the
| prompt.
|
| E.g., here that could be: What is a space lizard? What's so
| amazing about it? What should the general emotion of the poem
| be? Should it be of some particular style? etc etc.
| Fatnino wrote:
| So the skill lies in knowing how to make a good prompt.
|
| Sounds like human work.
| dale_glass wrote:
| True, I was intentionally trying to make it do something
| weird to see how it'd fail. But it's not just a lack of
| information. A tail might help with swimming but won't propel
| you through the void. And "anything deployed" is awkward. And
| what's an "intergalactic norm"? And so on.
|
| I think more information would result in something more
| interesting, but I don't think it'd fix the problem that some
| of the concepts used just don't add up, because it's not
| actually a thinking machine, just very fancy statistics that
| work surprisingly well.
| LordDragonfang wrote:
| >A tail might help with swimming but won't propel you
| through the void.
|
| No offense, but I feel like if you're criticizing AI-
| written poetry for a lack of absolute realism, maybe
| _you're_ the one that doesn't quite grok poetry. Next you'll be
| telling Poe that ravens can't actually enunciate human
| speech.
| failuser wrote:
| That will cut costs, so it will be used no matter the outcome.
| Humans make mistakes too, right? The US is ripe for multi-tier
| medicine. Limiting diagnostics to AI will be preferable to huge
| medical bills for so many.
| 101008 wrote:
| I agree 100% with you. I have a free digital monthly magazine
| in PDF. It has articles, etc. Finding images was a hard task: I
| had to find good images, creative commons, etc.
|
| For the last month I paid for Midjourney and it was amazing. I
| enjoy making the magazine much more now. As you said, the images
| don't need to be exact; they just need to depict some objects or
| touch on a topic or idea. And I can modify them with Photoshop if
| I want to change something (or add text).
|
| I think that's where AI will prevail: scenarios where the
| requirements are not really important.
___________________________________________________________________
(page generated 2022-12-20 23:02 UTC)