[HN Gopher] Why Nature will not allow the use of generative AI i...
___________________________________________________________________
Why Nature will not allow the use of generative AI in images and
video
Author : geox
Score : 139 points
Date : 2023-06-10 15:51 UTC (7 hours ago)
(HTM) web link (www.nature.com)
(TXT) w3m dump (www.nature.com)
| nbardy wrote:
| This reeks of performative grandstanding.
|
| Dictating the tools that artists use for a commission is punitive
| and moralizing.
|
| Let the artists decide the morality of their own profession.
| hgsgm wrote:
| Nature is not an art magazine.
| CharlesW wrote:
| > _This reeks of performative grandstanding._
|
| It's a straightforward clarification of their existing
| editorial policy. https://www.nature.com/nature-
| portfolio/editorial-policies/i...
| morphicpro wrote:
| [flagged]
| rkagerer wrote:
| Good for them! I'm glad we're starting to see some pushback
| against the shady practices vendors of this tech employ regarding
| their datasets. I hope someone figures out how to maintain and
| apportion attribution (even if it's as awkward as eg. list the
| million images that contribute >X% to a given result).
| sebzim4500 wrote:
| I would expect that if X was 1 then there would almost never be
| a single image that contributes more than X%.
|
| So you'd have to make X=0.0001 or something, and then what? Pay
| them all a fraction of a cent?
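The apportionment idea being debated above can be made concrete with a toy calculation; every number here (the per-image contribution scores, the $100 fee, the 10% cutoff) is an invented assumption for illustration, not anything proposed in the thread:

```python
# Toy royalty apportionment: split a hypothetical $100 licensing fee
# across training images in proportion to an assumed per-image
# contribution score, dropping images below a threshold X.
contributions = {"img_a": 0.40, "img_b": 0.35, "img_c": 0.20, "img_d": 0.05}
fee = 100.0
threshold = 0.10  # the "X%" cutoff from the discussion above

# Only images at or above the threshold share in the fee.
eligible = {k: v for k, v in contributions.items() if v >= threshold}
total = sum(eligible.values())
payouts = {k: fee * v / total for k, v in eligible.items()}
```

The threshold is doing all the work: set X low enough to be "fair" and the per-image payout collapses toward the fraction-of-a-cent problem noted above.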
| dclowd9901 wrote:
| Apropos of nothing, I'm always really encouraged to see companies
| take strong philosophical stances like this. I don't think it's a
| particularly controversial stance, but all the same, it's
| encouraging to know they want to promote integrity in this space
| and try to set an example.
| wilg wrote:
| > For now, Nature is allowing the inclusion of text that has been
| produced with the assistance of generative AI, providing this is
| done with appropriate caveats
|
| Why not just apply this rule to all media? What is the purpose of
| singling out images and video?
| activiation wrote:
| [flagged]
| bentcorner wrote:
| I can tell you didn't read the article.
|
| > _Apart from in articles that are specifically about AI,
| Nature will not be publishing any content in which photography,
| videos or illustrations have been created wholly or partly
| using generative AI, at least for the foreseeable future._
|
| Plus in this very article they have a photo containing AI-
| generated art, but it's done in a way that is obvious - it's a
| photo of a user using DALL-E with appropriate credit.
| activiation wrote:
| [flagged]
| varelse wrote:
| [dead]
| infoseek12 wrote:
| A lot of diagram generating tools are starting to incorporate
| generative AI of some form. In some instances the UI probably
| won't make it clear that underlying LLM technology is being used.
|
| I wonder if their graphics designers will need to move from
| industry standard software to something less capable.
| Interestingly the Amish may have been ahead of their time in
| creating purposely limited technology that was compatible with
| their beliefs (https://www.npr.org/sections/money/2013/02/25/1728
| 86170/a-co...).
| Neilsawhney wrote:
| So sad, they already broke that rule in the first image they
| included.
| DrammBA wrote:
| That image was not "created wholly or partly using generative
| AI". It's merely a photograph that happens to contain an AI
| generated image displayed on a smartphone screen. Funny how
| they basically show you how to circumvent the new policy.
| inciampati wrote:
| I _just_ published conceptual art that I created using Midjourney
| in Nature!
|
| Ironically, Nature's own licensing rigor drove me to generate
| this art. It was replacing content that had come from other
| sources, where the time to obtain and clear copyright was too
| long for our timeline. More hilariously, one of the images that I
| replaced was from the US government, and in the public domain.
| The other was from a consortium in which I am part of the project
| leadership.
|
| They seemed perfectly okay with this, as long as I proved to them
| that I had the professional Midjourney account where copyright is
| not encumbered. I wonder when they will again allow this kind of
| use.
| firefoxd wrote:
| Can you share the article?
| theodric wrote:
| From my perspective in 2023: based. But in 50 years' time it will
| be regarded as a bizarrely conservative, even Luddite, position
| (unless GPT-9 ends up kicking off WW4).
| seydor wrote:
| This is unimportant - and they are doing it for attribution
| reasons.
|
| But it is the irony of ironies for Nature, which sources all its
| content AND revisions from the open community, to say they care
| about fair copyright compensation of creators.
| HPMOR wrote:
| Of course they care about copyright! It is their whole
| business model after all!! Scihub is "bad" because "copyright",
| ergo generative AI is "bad" because copyright.
| seydor wrote:
| Hold on, so you are saying that because GenAI is using open
| content it can't be copyrighted properly? Hmm, I wonder who
| else is using publicly-funded content and editors... And by
| the way, things like Adobe's genAI are definitely using
| licensed content, but Nature doesn't even allow that.
|
| Aren't they delegitimizing their own business model by
| claiming such things?
| [deleted]
| skilled wrote:
| > for the foreseeable future
|
| The publication already has a reputation and I don't think people
| would judge Nature if they used Midjourney for featured images.
|
| Videos are an entirely different thing; it will take a few more
| years for AI to be able to create interesting videos, so in a
| sense it is meaningless to even mention it.
| firefoxd wrote:
| I'm sad to see the comments here arguing about small details and
| nuance. What if the image is from a phone that uses AI to do
| blah blah blah.
|
| The reality is we all know what kind of images to expect from
| Nature. Generative AI is not appropriate there and we all know
| it.
| m3kw9 wrote:
| Not sad, just annoying.
| matteoraso wrote:
| Yeah, this backlash is really weird. The only time where
| generative AI images are appropriate in an article is when the
| article is actually about generative AI, and Nature isn't
| banning that. What's the problem?
| Der_Einzige wrote:
| Good luck enforcing any of these bans. AI models are
| multiplicities (model + lots of sampling, decoding parameters,
| etc).
|
| In general, it's extremely difficult to prove that anything is AI
| generated at all. Harder still to prove which model was used
| with which settings.
| colechristensen wrote:
| I don't know, a whole lot of generative AI imagery contains
| obvious artifacts. Just go down to the noise floor of size and
| intensity: AI doesn't look like thermal noise in a sensor or
| real lens artifacts and fuzziness. Not to mention obvious
| things like mangled hands or other complex structures.
| jsheard wrote:
| There are also second-order giveaways that someone is using
| AI generation, in the case of photos the photographer would
| probably take numerous shots of the subject before submitting
| the best one, and if challenged they could produce the rest
| of them as evidence that they're the real deal. As far as I'm
| aware, using AI to generate a plausible _series_ of photos
| with all of the details being consistent between them is
| _much_ more difficult than generating just a single plausible
| photo.
|
| In the case of artwork, the author of even the most
| convincing, artifact-free AI generated piece will immediately
| crumble if asked to show WIPs, non-flattened project files or
| timelapses. I have seen some charlatans attempt to fake WIPs
| by using style transfer to turn their finished piece back
| into a "sketch", but the results aren't very convincing; the
| models aren't trained on the process of creating art
| conventionally, so they're not good at faking it.
| Der_Einzige wrote:
| This is possible today, it's called "reference only
| controlnet".
| chasing wrote:
| Plagiarism can also be tricky to identify and prove. But the
| reputational harm of _lying_ if you're caught can be massive
| and an effective deterrent if you actually care about your
| career.
|
| I'll say that even in my personal life if I catch you flat-out
| lying to me about something I have a very difficult time
| reestablishing trust. It's like you've revealed that deep down
| you think it's an acceptable behavior and now everything that
| comes out of your mouth has to be weighed as possible bullshit.
| Waterluvian wrote:
| It's not really about enforcement. It's about saying it's not
| allowed. That's sufficient for many cultures.
| CharlesW wrote:
| > _In general, it's extremely difficult to prove that anything
| is AI generated at all._
|
| It seems like it _could_ be pretty simple -- if there's a
| question, you ask the creator to provide the original RAW and
| have a conversation about how they got to the final "developed"
| image. If there's still doubt, they could be asked to
| duplicate/approximate the process in a screen-sharing session.
|
| I'm not familiar with the current state of content provenance
| initiatives like the Content Authenticity Initiative [1], but
| generative AI is likely to boost their popularity.
|
| 1
| https://en.wikipedia.org/wiki/Content_Authenticity_Initiativ...
| LapsangGuzzler wrote:
| That's a good point. RAW is such a common format in the
| photography community but somewhat of a silly format for a
| generative AI to write to based on its file size.
|
| Also, is generative AI capable of dramatically upscaling the
| quality of it's output relative to its input? I would assume
| so but I've never really thought about it.
| CharlesW wrote:
| > _RAW is such a common format in the photography community
| but somewhat of a silly format for a generative AI to write
| to based on its file size._
|
| You _could_ cheat and convert the image to a RAW file, but
| it'd be very difficult to do so in a way that would fool a
| forensics investigator.
|
| > _Also, is generative AI capable of dramatically upscaling
| the quality of its output relative to its input?_
|
| If the image output is too small, one could use tools like
| Topaz Gigapixel AI to scale it up.
| golemotron wrote:
| True. The Copyright Office is going to eventually have to walk
| back its recent guidance too. Whether they will realize the
| need on their own or need Congress to act is the only
| question.
| [deleted]
| morphicpro wrote:
| [flagged]
| chasing wrote:
| Is it exhausting having to reframe every single thing through a
| bizarro culture war lens? Seems like it would be.
| morphicpro wrote:
| I think it's more exhausting using platforms and syndication
| to promote ideologies that claim to cause harm (but mostly
| only to those who are successful, while also mostly being
| used by those who are successful) and so should be limited or
| controlled, though AI has the most value for those whom they
| wish to control, as it makes tasks accessible to those
| without. To me it's more like the people in power fighting to
| keep that power, which I could give no shits about. The only
| thing I could say at this point to those still putting up a
| fight: deal with it. This is a matter of making things
| accessible, not a matter of who has the most talent. They
| would like you to think they are more worthy of making art
| than you. I'd be more worried about that.
| hooverd wrote:
| I don't see why Nature is obligated to publish you, free
| expression or not.
| morphicpro wrote:
| I don't see why Nature is pandering to the privileged while
| also saying that people who get aid can fuck off. When you
| frame it as a matter of accessibility, does that make you feel
| like an ass for telling people that they must make the grade
| and their work is not worthy? What kind of inclusive community
| does that create? Oh, can't afford $$$ worth of glass and
| cameras? Get lost. That's all I see here. I'm fine avoiding
| that community all the same too. Think about how many young
| people who are poor have no means to get the gear required to
| participate, except they have this app that would allow them.
| But this community has made a clear message to that person:
| they are not welcome. I think that's sad, a real statement
| unto itself, and a perfect reflection of our current "nature".
| throw101010 wrote:
| > Nature will not be publishing any content in which photography,
| videos or illustrations have been created wholly or partly using
| generative AI, at least for the foreseeable future.
|
| Allow me a bit of a rhetorical question: what are the chances
| they already publish photos taken on devices that apply by
| default some form of AI-based generative/corrective algorithms
| like the "AI detail enhancement engine" by Samsung (the one they
| use to enhance photos of the moon)?
| m3kw9 wrote:
| For them, purposely asking Stable Diffusion (not OK) for an
| image vs. iPhone image processing (OK) would be the baseline
| for distinguishing. Picking at small details seems like a nice
| way to waste time; you've just got to keep it simple.
| analog31 wrote:
| I'm peripherally involved in this scene. The answer is that the
| journals don't want processed images, but of course the
| scientist doesn't always know what kind of processing happened
| to the image en route to their display and file system. The
| idea is that an image supposedly constitutes "data" and as
| such, should represent raw data.
|
| Also, what constitutes "raw data" is itself a matter of debate.
| How raw is raw? Like any interesting pursuit, scientific
| publishing struggles to keep up with developments in
| technology.
| joshspankit wrote:
| Since enhancements (and "enhancements") are going to get more
| pervasive, it feels like a good time for smartphones and
| cameras to add a "scientific" setting that only stores the
| unprocessed sensor data.
| jxramos wrote:
| I like that: _Like any interesting pursuit, scientific
| publishing struggles to keep up with developments in
| technology._
|
| I'm going to keep that in mind, there does seem to be this
| interesting human nature presumption that everyone keeps in
| sync with the latest and greatest. But that's simply not the
| case.
| jacquesm wrote:
| Astronomy is in for a hard time then, anything that uses
| false color is technically very much processed.
| lkbm wrote:
| This is certainly not a "no processing" policy.
|
| Where the line is drawn as to what's "generative" and
| what's "AI" may be blurry, but they haven't just banned
| traditional transform operations.
| progrus wrote:
| I think if it gets all the way to a computer model
| rendering, where the raw data input is in no way shaped
| like an image "yet", the distinction between traditional
| and new-generative-model approaches sounds more like a
| difference in degree.
| dylan604 wrote:
| Passing photons through a filter in front of the sensor is
| absolutely not even close to being the same as "AI"
| post-processing of the data.
| Ajedi32 wrote:
| I mean, in digital photography "raw" has a very well defined
| meaning: https://en.wikipedia.org/wiki/Raw_image_format
| codetrotter wrote:
| > How raw is raw?
|
| Certainly no JPEG image produced by any digital camera is
| really "raw", as it will already have been through a
| debayering filter.
|
| https://en.wikipedia.org/wiki/Bayer_filter
|
| And then on top of that there are the JPEG compression
| artifacts.
|
| But I do wonder how many raw files also contain data that has
| been debayered already. I have not looked into that.
|
| I know that with third party firmware such as Magic Lantern
| it is possible to get the image data without debayering.
| https://magiclantern.fm/
|
| Likewise, I know it is possible to retrieve the image data
| from the Camera Module 3 for Raspberry Pi without debayering.
| sudosysgen wrote:
| The RAW data for the vast majority of MILCs and DSLRs is
| pre-debayering.
| userbinator wrote:
| Even Android smartphones will give pre-debayering raw
| data from the sensor if you use the appropriate camera
| app. (There are quite a few cheap ones where the OEM
| debayering filter is just horrible, and the sensor is
| actually capable of much better quality.)
| morphicpro wrote:
| [flagged]
| Wowfunhappy wrote:
| I feel like there's a meaningful difference between that
| stuff and the computational photography common on
| smartphones.
| astrange wrote:
| There may be a meaningful difference between demosaicing
| and generative AI - actually there isn't because
| demosaicing/upscaling/image generation are all the same
| problem, but there might be one since people like to
| think of them as different.
|
| There isn't a difference between auto white balance and
| generative AI though. The colors in an auto mode digital
| camera picture are not real.
| robocat wrote:
| You can only get real colours in a raw format digital
| picture.
| dclowd9901 wrote:
| I'd say "raw" is light imprinted on film. I may be biased but
| wouldn't mind seeing 35mm make a comeback.
| dylan604 wrote:
| that's not what "raw" means though, and this is a really
| weird interpretation
| asynchronous wrote:
| Probably close to 100% at this point.
| 0xBABAD00C wrote:
| > what are the chances they already publish photos taken on
| devices that apply by default some form of AI-based
| generative/corrective algorithms
|
| 100%?
| LapsangGuzzler wrote:
| > devices that apply by default some form of AI-based
| generative/corrective algorithms like the "AI detail
| enhancement engine"
|
| Isn't this a contradiction, though? My understanding is that
| generative AI is created entirely from software, using a
| network of previously created images as input. A corrective
| filter modifies an image taken directly from a sensor instead.
|
| I personally don't mind aesthetic corrective modifications to
| photos. I was at an astronomy observatory last night and learned
| that most of the magnificent images we've seen of distant
| nebulas and galaxies have post-production coloring applied,
| they mostly look black and white coming off the sensor. Does
| the coloring fundamentally change our understanding of what it
| is that we're looking at? I don't think so, and that's where I
| draw the line.
| dragonwriter wrote:
| > My understanding is that generative AI is created entirely
| from software, using a network of previously created images
| as input. A corrective filter modifies an image taken
| directly from a sensor instead.
|
| Your understanding is incorrect: generative AI can modify an
| image taken from any source, as well as create images from
| scratch.
| Imnimo wrote:
| >Artists, filmmakers, illustrators and photographers whom we
| commission and work with will be asked to confirm that none
| of the work they submit has been generated or augmented using
| generative AI
|
| "or augmented"
| golemotron wrote:
| There's no hard line in the technology. This means that a
| ban is pointless because the landscape is going to keep
| changing.
|
| It's interesting to compare this to other situations where,
| say, law tries to create lines that aren't really there and
| the incentive to ignore imaginary ones is greater than the
| incentive to keep them.
|
| This seems to be a very common phenomenon with technology.
| pxc wrote:
| In the case of Nature, it functions as a statement of
| values that scientists publishing with Nature will be
| happy to comply with to the best of their abilities.
|
| I doubt that the editors are under some illusion that the
| nominal ban will create a hard line in reality. I'd be
| surprised to learn that that is their idea of success
| with this measure.
| ghaff wrote:
| The context matters. There are image manipulations I might do
| to a photo I'm going to hang on the wall that wouldn't be
| kosher if I were shooting an event for a newspaper, especially
| with respect to removing objects from the photo.
| charcircuit wrote:
| Some generative AI tools let you input a base image to work
| off of. You can definitely use generative AI for just
| sharpening in these tools.
| ghaff wrote:
| I think you can argue that there is significant daylight
| between "created wholly or partly using generative AI" and the
| sort of ML-based noise reduction, sharpening, etc. that you see
| in products like Lightroom and Photoshop. Of course, the whole
| area will evolve and rules like these will have to evolve as
| well. News photography has dealt with this since pre-digital
| although different publications may have different standards.
| hgsgm wrote:
| They'll have to use an AI to determine what manipulation
| counts as AI.
| chasing wrote:
| "Generative AI." I know there are kind of weird edge cases. "My
| iPhone made the sunset way redder than it was in real life."
| But I think we all know what they're talking about and I
| suspect if you're in a position for it to really be a concern
| then you will communicate with Nature and sort out what their
| comfort zone is.
| etrautmann wrote:
| This is fascinating. As a computational neuroscience person,
| some of the more advanced neural signal processing algorithms
| use generative models
| internally to model recorded neural data. The result is
| likely a smoothed, simplified, and hopefully more
| interpretable view of neural data, but there's no guarantee
| that some portion of the resulting multidimensional signal
| isn't hallucinated.
|
| As a result, most findings should be validated by verifying
| that some property of interest is present in the high
| dimensional raw neural data, though that's only conceptually
| possible sometimes.
| [deleted]
| ChatGTP wrote:
| On the other hand, I really hate what these algorithms do to my
| photos, even my new iPhone which is considered good tech. So I
| get it.
| Kapura wrote:
| I am not in the game at all but as I understand it Samsung
| doesn't make high quality DSLRs of the type used by
| photographers. I reckon that photographers would be asked to
| disclose this sort of thing if submitting to Nature in future.
| CharlesW wrote:
| > _...AI-based generative/corrective algorithms..._
|
| Every photo is touched by "corrective algorithms". _Nature_ is
| talking about generative AI specifically, which means using an
| LLM to generate part or all of an image. This precludes using
| Midjourney, Photoshop's new "generative fill", etc.
|
| I assume that what Samsung's "Space Zoom" feature does --
| replacing elements with higher-quality stock photography -- was
| already disallowed. If so, whether the elements were
| identified/replaced manually or automatically isn't really a
| concern from an editorial perspective.
| ghaff wrote:
| Yes, they already had guidelines for photographs. [1]
|
| e.g. "Digital images submitted with a manuscript for review
| should be minimally processed. A certain degree of image
| processing is acceptable for publication (and for some
| experiments, fields and techniques is unavoidable), but the
| final image must correctly represent the original data and
| conform to community standards. Editors may use software to
| screen images for manipulation."
|
| [1] https://www.nature.com/nature-portfolio/editorial-
| policies/i...
| dclowd9901 wrote:
| Sounds like this would allow for processed astronomical
| photography too. Methinks this question wasn't in earnest.
| ghaff wrote:
| No reason to assume bad faith. But some folks get very
| literal. And if you have a legitimate question, that's
| one of the things editors are for.
| MiguelX413 wrote:
| How might one use a Large Language Model "to generate part or
| all of an image"?
| PartiallyTyped wrote:
| Something like this: "Hey ChatGPT, write a prompt for
| Midjourney to generate a realistic photo of XYZ with ABC
| parameters."
|
| Then plug it into Midjourney.
|
| Technically the LLM isn't generating the image, and I agree,
| but I think their point is rather obvious and we need not be
| intentionally obtuse nor needlessly pedantic.
| seabass-labrax wrote:
| 'Text-to-image' systems like Stable Diffusion really are
| Large Language Models: an encoder such as BERT creates a
| mapping from text tokens to the latent space of the image
| generation model. As part of this training step, the system
| is learning the concepts of certain words and grammatical
| constructs.
|
| There are quite a few in-depth explanations of the whole
| system; here's one for instance:
| https://jalammar.github.io/illustrated-stable-diffusion/
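The token-to-latent mapping described above can be caricatured in a few lines. The vocabulary, dimensions, and random weights below are all invented for illustration and bear no relation to BERT's or Stable Diffusion's actual encoders; the point is only the shape of the computation: tokens in, latent conditioning vector out.

```python
import numpy as np

# Toy vocabulary and weights standing in for a real text encoder
# (BERT/CLIP in actual text-to-image systems); every number here is
# made up for illustration.
vocab = {"a": 0, "photo": 1, "of": 2, "cat": 3, "dog": 4}
rng = np.random.default_rng(0)
embed = rng.normal(size=(len(vocab), 8))   # token id -> 8-dim embedding
project = rng.normal(size=(8, 4))          # embedding -> 4-dim "latent"

def encode(prompt):
    """Map a text prompt to one latent vector: embed each token,
    mean-pool, and project, loosely mimicking how a text encoder
    conditions an image-generation model."""
    tokens = [vocab[w] for w in prompt.lower().split()]
    pooled = embed[tokens].mean(axis=0)    # average the token embeddings
    return pooled @ project                # project into the latent space

z_cat = encode("a photo of cat")
z_dog = encode("a photo of dog")
```

Prompts differing in a single word land at different points in the latent space, and that difference is the conditioning signal the image-generation model consumes.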
| morphicpro wrote:
| There is also a big distinction between fully disclosing that
| an image is AI-made and non-post-production edits. So why not
| just ask people to be honest about the means of creating the
| content and disclose the images as being "AI" created, instead
| of telling people they have to be purists in their craft? Is
| the claim that the art made by AI is itself harmful, or that
| the act of making it causes harm? Where is the real harm,
| given people are properly informed as to the type of content
| they are looking at? The reason is that this has nothing to do
| with art and expression and everything to do with control
| under the guise of fear.
| [deleted]
| belter wrote:
| It was not a problem when presenting the EHT results and
| subsequent "images" - https://www.space.com/first-ever-black-
| hole-image-ai-makeove...
| criddell wrote:
| Were those produced by a generative AI?
| belter wrote:
| No, but under the same algorithmic principles. How do you
| think that yellow color came about?
| abeppu wrote:
| I think it's unfortunate that they feel pushed to have a blanket
| policy. Not all images hold themselves out to be representative
| of a specific truth. If an article calls for an illustrative
| diagram of, e.g. a generic manifold representing energy
| associated with points in a parameter space, in context, readers
| should understand it as a hypothetical case whose specific
| attributes are not the focus, and there isn't really an
| opportunity to be 'misled' by it. If an article needs a
| microscopy image of tissue that has been treated by some factor
| being studied, then swapping in a DALL-E image in place of one
| produced through actual microscopy (and post-processing) _would_
| be misleading. But the context of what the image purports to
| represent is critical.
| rflrob wrote:
| One thing that's confusing is that Nature has two purposes:
| first as a scientific journal, and second as a science news
| magazine. They're bundled in the same physical issue (though
| there are also branches of the journal, eg Nature Genetics,
| Nature Chemistry, etc), but internally handled by different
| staff. I suspect the policy will mostly be relevant to the news
| magazine side, though you would also want to ensure that a
| paper on the journal side doesn't include an AI generated image
| in a non-AI context.
|
| I just asked DALL-E for "A scientific illustration of a membrane
| bound protein being phosphorylated", and while the results
| aren't all that credible, I could imagine using them as a
| starting point.
| mgraczyk wrote:
| The first justification in the article is silly and detracts from
| Nature's position: "we all need to know the sources of data and
| images, so that these can be verified as accurate and true"
|
| How do you verify whether this cartoon illustration of stacks
| of money against a red background is "accurate and true"?
|
| https://media.nature.com/lw767/magazine-assets/d41586-023-01...
|
| Would it have made a difference if that image were generated by
| Midjourney?
|
| The actual reasons, given later in the article, are that Nature
| is taking a political/legal position on copyright and privacy.
| That's fine by me, but it's disappointing that they give a
| misleading and nonsensical justification before the actual
| justification, as if to make their stance sound less political.
| neilv wrote:
| > _Saying 'no' to this kind of visual content is a question of
| research integrity, consent, privacy and intellectual-property
| protection._
|
| Evidence that STEM people can think clearly about this, when
| their paycheck doesn't depend on pretending otherwise.
|
| (Personally, I'm going to be in the latest AI techbro gold rush,
| but will try to do it responsibly.)
| aurizon wrote:
| This is a rear-guard action; in a few months, little more, tech
| will do an end run.
|
| https://www.forbes.com/sites/danielfisher/2012/01/18/sopa-me...
| swayvil wrote:
| This medium, text, pictures, video. It's seductive. It's tempting
| to pretend that it is reality, but it isn't.
|
| I know that's a naive truth and we all know it. But still, we
| really do pretend otherwise.
|
| I think that might be a bigger deal than we acknowledge. I think
| maybe our sanity is bent from living this way.
| ilrwbwrkhv wrote:
| Simulacra and Simulation
| [deleted]
| etrautmann wrote:
| Does this policy apply to cover art as well as figures and data
| as part of articles?
| PlasmonOwl wrote:
| Hahahahah. Nature. Integrity. Fuck me.
| malkia wrote:
| But it just did, in the article itself -
| https://media.nature.com/lw767/magazine-assets/d41586-023-01...
| mgraczyk wrote:
| Second paragraph of the article: "Apart from in articles that
| are specifically about AI"
| [deleted]
___________________________________________________________________
(page generated 2023-06-10 23:01 UTC)