[HN Gopher] Nvidia Canvas
       ___________________________________________________________________
        
       Nvidia Canvas
        
       Author : forgingahead
       Score  : 554 points
       Date   : 2021-06-25 02:40 UTC (20 hours ago)
        
 (HTM) web link (www.nvidia.com)
 (TXT) w3m dump (www.nvidia.com)
        
       | esjeon wrote:
        | This still looks dang hard for my cursed paws. I'm pretty sure
        | it's not easy for most people, and it still can't beat Google
        | image search, considering the sheer number of images there.
        
       | amelius wrote:
       | They implemented Bob Ross.
        
         | eludwig wrote:
         | It's more like handing off Bob Ross paintings to an
         | overachieving photorealist who promptly paints over them.
        
       | toomanyducks wrote:
        | A solution looking for a problem?
        | 
        | Though if anyone does know of problems this solves, I'd love to
        | hear about them; this is an incredibly cool solution.
        
         | ts0000 wrote:
         | Concept art for games, movies, perhaps.
        
         | chrsig wrote:
          | Does every program need to be a solution to something? One
          | might say the problem it solves is satisfying one's desire
          | for novelty.
         | 
         | Put another way: It's just really cool, and that can be enough.
        
         | xvector wrote:
         | I've often _desperately_ wanted to put certain landscapes from
         | my dreams into art, but I suck at drawing.
         | 
         | There are some dreams that I remember years later because of
         | how beautiful they were, and how they made me feel. This would
         | be a godsend if it works as well as the demo pictures show.
        
         | tveita wrote:
          | I could see tech like this being a big hit for illustrations
          | in low-budget self-published books.
         | 
         | Stock photos are all good but sometimes you really need a
         | visual of Illiyana the dragon vampire arriving at the three-
         | towered mountain citadel with two moons overhead, on a budget
         | of $10 or less.
        
         | bredren wrote:
          | Digital matte painting is in just about every single new film,
          | TV show, and game: Game of Thrones, Marvel films, etc.
         | 
         | This is typically done by individual artists, and is time
         | intensive.
         | 
         | See existing workflow here: https://youtu.be/V0qX7qmtMVw
         | 
         | https://conceptartempire.com/matte-painting/
        
         | cblconfederate wrote:
          | The problem is demand for stock images. I'm not sure the
          | quality here is good enough, but there's no reason why image-
          | generating ANNs won't keep getting better.
        
         | thunkshift1 wrote:
          | As of now, maybe. The next version(s?) will probably turn
          | animation into realistic movies.
        
           | toomanyducks wrote:
            | But will it? My _very_ limited experience with animation has
            | been characterized by control: it's storytelling and
            | creation where the creator is responsible for every fraction
            | of a second. The value of this type of AI is in ceding
            | control to an algorithm and letting it deal with the hard
            | parts. My limited understanding points to a difference in
            | the goals of the two projects: one is for control, the
            | other is for ease. And I don't think ease has a very
            | stable place in animation.
        
         | runawaybottle wrote:
          | Bro, they're literally letting you doodle children's art and
          | create solid photo manipulations. This kind of stuff used to
          | take at least some creativity from hobbyist photoshoppers.
         | 
         | https://www.deviantart.com/high-quality/gallery/45794879/pho...
         | 
          | Believe it or not, it took some effort to take random scenery
          | and create a solid composition. Take my job, sure, but Jesus,
          | not my hobby too. Now these people will have to compete
          | against AI scrubs.
        
       | nly wrote:
       | Anyone else remember when a 1.1GB download was a serious
       | commitment?
       | 
       | Now it's a coffee break curiosity.
        
         | marcodave wrote:
         | I remember when downloading 10MB was a serious commitment
         | 
         |  _shakes fist at clouds_
        
           | growt wrote:
            | When I got the first smartphone for my wife (then girlfriend)
            | we set it up while we were in a hotel (without wifi). The 10MB
            | of mobile data used for setting up email etc. ended up costing
            | more than the phone :(
        
           | dahart wrote:
           | Damn this makes me feel old. I'm not that old!
           | 
           | I remember when downloading 50KB was a serious commitment,
           | over a phone line, of course. It took long enough that
           | inevitably someone else in the house would try to use the
           | phone and your download would get disconnected.
           | 
           |  _buries head in sand_
        
           | pell wrote:
           | I remember when downloading 10MB would sometimes take
           | multiple tries.
           | 
           | Maybe the internet felt so much more exciting to me back then
           | because it was so much slower.
        
             | freedomben wrote:
             | Indeed. I remember downloading a 3 MB mp3 file and it would
             | take numerous tries and quite some time. Winamp would play
             | the partial though so you could put it on repeat and get a
             | few more seconds of the song on each run. This was back
             | when you could find websites that just offered a collection
             | of mp3s for direct download! The internet was a wild
             | lawless place of free information back then.
        
             | myth_drannon wrote:
              | GetRight download manager at your service.
        
               | bellyfullofbac wrote:
               | I was trying to remember that name. Tried to look for a
               | video of it on YouTube, there's some pickup artist who
               | calls himself Mr. GetRight instead...
               | 
               | But hah, their website is still alive, and they're still
               | selling it for $20: https://www.getright.com/screens.html
        
         | tru3_power wrote:
          | Lol, and it would be split up into 10 100MB RAR files.
        
           | jimmySixDOF wrote:
           | on a zip drive
        
           | auto wrote:
           | This brought back bad memories of spending days getting all
           | those rar files, only for the final decompression to fail.
        
           | nly wrote:
           | And maybe a couple of par files
        
         | 8draco8 wrote:
         | Yes I remember vividly, it was April 2021!
        
         | ZephyrBlu wrote:
         | For some of us it still is.
        
       | mtobeiyf wrote:
       | This reminds me of my "Sketch to Art" project made in 2018:
       | https://github.com/mtobeiyf/sketch-to-art
       | 
       | The idea is pretty much the same, except Nvidia is using a more
       | complex model.
        
         | jiofih wrote:
         | And 327 other projects around the same time. The beauty of open
         | research.
        
       | Qiu_Zhanxuan wrote:
        | Of course you need an RTX GPU. How come I'm not allowed to run
        | it on a 1650 Super? _sigh_
        
       | qvrjuec wrote:
       | Someone used the version of this available online last year to
       | create a 3D video version of the output, super interesting:
       | https://www.youtube.com/watch?v=ZXFmZsv0Ddw
        
       | nl wrote:
       | The code behind this is available here:
       | 
       | https://github.com/mcheng89/gaugan
       | 
       | https://nvlabs.github.io/SPADE/
       | 
       | https://github.com/NVlabs/SPADE
        
       | mjgoeke wrote:
        | Hmm, slightly disappointed. I thought I'd make a mashup video
        | game short with generated art.
       | 
       | The output resolution is locked at 512x512. The "target style
       | images" seem to be locked to that handful that come with the
       | application. The brush materials don't include anything man-made.
       | 
       | Am I doing it wrong?
        
       | grouphugs wrote:
        | Really wanted to move to AMD, but ahem
        
       | deathtrader666 wrote:
        | Looks like it is Windows only. The absence of Nvidia GPUs on Macs
        | is really making Macs weaker dev machines for ML work.
        
         | pier25 wrote:
         | And for artists as well since Canvas is for concept artists.
        
       | w-ll wrote:
        | MS Paint to Bryce 3D
        
       | [deleted]
        
       | seansaito wrote:
       | _Clicks "Download" on a Mac_
       | 
       | Me: "Why is it downloading an .exe?"
        
       | ksec wrote:
        | I am wondering if this Nvidia Canvas and the Apple Object
        | Capture [1] [2] will make graphics or 3D modelling cheaper or
        | much less time-consuming. Instead of using tools on the computer,
        | which were never really good enough for human creation, people
        | now create photos, paintings, or models in the real world and
        | scan them into a computer for further editing.
       | 
       | [1] https://www.youtube.com/watch?v=88rttSh7NcM
       | 
       | [2] https://www.youtube.com/watch?v=SuNNyjs9BO8&t=1s
        
       | sabujp wrote:
        | This is awesome. Now have it automatically build a 3D world with
        | 3D assets.
        
       | ma2rten wrote:
       | There is also an online version here:
       | 
       | https://www.nvidia.com/en-us/research/ai-playground/
        
       | thn-gap wrote:
       | Do authors own the copyright of their generated images? I can't
       | see any mention in the FAQ.
        
         | MrYellowP wrote:
         | The real question is "Who is the author?"
         | 
          | Because actually the user isn't. The AI is. AIs don't have a
          | right to copyright. You drawing a few lines and the AI making
          | the actual image does not make you the creator of the image.
        
           | Extigy wrote:
           | Why not? Can a comparison not be made to writing source code
           | and the output of a compiler?
        
             | alok-g wrote:
              | While the analogy is apt, the binaries generated by
              | compilers do involve the integration of creative work
              | beyond that in the code compiled. The binary as such is a
              | 'derivative work' generated from the creativity of the
              | authors of the source code, compiler, and standard
              | libraries. What happens is that the copyright licenses
              | coming along with compilers and standard libraries
              | explicitly grant generous permissions to the users of the
              | compilers.
             | 
             | For algorithmic art, likewise the developers of the
             | software typically provide permissive licenses to the users
             | of the software.
             | 
              | AI makes this harder because the works are massively
              | derivative, which AFAIK does not have much precedent in
              | law. The question is not easy to answer unless the
              | author (Nvidia in this case) owned copyright over all
              | training data.
        
         | pulse7 wrote:
          | Related: Do the owners of a digital camera own the copyright
          | for their (preprocessed) images?
        
           | bruce343434 wrote:
           | Why wouldn't they? It's just a tool. When I write something
           | by pen, do I get the copyright or does the company that made
           | the pen? Me, obviously.
        
             | XCSme wrote:
             | But what if the pen draws by itself? You just say "draw a
             | dog", and it does.
             | 
             | I do think you should always be the copyright owner, unless
             | it's clearly stated in their terms that any image created
             | using their tool is owned by nVidia.
        
               | bruce343434 wrote:
                | But it's still just a tool that responds to your input.
                | An IDE which inserts a lot of boilerplate and
                | autocompletions, does it get the copyright to your
                | codebase? Nope.
        
               | bellyfullofbac wrote:
               | Interestingly, if I employ an artist to produce a work
               | (e.g. software code), usually the employment contract
               | would say the copyright belongs to me and not that
               | artist.
               | 
               | "Hey pen, sign this contract."...
        
           | jeffpeterson wrote:
           | I'd just like to point out that this line of inquiry is not
           | some unanswered philosophical question. All of capitalism is
           | focused on this question of ownership. Who owns the picture?
           | The answer is always whoever the parties involved agreed
           | would own it. Both options can exist and they'll have
           | different prices.
           | 
           | This same question often comes up with self-driving cars and
           | "fault", and it seems to regress into the same trap.
           | Ownership of _risk_ is one of the primary concerns of
           | capitalism. The question is not, "who should be at fault?",
           | it is instead "what is the cost of this risk?" and then we
           | buy and sell that risk like everything else (which is also
           | how we determine that cost). If the self-driving advocates
           | are right and self-driving is safer, then the risk will
           | likely cost less than your current insurance.
           | 
           | Of course, it's not always clear. If the parties can't agree
           | who owns a thing, they often use some legal mechanism to
           | resolve their dispute.
        
         | imvetri wrote:
         | The trick.
         | 
         | Good artists copy, great artists steal.
         | 
         | AI does both :D
        
         | yeldarb wrote:
         | I believe the current understanding of GAN copyright is that
         | the "minimum degree of creativity" happens when a human chooses
         | the inputs/outputs and copyright is assigned to the human at
         | that point. Drawing the input image for GauGAN probably
         | suffices.
         | 
         | Fully automated outputs (like pulling an image at random from
         | thispersondoesnotexist.com) would be public domain since non-
         | humans cannot hold copyrights and no creativity was applied.
         | 
         | This is analogous to the "creativity" of a photo being the
         | settings and framing done by the person who set up the shot and
         | is why the famous "monkey selfie" fell under public domain[1].
         | 
         | [1]
         | https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...
        
           | alok-g wrote:
           | See my comment below.
           | 
           | https://news.ycombinator.com/item?id=27635481
           | 
            | I think the Monkey selfie copyright issue was subtly
            | different.
        
       | ackbar03 wrote:
        | Does anyone know if the backend code for this is open source on
        | GitHub or somewhere, so it can run without Windows?
        
         | jachee wrote:
         | Given its deep integration with their RTX APIs, I imagine even
         | if the source code _were_ open, the only way to get at the RTX
         | ML-specific stuff is via their Windows driver.
        
       | Jyaif wrote:
       | How did they create the model?
       | 
        | Did they take a bunch of reference pictures where they said "this
        | part here is water, this part here is rocks, this part here is
        | grass, etc...", and somehow trained a model from that?
        
         | jffry wrote:
          | From their FAQ [1]:
          | 
          |     Q: How does the AI in Canvas work?
          |     NVIDIA Canvas uses a GAN (Generative Adversarial Network)
          |     to turn a rough painting of a segmentation map into a
          |     realistic landscape image. 5 million photographs of
          |     landscapes were used to train the network on an NVIDIA DGX.
          | 
          |     Q: Is Canvas related to GauGAN?
          |     Canvas is built on the same research core that NVIDIA
          |     showed in GauGAN.
          | 
          | [1] https://nvidia.custhelp.com/app/answers/detail/a_id/5105
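          | 
          | For intuition, here's a minimal sketch (not NVIDIA's code; the
          | class count and label indices are hypothetical) of how a
          | SPADE-style generator consumes a painted segmentation map:
          | 
          |     # Sketch of the GauGAN/SPADE data flow, not the real model.
          |     import torch
          |     import torch.nn.functional as F
          | 
          |     NUM_CLASSES = 182       # e.g. a COCO-Stuff-sized palette
          |     H = W = 512
          | 
          |     # A painted label map: each pixel holds a class index.
          |     label_map = torch.zeros(1, H, W, dtype=torch.long)
          |     label_map[:, : H // 2, :] = 156   # hypothetical "sky"
          |     label_map[:, H // 2 :, :] = 154   # hypothetical "sea"
          | 
          |     # One-hot encode to (1, NUM_CLASSES, H, W), the input the
          |     # generator conditions on.
          |     one_hot = F.one_hot(label_map, NUM_CLASSES)
          |     one_hot = one_hot.permute(0, 3, 1, 2).float()
          | 
          |     # A real SPADE generator feeds this tensor into its
          |     # normalization layers; a single conv stands in here just
          |     # to show the shapes involved.
          |     generator = torch.nn.Conv2d(NUM_CLASSES, 3, 3, padding=1)
          |     with torch.no_grad():
          |         fake = torch.tanh(generator(one_hot))  # (1, 3, H, W)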
        
       | fab1an wrote:
       | This is Windows only, probably due to the lack of support for the
       | relevant GFX stuff on Mac?
       | 
        | Incidentally, does anyone know of a straightforward and quick
        | Windows-in-the-cloud solution? A bit like GeForce NOW, but giving
        | you an entire VM without all the setup?
        
         | anaisbetts wrote:
          | Here you go, works for AWS, Azure, and GCP:
          | https://github.com/parsec-cloud/Parsec-Cloud-Preparation-Too...
          | Parsec is similar technology to GeForce NOW (only better,
          | imho).
        
       | npunt wrote:
       | Incredible! They finally built a tool to 'Draw the Owl' [1]
       | 
       | [1] https://knowyourmeme.com/memes/how-to-draw-an-owl
        
         | visarga wrote:
          | As incredible as it looks, this feat was first demonstrated a
          | few years ago.
        
           | tmabraham wrote:
            | IDK why this is being downvoted; it was indeed published two
            | years ago, and they apparently repackaged it as an easier-
            | to-use tool: https://arxiv.org/abs/1903.07291
        
             | defaultname wrote:
              | Before, there was a dense scientific paper. Now they've
              | released an incredibly simple tool that lets anyone draw
              | photorealistic images with simple strokes.
              | 
              | Calling that repackaging something into an easier-to-use
              | tool seems like quite a stretch. They didn't put a GUI on
              | curl or something.
        
               | nacs wrote:
               | > They didn't put a GUI on curl or something
               | 
               | I don't think a GUI for curl would be as easy as you
               | imagine. Curl has a lot of power with all the options and
               | protocols it supports.
        
               | bun_at_work wrote:
               | Isn't this a browser?
        
               | lmohseni wrote:
               | https://www.jensroesner.com/wgetgui/wgetgui.png
               | 
                | Here's an image of a wget GUI. It's not quite a browser,
                | but interesting to look at nonetheless.
        
               | ElijahLynn wrote:
               | Fascinating, yes!
        
           | slavik81 wrote:
           | "The future is already here--it's just not very evenly
           | distributed." ~ William Gibson
        
         | Aeolun wrote:
         | What a horrible website on mobile.
        
           | PaulHoule wrote:
           | I hate the "download" button at the top that points to
           | "#something" on the page that centers on an image that fills
           | the whole screen (on a 4k laptop) so that you can't see the
           | real download button below it.
           | 
           | It's cruel.
           | 
           | I am pumped to try it out however.
        
             | mcdevilkiller wrote:
             | I think they mean Knowyourmeme
        
           | airstrike wrote:
           | Don't worry, it's shit on the desktop too.
        
         | berkayozturk wrote:
          | This is one of the best comments I've read online :)
        
       | jp0d wrote:
        | Wow. This is amazing! I'm not holding my breath for macOS
        | support, as Apple isn't very fond of Nvidia. I'm sure there will
        | be clones in the future for macOS/iOS, Linux, and Android.
        
         | qayxc wrote:
         | The model itself is hardware-agnostic, so there's nothing
         | preventing someone from building a frontend for their platform
         | of choice.
         | 
         | Granted, powerful hardware is still required to run inference
         | at acceptable speeds (or at all - I don't know the memory
         | requirements).
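          | 
          | For a ballpark on the weights alone (the parameter count below
          | is a placeholder, not the real model's):
          | 
          |     # Sketch: rough memory estimate for inference weights only;
          |     # activations add more. The numbers are hypothetical.
          |     PARAMS = 100e6           # pretend 100M parameters
          |     BYTES_PER_PARAM = 2      # fp16
          |     print(f"~{PARAMS * BYTES_PER_PARAM / 1e9:.1f} GB of weights")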
        
       | dirtyid wrote:
       | Indistinguishable from magic.
        
       | j4yav wrote:
        | It would be really interesting/fun to run this on the frame-by-
        | frame output of classic 8-bit video games and see what it does. I
        | know it wouldn't make the game look real within its own concept,
        | but the ordered/familiar inputs to the AI might generate some
        | interesting video outputs.
        
         | jffry wrote:
          | I'm not entirely sure how this would be achieved.
          | 
          | When you draw in the app, you use a brush and have to pick
          | materials from a palette (like "sky" or "ground" or "stone
          | wall", etc). It doesn't seem to have any sort of "import an
          | image" feature, because what would that even mean in their
          | model?
         | 
         | The approach I would probably consider is to use a modified ROM
         | so that the different sprites in the game are different solid
         | colors. Then I'd write some kind of mouse automation to use
         | those captured images and draw the frame in the app, clicking
         | on the various palette options based on color.
         | 
          | The next challenge is that the Canvas app doesn't let you set
          | individual pixels; the smallest brush is ~10px across on its
          | ~550px canvas. Maybe I'd have to settle for picking a
          | Z-ordering and just drawing everything approximately, or maybe
          | you could attempt some sort of path-routing algorithm
          | to draw along the edges of the shapes and fill in the centers.
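          | 
          | A rough sketch of that color-to-palette mapping step (pure
          | illustration: the colors, label names, and "ground" fallback
          | are all made up):
          | 
          |     # Sketch: map solid sprite colors from a modified ROM's
          |     # frames to Canvas-style palette labels. All hypothetical.
          |     from PIL import Image
          | 
          |     COLOR_TO_LABEL = {
          |         (0, 0, 255): "sky",
          |         (0, 255, 0): "grass",
          |         (128, 128, 128): "stone",
          |     }
          | 
          |     def frame_to_labels(path):
          |         """Return a 2D grid of label names for one frame."""
          |         img = Image.open(path).convert("RGB")
          |         px = img.load()
          |         return [[COLOR_TO_LABEL.get(px[x, y], "ground")
          |                  for x in range(img.width)]
          |                 for y in range(img.height)]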
        
           | yeldarb wrote:
            | An example is here (using the same NVIDIA GauGAN model that
            | backs this Canvas app):
            | https://twitter.com/jonathanfly/status/1144735290591981568?l...
           | 
           | Jonathan has played with GauGAN quite a bit (search twitter
           | for "from:@jonathanfly gaugan" to see more).
        
             | j4yav wrote:
             | Nice, this is exactly what I imagined.
        
             | jffry wrote:
             | Cool! For anybody looking for Mario specifically, here it
             | is:
             | https://twitter.com/jonathanfly/status/1158846045285232641
             | 
             | The one with the Pole Position racing game looks pretty
             | cool, with a surprising amount of stability between frames:
             | https://twitter.com/jonathanfly/status/1146569133376573440
        
           | nitrogen wrote:
            | Tile-based 2D games should be fairly easy to convert to
            | provide semantic labeling. They already have a low-res grid
            | that says "sky, stone, etc."
        
         | toxik wrote:
          | You'd also need some kind of inter-frame consistency; it would
          | probably jiggle quite a lot.
        
       | winrid wrote:
       | I bet you could make some nice assets for games with this.
       | Definitely will find a niche in the indie dev scene.
        
         | aspaviento wrote:
          | For the indie dev scene, a tool that generates pixel art would
          | be more useful, since pixel art is more common. Indie games
          | rarely have realistic scenery.
        
           | Hamuko wrote:
           | Just run it through Photoshop's Mosaic filter.
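            | 
            | Roughly this, if you'd rather script it (a PIL sketch; the
            | block size is arbitrary):
            | 
            |     # Sketch: fake a mosaic filter by downscaling with
            |     # nearest-neighbour and scaling back up.
            |     from PIL import Image
            | 
            |     def pixelate(path, out_path, block=8):
            |         img = Image.open(path)
            |         small = img.resize((img.width // block,
            |                             img.height // block),
            |                            Image.NEAREST)
            |         small.resize(img.size, Image.NEAREST).save(out_path)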
        
           | dannyw wrote:
            | Also, pixel art is cheaper and realistic scenery is more
            | expensive, so there's more value here.
        
             | aspaviento wrote:
              | Using realistic scenery forces you to use realistic
              | assets everywhere. If this tool only does backgrounds, it
              | would raise costs for indie developers. Hence my previous
              | comment.
        
       | shreyshnaccount wrote:
       | Okay so this is like putting image segmentation data into a GAN
       | and getting the opposite result, right? Or is there something I'm
       | missing?
        
       | BiteCode_dev wrote:
        | Nice for a prototype, but given model bias, it will create a
        | creativity bubble like the Google search bubble, but for visuals.
        
         | pram wrote:
          | You can easily generate a landscape and then do a paintover.
          | People already do that with SketchUp 3D models for
          | backgrounds. I don't think any professional would just
          | literally copy and paste the thing.
        
         | w1nk wrote:
         | Would you mind elaborating a bit more on what you mean? It's
         | very fashionable to be concerned about model bias at the
         | moment, but it's not clear to me what the issue you're
         | describing would be? Something like: trees would end up looking
         | too much like the same tree?
        
           | jalk wrote:
           | The "worry" here is that everything produced will look
           | similar and hence will become boring at some point.
        
             | w1nk wrote:
              | Right, that was the assumption I was alluding to at the
              | end of my comment. That said, it still doesn't fully
              | resolve the question, and unfortunately leaves the
              | statement in handwaving territory. 'Boring' isn't really a
              | measurement we can take and discuss super effectively, but
              | we do have actual metrics across visual datasets that span
              | basically all of what you might see as a human.
              | 
              | By chance, are you aware of any research on this topic?
        
       | 41209 wrote:
       | Good find, I think I'm going to take this out with me to go
       | drawing this weekend and see what I can do.
        
       | MileyCyrax wrote:
       | I wonder how difficult it would be to make something similar that
       | generated 3D models. Most of the examples look like they'd make
       | good video game levels.
        
         | Cthulhu_ wrote:
          | I think the theory's all there; it just needs reference
          | material on the one hand and the work to be put in on the
          | other. With the new Unreal 5 engine, I think there is a lot of
          | room for technology where an artist sketches out a rock and
          | tools come in to generate the small details - much like the
          | tools such as SpeedTree and co. that procedurally generate
          | content nowadays.
        
         | brundolf wrote:
         | Related: https://www.dungeonalchemist.com/
        
           | spoiler wrote:
            | Dungeon Alchemist seems really cool (I'm a backer), but I'm
            | not entirely sure that it is related. DA is basically
            | procedurally generated furnishing (with a few params), but it
            | doesn't create 3D models from what I understand; it "just"
            | shuffles around furniture.
        
         | speps wrote:
         | Have a good read :) https://github.com/yenchenlin/awesome-NeRF
        
         | fsloth wrote:
          | Well, I think there is enough interesting research out there to
          | put the pieces in place. Not in a single model. But we have:
         | 
         | 0. This neural thing, of course, to create landscape-like 2D
         | projections of a plausible scene.
         | 
         | 1. Wave-function collapse models that synthesize domain data
         | quite nicely when parametrized with artistic care - this is a
         | "simpler" example of the concept.
         | https://github.com/mxgmn/WaveFunctionCollapse
         | 
          | 2. A fairly good understanding of how to synthesize terrain.
          | Terragen is a good example of this (although not public
          | research, the images drive the point home nicely):
          | https://planetside.co.uk/
         | 
          | So, we could use the source image from this - a 2D projection
          | of an intended landscape - as a seed for a wave-function
          | collapse model that would use known terrain parametrization
          | schemes to synthesize something usable (so basically create a
          | Terragen-equivalent model).
         | 
          | I think that's plausibly it, more or less. But it's still a
          | "research"-level problem, I think, not something one can cook
          | up by chaining the data flow from a few open source libraries
          | together.
        
         | bredren wrote:
          | I wondered the same. There is some solid competition in this
          | area right now, even without AI-assisted asset generation.
          | 
          | Unreal 5 has a new, free 3D model library integrated as Quixel
          | Bridge. [1]
         | 
          | Kitbash 3D, a company selling modular 3D sets used regularly in
          | Beeple's 2D work, provides mid-res, theme-based sets for
          | customized use.
         | 
          | Neither takes into account the idea of fully featured 3D
          | objects being built from basic primitives using ML.
         | 
          | It makes sense that it will go this direction though, because
          | it means designers can get unique 3D assets customized to their
          | needed size and dimensions with less work.
          | 
          | Couple this with Apple's photogrammetry in iOS 15, and it seems
          | the pool of original 3D assets available for training data will
          | swell greatly.
         | 
         | [1] https://youtu.be/d1ZnM7CH-v4 @ 4:34
        
       | blondin wrote:
        | This looks amazing! I wonder why the beta is Windows only,
        | though...
        
         | fortran77 wrote:
         | Perhaps other operating systems aren't advanced or powerful
         | enough? Mac doesn't support NVIDIA.
        
         | CoolGuySteve wrote:
         | I'm using an Nvidia RTX 3090 on Linux with the proprietary
         | drivers for machine learning and man it fucking sucks as a
         | desktop lately.
         | 
         | So my best guess is nobody at Nvidia uses the Linux desktop as
         | a workstation.
         | 
          | 1) My HDMI screen hasn't been able to wake from sleep for over
          | a year now; the only way to make it wake is to switch to a text
          | tty and then back to X11.
         | 
          | 2) Wayland still isn't supported. The default Ubuntu 18.04 gdm
          | doesn't even work, so on first boot with the proprietary driver
          | everything seems broken.
         | 
         | 3) Since Firefox 89 switched to accelerated rendering by
         | default, windows randomly disappear and various video players
         | have lock contention, drop frames at 60fps, and downscale video
         | on a fucking $1600 video card.
         | 
         | 4) HDMI audio crackles and pops with a 2 second delay after a
         | few hours and I have to restart pulseaudio on the command line.
         | 
         | 5) I file support tickets on Nvidia's website and the company
         | never responds, they don't even dupe them with some other old
         | ticket.
        
           | jiofih wrote:
           | "But everything just works" says the Linux enthusiast after
           | fiddling with their xorg config in the morning.
        
             | mhh__ wrote:
              | That is true, but I would like to point out that Windows
              | still has issues on my bog-standard Intel/Nvidia rig - e.g.
              | Linux can't sleep properly, but Windows either fails to
              | resume properly or randomly wakes me up at night by turning
              | back on and revving the fans.
              | 
              | Similarly, my new iPad Pro is great until you need to do
              | something Apple hasn't approved of (e.g. I can't watch a
              | bunch of movies I have had copies of for years due to Apple
              | not letting VLC ship certain codecs).
        
           | isatty wrote:
           | (1) works fine for me over DP, didn't try HDMI. (4) sounds
           | (heh) like a pulseaudio problem.
        
           | mceachen wrote:
           | I recently switched to the 465 driver (on Ubuntu 20.04) and
           | had issues: try downgrading back to 460 if you're in the same
           | boat.
        
             | nonameiguess wrote:
             | That happened to me yesterday on my work laptop. System
             | 76's help documentation said to chroot in from rescue
             | media, uninstall the drivers, then reinstall, and that
             | worked fine, so it's now running 465 perfectly well. No
             | idea why the straight upgrade path doesn't work.
             | 
             | But that's completely an Ubuntu problem, not NVIDIA. Like a
             | (currently) higher up comment says, NVIDIA on Linux works
             | fine as long as you're running the latest version of
             | everything. My main desktop was built last April and I've
             | been running Arch with RTX 2070 and the latest NVIDIA
             | drivers ever since first boot and it has never given me any
             | trouble, video or audio. My display is a 50 inch OLED
             | connected via HDMI and audio a 5-channel soundbar with
             | external subwoofer using eARC from the display. Everything
             | is fine using GNOME defaults.
             | 
             | NVIDIA provides the nvidia-xconfig tool to autogenerate the
             | X configuration, but you don't need it. It runs fine with
             | no config. Wayland has worked for over a year, too. You can
             | go look at the PKGBUILD file for Arch's PulseAudio
             | installer and it isn't doing anything special, either, just
              | applying the suggested default from PulseAudio's
              | documentation, making the ALSA default module pulse.
             | 
             | The only reason NVIDIA on Linux gives people so many
             | problems is they're trying to run old versions of
             | everything on enterprise-oriented Linux distros or "long-
             | term support" without purchasing support. If you want the
             | latest hardware, use the latest software.
        
           | jcelerier wrote:
            | > The default Ubuntu 18.04 gdm doesn't even work
            | 
            | I mean, using Ubuntu 18.04 means using ~4-5 year old software
            | which only gets "security updates" (not even patch updates;
            | e.g. they use a Qt LTS from 2017 and don't even update the
            | _patch_ version, it's still 5.9.5 while Qt's is 5.9.9), so
            | why would you expect things to work correctly with a 1 year
            | old graphics card? On Arch Linux, Wayland with an Nvidia card
            | works pretty much fine.
        
             | CoolGuySteve wrote:
             | This has been broken since 2019. I'm running Ubuntu 20.04
             | with the 5.11 kernel and the 460/465 drivers and all these
             | problems are still happening.
             | 
             | And also, yes, I expect a 5 year old operating system to
             | still work. Windows 10 does and it came out in 2015. These
             | are professional tools for my fucking job.
        
               | jcelerier wrote:
               | > And also, yes, I expect a 5 year old operating system
               | to still work. Windows 10 does and it came out in 2015.
               | 
                | but the Windows 10 you run in 2021 is super different
                | from the Windows 10 you installed in 2015; there are a
                | ton of (sometimes fairly breaking) updates:
               | 
               | https://en.wikipedia.org/wiki/Windows_10_version_history
               | 
                | Running an up-to-date Win10 is basically equivalent to
                | updating to every Ubuntu release, LTS or not. The kernel
                | is different, libc is different, system API
                | implementations are different; everything is updated
                | every few months - even the start menu pretty much
                | changes all the time.
        
             | schmorptron wrote:
             | Do they not port the HWE to older LTS releases?
        
           | phantom0308 wrote:
            | The software-focused teams all use Linux workstations AFAIK;
            | look at their job boards and Blind. Their embedded systems
            | (robotics / AV) are all Linux as well.
        
             | CoolGuySteve wrote:
              | I simply do not believe that, given how bad their drivers
              | are.
             | 
             | I would not be surprised if most or all of their Linux
             | engineers ssh into Linux from a Windows machine given how
             | stable their command line stuff is in comparison to the
             | graphics (once you figure out the correct permutation of
             | userland/kernel pieces to get CUDA+cudnn+TF working
             | anyways).
        
               | MereInterest wrote:
                | Their recommended method of installing CUDA includes a
                | 64-bit version, but not a 32-bit version. Nvidia's CUDA
                | packages are marked as incompatible with Debian's nvidia-
                | driver-* packages, so installing it uninstalls the 32-bit
                | version. As a result, I need to choose between Steam
                | (which uses the 32-bit graphics library) and an updated
                | CUDA version (since Ubuntu 20.04's repo is pinned at
                | 10.2).
        
               | my123 wrote:
                | Install the cuda-toolkit- package instead of the cuda-
                | package in that use case.
        
           | qq4 wrote:
           | I ended up selling my Nvidia card for an AMD one. I was
           | having so many problems with Linux like you're describing,
           | and now they're all gone :)
        
             | CoolGuySteve wrote:
             | Yeah I can't remember ever having any problems with Intel
             | graphics on all the laptops I've owned.
             | 
             | It's night and day how much Intel cares about Linux
             | compared to Nvidia.
        
               | 1_player wrote:
               | Indeed it's night and day how Intel performs compared to
               | Nvidia as well.
        
               | CoolGuySteve wrote:
                | Yeah, you're right: even low-power Intel GPUs can render
                | an X11 desktop with audio wayyyyy faster and with fewer
                | artifacts than the proprietary Nvidia driver.
        
               | HelloNurse wrote:
                | I'll never forgive Intel for lying about OpenGL support
                | in some old laptop drivers for Windows.
        
         | yellowfish wrote:
         | Why wouldn't it be?
        
           | coolspot wrote:
           | Because most real-world CUDA research happens on Linux with
           | Python and Jupyter?
        
             | alphachloride wrote:
             | Most end-users are on Windows, however.
        
               | nonameiguess wrote:
               | That may not be the case for much longer. The press
               | release on their final financial report from last year:
               | 
                | https://nvidianews.nvidia.com/news/nvidia-announces-financia...
               | 
               | Data center revenue at $6.7 billion. Gaming at $7.7
               | billion. But data center grew 124%, gaming 41%. If that
               | keeps up, data center passes gaming this year.
        
               | jamra wrote:
               | I don't think data centers count as end users. End users
               | are human beings that use a system.
        
               | nonameiguess wrote:
               | That doesn't make any sense to me. Human ML/AI
               | researchers are also users and NVIDIA clearly
               | intentionally targets them as a market segment. They
               | don't only care about pleasing gamers running Windows.
        
               | rrss wrote:
                | In the context of a desktop app, it seems pretty clear
                | 'alphachloride was referring to desktop users.
                | 
                | How is the existence of big datacenters relevant to what
                | platforms Nvidia will support for a desktop app?
        
             | whywhywhywhy wrote:
             | and most artists who utilize CUDA use Windows.
        
           | phantom0308 wrote:
           | The entire deep learning / AI industry relies on running GPU
           | compute on Linux, mostly CUDA on Nvidia GPUs.
        
       | account_created wrote:
        | To all non-Windows users:
       | 
       | http://nvidia-research-mingyuliu.com/gaugan/
        
         | EvanKnowles wrote:
          | Looks like it has a CORS issue.
        
           | littlestymaar wrote:
           | It works for me. (FF Linux)
        
         | JoblessWonder wrote:
          | Whoa. Not just for all non-Windows users, but browser-based.
          | That's neat. I was interested in trying it out but didn't want
          | to download the software.
        
         | fab1an wrote:
          | Compared to Canvas, this looks more like an early-2010s style-
          | transfer tech demo :) Thanks for the link though!
        
           | littlestymaar wrote:
            | Ah, that's kind of reassuring for Canvas, because I was
            | really disappointed when I tried to play a bit with it. I
            | was like: "meh, how is that even worth a press release?"
        
         | nomel wrote:
         | After painting for 5 minutes: This segmentation map may contain
         | unsupported labels.
        
       | noway421 wrote:
       | Looks great!! Looking forward to Mac OS support!
        
         | zeusk wrote:
          | Sarcasm?
          | 
          | This requires RTX cards, and AFAIK Apple hasn't supported
          | Nvidia hardware since, like, Maxwell?
        
           | asteroidbelt wrote:
            | I suspect they don't really need a GPU to render it. It is
            | usually training that requires a lot of GPU, not evaluation.
            | So the Nvidia requirement is only there to sell more cards.
        
             | etaioinshrdlu wrote:
             | Not true. Big neural nets like these are still dog-slow on
             | CPUs.
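              | 
              | Easy to check empirically, assuming PyTorch is installed
              | (the layer sizes below are arbitrary stand-ins, not
              | Canvas's actual model):
              | 
              |     # Sketch: time one forward pass of a convnet-sized
              |     # workload on CPU vs. GPU.
              |     import time
              |     import torch
              | 
              |     net = torch.nn.Sequential(*[
              |         torch.nn.Conv2d(64, 64, 3, padding=1)
              |         for _ in range(20)])
              |     x = torch.randn(1, 64, 512, 512)
              | 
              |     def bench(device):
              |         m, inp = net.to(device), x.to(device)
              |         with torch.no_grad():
              |             m(inp)  # warm-up
              |             if device == "cuda":
              |                 torch.cuda.synchronize()
              |             t0 = time.time()
              |             m(inp)
              |             if device == "cuda":
              |                 torch.cuda.synchronize()
              |         return time.time() - t0
              | 
              |     print("cpu :", bench("cpu"))
              |     if torch.cuda.is_available():
              |         print("cuda:", bench("cuda"))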
        
               | asteroidbelt wrote:
                | Maybe they are. But I suspect a 10-core i9 CPU is not
                | much slower than the oldest Nvidia card they list as the
                | requirement.
                | 
                | I don't know much about GPU performance though, except
                | random links I have found online which say that GPUs are
                | 3-5 times faster for ML.
        
               | faeyanpiraat wrote:
                | They list Nvidia RTX as their minimum requirement.
                | 
                | i9-7980XE: 1.3 teraflops
                | 
                | RTX 2060: 52 teraflops
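                | 
                | Back-of-envelope from those peaks (the per-frame model
                | cost below is an invented placeholder, and the GPU figure
                | is a tensor-core peak, so take the ratio loosely):
                | 
                |     # Sketch: rough inference-time estimate from peak
                |     # FLOPS. MODEL_COST_TFLOP is hypothetical.
                |     MODEL_COST_TFLOP = 0.5   # per 512x512 frame, made up
                | 
                |     for name, peak in [("i9-7980XE", 1.3),
                |                        ("RTX 2060", 52.0)]:
                |         # Assume an optimistic 50% of peak is achievable.
                |         secs = MODEL_COST_TFLOP / (peak * 0.5)
                |         print(f"{name}: ~{secs * 1000:.0f} ms per frame")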
        
               | [deleted]
        
               | asteroidbelt wrote:
                | I think the flops comparison you've presented is not
                | fair: for Nvidia it is "tensor" flops, not generic float
                | multiplication (which is about 10 times smaller), while
                | for Intel it is any float multiplication.
                | 
                | So for the i9 the number would be higher if FMA
                | operations were used, no?
        
               | programmer_dude wrote:
                | Tensor flops are significant since this is exactly the
                | use case for which they were designed. So IMO the
                | comparison is fair.
        
               | asteroidbelt wrote:
                | It doesn't make sense. Why is it fair to compare matrix
                | multiplication with generic float operations? It should
                | be either a comparison of matrix multiplication to matrix
                | multiplication, or generic float to generic float.
        
               | etaioinshrdlu wrote:
                | Well, one confounding factor is that CPU flops are more
                | generic, for any algorithm. GPU flops, as mentioned, work
                | better in tensor cases.
                | 
                | However, when we do have tensors, the GPU and CPU would
                | both work to their full potential, and thus the flops
                | comparison ought to be valid.
        
               | [deleted]
        
               | ianhorn wrote:
               | It wouldn't be a smooth app, but it would still render,
               | which would be fun to play with.
        
       | pizza wrote:
        | Interesting potential for dream journaling...
        
         | andai wrote:
         | Heard someone say the other day they use an AI face generator
         | to capture the faces of people they meet in their dreams.
        
       | roschdal wrote:
       | This is almost as nice as going outside in nature.
        
       | rainboiboi wrote:
        | Windows? Can we have Linux support please!
        
         | qayxc wrote:
         | https://nvlabs.github.io/SPADE/
        
         | programmarchy wrote:
         | This occurred to me, too. Might be able to get it running on
         | Wine or even Proton [1]
         | 
         | [1] https://github.com/ValveSoftware/Proton/
        
         | visarga wrote:
          | The neural net was probably trained on Linux; they put in extra
          | effort to package it for Windows.
        
       | riztaak wrote:
       | Abstract, abstract painting.
        
       | jskybowen wrote:
       | Curious if this could be used to generate UI mocks. Pass in a
       | whiteboard sketch and you get the mocks out.
        
         | sp332 wrote:
         | https://sketch2code.azurewebsites.net/
        
       | MrYellowP wrote:
       | Am I the only one who thinks this development is a bad idea?
       | 
       | Just extrapolate the obvious into the future. When everyone can
       | create good art, despite being actually completely unskilled and
       | untalented, then good art ceases to exist.
       | 
        | When everyone's an artist, no one's an artist. It doesn't matter
        | that we're not there _yet_; we _will_ get there eventually, and
        | at that point it's too late.
       | 
       | ... not that it's stoppable anyway.
        
         | [deleted]
        
         | imvetri wrote:
          | That is very true, my friend. I share the same opinion.
          | 
          | This AI art seems very menial to us, but not to fresh minds.
          | 
          | The same applies to our generation: when we were given tools to
          | make art, the previous generations would have thought the same.
        
         | stinos wrote:
         | _When everyone can create good art_
         | 
         | That is not what tools like this enable though? Will it not
         | still require at least a bit of artistic sense to get something
         | decent out of it? It just makes the technical aspect much
         | easier. Some will benefit. Not everyone. Unless you're
         | convinced there's a hidden artist in all of us?
         | 
          | It's just like the introduction of small portable cameras
          | decades ago (film/photo, doesn't matter), which got a lot
          | better in the past decade: did we suddenly see great
          | film/pictures being taken all over the place? No. We mainly saw
          | a ton of crap, bad shots, bad home movies, you name it. And
          | then some rather small fraction of people, who earlier did not
          | have the means to get quality material or were restricted in
          | other ways, got their hands on it and were able to
          | deploy/discover their inherent talent. They could perhaps have
          | done so in other ways, but not as easily.
        
         | csomar wrote:
          | It's already happening. All these logos, designs, and brochures
          | look the same now.
        
           | ramblerman wrote:
            | Would you call them good art?
        
         | Aerroon wrote:
         | We'll just get different art instead. Compositions etc.
        
         | tnecio wrote:
          | The same used to be said about photography. What matters most
          | in true art is not being skilled with a paintbrush or
          | Photoshop, but the ability to evoke different emotions and
          | thoughts.
        
           | sva_ wrote:
            | Exactly. Art has been past the point where drawing
            | photorealistic pictures was considered artistic talent for
            | over a century now.
        
         | Mulpze15 wrote:
          | When photography appeared, it made a lot of painters
          | obsolete... It forced many to rethink their skills beyond
          | faithfully reproducing nature.
         | 
         | That's the nature of progress and art.
        
         | Lornedon wrote:
         | How is "everyone can create art" a bad thing? That seems like
         | gatekeeping.
         | 
         | It's like arguing against grammar and spell checking, because
         | "if everyone can write good texts, then good texts cease to
         | exist".
         | 
         | Also, imagine what actually talented people can do with tools
         | like this.
        
       | kleiba wrote:
        | I find the demo video terrible, though. Of course it gets the
        | general idea across, but it's too fast-cut to see any details,
        | which - for something visual like painting - is kind of the whole
        | point.
        
       | gfiorav wrote:
       | You need an RTX card to run this, so don't bother with the
       | "Driver update" prompt if you don't have one!
        
       ___________________________________________________________________
       (page generated 2021-06-25 23:02 UTC)