https://old.reddit.com/r/Amd/comments/15t0lsm/i_turned_a_95_amd_apu_into_a_16gb_vram_gpu_and_it/

r/Amd - submitted 16 Aug 2023, 292 points (88% upvoted)
Discussion: I turned a $95 AMD APU into a 16GB VRAM GPU and it can run stable diffusion! The chip is 4600G. 5600G or 5700G also works. (self.Amd)
submitted 1 day ago by chain-77

The 4600G is currently selling for $95. It includes a 6-core CPU and a 7-CU GPU. The 5600G is also inexpensive - around $130, with a better CPU but the same GPU as the 4600G.

It can be turned into a 16GB VRAM GPU under Linux and then works much like an AMD discrete GPU such as the 5700 XT or 6700 XT. It therefore supports the AMD software stack (ROCm), and through it PyTorch and TensorFlow, so you can run most AI applications.

16GB of VRAM is also a big deal, since it beats most discrete GPUs: even cards with more compute power will hit out-of-memory errors when an application needs 12GB or more of VRAM. Speed is an issue, but it is better than out-of-memory errors. For Stable Diffusion, it can generate a 50-step 512x512 image in about 1 minute and 50 seconds, which is better than some high-end CPUs.

The 5600G was a very popular product, so if you have one, I encourage you to test it. I made some video tutorials for it: search for tech-practice9805 on YouTube and subscribe to the channel for future content, or see the video links in the comments. Please also follow me on X: https://twitter.com/TechPractice1

Thanks for reading!

all 82 comments, sorted by: best

[-]CasimirsBlake 37 points 1 day ago
It would be very interesting if Oobabooga could run GGML LLM models with a solution like this. I wouldn't expect inferencing to be FAST, but it's an option that would not require a discrete GPU.

[-]ShadF0x 13 points 15 hours ago
The better question would be "Is it appreciably faster than CPU inference?"

[-]lordofthedrones 2 points 11 hours ago
If it is not memory bandwidth intensive, it should be.

[-]upalse 3 points 15 hours ago
LLMs aren't all that great a fit, as they are primarily memory-bandwidth bound, i.e. almost no difference from a CPU if your memory bandwidth is a mere 12/25 GB/s. SD needs far more compute for inference, so an APU with slow memory helps.

[-]chain-77[S] 2 points 23 hours ago
Thanks for the suggestion!
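A minimal sketch of the PyTorch-on-ROCm setup OP describes, for readers who want to try it: it checks that the iGPU is visible and then runs the 50-step 512x512 generation OP timed. Assumptions not from the post: the ROCm build of PyTorch, the diffusers library, the HSA_OVERRIDE_GFX_VERSION workaround commonly reported for Renoir/Cezanne iGPUs, and an illustrative model id.

```python
# Hedged sketch: verify the APU is visible to the ROCm build of PyTorch, then run
# a 50-step 512x512 Stable Diffusion generation like OP's timing test.
import os
# Commonly reported workaround for unsupported APU targets (gfx90c); must be set
# before the ROCm runtime loads. Assumption, not confirmed in the post.
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "9.0.0")

import torch
from diffusers import StableDiffusionPipeline

print(torch.cuda.is_available())              # ROCm devices are exposed via the CUDA API
print(torch.cuda.get_device_name(0))          # e.g. "AMD Radeon Graphics"
print(torch.cuda.get_device_properties(0).total_memory / 2**30, "GiB reported")

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",         # illustrative model id
    torch_dtype=torch.float16,                # may need float32 on some setups
).to("cuda")
image = pipe("a photo of an astronaut riding a horse",
             num_inference_steps=50, height=512, width=512).images[0]
image.save("out.png")
```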
[+][deleted] 18 hours ago
[deleted]

[-]upalse 1 point 14 hours ago
KoboldCPP/llama.cpp run on system memory. llama.cpp can run either these days, including splitting layers over multiple CPUs+GPUs - which is what you normally do if you don't have a 24GB card to fit a large model.

[-]clicata00 (Ryzen 9 7900X | RX 6900 XT) 22 points 1 day ago
Can you go higher than 16GB of RAM?

[-]chain-77[S] 19 points 23 hours ago
It needs motherboard support to go higher. In Windows, they support higher amounts via unified memory. But AMD software support for Windows is not there yet.

[-]DHJudas (AMD Ryzen 5800x3D | Built By AMD Radeon RX 7900 XT) 21 points 19 hours ago
asrock boards... has basically no limit, i could even select 64GB

[-]gakphrt 2 points 4 hours ago
Asrock says their boards go to a maximum of 16GB. All of their docs contradict what you posted. Please correct me if I am wrong.

[-]DHJudas (AMD Ryzen 5800x3D | Built By AMD Radeon RX 7900 XT) [score hidden] 2 hours ago
There have been posts here even showing pictures of asrock boards with up to 64gb+ in the limit... the B550 board i recently shipped out to a customer had the same thing. Documentation is rarely ever accurate, it's just often carbon copied forward.

[-]chain-77[S] 1 point 9 hours ago
that's awesome! I have MSI, Asus, and Gigabyte boards, but their max is 16GB. Can you share a screenshot?

[-]DHJudas (AMD Ryzen 5800x3D | Built By AMD Radeon RX 7900 XT) [score hidden] 2 hours ago
i don't have an apu handy at the moment.. but i'd say anyone that gets an asrock board, check the setting PRIOR to update.. because i think the x570 board i recently setup with a temp apu to update to the latest bios for 5800x3D.... the setting was reduced on it. I can't find the screenshot i thought i had of me being able to select up to 64GB+

[-]bidet_enthusiast 1 point 7 hours ago
Which asrock boards? I have a couple of X570 Creator MOBOs coming and it would be awesome to be able to do this!

[-]DHJudas (AMD Ryzen 5800x3D | Built By AMD Radeon RX 7900 XT) [score hidden] 2 hours ago
i had the options on some x370, b350 boards, some b450 pro 4s, and the x570 phantom gaming 4. Though i think the most recent bios update for the 5800x3D i can't say for sure if it reduced it again or what.....

[-]Kionera (5800X3D | 6900XT MERC319) 6 points 19 hours ago
Did you look into SHARK? Most AMD users are using that for Stable Diffusion on Windows. No idea if you can run it on an APU though.
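On the GGML/llama.cpp layer-splitting point above: a hedged sketch of partial GPU offload through the llama-cpp-python bindings. Assumptions not from the thread: a llama-cpp-python build with GPU (ROCm/hipBLAS) support, and a placeholder model path and layer count.

```python
# Sketch only: push part of the model's layers onto the iGPU and keep the rest on the CPU,
# the split-offload approach described above. Requires a GPU-enabled llama-cpp-python build.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-7b.Q4_K_M.gguf",  # placeholder path, not from the thread
    n_gpu_layers=20,                             # how many layers to offload to the iGPU
    n_ctx=2048,
)
print(llm("Q: What is an APU? A:", max_tokens=64)["choices"][0]["text"])
```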
[-]chain-77[S] 1 point 9 hours ago
I know SHARK, but haven't tried it. On windows, it works for the ONNX methods, took around 3.5 min for 50 steps.

[-]Kionera (5800X3D | 6900XT MERC319) 1 point 8 hours ago
SHARK runs quite a bit faster, on Windows at least.

[-]Calarasigara (AMD | R5 5600/RX 6750XT) 13 points 1 day ago
I don't really use my PC for productivity but it's really cool an APU can do that. Props to you for finding that out!

[-]chain-77[S] 37 points 1 day ago
The video can be found at https://youtu.be/HPO7fu7Vyw4

[-]rael_gc 5 points 11 hours ago
I was expecting the steps to reproduce what the post title describes...

[-]cakemates 25 points 1 day ago
Now I'm wondering if this could be scaled to 200gb of ram and bigger models.

[-]Snotspat 19 points 1 day ago
Some day you should ask, so that you can stop wondering.

[-]Remarkable-Host405 1 point 8 hours ago
It's probably worth noting that this is only using ddr4, and is a lot slower than gddr5/6, unless I'm missing something

[-]Remarkable-Llama616 10 points 18 hours ago
That's about 0.55 iterations per second. For $95, can't really complain. Still beats CPU only by a good amount. Nice find!

[-]MaximumMeatPotato 8 points 1 day ago
Nice. Good job!

[-]lgdamefanstraight (>install gentoo) 10 points 23 hours ago
thanks, i can finally have AI girlfriend

[-]Zghembo (fanless 3700X | RX5700) 4 points 1 day ago
> can be turned into a 16GB VRAM GPU under Linux

how exactly?

[-]chain-77[S] 7 points 23 hours ago
Will post a tutorial soon!

[-]GruuMasterofMinions 3 points 16 hours ago
!remindme 2 weeks

[-]RemindMeBot 1 point 16 hours ago
I will be messaging you in 14 days on 2023-08-31 07:01:16 UTC to remind you of this link. 4 OTHERS CLICKED THIS LINK to send a PM to also be reminded and to reduce spam. Parent commenter can delete this message to hide from others.
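Referring back to the Windows/ONNX route chain-77 mentions above: a hedged sketch using the diffusers ONNX pipeline with the DirectML execution provider. Assumptions not from the thread: an onnxruntime-directml install, and an illustrative model id/revision.

```python
# Sketch of Stable Diffusion via ONNX Runtime + DirectML on Windows (the "ONNX methods"
# mentioned above). Requires the onnxruntime-directml package; untested on an APU here.
from diffusers import OnnxStableDiffusionPipeline

pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # illustrative model id
    revision="onnx",                    # ONNX export of the weights (assumption)
    provider="DmlExecutionProvider",    # DirectML backend for AMD GPUs/APUs on Windows
)
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=50).images[0]
image.save("out_onnx.png")
```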
[-]priv4t0r 1 point 14 hours ago
would be awesome, would really like to see a docker solution

[-]DoiF 1 point 14 hours ago
!remindme 2 weeks

[-]xander-mcqueen1986 10 points 1 day ago
Them apu are near gtx 1050 levels on the igpu side and do need 3200 to 3600mt dual channel ram to get the most bandwidth out of them. They do pack a punch for what they are. Give it a nice igpu overclock and have a nice little performance gain.

[-]Cryio (5700 XT | R5 3600 | 32 GB | X570) 5 points 12 hours ago
The 5700G is 1030 GDDR5/RX 550 level. A bit over half of a 1050.

[-]VancityGaming 10 points 1 day ago
You should post this over on r/LocalLLaMA and r/stablediffusion, also take a look at https://github.com/mlc-ai/mlc-llm if you haven't already. This project seems to share your goals about democratizing AI and getting benchmarks for this APU would be useful for them. Also, is there any way to use multiple APUs together? A cheap 8 socket APU board sure would be a nice alternative if it could work with large models.

[-]LongFluffyDragon 16 points 1 day ago
"cheap 8 socket board" = what? Also at that point you may as well just get a 1200$ GPU, it will vastly outperform 8 little APUs screaming and fighting over RAM access. Not that you could even do that to begin with.

[-]Peterowsky 14 points 23 hours ago
I was reading through quickly and I really hoped they meant 8 core APU, even though the 4600g only has 7 graphics cores, BUT

> is there any way to use multiple APUs together? A cheap 8 socket APU board

makes it pretty clear they actually want 8 sockets, which is some... next level "I have no idea how it works".

[-]nagi603 (5800X3D | RTX2080Ti custom loop) 6 points 16 hours ago
> next level "I have no idea how it works".

That's a given for every tech as hyped up as AI is currently.

[-]VancityGaming 0 points 16 hours ago
I know that's not how it works, I said it'd be nice if that was an option. Like if I said "it sure would be nice if someone drove over here and gave me a million dollars" I'm not actually expecting that to happen.

[-]marxr87 2 points 12 hours ago
best youre going to get is 2. But someone above said asrock boards would allow selecting up to 64gb of vram. So in the future, it COULD make sense to go down that route. 128gb of vram would be pretty dope, even if it is slow, and they should also be fairly power efficient.
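A quick back-of-the-envelope check of the dual-channel RAM bandwidth point made above. The 64-bit (8-byte) bus width per DDR4 channel is an assumption about standard desktop platforms; figures are theoretical peaks.

```python
# Theoretical peak bandwidth of dual-channel DDR4, for comparison with discrete-GPU GDDR.
def ddr_bandwidth_gb_s(mt_per_s: int, channels: int = 2, bytes_per_transfer: int = 8) -> float:
    return mt_per_s * 1e6 * channels * bytes_per_transfer / 1e9

print(ddr_bandwidth_gb_s(3200))  # ~51.2 GB/s (DDR4-3200, dual channel)
print(ddr_bandwidth_gb_s(3600))  # ~57.6 GB/s (DDR4-3600, dual channel)
# Mid-range discrete cards sit in the hundreds of GB/s, which is why the iGPU is
# bandwidth-limited even when the "VRAM" pool itself is large.
```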
[-]nuked24 9 points 1 day ago
Multi socket boards require links between CPUs, consumer platforms only have links to the chipset.

[-]evilgeniustodd 11 points 19 hours ago
"Please also follow me on X"

Lost me right there.

[+]Silverfoxcrest 5 points 1 day ago
Too bad windows can't do that.

[-]Calm-Zombie2678 2 points 23 hours ago
*doesn't

[-]Silverfoxcrest 1 point 14 hours ago
Well... What's the difference?

[-]Calm-Zombie2678 2 points 14 hours ago
It could do, they just choose not to

[-]Silverfoxcrest 1 point 11 hours ago
I wonder why, it would be a +1 for windows. And I would rather have apu support in parallel with gpu instead of a new Windows interface every couple of years.

[-]Utinnni (5600x, TUF Gaming B550-Plus, GTX 1080) 5 points 20 hours ago
What the hell is x

[-]The_HenryUK 2 points 10 hours ago
New "name" for Twitter

[-]psychoacer 2 points 19 hours ago
Has everyone learned ball grid array soldering this past month or something?

[-]Thomas_Jefferman 3 points 18 hours ago
It's not vram, but shared ram. This has been a thing for igpus for forever.

[-]marxr87 1 point 12 hours ago
ah, that's what i was wondering. but it is still neat if it lets you bypass max vram errors i guess.

[-]ms--lane 1 point 12 hours ago
It's not exactly hard, just not cheap to get into.

[-]Shining_prox 2 points 12 hours ago
How much does a Radeon VII go for used nowadays?

[-]ms--lane 2 points 11 hours ago
Still expensive since they do 1:4 double precision.

[-]v3sselofwrath 2 points 4 hours ago
Interesting, I was about to sell my 5700G as I popped in a 5950X upgrade to the existing system along with a Radeon 7600, but may now have a reason to keep it and get a cheap A520 motherboard.

[-]ChinoGambino 1 point 20 hours ago
Wow. Spam on a budget.
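On the "not vram, but shared ram" point above: a hedged sketch that reads what the Linux amdgpu driver reports as the dedicated carve-out versus the OS-managed shared (GTT) pool. The sysfs file names are assumptions based on the usual amdgpu layout, and the card index may differ per system.

```python
# Sketch: compare the dedicated "VRAM" carve-out with the shared GTT pool on Linux/amdgpu.
from pathlib import Path

dev = Path("/sys/class/drm/card0/device")        # card index may differ per system
for name in ("mem_info_vram_total", "mem_info_gtt_total"):
    node = dev / name
    if node.exists():
        print(f"{name}: {int(node.read_text()) / 2**30:.1f} GiB")
    else:
        print(f"{name}: not exposed by this driver/kernel")
```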
[-]Fruitypuff -2 points 22 hours ago
Ahhh sht here we go again with a GPU shortage and APU shortage for the AI Waifus

[-]AshleySimp91 -2 points 19 hours ago
Plagiarism on a budget

[-]threeeddd 0 points 16 hours ago
I started out with a 2060 6gb, I thought that was fast enough when I began using stable diffusion. I ran into vram limits after a while, then used a 2080 super 8gb which was almost twice as fast with the same power draw. Now I'm on a 2080ti, still running out of vram. At the same time, it takes minutes to generate larger images. Mind you, a 2080ti in FP16 performance is close to a 3080. I'm at the point where I need more vram, and more performance. Luckily, nvlink works with the 2080ti.

I'm hoping the 7900xtx can get some good performance in windows with ROCm. Nvidia not providing more vram kinda sucks this gen.

[-]janos666 1 point 7 hours ago
RTX3090 24Gb cards also have 1x NVLink. :)

[-]threeeddd 1 point 4 hours ago
Definitely, would love to have that massive vram amount with 2x shared 3090 vram. Power draw is a concern though, especially with the 4000 series offering much better efficiency. I have the 2080ti undervolted to 160w, with little lost in performance.

I'm just going to wait until the 4080 comes down in price, it's just a matter of time. And 2080tis are cheaper than ever on the used market. 2x 2080ti are faster than a 4080 in fp16 performance, with a total of 22gb vram over nvlink as well. I'll take a 7900xtx once they get some good performance in stable diffusion. Waiting for the 4080 to get down to 800 dollars, probably by the end of the year.

[-]tecedu [score hidden] 31 minutes ago
I would say if you are serious about it go for the 4090, it's the best for stable diffusion.

[-]windozeFanboi 0 points 5 hours ago
I honestly thought you modded and added 16GB VRAM worth of GDDR chips, somehow. Anyway, yes, your point is correct. AMD APUs are underappreciated. AMD is partly to blame. The 6800U / 7840U should be treated better. Particularly the 7840U should be very strong at compute.

I hope Zen 5 rectifies this, and gives us something similar to Apple's wide 256-bit(?) LPDDR5 memory interface. Current CPUs are monsters and can barely utilize 60GB/s on a single core, while total DDR5, super overclocked, reaches MAYBE 100GB/s. Even nVidia's PATHETIC 4060 Ti has 300GB/s of bandwidth. If Zen 5 and Intel's Lunar Lake and beyond start working with more channels on consumer CPUs, that would be nice. Double DDR5 channels + 30% higher frequencies in 2 years = 200GB/s bandwidth.

Apple's systems are not magic. And they're actually rather weak GPU-wise compared to Nvidia/AMD, though <100W.

[-]cbutters2000 -3 points 17 hours ago
This is pretty neat, although, if the system is $500 total to throw this together, you might be better off just buying a $200-400 GPU if you already have a pc.
You can do a lot with AI even on a 4GB RTX 3050. Sure, if you're using something like this you'd have to use models that fit into less ram, but there is still a whole lot you can do with "low" ram. Also you could just get a 4060 16GB version for basically the same price as the 4600G system (and you could game pretty well with it).

[-]pleasebecarefulguys 3 points 13 hours ago
so i dont need nothing else just GPU alone is enough lmao

[-]nuliknol 2 points 10 hours ago
totally concur. a $500 GPU would have like 40 CUs?, but in RDNA 3.0 that is like the equivalent of 70 CUs of the Zen 2 era part he bought. And for the desktop PC you can even find one in a scrap yard, after all you only need a reachable PCI slot. Even PCIe 2.0 will be fine for this task. You should always buy the newest hardware available, and pay the price. 6 nanometer (RDNA3) against 14 nanometer (Zen2) is twice the gain just because you can stick in twice as many transistors and use half the power.

[-]marxr87 1 point 12 hours ago
If you want to recommend a budget discrete option it's going to be a 2060 or 3060 12gb. Which is probably still better than this in most cases. However, I am really curious to see if this can be done for laptop cpus with integrated gfx, like the 6800H. Because THAT would be really interesting.

[-]CNR_07 (R5 3600 | ATI Radeon HD8570 | AMD Radeon RX 6700XT | SuSE Linux) 1 point 11 hours ago
Used Radeon MIs are pretty cheap and fast for SD.

[-]rael_gc -1 points 11 hours ago
Initially I got excited for this. But there is no tutorial, even the video is an influencer-wannabe video.

[-]chain-77[S] 2 points 11 hours ago
I will upload it soon. My other videos include detailed steps for AMD software.

[-]rael_gc 1 point 11 hours ago
I've found it: in BIOS, disable CSM, check if boot works, then increase video memory.

[-]mule_roany_mare 1 point 22 hours ago
This is really interesting. I'm really hoping the market rewards this upcoming generation of powerful APUs & they continue to grow. There is a lot of good in good enough.

There's also rumor (an AMD patent & a whitepaper or two) that AMD is supplementing their raytracing tech with something new. Current RT uses BVH trees which are pretty useless for everything but RT, but the new technique uses massive matrix multiplication which has way more potential to be useful outside RT.

[-]Beautiful_Car8681 1 point 22 hours ago
I'm a layman on the subject. Could this be done with DaVinci Resolve for video rendering? It would be interesting because some Fusion effects consume a lot of vram.
[-]CNR_07 (R5 3600 | ATI Radeon HD8570 | AMD Radeon RX 6700XT | SuSE Linux) 1 point 11 hours ago
What does DaVinci Resolve use for GPU compute? OpenCL?

[-]BlueSwordM (Boosted 3700X/RX 580 Beast) 1 point 18 hours ago
I wonder how fast a Zen 4 Phoenix APU would be with 8200MT/s VRAM...

[-]ThirtyFPSgamer 1 point 14 hours ago
I use my R5 5600G's iGPU to play Warzone at a stable 60, VRAM set to 8gb

[-]marxr87 1 point 13 hours ago
Please tell me this can be done on amd laptop chips. I have a 6800H. That would be an even bigger deal imo, since you can't upgrade components. I was already looking for a reason to put 32gb in my system...

[-]Ruin-Capable 1 point 9 hours ago
This sounds very similar to why Apple Silicon SoCs are so good for running large LLMs. I have a 64GB MacBook Pro, and it runs llama-2-70b much better than my Intel based system with a 24GB graphics card because I can load the entire model into VRAM.

[-]flibbledeedo [score hidden] 1 hour ago
https://github.com/GPUOpen-LibrariesAndSDKs/AGS_SDK/
https://gpuopen.com/ags-sdk-5-4-improves-handling-video-memory-reporting-apus/

Which says: "For APUs, this distinction is important as all memory is shared memory, with an OS typically budgeting half of the remaining total memory for graphics after the operating system fulfils its functional needs. As a result, the traditional queries to Dedicated Video Memory in these platforms will only return the dedicated carveout - and often represent a fraction of what is actually available for graphics. Most of the available graphics budget will actually come in the form of shared memory which is carefully OS-managed for performance."

The implication seems to be that you can have an arbitrary amount of graphics RAM, which would be appealing for AI use cases, even though the GPU itself is relatively underpowered. Still, the question remains open: how do you precisely control APU/GPU memory allocation on Linux, and what are the limitations?

[-]HyperShinchan (R5 5600X | RTX 2060 | 32GB DDR4 - 3733 CL18) [score hidden] 1 hour ago
Well, Stable Diffusion actually scales pretty well with low VRAM GPUs. Right now, using a modified ComfyUI workflow, my RTX 2060 can generate a 40-step DPM2 sampler 1024x1024 SDXL 1.0 picture in less than two minutes (around 115 seconds).
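One rough, empirical way to approach the open question above about practical allocation limits is to probe how much the iGPU will actually hand to PyTorch. This assumes a working ROCm + PyTorch setup; the result will depend on the BIOS UMA setting and the driver's GTT configuration.

```python
# Sketch: allocate 256 MiB chunks on the GPU until the driver refuses, to find the
# practical ceiling for this APU's memory configuration.
import torch

chunks, total_gib = [], 0.0
try:
    while True:
        chunks.append(torch.empty(256 * 2**20, dtype=torch.uint8, device="cuda"))
        total_gib += 0.25
except RuntimeError as err:
    print(f"Allocation stopped near {total_gib:.2f} GiB: {err}")
```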