[HN Gopher] Proxmox VE: Import Wizard for Migrating VMware ESXi VMs
       ___________________________________________________________________
        
       Proxmox VE: Import Wizard for Migrating VMware ESXi VMs
        
       Author : aaronius
       Score  : 157 points
       Date   : 2024-03-27 16:33 UTC (6 hours ago)
        
 (HTM) web link (forum.proxmox.com)
 (TXT) w3m dump (forum.proxmox.com)
        
       | itopaloglu83 wrote:
        | Very swift move by Proxmox. For context, VMware recently
        | increased their prices by as much as 1200% for some customers.
        
         | rwmj wrote:
         | Tons of products like this have existed for years. Virt-v2v
         | (which I wrote) since 2007/8, Platespin, Xen XCP, AWS's
         | tooling, Azure Migrate etc.
        
           | itopaloglu83 wrote:
            | Yes, that's true. But this is not about the product, it's
            | about the business practices of Broadcom. They tend to
            | make sharp price increases when they purchase a product
            | line.
           | 
           | ServeTheHome talked about this a while ago.
           | https://youtu.be/peH4ic7g5yc
        
       | whalesalad wrote:
        | I did this recently and it was honestly a walk in the park. I
        | was quite pleasantly surprised when all my VMs just booted up
        | and resumed work as normal. The only thing I was worried about
        | was the MAC addresses used for dedicated DHCP leases, but all
        | of that "just worked" too!
        
       | Denote6737 wrote:
       | Proxmox striking whilst the iron is still hot. Impressive.
        
       | lelandbatey wrote:
       | Important because VMWare's been acquired by Broadcom (November
       | 22, 2023) and Broadcom's been turning the screws on customers to
       | get more money. Many folks are looking for alternatives. More
       | context:
       | 
       | 2024/02/26 Can confirm a current Broadcom VMware customer went
       | from $8M renewal to $100M
       | https://news.ycombinator.com/item?id=39509506
       | 
       | 2024/02/13 VMware vSphere ESXi free edition is dead
       | https://news.ycombinator.com/item?id=39359534
       | 
       | 2024/01/18 VMware End of Availability of perpetual licensing and
       | associated products https://news.ycombinator.com/item?id=39048120
       | 
       | 2024/01/15 Order/license chaos for VMware products after Broadcom
       | takeover https://news.ycombinator.com/item?id=38998615
       | 
       | 2023/12/12 VMware transition to subscription, end of sale of
       | perpetual license https://news.ycombinator.com/item?id=38615315
        
       | blaerk wrote:
        | I really hope the crazy price increases on VMware products
        | will end the use of ESXi and the rest of the vSphere suite. It
        | is one of the worst applications and APIs I have ever had the
        | displeasure of working with!
        
         | candiddevmike wrote:
         | VMware has a track record of pretty great reliability across a
          | _vast_ array of hardware. Yes, the APIs suck, but they're a
          | case study on tech debt: vSphere is basically the Windows
          | equivalent of datacenter APIs. They chose the best technology
          | at the time (2009, which meant SOAP, PowerShell, XML, etc.) and
         | had too much inertia to rework it.
        
           | mianos wrote:
            | Not to mention how flaky it is at scale. There is always
            | some VMware guy who replies to me saying how good it is,
            | but if you have thousands of VMs it is a random crapshoot,
            | something you just don't see with, say, AWS or Azure at
            | similar scale. It reeks of old age and hack upon hack over
            | many years, and that is saying something when compared to
            | AWS.
        
         | zettabomb wrote:
         | I can't concur. VMware was the leader in virtualization
         | technology for a long time, and honestly nothing is quite as
         | simple to start with as ESXi if you've never used a type 1
         | hypervisor before. I'm not so familiar with the APIs, so
         | perhaps you're correct in that sense.
        
           | nolok wrote:
           | > nothing is quite as simple to start with as ESXi if you've
           | never used a type 1 hypervisor before
           | 
            | Not sure where ESXi is at lately on that level, but the
            | latest Proxmox is really, really simple to start with if
            | you've never used a hypervisor. You boot from the USB
            | drive, press yes a few times, open the ip:port they give
            | you, and then you can click "create VM", next next next,
            | here is the ISO to boot from, and that's it.
            | 
            | Any tech user who has some vague knowledge of virtual
            | machines, or has even run VirtualBox on their computer,
            | could do it, and the more advanced functions (from proper
            | backups and snapshots to multi-node replication and load
            | balancing) are absurdly simple to figure out in the UI.
            | 
            | I can't speak to the performance or quality of one against
            | the other, but in pure approachability Proxmox is doing
            | very, very well.
        
             | mavhc wrote:
             | also does zfs raidz boot in the installer
        
               | MrDarcy wrote:
               | Also does ceph in the GUI for near instant live
               | migrations.
        
         | fh973 wrote:
         | I really hope that the price increase creates a business
         | opportunity for new technology. This space has been plagued by
         | subpar "free" alternatives (Openstack, Kubernetes) for a
         | decade.
        
         | SV_BubbleTime wrote:
         | We went from $66 last year to $3600 this year.
         | 
         | There won't be another year.
        
         | kazen44 wrote:
          | I would disagree with you there, especially because there is
          | very little on the SDN front which matches NSX-T in terms of
          | SDN capabilities. This is something in which VMware has been
          | ahead; the only other people with the same capabilities seem
          | to be the hyperscalers.
        
           | c0l0 wrote:
           | Take a look at Proxmox SDN features:
           | https://pve.proxmox.com/pve-docs/chapter-pvesdn.html (some of
           | it is still in beta, I think).
           | 
           | I think it comes _pretty_ close - close enough for probably
           | most but the very largest of users, who, I think, should
           | probably have tried to become hyperscalers themselves,
           | instead of betting the farm and all the land around it on
           | VMware (by Broadcom).
        
             | kazen44 wrote:
              | The thing it is mainly missing is multi-tenancy
              | self-service (the IPAM integration seems very nice
              | though).
              | 
              | NSX allows you to create separate clusters which host
              | the VMs that run the routing and firewalling
              | functionality.
        
           | oneplane wrote:
           | NSX-T and what hyperscalers do is essentially orchestration
           | of things that already exist anyway. The load balancing in
            | NSX is mostly just some OpenResty and Lua, which has been
           | around for quite a while. Classic Q-in-Q and bridging also
           | does practically all of the classic L2 & L3 networking that
           | tends to be touted as 'new', while you could even do that
           | fully orchestrated when Puppet was the hot new thing back in
           | the day.
           | 
           | Some things (that were created before NSX) may have come from
           | internet exchanges and hyperscalers, like openflow, P4, and
           | FRR, but were really not missing parts that were required to
           | do software defined networking. If anything, the only thing
           | you really needed for SDN was Linux, and the only real
           | distinction between SDN and non-SDN was hardwired ASICs in
           | the network fabric (well, not hard-hardwired, but with
           | limited programmability or 'secret' APIs).
        
         | moondev wrote:
         | By application do you mean vCenter? It's in an entirely
         | different league than proxmox.
         | 
         | https://i0.wp.com/williamlam.com/wp-content/uploads/2023/04/...
         | 
         | https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....
        
           | MrDarcy wrote:
            | It's not in a different league. I've used both in
            | production. As others have said, vSphere breaks down with
            | thousands of VMs, and worse, the vSwitch implementation is
            | buggy and unreliable as soon as you add more than a couple
            | to a cluster.
        
             | moondev wrote:
             | > the vSwitch implementation is buggy and unreliable as
             | soon as you add more than a couple to a cluster.
             | 
             | Next time try dSwitch (distributed switch) instead of
             | vSwitch. It's designed for cluster use and much more
             | powerful (and easier to manage across hosts). Manually
             | managing vSwitches across a cluster sounds like torture.
        
         | oneplane wrote:
          | The VMware APIs are indeed pretty bad, even the ones on their
         | modern products for some reason (i.e. NSX etc.) where they did
         | adopt more modern methods but still managed to pull a Microsoft
         | with 'one API for you, a different API for us'.
         | 
         | Being pretty bad doesn't mean they don't work of course, but
         | when the best a product has to offer is clickops, they have
         | missed the boat about 15 years ago.
        
       | zer00eyz wrote:
       | I have been running proxmox at home for a few months now.
       | 
        | It has been, to say the least, an adventure. And I have
        | nothing but good things to say about Proxmox at this point.
        | It's running not only my home-related items (MQTT, Home
        | Assistant), it also plays host to some of the projects I'm
        | working on (Postgres, Go apps, etc.) rather than running some
        | sort of local dev.
        | 
        | If you need to exit VMware, Proxmox seems like a good way to
        | go.
        
         | rcarmo wrote:
         | I've been doing that for almost two years now, including ARM
         | nodes (via a fork). It's been awesome, and even though I am
         | fully aware Proxmox does not match the entire VMware feature
         | set (I used to migrate VMware stuff to Azure), it has been a
         | game changer in multiple ways.
         | 
         | Case in point: just this weekend a drive started to die on one
         | of my hosts (I still use HDDs on older machines). I backed up
         | the VMs on it to my NAS (you can do that by just having a Samba
         | storage entry defined across the entire cluster), replaced the
         | disk, restored, done.
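[Ed.: the backup-and-restore flow described above maps to two commands on the PVE shell. A minimal sketch, assuming a VM with ID 101 and a cluster-wide storage entry named "nas" (both placeholders):]

```shell
# Back up the VM to the Samba/NAS storage entry (snapshot mode avoids downtime)
vzdump 101 --storage nas --mode snapshot --compress zstd

# After replacing the disk, restore the archive under the same VM ID
# (<timestamp> stands in for the real archive name)
qmrestore /mnt/pve/nas/dump/vzdump-qemu-101-<timestamp>.vma.zst 101
```

The same operations are also available from the Backup tab in the web UI.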
        
         | irusensei wrote:
          | I appreciate projects like Proxmox, but it must also be said
          | that you can achieve the same functionality, sans the UI,
          | with tools available on most Linux distributions: libvirt,
          | lx{c,d}, podman, etc.
        
           | zer00eyz wrote:
           | True...
           | 
            | And Proxmox is just a skin on LXC and QEMU/KVM.
            | 
            | I will say that, as I have just started playing with the
            | LXC API, having the Proxmox UI as a quick and easy visual
            | cross-check has been lovely.
            | 
            | Podman is an amazing alternative to Docker; can't say
            | enough good things about it.
        
           | RamRodification wrote:
           | A big one hiding in that "etc" I think is Ceph. Proxmox has a
           | very nice UI for setting it up easily.
        
         | eddieroger wrote:
          | I think "adventure" is how I'd put it, too. What I found
          | most surprising was the difference in defaults between the
          | two. ESXi gave me what I considered pretty good defaults,
          | where Proxmox's were more conservative or generic
          | (struggling to find the right word). For example, I was
          | surprised that I had to pick an option for CPU type instead
          | of it defaulting to host, which I would have expected. That
          | said, I never checked on ESXi, but I never had reason to
          | look into performance disparities there.
         | 
          | Once I found my footing, I really grew to like it, expanding
          | my footprint to use their backup server, too. Proxmox makes
          | you work for it, but it's worth it.
        
           | SpecialistK wrote:
           | > I was surprised that I had to pick an option for CPU type
           | instead of it defaulting to host
           | 
           | I believe the rationale for this is to prevent issues when
           | migrating to different hosts that may not have the same CPU
           | or CPU features. Definitely a more "conservative" choice -
           | maybe it should be a node-wide option or only default to a
           | generic CPU type when there is more than 1 node.
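[Ed.: when migration compatibility is not a concern (e.g. a single-node homelab), the default CPU type can be overridden per VM; a sketch from the PVE shell, with 100 as a placeholder VM ID:]

```shell
# Expose the host CPU's full feature set to guest 100
qm set 100 --cpu host
```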
        
         | d416 wrote:
          | Your experience is very relatable. My first Proxmox
          | adventure began with installing Proxmox 8 on two Hetzner
          | boxes: one with a CPU, one with a GPU. I spent two straight
          | weekends on the CPU box, and just when I was about to give
          | up on Proxmox completely I had a good night's sleep and
          | things finally 'clicked'. Now I'm drinking the Proxmox
          | Kool-Aid 100% and making it my go-to OS.
          | 
          | For the GPU box I completely abandoned the install after
          | attempting the gymnastics around GPU passthrough. I like
          | Proxmox but I'm not a masochist. Looking forward to the day
          | when that just works.
        
       | shrubble wrote:
        | I am researching whether to buy puts on AVGO (Broadcom, owner
        | of VMware) since I believe their VMware revenue will crater in
        | 12 months or so. They also took on $32 billion in debt to buy
        | VMW, which will tank their stock price, I think.
        
         | candiddevmike wrote:
          | I'd exercise caution; in my experience, it'll take years for
          | companies to transition from VMware to somewhere else. In
          | the interim, their revenue will most likely pop as they're
          | squeezing the shit out of these unlucky souls.
        
           | mvdwoord wrote:
            | I concur. Being close to the fire, I can say it will take
            | years for large organizations to move off their VMware
            | stacks. Inertia of large organizations is a thing, but
            | mostly there are so many custom integrations with other
            | systems, lots of them tied up in the vSphere stack.
            | 
            | SDN is one thing, but the amount of effort put into vROPS
            | / vRA / vRO etc. is not easily replaced. Workflows
            | integrating with backups, CMDB, IAM, security and what not
            | have no catch-all migration path with some import wizards.
           | 
           | Meanwhile, Broadcom will happily litigate where necessary and
           | invoice their way to a higher stock price.
           | 
           | $0.02
        
         | bityard wrote:
         | All of AVGO/Broadcom's moves with VMware have been to keep
         | revenue somewhat steady by focusing on their biggest customers
         | locked into their ecosystem, while drastically cutting back
         | everything else to lower expenses. This should produce
         | excellent short-term financial results which the market will
         | very likely reward with a higher stock price over the next year
         | or two. The board and C-suites know what they are doing.
         | 
          | Of course, destroying the trust they had with their
          | customers means the long-term prospects of VMware are not so
          | good.
        
           | gonzo wrote:
            | So they'll sell the husk of VMware back to Dell when
            | they're done.
        
         | gruturo wrote:
          | It never takes as little time as you (or others, myself
          | included) think it should. Big companies have a lot of
          | inertia, and changing anything which is working today, even
          | if it saves a lot, attaches your name to the risk it will
          | fail horribly, so you'd be reluctant to suggest it, esp.
          | since it's usually not your own money (your budget maybe,
          | but not _your_ own money).
         | 
         | Broadcom knows this very well and likely turned the price screw
         | exactly right - just before the breaking point for the critical
         | mass of their customers.
         | 
          | What I think will lead to the eventual implosion of VMware's
          | market share, on a longer timescale, is the removal of free
          | ESXi. Many people acquire familiarity in small-scale/home/
          | demo labs or PoC prototypes, then they recommend going with
          | what they're familiar with. This is what got Microsoft where
          | they are now: always giving big discounts to students and
          | never going too hard on those running cracked copies. They
          | saw it as an investment and they were bloody right. If the
          | product had been better it would completely dominate now,
          | but even as shoddy as it is, it's a huge cash cow.
        
       | adr1an wrote:
        | For the sake of completeness, XCP-ng is an alternative target
        | for migrating VMware ESXi VMs, too!
        
       | rwmj wrote:
        | Does this do the hard bit, i.e. installing virtio drivers
        | during conversion?
        
         | justinclift wrote:
         | Oh, that would be a smart move. :)
         | 
         | If it doesn't, any idea if it's something they could automate
         | easily?
        
           | rwmj wrote:
            | In recent versions:
            | 
            |     virt-customize -a disk.img --inject-virtio-win <METHOD>
            | 
            | https://libguestfs.org/virt-customize.1.html
           | 
           | However they'll also be missing out on all the other stuff
           | that virt-v2v does.
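[Ed.: for comparison, a hedged sketch of what a full virt-v2v conversion straight from vCenter/ESXi can look like; hostnames, credentials, and the guest name below are placeholders:]

```shell
# Convert a guest out of vCenter into local storage; virt-v2v injects
# virtio drivers and adjusts the guest configuration along the way
virt-v2v -ic 'vpx://administrator@vcenter.example/Datacenter/esxi1?no_verify=1' \
         guest-vm -o local -os /var/tmp
```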
        
         | bityard wrote:
          | All of the most popular Linux distros tend to have the
          | virtio drivers installed by default.
        
           | rwmj wrote:
            | Not in the initramfs, which is rather important if you
            | want them to boot without having to use slow emulated IDE.
            | Then there are Windows guests.
        
       | tiberious726 wrote:
        | Anyone try to replace vSphere with the high-availability
        | add-on to RHEL?
        
       | matthew-wegner wrote:
       | I'm in game development, and I run ESXi for two reasons:
       | 
        | * Unmodified macOS guest VMs can run under ESXi (if you're
        | reading this on macOS, you have an Apple-made VMXNet3 network
        | adapter driver on your system--see /System/Library/Extensions/
        | IONetworkingFamily.kext/Contents/PlugIns/AppleVmxnet3Ethernet.kext )
       | 
        | * Accelerated 3D has reasonable guest support, even as pure
        | software. You wouldn't want to work in one of those guest VMs,
        | but for any sort of build agent it should be fine, including
        | opening e.g. the Unity editor itself in-VM to diagnose
        | something.
       | Does anyone know where either of these things stand with Proxmox
       | today?
       | 
        | I imagine a macOS VM under Proxmox is basically a Hackintosh
        | with e.g. OpenCore as the bootloader?
        
         | mysteria wrote:
          | The SPICE backend has decent OpenGL 3D support with software
          | rendering; it's slow, but it works for simple graphics. It's
          | intended for 2D, so the desktop's pretty fast IMO. That only
          | works for Linux and Windows guests though, not Apple ones.
          | 
          | macOS VMs do work in Proxmox with a Hackintosh setup, but
          | you pretty much have to pass through a GPU to the VM if
          | you're using the GUI. Otherwise you're stuck with the Apple
          | VNC remote desktop, and that's unbearably slow.
        
           | zozbot234 wrote:
            | For paravirtualized hardware rendering you can use
            | virtio-gpu. In addition to Linux, a Windows guest driver
            | is available, but it's still highly experimental and not
            | very easy to get working.
        
         | bonton89 wrote:
          | The lack of 3D paravirtual devices is a real sore spot in
          | KVM. To my knowledge, virgl still isn't quite there, but it
          | is all there is so far in this space. VMware has the best
          | implementation IMO and everything else is a step down.
        
         | rufugee wrote:
          | Does this work with VMware Workstation as well? I'd love to
          | run macOS in a VM on my Linux desktop for the few apps I
          | have to use on macOS...
        
           | zerkten wrote:
           | It has in the past for me, but I haven't run it since 2021.
        
         | moondev wrote:
          | Nested virtualization also works great on ESXi for macOS
          | guests (so you can run Docker Desktop if so inclined). I
          | believe this is possible with Proxmox as well with CPU=host,
          | but I have not tried it.
         | 
          | For graphics, another cool thing is Intel iGPU PCI
          | passthrough - I have had success with this when running ESXi
         | mini https://williamlam.com/2020/06/passthrough-of-integrated-
         | gpu...
        
         | t3rra wrote:
         | nobody asked.
        
         | oneplane wrote:
         | Apple has been adding a lot of virt and SPICE things IIRC. Some
         | of it isn't supported in VMware (it lacks a bunch of standard
         | virt support), but the facilities are growing instead of
         | shrinking which is a good sign.
         | 
         | On Proxmox you can do the same. You're going to need OpenCore
         | if you're not on a Mac indeed. But if you're not on a Mac
         | you're breaking the EULA anyway.
        
         | mrpippy wrote:
         | Note that macOS guest support ended with ESXi 7.0:
         | https://kb.vmware.com/s/article/88698
         | 
          | Running macOS is only supported/licensing-compliant on
          | Apple-branded hardware anyway, and with the supported Intel
          | Macs getting pretty old this was inevitable.
        
           | moondev wrote:
            | Mac mini 2018 is still the best Mac for VMs:
            | 
            |     6 cores / 12 threads
            |     64GB DDR
            |     NVMe
            |     4 Thunderbolt 3 ports for PCI expansion
            |     10GbE onboard NIC
            |     boots ESXi
            |     boots Proxmox
            |     boots or virtualizes Windows
            |     boots or virtualizes Linux
            |     boots or virtualizes macOS
            |     iGPU passthrough
            |     supports nested virt
        
       | bluedino wrote:
       | It's the ecosystem.
       | 
       | Sure, your organization is spending another million dollars on
       | VMware this year, but what are the options?
       | 
       | * Your outsourced VMware-certified experts don't actually know
       | that much about virtualization (somehow)
       | 
       | * Your backup software provider is just now researching adding
       | Proxmox support
       | (https://www.theregister.com/2024/01/22/veeam_proxmox_oracle_...)
       | 
       | * A few years ago you 'reduced storage cost and complexity' by
       | moving to VMware vSAN, now you have a SAN purchase and data
       | migration on your task list
       | 
       | * The hybrid cloud solution that was implemented isn't compatible
       | with Proxmox
       | 
       | * The ServiceNow integration for VMware works great and is saving
       | you tons of time and money. You want to give that up?
       | 
       | * Can you live without logging, reporting, and dashboards until
       | your team gets some free time?
        
         | zozbot234 wrote:
         | With a million dollar per year to play with, you should
         | ultimately be able to replace all of these. Especially since
         | it's not like Proxmox is lacking its own third-party support
         | options (but it being built on FLOSS tech still leaves you with
         | a lot more flexibility).
        
         | mavhc wrote:
          | Your outsourced experts are actually just people with
          | Google.
          | 
          | Proxmox on ZFS means ZFS snapshot send/receive, simple. I
          | made my own immutable ZFS backup system for PS5.
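[Ed.: the snapshot send/receive workflow mentioned above is only a few commands; a sketch with made-up pool, dataset, and host names:]

```shell
# Snapshot the dataset backing the guests, then replicate it off-box
zfs snapshot rpool/data@2024-03-27
zfs send rpool/data@2024-03-27 | ssh backup-host zfs receive -F tank/backup/data

# Subsequent runs only ship the delta between two snapshots
zfs send -i rpool/data@2024-03-27 rpool/data@2024-03-28 | \
    ssh backup-host zfs receive tank/backup/data
```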
        
         | oneplane wrote:
         | All of those points would also assume:
         | 
         | * You are big enough to need that and actually implement it
         | 
         | * You have the budget to do so
         | 
         | * You actually have the need to do that in-house
         | 
         | If you are at that scale but you don't have the internal
         | knowledge, you were going to get bitten anyway. If you are not
         | at that scale, you were already bitten and you shouldn't have
         | been doing it anyway.
        
       | Helmut10001 wrote:
        | I've been using Proxmox at home for 5 years now, mostly
        | Docker nested in unprivileged LXCs on ZFS for performance
        | reasons. I love the reliability; Proxmox has never let me
        | down. They churn out constant progress without making too
        | much noise. No buzzwords, no advertising, just a good
        | reliable product that keeps getting better.
        
         | dusanh wrote:
          | Unprivileged LXCs? Interesting, I thought containers would
          | require a privileged LXC. At least, that is my takeaway from
          | trying to run Podman in a nesting-enabled but unprivileged
          | LXC under a non-root user. I kept running into
          | 
          | > newuidmap: write to uid_map failed: Operation not permitted
          | 
          | I tried googling it and tried some of the solutions, but
          | reached the conclusion that it's happening because the LXC
          | is not privileged.
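[Ed.: for reference, rootless Podman in an unprivileged LXC generally needs nesting enabled on the container plus a sub-UID/GID range delegated to the non-root user inside it. A sketch of the relevant pieces; the container ID 200 and the user name are placeholders, and this may not cover every setup:]

```
# /etc/pve/lxc/200.conf (on the PVE host)
unprivileged: 1
features: nesting=1,keyctl=1

# Inside the container, /etc/subuid and /etc/subgid each need a range
# for the user that runs podman, e.g.:
podman:100000:65536
```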
        
       | F00Fbug wrote:
       | I spent 15 years managing a VMware-centric data center. I ran the
       | free version at home for at least 5 years. When I ran out of
       | vCPUs on my free license I switched to Proxmox and the migration
       | was almost painless. This new tool should help even more.
       | 
       | For most vanilla hosting, you could get away with Proxmox and be
       | just fine. I've been running it for at least 5 years in my
       | basement and haven't had a single hiccup. I bet a lot of VMware
       | customers will be jumping ship when their licenses expire.
        
       | moondev wrote:
        | If Proxmox supported OVA and OVF it would be huge. It seems
        | technically possible, as there is a new experimental KVM
        | backend for VirtualBox which supports OVA.
        | 
        | At the end of the day an OVA is just machine metadata packaged
        | as XML along with all the required VM artifacts; there are
        | also some cool things like support for launch variables.
        | Leveraging the format would bring a bunch of momentum
        | considering all the existing OVAs in the wild.
        
         | the_swd wrote:
         | Proxmox documentation does mention OVF support
         | https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Import_OV...
         | 
          | Seems a bit barebones, as in no support for a nice OVF
          | properties launch UI, but one should be able to extract an
          | OVA into an OVF and VMDKs and manually edit the OVF with the
          | appropriate properties.
         | 
         | I actually had plans this week to try exactly that...
        
           | Underphil wrote:
            | I've set up a ton of virtual appliances that way. An OVA
            | is just a regular tar archive with the config and vmdk(s).
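[Ed.: an OVA is in fact a plain tar archive, which is easy to demonstrate with a throwaway example; the file names below are made up:]

```shell
set -e
# Build a tiny fake "OVA": a tar archive containing the descriptor
# (.ovf) and a disk image (normally a .vmdk)
workdir=$(mktemp -d)
cd "$workdir"
echo '<Envelope/>' > appliance.ovf
echo 'fake disk contents' > disk1.vmdk
tar -cf appliance.ova appliance.ovf disk1.vmdk

# Unpacking it recovers the descriptor and the disk
mkdir extracted
tar -xf appliance.ova -C extracted
ls extracted   # lists appliance.ovf and disk1.vmdk
```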
        
           | moondev wrote:
            | Interesting, thanks for sharing. Surfacing this in the UI
            | would be great, if it works well.
            | 
            | Another handy feature is the Content Library for
            | organizing and launching OVA/OVF, as well as launching an
            | OVA directly from a URL without needing to download it
            | first via the CLI.
           | 
           | This makes me think there could be an opportunity in
           | "PhotoPea (kvm gui) for vCenter" - in the same manner
           | photopea is a clean room implementation of the photoshop
           | UI/UX
        
         | RamRodification wrote:
         | From the post in case you missed it:
         | 
         |  _Q: Will other import sources be supported in the future?_
         | 
         |  _A: We plan to integrate our OVF /OVA import tools into this
         | new stack in the future. Currently, integrating additional
         | import sources is not on our roadmap, but will be re-evaluated
         | periodically._
        
       | rafaelturk wrote:
        | Proxmox is great! I just wish they had a better entry-level
        | plan; plans start at EUR 1020.
        
         | subract wrote:
         | I see plans with access to the Enterprise repos starting at
         | EUR110/yr, and plans with 3 support tickets starting at EUR340.
         | EUR1020 is the starting price for a plan with a 2hr SLA.
         | 
         | https://shop.proxmox.com/index.php?rp=/store/proxmox-ve-comm...
        
       | luzer7 wrote:
        | Does anyone have a good _basic_ guide on LVM/LVM-Thin? I'm
        | having a hard time wrapping my head around LVM and moving the
        | VMDK to it. Mainly a Windows admin with some Linux experience.
        | 
        | I understand that LVM holds data, but when I make a Windows VM
        | in Proxmox it stores the data in an LVM volume(?) as opposed
        | to ESXi or Hyper-V making a VHD or VMDK.
        | 
        | Kinda confusing.
        
         | abbbi wrote:
          | Proxmox uses LVM for directly attached raw volumes. LVM is
          | just a logical volume manager for Linux, which gives you
          | more features than old-fashioned disk partitioning. I guess
          | they chose this path for Windows VM migration because a
          | Windows guest that was running on VMware usually does not
          | have the virtio drivers installed that QEMU's
          | paravirtualized disk bus needs out of the box. Without them,
          | the hypervisor has to emulate an IDE or SCSI bus, which
          | comes with a great performance overhead (in the case of
          | migration).
          | 
          | So a directly attached LVM volume is the best solution
          | performance-wise. In the VMware world this would be a
          | directly attached raw device, either from local disk or SAN.
          | 
          | For a fresh install on Proxmox it's better to choose qcow2
          | as the disk image format (comparable to VHDX or VMDK) with
          | the virtio-scsi bus, and to add the virtio drivers during
          | Windows setup.
        
       ___________________________________________________________________
       (page generated 2024-03-27 23:00 UTC)