[HN Gopher] Proxmox Virtual Environment 9.0 with Debian 13 released
___________________________________________________________________
Proxmox Virtual Environment 9.0 with Debian 13 released
Author : speckx
Score : 153 points
Date : 2025-08-05 13:57 UTC (9 hours ago)
(HTM) web link (www.proxmox.com)
(TXT) w3m dump (www.proxmox.com)
| Takennickname wrote:
| "It's also possible to install Proxmox VE 9.0 on top of Debian."
|
| Has that always been the case? I have a faint memory of trying
| once and not being able to with Proxmox 7.x
| robeastham wrote:
| I'm pretty sure it's been the case since at least 7.0, as I've
| done it a few times on hosts such as Scaleway that only offered
| a Debian base image for my machine.
| ChocolateGod wrote:
 | Yeah. It's useful when trying to install onto partition setups
 | the built-in installer doesn't support out of the box.
|
| But, things like proxmox-boot-tool may not work.
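 |
 | You can at least check what it makes of your disk setup with:
 |
 |     proxmox-boot-tool status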
| rcarmo wrote:
| I've done it a few times--8.x for sure, maybe earlier, but I've
| now been using it for too long to remember accurately.
| oakwhiz wrote:
| I do it this way every time.
| SirMaster wrote:
| Yes it's always been the case. I installed Proxmox 3.4 (based
| on Debian 7) this way originally, and have been upgrading ever
| since with no issues.
| carlhjerpe wrote:
 | You can even install Proxmox on NixOS now (no official support,
 | ofc) through https://github.com/SaumonNet/proxmox-nixos
 |
 | Which I think is really cool, since it means their stuff is
 | "truly open-source" :)
| zozbot234 wrote:
| Somewhat annoyingly, Proxmox relies on a non-Debian kernel for
| at least some of its features. This definitely made a
| difference w/ Bookworm (which was on the 6.1 kernel release
| series), not sure about Trixie (which will be on 6.12).
| throw0101c wrote:
| > _Has that always been the case? I have a faint memory of
| trying once and not being able to with Proxmox 7.x_
|
| We did it for 7.x [1] and it worked fine (since upgraded things
| in-place to 8.x).
|
| [1]
| https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11...
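 |
 | The wiki boils down to roughly this (the repo line and signing
 | key differ per release, so double-check the page for your
 | version):
 |
 |     # add the no-subscription repo (plus their signing key,
 |     # per the wiki), then pull in the PVE packages
 |     echo "deb http://download.proxmox.com/debian/pve trixie pve-no-subscription" \
 |         > /etc/apt/sources.list.d/pve-install-repo.list
 |     apt update && apt full-upgrade
 |     apt install proxmox-ve postfix open-iscsi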
| sschueller wrote:
| The official release of Debian Trixie is not until the 9th...
| piperswe wrote:
 | Trixie is under a heavy freeze right now; just about all that's
 | changing between now and the 9th are critical bug fixes. Yeah,
 | it's not ideal for Proxmox to release an OS based on Trixie
 | this early, but nothing's really going to change in the next
 | few days on the Debian side except for final release ISOs being
 | uploaded.
| zozbot234 wrote:
 | They might still drop packages between now and the stable
 | release. Once a release is official, Debian won't generally
 | drop packages unless they've become totally unusable to begin
 | with.
| piperswe wrote:
 | Given that Proxmox operates their own repos for their
 | custom packages and users don't typically install their own
 | packages on top of Proxmox, if a package they need gets
 | dropped due to RC bugs (etc.), they can upload it to their
 | own repo.
| tlamponi wrote:
 | We manage everything, including package builds, ourselves if
 | the need should arise; we also monitor Debian and its release-
 | critical bugs closely, and we see no realistic potential for
 | any Proxmox-relevant package to disappear, at least no higher
 | a risk than of that happening after the 9th.
 |
 | FWIW, we have staff members who are also directly involved
 | with Debian, which makes things a bit easier.
| znpy wrote:
 | Debian repositories get frozen months in advance of a
 | release, and pretty much only security patches are imported
 | after that. Maybe some package gets rebuilt, or stuff like
 | that. No breaking changes.
 |
 | I wouldn't expect many changes, if any at all, between today
 | (Aug 5th) and the expected release date (Aug 9th).
| Pet_Ant wrote:
 | Yeah, but what's the rush? I mean 1) what if something
 | critical changes, and 2) I could easily see some setting
 | somewhere being left at "-rc", which causes a bug later.
 |
 | Frankly, not waiting half a week is a bright orange flag to me.
| tlamponi wrote:
 | The linked forum post has an FAQ entry; this was a
 | carefully weighed decision with many factors playing a
 | role, including having more staff available to manage any
 | potential release fallout on our side. And we're in
 | general pretty much self-sufficient for any need that
 | should arise, always have been, and we provide
 | enterprise support offerings that back our official support
 | guarantees if your org has the need for that.
 |
 | Finally, we provide bug and security updates for the
 | previous stable release for over a year, so no user is in any
 | rush to upgrade now; they can safely choose any time
 | between now and August 2026.
| cowmix wrote:
| Yeah, it's wild how many projects--especially container-based
| ones--have already jumped to Debian Trixie as their "stable"
| base, even though it's still technically in testing. I got
| burned when linuxserver.io's docker-webtop suddenly switched
| to Trixie and broke a bunch of my builds that were based on
| Bookworm.
|
| As you said, Debian 13 officially lands on August 9, so it's
| close--but in my (admittedly limited) experience, the testing
| branch still feels pretty rough. I ran into way more
| dependency chaos--and a bunch of missing or deprecated
| packages--than I expected.
|
| If you're relying on container images that have already moved
| to Trixie, heads up: it's not quite seamless yet. Might be
| safer to stick with Bookworm a bit longer, or at least test
| thoroughly before making the jump.
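 |
 | (Concretely, "stick with Bookworm" just means pinning the base
 | image instead of riding a moving tag, e.g.:
 |
 |     FROM debian:bookworm-slim
 |
 | rather than debian:latest or debian:testing, and pinning by
 | digest if you want to be strict about it.)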
| sgc wrote:
| When did you run into your problems? Is there a chance they
| are largely resolved at this point?
| riedel wrote:
 | We are really happy with Proxmox for our 4-machine cluster in the
 | group. We evaluated many things; they were either too light or
 | too heavy for our users and/or our group of hobbyist admins. A
 | while back we also set up a backup server. The forum is also a
 | great resource. I just failed to contribute a pull request via
 | their git email workflow and am now stuck with a non-upstreamed
 | patch to the LDAP sync (btw, the code there is IMHO not the best
 | part of PVE). In general, while the system works great as a
 | monolith, extending it is IMHO really not easily possible. We
 | have some kludges all over the place (mostly using the really
 | good API) that could be better integrated, e.g. with the UI. At
 | least I did not find a way to e.g. add a new auth provider easily.
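 |
 | (To be fair, the API does make the current state easy to
 | inspect, e.g.:
 |
 |     pvesh get /access/domains   # lists configured auth realms
 |
 | it's wiring a new provider type through the backend and UI
 | that's the hard part.)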
| woleium wrote:
 | Can't it use PAM? So many options for providers there.
| riedel wrote:
 | It was mostly about syncing groups with Proxmox. Worked by
 | patching the LDAP provider to support our schema. The comment
 | was more about the extensibility problem when doing this.
 | Actually, now that you say it, I wonder how PAM could work: we
 | typically do not have any local users on the machine, and I've
 | only ever used PAM for providing shell access, never in a way
 | that doesn't grant local execution privileges (avoiding which
 | is the whole point of a VM host).
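 |
 | For reference, the sync itself is a one-liner (realm name is
 | ours):
 |
 |     pveum realm sync our-ldap
 |
 | it's the schema assumptions inside the LDAP plugin that we had
 | to patch.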
| yla92 wrote:
 | > The Proxmox VE mobile interface has been thoroughly reworked,
 | using the new Proxmox widget toolkit powered by the Rust-based
 | Yew framework.
 |
 | First time hearing about Yew (yew.rs). Is it like writing
 | frontend code in Rust and compiling to WASM? Is anyone using it
 | (other than the Proxmox folks, of course)?
| dylanowen wrote:
 | I'm using it for a browser extension, just because I wanted to
 | write more Rust. It's great at what it does and has all the
 | same paradigms as React. The best use case, though, would be
 | if all your code is already Rust. If you have a complex UI I'd
 | probably use React and TypeScript.
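 |
 | To give a flavour of the React-like feel, here's a minimal
 | counter component (my own sketch against a recent Yew, not code
 | from the Proxmox toolkit):
 |
 |     use yew::prelude::*;
 |
 |     #[function_component(App)]
 |     fn app() -> Html {
 |         // local state; set() triggers a re-render, much like
 |         // React's useState
 |         let counter = use_state(|| 0);
 |         let onclick = {
 |             let counter = counter.clone();
 |             Callback::from(move |_| counter.set(*counter + 1))
 |         };
 |         html! {
 |             <button {onclick}>
 |                 { format!("Clicked {} times", *counter) }
 |             </button>
 |         }
 |     }
 |
 |     fn main() {
 |         // mounts the component into <body>; built to WASM with
 |         // a bundler such as trunk
 |         yew::Renderer::<App>::new().render();
 |     }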
| tlamponi wrote:
 | > Is it like writing frontend code in Rust and compiling to
 | WASM?
 |
 | Exactly. It's actually quite lightweight and stable, plus mostly
 | finished, so don't let the slower upstream releases discourage
 | you from trying it more extensively.
 |
 | We built a widget library around Yew and native web
 | technologies, with our products as the main target; you can
 | check out:
 |
 | https://github.com/proxmox/proxmox-yew-widget-toolkit
 |
 | And the example repo:
 |
 | https://github.com/proxmox/proxmox-yew-widget-toolkit-exampl...
 |
 | for code and a little more info. We definitely need to clean
 | a few documentation and resource things up, but we tried
 | to make it so that it can be reused by others without tying
 | them to our API types or the like.
 |
 | FWIW, the in-development Proxmox Datacenter Manager also uses
 | our Rust/Yew-based UI; it's basically our first 100% Rust
 | project (well, minus the Linux/Debian foundation naturally,
 | but it's getting there ;-)
| xattt wrote:
| Ugh, I'd love to make the leap, but I don't want the headache of
| trying to get SR-IOV going again for my integrated Intel
| graphics.
| zozbot234 wrote:
| Why not run virtio-gpu in the guest?
| zamadatix wrote:
| Windows.
| xattt wrote:
| Would Plex QuickSync transcoding work?
| throw0101c wrote:
| I've heard good things about XCP-ng [1] as well: anyone use both
| that can lay out the pros/cons of each?
|
| [1] https://en.wikipedia.org/wiki/XCP-ng
| nirav72 wrote:
 | I can't speak to the pros and cons of XCP-ng; I've been meaning
 | to try it out. But it feels like there just isn't a large
 | enough community around it yet. At least not like Proxmox,
 | which has seen a surge in usage and popularity after the
 | Broadcom fiasco.
| unixhero wrote:
| Maybe a surge of newcomers, but the community was strong
| before and irrespective of the fiasco.
| nick__m wrote:
 | We tried both at work and they were more or less equivalent, but
 | Proxmox appears to have more momentum behind it. Also,
 | distributed storage in Proxmox is based on Ceph, while XCP-ng
 | uses the obscure XOSTOR.
| throw0101c wrote:
| Highlights of the release (release notes):
|
| * https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_9.0
| grmone wrote:
| "Proxmox VE is using a newer Linux kernel 6.14.8-2 as stable
| default enhancing hardware compatibility and performance."
|
 | kernel.org doesn't even list version 6.14 anymore. Do they
 | backport security patches on their own?
| Arrowmaster wrote:
 | I don't know what they are currently doing, but historically
 | Proxmox has used Debian as the OS base with Ubuntu as the
 | kernel source. So they rely on the Ubuntu security team
 | backporting security patches for the kernel.
| whiztech wrote:
 | Seems like it still is:
 | https://git.proxmox.com/?p=pve-kernel.git;a=shortlog
| nativeit wrote:
 | I wouldn't expect it to be, as kernel.org doesn't list
 | distribution kernels.
|
| (Ignore this if it's irrelevant and I'm missing the point,
| which is always a distinct possibility)
|
| > Many Linux distributions provide their own "longterm
| maintenance" kernels that may or may not be based on those
| maintained by kernel developers. These kernel releases are not
| hosted at kernel.org and kernel developers can provide no
| support for them.
|
| > It is easy to tell if you are running a distribution kernel.
| Unless you downloaded, compiled and installed your own version
| of kernel from kernel.org, you are running a distribution
| kernel. To find out the version of your kernel, run uname -r:
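 |
 | On a Proxmox host that will look something like this (the -pve
 | suffix marks their own build):
 |
 |     $ uname -r
 |     6.14.8-2-pve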
| PeterStuer wrote:
 | I have only recently moved to Proxmox as the Hyper-V licensing
 | became too oppressive for hobby/one-person-project use.
 |
 | Can someone tell me whether Proxmox upgrades are usually smooth
 | sailing, or should I prepare for this being an endeavour?
| thyristan wrote:
| Never had a problem with them. Just put each node in
| maintenance, migrate the VMs to another node, update, move the
| VMs back. Repeat until all nodes are updated.
| woleium wrote:
 | If you are using hardware passthrough, e.g. for Nvidia cards,
 | you have to update your VMs as well, but other than that it's
 | pretty painless in my experience (over 15 years).
| zamadatix wrote:
| The "update" step is a bit of a "draw the rest of the owl" in
| the case of major version updates like this 8.x -> 9.x
| release. It also depends how many features you're using in
| that cluster as to how complicated the owl is to draw.
|
| That said, I just made it out alright in my home lab without
| too much hullabaloo.
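 |
 | For the broad strokes (the official upgrade guide is the real
 | checklist, and pve8to9 is this cycle's pre-flight checker,
 | analogous to pve7to8 last time):
 |
 |     pve8to9 --full    # warns about known blockers first
 |     sed -i 's/bookworm/trixie/g' /etc/apt/sources.list \
 |         /etc/apt/sources.list.d/*.list
 |     apt update && apt dist-upgrade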
| pimeys wrote:
 | Did you do the backup and full reinstall, or just upgrade with
 | apt?
 |
 | I should do the same update this weekend.
| redundantly wrote:
 | I like Proxmox a lot, but I wish it had an equivalent to VMware's
 | VMFS. The last time I tried, there wasn't a way to use shared
 | storage (i.e., iSCSI block devices) across multiple nodes and
 | have a failover of VMs that use that storage. And by failover I
 | mean moving a VM to another host and booting it there, not even
 | keeping the VM running.
| SlavikCA wrote:
 | Proxmox has built-in support for Ceph, which is promoted as a
 | VMFS equivalent.
 |
 | I don't have much experience with it, so I can't tell if it's
 | really on the same level.
| thyristan wrote:
| Proxmox with Ceph can do failover when a node fails. You can
| configure a VM as high-availability to automatically make it
| boot on a leftover node after a crash:
| https://pve.proxmox.com/wiki/High_Availability . When you add
| ProxLB, you can also automatically load-balance those VMs.
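 |
 | Marking a VM as HA is a one-liner, something like this (VMID
 | illustrative):
 |
 |     ha-manager add vm:100 --state started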
|
| One advantage Ceph has over VMware is that you don't need
| specially approved hardware to run it. Just use any old
| disks/SSDs/controllers. No special extra expensive vSAN
| hardware.
|
| But I cannot give you a full comparison, because I don't know
| all of VMware that well.
| woleium wrote:
 | Yes, you can do this with Ceph on commodity hardware (or even
 | your compute nodes, if you are brave), or if you have a bit of
 | cash, with something like a NetApp doing NFS/iSCSI/NVMe-oF.
 |
 | Use any of these with the built-in HA manager in Proxmox.
| redundantly wrote:
| As far as I understand it, Ceph allows you to create
| distributed storage by using the hardware across your hosts.
|
| Can it be used to format a single shared block device that is
| accessed by multiple hosts like VMFS does? My understanding
| is this isn't possible.
| nyrikki wrote:
 | Ceph RBD can technically support shared access through
 | exclusive locking, but it won't be the same as true
 | simultaneous multi-writer.
 |
 | You can also set up a radosgw outside of Proxmox and use
 | object storage.
 |
 | But Ceph is fundamentally a distributed object store, while
 | shared LUNs with block-level multi-writer are fundamentally
 | a tightly coupled solution.
 |
 | If you have a legacy need like OCFS or a quorum drive, the
 | underlying tools that Proxmox is an abstraction over can
 | sometimes be used directly, as these types of systems tend
 | to be pets.
 |
 | But if you were just using multi-writer because it was
 | there, there are alternatives that are typically more
 | robust under the shared-nothing model that Ceph uses.
 |
 | But it is all tradeoffs and horses for courses.
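 |
 | (For example, an RBD image that relies on exclusive locking
 | rather than multi-writer; names illustrative:
 |
 |     rbd create vmpool/shared0 --size 10G \
 |         --image-feature exclusive-lock
 |
 | only one client holds the write lock at a time.)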
| aaronius wrote:
 | That has been possible for a while. Get the block storage to
 | the node (FC, or configure iSCSI), configure multipathing in
 | most situations, and then configure LVM (thick) on top and
 | mark it as shared. One nice thing this release brings is the
 | option to finally also have snapshots for such shared storage.
| redundantly wrote:
| I tried that, but had two problems:
|
| When migrating a VM from one host to another it would require
| cloning the LVM volume, rather than just importing the group
| on the other node and starting the VM up.
|
 | I have existing VMware guests that I'd like to migrate over in
| bulk. This would be easy enough to do by converting the VMDK
| files, but using LVM means creating an LVM group for each VM
| and importing the contents of the VMDK into the LV.
| aaronius wrote:
 | Hmm, staying with iSCSI: you should create one large LUN
 | that is available on each node. Then it is important to
 | mark the LVM as "shared". This way, PVE knows that all
 | nodes access the same LVM, so copying the disk images is
 | not necessary on a live migration.
 |
 | With such a setup, PVE will create LVs on the same VG for
 | each disk image, so no handling of multiple VGs or LUNs is
 | necessary.
 |
 | The multipathing PVE wiki page lays out the whole process:
 | https://pve.proxmox.com/wiki/Multipath
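 |
 | The end result in /etc/pve/storage.cfg is just a few lines
 | (names made up):
 |
 |     lvm: san-lvm
 |             vgname vg_san
 |             shared 1
 |             content images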
| pdntspa wrote:
 | That, and configuring mount points for read-write access on the
 | host is incredibly confusing and needlessly painful.
| nativeit wrote:
| Still use/love Proxmox daily. Congrats to the team on the latest
| release!
| avtar wrote:
 | Proxmox would come up in so many homelab-type conversations
 | that I tried 8.* on a mini PC. The impression I got was that
 | the project probably provides the most value in a clustered
 | environment, or even on a single node if someone prefers using
 | a web UI. What didn't seem very clear was an out-of-the-box way
 | of declaring VM and container configurations [1] that could
 | then be version controlled. Common approaches seemed to involve
 | writing scripts or reaching for other tools like Ansible,
 | whereas something like LXD/Incus makes this easy [2] by
 | default. Or maybe I'm missing some details?
|
| [1] https://forum.proxmox.com/threads/default-settings-of-
| contai...
|
| [2]
| https://linuxcontainers.org/incus/docs/main/howto/instances_...
| cyberpunk wrote:
 | There are various Terraform providers for Proxmox.
| whalesalad wrote:
| Yeah, this is the way. You end up treating Proxmox like it is
| AWS and asserting your desired state against it.
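 |
 | The moving parts are small, if anyone wants a starting point
 | (a sketch using the community Telmate/proxmox provider;
 | bpg/proxmox is the other popular one):
 |
 |     terraform {
 |       required_providers {
 |         proxmox = { source = "Telmate/proxmox" }
 |       }
 |     }
 |
 |     provider "proxmox" {
 |       # API endpoint of your PVE node; credentials come from
 |       # a token or environment variables
 |       pm_api_url = "https://pve.example.com:8006/api2/json"
 |     }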
| krisknez wrote:
 | I would love it if Proxmox had a UI for port forwarding. I hate
 | doing it through the terminal. I like how LXD has a UI for that.
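 |
 | What I end up doing today is a DNAT rule on the host, e.g. in
 | /etc/network/interfaces (addresses made up):
 |
 |     post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp \
 |         --dport 2222 -j DNAT --to-destination 192.168.100.10:22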
| BodyCulture wrote:
 | Seems like it still has no official support for any kind of disk
 | encryption, so you are on your own if you fiddle it in somehow,
 | and things may break. Such a beautiful, peaceful world where
 | disk encryption is not needed!
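 |
 | (The usual fiddle being ZFS native encryption added by hand,
 | something like:
 |
 |     zfs create -o encryption=on -o keyformat=passphrase \
 |         rpool/data/secure
 |
 | which works, but is exactly as unsupported as you describe.)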
___________________________________________________________________
(page generated 2025-08-05 23:01 UTC)