Subj : Re: CEPH
To   : tassiebob
From : deon
Date : Sat Oct 12 2024 17:38:15

Re: Re: CEPH
By: tassiebob to deon on Mon Oct 07 2024 07:37 pm

Howdy,

> Really interested to hear how you go with this. It's one of those things I
> knew was there but have never tried (partly because I didn't have the time
> to sort it out if it went sideways).

So this weekend I did some updates to the hosts running ceph (updating packages, etc), and rebooted each host after its updates (one at a time).

While I didn't do much testing for stuff being accessible while a host was down, it all appeared to be OK - even though there was a delay, I guess, while ceph figured out a node was down and had to shuffle around who was the next "master" to handle the I/O.

Pretty happy with this setup - I was previously using a proprietary file system, which I had to nurse if I rebooted nodes - and occasionally drives would go offline, especially if there was busy I/O going on (all three nodes are VMs on the same host). With Ceph, I did nothing - it sorted itself out and made the cluster healthy again on its own. Gluster was equally problematic for different reasons. But both of those filesystems are now disabled and no longer used.

Even the NFS client recovered on its own. (I normally hate NFS, because my experience has always been that if the NFS server goes down, the clients are normally useless unless they reboot - and sometimes it requires a hard reboot.)

So the only thing I still need to figure out (learn) is how, if a single node dies, to rebuild that third node and hopefully not lose data along the way. I'll tackle that when I get to it... ;)

....лоеп
--- SBBSecho 3.20-Linux
 * Origin: I'm playing with ANSI+videotex - wanna play too? (1337:2/101)
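
[Editor's note: a minimal sketch of the rolling-reboot routine described above, assuming the stock ceph CLI is available on each node. The noout flag stops ceph from marking the down host's OSDs out and rebalancing data during a planned outage - the post doesn't say whether it was set, so treat this as an illustration rather than the exact steps used.]

  # Before rebooting a host: stop ceph from marking its OSDs out
  ceph osd set noout

  # Reboot the host, then watch the cluster settle
  ceph -s              # wait for health to clear (HEALTH_WARN is expected while noout is set)
  ceph health detail   # shows anything ceph is still recovering

  # Once the host is back and the cluster is healthy, clear the flag
  ceph osd unset noout

Repeat for each host, one at a time.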