You can talk down #opensource all you want, but I've never managed uptime like this on any #esxi!
In Switzerland (#Schweiz), a little over 130 instances are affected (as of yesterday).
Attacks on #VMware #ESXi: tens of thousands of servers still vulnerable | Security https://www.heise.de/news/Attacken-auf-VMware-ESXi-Immer-noch-zehntausende-Server-verwundbar-10307632.html #Patchday #VMwareESXi
Critical vulnerability in #VMware #ESXi, #Fusion and #Workstation is being exploited | Security https://www.heise.de/news/Kritische-Luecke-in-VMware-ESXi-Fusion-und-Workstation-wird-missbraucht-10303639.html #Patchday #Virtualisierung #virtualization #VMwareESXi #VMwareFusion #VMwareWorkstation
Check the alarms on all ESXi hosts via PowerShell http://dlvr.it/TJKSbF via PlanetPowerShell #PowerShell #ESXi #vCenter #Coding
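For anyone who would rather script this outside of PowerShell, a rough equivalent in Python with pyVmomi might look like the sketch below. This is not the approach from the linked post; the vCenter address, credentials, and the decision to skip certificate validation are placeholders for a lab setup.

```python
# Minimal sketch: list triggered alarms on all ESXi hosts via pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only: skips certificate validation
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for state in host.triggeredAlarmState:
            print(f"{host.name}: {state.alarm.info.name} "
                  f"({state.overallStatus}) since {state.time}")
    view.DestroyView()
finally:
    Disconnect(si)
```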
That's so freaking weird lol... on one of my #Proxmox nodes, I had started a test VM (#RockyLinux/#RHEL), and I was installing a single package on it, and for whatever reason that ended up powering off ALL other VMs on that Proxmox node except that one test VM.
I've never had this happen before in my #homelab, on Proxmox or #ESXi, but it's incredibly concerning for sure lol. I have some critical stuff on it too, like my #TrueNAS server, some of my #Kubernetes nodes, etc., and to have them just... power off like that without any alerts or logs explaining it, not in the GUI anyway, is insane.
A new version of check_esxi_hardware, an #opensource monitoring plugin to monitor the hardware of Broadcom #ESXi servers, was just released.
The latest version improves exception handling around the pywbem Python module and also adds HTTP exception handling.
To stay compatible with both older and newer pywbem versions, the #monitoring plugin now requires the "packaging" Python module.
More details in the blog post.
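I haven't looked at the plugin's source for this release, but the general pattern it describes (using the packaging module to branch behaviour across pywbem versions while catching pywbem and HTTP errors) looks roughly like the sketch below. The 1.0.0 threshold, the exact exceptions handled, and the query_hardware helper are illustrative assumptions, not the plugin's actual code.

```python
# Sketch of version-aware exception handling across pywbem releases.
# `conn` is assumed to be a pywbem.WBEMConnection; the version threshold
# and messages are illustrative only.
import pywbem
from packaging.version import Version

PYWBEM_VERSION = Version(pywbem.__version__)

def query_hardware(conn, cim_class):
    try:
        return conn.EnumerateInstances(cim_class)
    except pywbem.HTTPError as err:
        # HTTP-level failures from the CIM server get their own message.
        raise SystemExit(f"UNKNOWN: HTTP error from CIM server: {err}")
    except pywbem.Error as err:
        if PYWBEM_VERSION >= Version("1.0.0"):
            detail = f"pywbem {PYWBEM_VERSION} error: {err}"
        else:
            detail = f"legacy pywbem error: {err}"
        raise SystemExit(f"UNKNOWN: {detail}")
```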
I have an old NUC 11 that I think is failing. I run ESXi on it, the free version. It has 32GB of RAM, 1x 1TB SSD and 1x 2TB SSD. Can anyone recommend a good replacement?
I am sure a lot of people will also recommend Proxmox over ESXi, but I have to run some images that are ESXi-only.
Any suggestions would be most welcome
BlackLock ransomware onslaught: What to expect and how to fight it https://www.helpnetsecurity.com/2025/02/18/blacklock-ransomware-what-to-expect-how-to-fight-it/ #enterprise #ransomware #ReliaQuest #Don'tmiss #extortion #Hotstuff #Windows #VMware #News #ESXi #SMBs #tips
Ransomware attacks ESXi through hidden SSH tunnels
https://blog.segu-info.com.ar/2025/01/ransomware-ataca-esxi-traves-de-tuneles.html
ESXi ransomware attacks use SSH tunnels to avoid detection – Source: securityaffairs.com https://ciso2ciso.com/esxi-ransomware-attacks-use-ssh-tunnels-to-avoid-detection-source-securityaffairs-com/ #rssfeedpostgeneratorecho #informationsecuritynews #ESXiransomwareattacks #ITInformationSecurity #SecurityAffairscom #CyberSecurityNews #PierluigiPaganini #SecurityAffairs #BreakingNews #SecurityNews #hackingnews #CyberCrime #Cybercrime #ransomware #hacking #Malware #ESXi
#Ransomware gang uses #SSH tunnels for stealthy #VMware #ESXi access
#ESXi #ransomware attacks use SSH tunnels to avoid detection
https://securityaffairs.com/173487/cyber-crime/esxi-ransomware-attacks-use-ssh-tunnels-to-avoid-detection.html
#securityaffairs #hacking #malware
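None of the linked articles spell this out, but one cheap defensive check against this kind of tradecraft is simply auditing which hosts have the SSH service running at all. A rough pyVmomi sketch is below; it assumes `si` is an already-connected ServiceInstance (same connection boilerplate as the alarm sketch further up) and is an audit idea, not a detection method from the articles.

```python
# Rough sketch: flag hosts where the ESXi SSH service (TSM-SSH) is running.
from pyVmomi import vim

def hosts_with_ssh_running(si):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    flagged = []
    for host in view.view:
        services = host.configManager.serviceSystem.serviceInfo.service
        ssh = next((s for s in services if s.key == "TSM-SSH"), None)
        if ssh is not None and ssh.running:
            flagged.append((host.name, ssh.policy))  # policy: on/off/automatic
    view.DestroyView()
    return flagged
```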
Always fun when you're doing the same thing as always, and suddenly a new problem appears...
In this case: a Fortigate VM cluster on VMware. There's a dvSwitch portgroup for the HA network with the recommended configuration for such a setup, connected to a dedicated network adapter in each VM.
The Fortigates start talking on their HA network, and after a couple of minutes, both dvSwitch ports go into the blocked state (not at the same time). I've rarely ever seen that happen at all? And never with any of the other Fortigate VM clusters we run on our infrastructure?
No idea what's up there, and I have not found any events that shed light on a reason, though ESXi logs do say the dvSwitch port is being blocked. Thanks, that's great?
Looks like some network tracing is in my near future...
(Note: I do know all about the shit that Broadcom is pulling, and if I was in a position to migrate our platform to a different hypervisor, I would. No need to tell me.)
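While waiting on the packet traces, one thing that may help is polling the runtime state of the HA portgroup's ports, so the moment a port flips to blocked can be lined up with the Fortigate HA logs. A rough pyVmomi sketch is below; `si` is assumed to be a connected ServiceInstance, and the portgroup name is a placeholder.

```python
# Rough sketch: dump the runtime state of all ports in one dvSwitch portgroup.
from pyVmomi import vim

def dump_portgroup_state(si, portgroup_name="dvPG-FGT-HA"):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    for pg in view.view:
        if pg.name != portgroup_name:
            continue
        dvs = pg.config.distributedVirtualSwitch
        criteria = vim.dvs.PortCriteria(portgroupKey=[pg.key], inside=True)
        for port in dvs.FetchDVPorts(criteria):
            rt = port.state.runtimeInfo if port.state else None
            print(f"port {port.key}: linkUp={getattr(rt, 'linkUp', None)} "
                  f"blocked={getattr(rt, 'blocked', None)}")
    view.DestroyView()
```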
@PC_Fluesterer ESXi ransomware attacks are increasingly threatening organizations using VMware virtualization. These attacks often start with phishing or the exploitation of known vulnerabilities. Once inside, attackers escalate privileges, deploy ransomware, and compromise backups, severely impacting operations and reputation. To counter this, organizations must implement robust defenses: https://www.sygnia.co/blog/esxi-ransomware-attacks/ #CyberSecurity #Ransomware #ESXi
One thing I find surprisingly difficult to determine is how much storage space I have left to allocate to my VMs on #Proxmox. On #ESXi, this is displayed quite clearly on the main dashboard of the web interface, and the used/allocated capacity shown matches what I'd expect from manually adding up the space I've allocated to each disk... but on Proxmox, I'm not quite sure where I'd find this?
There seem to be several places that show some form of storage usage, but none of them seem to be what I'd expect. For example, I have a 100GB disk from which I've assigned a total of 30GB of space across 3 VMs, so I'd expect somewhere to show that I have 70GB left to hand out to VMs... but I'm not seeing anything like that at all. Instead, what I could find tells me I have ~90GB of space left rather than 70.
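One likely explanation (worth verifying for the storage type in use) is that Proxmox reports actual usage of thin-provisioned disks rather than the sum of their configured sizes, which would account for ~90GB free instead of 70GB. The two views can be compared via the API; below is a rough sketch using the proxmoxer library, where the host, node, storage name and credentials are placeholders.

```python
# Rough sketch: compare summed allocated VM disk sizes against what the
# storage itself reports, via the Proxmox API (proxmoxer library).
import re
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve.example.local", user="root@pam",
                     password="changeme", verify_ssl=False)
NODE, STORAGE = "pve1", "local-zfs"

def disk_size_gib(value):
    # Disk entries look like "local-zfs:vm-100-disk-0,size=10G"
    m = re.search(r"size=(\d+)([MGT])", value)
    if not m:
        return 0.0
    size, unit = int(m.group(1)), m.group(2)
    return size * {"M": 1 / 1024, "G": 1, "T": 1024}[unit]

allocated = 0.0
for vm in proxmox.nodes(NODE).qemu.get():
    cfg = proxmox.nodes(NODE).qemu(vm["vmid"]).config.get()
    for key, val in cfg.items():
        if re.match(r"(scsi|sata|virtio|ide)\d+$", key) and str(val).startswith(STORAGE + ":"):
            allocated += disk_size_gib(str(val))

status = proxmox.nodes(NODE).storage(STORAGE).status.get()
print(f"allocated to VM disks: {allocated:.1f} GiB")
print(f"storage reports used:  {status['used'] / 1024**3:.1f} GiB, "
      f"available: {status['avail'] / 1024**3:.1f} GiB")
```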
#Proxmox clustering question - I plan to set up a cluster with 2 nodes (A and B) and a QDevice (Q). I don't plan to add shared storage, and would prefer to go the replication route with local disks instead for "HA".
As of right now, nodes A and B have a somewhat similar storage setup (somewhat, because capacity may be just slightly different due to different SSDs) - node A has a ZFS Mirror pool with 2x 1TB disks (1x NVMe from vendor 1, 1x SATA from vendor 2). Node B has a ZFS Mirror pool with 2x 1TB disks (2x NVMe from vendor 1).
^ I suspect, with this setup, my plan for replication should work just fine. However, I do plan on adding another disk, 1x 1TB SATA from vendor 3, to node A. I've never done this on Proxmox before, only on #ESXi, so I'm not too sure what the disk format will be (or rather, what kind of setup I should pick for it) - but I imagine it has to be a single-disk ZFS pool, separate from the Mirror pool?
The idea is, some VMs will be on the ZFS Mirror pool, while some will be on that additional disk (on node A) - some might even be a combination of both (i.e. a VM with its primary disk on ZFS Mirror, but also a secondary disk on the additional disk). With that assumption, how would VM replication work in said cluster, or will it just not?
Would appreciate any insights on this as I try to devise a "plan" for my #homelab. Atm, I've set up node A, but I'm in the process of migrating some VMs over from node B, as I intend to empty and reinstall Proxmox on node B to prepare it to join node A in a cluster.
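My understanding (worth double-checking against the pvesr / storage replication docs) is that replication requires ZFS-backed storage and that every storage referenced by the guest must exist under the same storage ID on the target node, so a VM spanning the Mirror pool and the extra single-disk pool would need both pools present on both nodes. For reference, a replication job is just a cluster-level API object; below is a rough sketch with the proxmoxer library, where the VM ID, target node, schedule and credentials are placeholders.

```python
# Rough sketch: define a storage replication job via the Proxmox cluster API,
# roughly what the GUI or `pvesr` would set up.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve.example.local", user="root@pam",
                     password="changeme", verify_ssl=False)

# Replicate guest 100 to node B every 15 minutes.
proxmox.cluster.replication.post(
    id="100-0",                 # <vmid>-<job number>
    type="local",
    target="nodeB",
    schedule="*/15",
    comment="homelab pseudo-HA replication",
)

# List configured jobs to confirm.
for job in proxmox.cluster.replication.get():
    print(job["id"], job.get("target"), job.get("schedule"))
```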
I've successfully migrated my #ESXi #homelab server over to #Proxmox after a little bit of (unexpected) trouble - I haven't really even moved all of my old services or my #Kubernetes cluster back onto it, but the part I was expecting to be the most challenging, #TrueNAS, has not only been migrated but also upgraded from TrueNAS Core 12 to TrueNAS Scale 24.10 (HUGE jump, I know).
Now then. I'm thinking about the best way to move forward with this, now that I have 2 separate nodes running Proxmox. There are multiple things to consider. I suppose I could cluster 'em up, so I can manage both of them under one roof, but from what I can tell, clustering on Proxmox works much like Kubernetes clusters such as #RKE2 or #K3s, whereby you'd want either a single node or at least 3, never 2. I could build another server, I have the hardware parts for it, but I don't think I'd want to take up more space than I already do and have 3 PCs running 24/7.
I'm also thinking of possibly joining my 2 RKE2 clusters (1 on each node) into 1... but I'm not sure how I'd go about it with only 2 physical nodes. Atm, each cluster has 1 Master node and 3 Worker nodes (VMs ofc), and with only 2 physical nodes I'm not sure how I'd spread the master/worker nodes across the 2. Maintaining only 1 (joined) cluster would be helpful though, since it'd solve my current issue of not being able to publish services online via #Ingress "effectively" on one of them: I can only port forward the standard HTTP/S ports to a single endpoint, which means the secondary cluster has to use a non-standard port instead, i.e. 8443.
This turned out pretty long - but yea... any ideas what'd be the "best" way of moving forward if I only plan to retain 2 Proxmox nodes - Proxmox-wise, and perhaps even Kubernetes-wise?