Hi,
Are there any strong reasons why we should not configure DRS for a cluster that serves VDI machines?
Thank you,
Vkmr.
I am wondering how long it takes others to run the task of cloning a VM.
ESXi 5.5
Thanks
Hi there,
I wanted to know whether there is a provision for HA at the storage level within the same cluster. For example: if the storage mapped to a host fails, do the VMs move to the next available storage in the cluster?
I have some comprehension questions concerning HA resource reservation. Let's assume I have a vSphere cluster with two nodes. Each node has 100 GB of RAM, so the total RAM in my cluster is 200 GB.
I configure Admission Control with a failover capacity of 50%.
Now assume there is, for example, one VM with 20 GB of RAM and a 100% memory reservation. In addition there are 15 other VMs with 10 GB of RAM each, none of which has a reservation.
To keep the calculation simple, let's assume there is no memory overhead. I think the resource reservation calculation is as follows:
Total capacity: 200 GB
Used reservation: 120 GB (100 GB as the 50% failover capacity from Admission Control plus the 20 GB reservation of the one VM)
Available reservation: 80 GB
HA state of the cluster should be green, right?
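To make sure I am reading the numbers consistently, here is the same arithmetic as a quick sketch. This is just the assumed values from above written out, not a claim about how vCenter computes it internally:
# Quick sketch of the reservation arithmetic with the assumed values from above
# (two 100 GB nodes, 50% failover capacity, one VM with a 20 GB reservation, no overhead).
total_memory_gb = 2 * 100
failover_capacity = 0.50
vm_reservations_gb = [20]

reserved_for_failover = total_memory_gb * failover_capacity          # 100 GB
used_reservation = reserved_for_failover + sum(vm_reservations_gb)   # 120 GB
available_reservation = total_memory_gb - used_reservation           # 80 GB

print("Used reservation:      %d GB" % used_reservation)       # 120 GB
print("Available reservation: %d GB" % available_reservation)  # 80 GB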
Now a node failure happens, and I have 100 GB of RAM left in the cluster. The VM with the 20 GB reservation is powered on and its memory is guaranteed. The remaining 15 VMs with no reservations have to share the remaining 80 GB of memory, so they compete for memory.
Is my assumption correct that Admission Control calculates the resources by considering the configured host failover capacity plus all memory reservations? And if there are enough resources to satisfy the needed amount of memory, the admission control state is okay?
Are the failover resources reported as okay even if, as mentioned above, there are plenty of other VMs with no memory reservation that have enough memory while the cluster is healthy but have to compete for memory in case of a host failure?
What would happen if I configured 'Performance degradation VMs tolerate' to 0%? Would there be a warning after the loss of 50% of the memory if each VM would then get less memory than it was allocated in the healthy state?
Will the reported Worst Case Allocation value for the VMs with no reservation get lower if resources are short after a host failure? In my lab, every VM reports its configured amount of memory as its Worst Case Allocation, so what has to happen for the Worst Case Allocation value to get lower? (I know that if there are memory reservations, the worst case allocation will be at least the reservation.)
best regards
Roland
Hi,
We have two clusters of Dell PowerEdge FC830 servers and three clusters of HP ProLiant DL560 Gen9 servers, all with Intel(R) Xeon(R) CPU E5-4667 v4 @ 2.20GHz processors. I would like to set up a shared LUN between these five clusters so we can balance load without downtime. With the processors being the same model, the vendor and family requirements are clearly met, but I haven't been able to find any documentation stating this is a safe practice.
As a test I added an NFS datastore to the five clusters, spun up a test VM, and initiated some vMotions via the web client. I saw no compatibility warnings or errors, and the vMotions appeared to be successful. I then proceeded to test all possible vMotions between these clusters while pinging the network gateway from the test VM and pinging the test VM from a prod VM in the same vCenter on another network. During the vMotions I would see the ping time go from sub-1 ms to as high as 20 ms for one or two packets at the peak of the vMotion. The results of the ping tests are below and show packets dropping during the peak of some vMotions.
c07 HP dl560 Intel(R) Xeon(R) CPU E5-4667 v4 @ 2.20GHz
c08 HP dl560 Intel(R) Xeon(R) CPU E5-4667 v4 @ 2.20GHz
c09 HP dl560 Intel(R) Xeon(R) CPU E5-4667 v4 @ 2.20GHz
c10 Dell fc830 Intel(R) Xeon(R) CPU E5-4667 v4 @ 2.20GHz
c11 Dell fc830 Intel(R) Xeon(R) CPU E5-4667 v4 @ 2.20GHz
c11 > c07 0 packets dropped from test VM to gateway, 1 packet dropped from prod VM to test VM
c07 > c11 1 packet dropped from test VM to gateway, 0 packets dropped from prod VM to test VM
c07 > c10 0 packets dropped from test VM to gateway, 0 packets dropped from prod VM to test VM
c10 > c07 0 packets dropped from test VM to gateway, 3 packets dropped from prod VM to test VM
c10 > c08 1 packet dropped from test VM to gateway, 0 packets dropped from prod VM to test VM
c08 > c10 0 packets dropped from test VM to gateway, 0 packets dropped from prod VM to test VM
c08 > c11 0 packets dropped from test VM to gateway, 0 packets dropped from prod VM to test VM
c11 > c08 0 packets dropped from test VM to gateway, 1 packet dropped from prod VM to test VM
c11 > c09 0 packets dropped from test VM to gateway, 0 packets dropped from prod VM to test VM
c09 > c11 0 packets dropped from test VM to gateway, 1 packet dropped from prod VM to test VM
c10 > c09 0 packets dropped from test VM to gateway, 1 packet dropped from prod VM to test VM
c09 > c10 1 packet dropped from test VM to gateway, 1 packet dropped from prod VM to test VM
My fear is that the latency and packet loss may be enough to have a negative impact on our production applications. I know I'll need to do some production testing, but I'd like to avoid certifying every application in our environment if possible.
Has anyone gone down this road before or currently do this in their environment? Any tips or suggestions for further testing?
Thanks,
Nate
Hi community!
If I turn on the Fully Automated setting on a datastore cluster using SDRS:
Will SDRS always run? I don't want it to run the moment I turn it on, and I have no SDRS scheduling configured on the "SDRS Scheduling" tab of the datastore cluster.
All I want initially is initial placement or "load balancing" of VMs across datastores in the datastore cluster as I move VMs to it.
Any wisdom/guidance really appreciated.
Off to study up on SDRS some more!
Hi
I am trying to migrate a VM from one host and its datastore to another host. There is no cluster in vCenter Server; both hosts have individual datastores. I am automating the migration using the Relocate or Migrate methods, but I am getting an error when the request to migrate the VM is submitted.
A specified parameter was not correct: spec.pool
Please suggest what I am doing wrong here.
try:
    # Requires: from pyVmomi import vim, vmodl (vm, destination_host, si, inputs
    # and wait_for_task() are set up earlier in the script)
    resource_pool = vm.resourcePool
    print(resource_pool)

    migrate_priority = vim.VirtualMachine.MovePriority.defaultPriority
    msg = "Migrating %s to destination host %s" % (inputs['vm_name'], inputs['destination_host'])
    print(msg)
    print(destination_host)
    print(vm)

    # Live migration :: change host only
    task = vm.Migrate(host=destination_host, priority=migrate_priority)
    print(task)

    # Wait for the migration task to complete
    wait_for_task(task, si)

except vmodl.MethodFault as e:
    print("Caught vmodl fault: %s" % e.msg)
    return 1
except Exception as e:
    print("Caught exception: %s" % str(e))
    return 1
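For reference, this is the Relocate variant I have been experimenting with as well. I am not sure the spec is correct; the resource pool and datastore lookups below are just placeholders for standalone hosts, not something I have confirmed:
# Rough sketch of a Relocate call (untested; lookups are placeholders).
# Assumes: from pyVmomi import vim, and that vm / destination_host / si are already resolved.
relocate_spec = vim.vm.RelocateSpec()
relocate_spec.host = destination_host
# For a standalone host, the destination resource pool has to be set explicitly,
# e.g. the root resource pool of the destination host's compute resource:
relocate_spec.pool = destination_host.parent.resourcePool
# Pick a datastore visible to the destination host (first one here, as a placeholder):
relocate_spec.datastore = destination_host.datastore[0]

task = vm.Relocate(spec=relocate_spec,
                   priority=vim.VirtualMachine.MovePriority.defaultPriority)
wait_for_task(task, si)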
Hi,
I'm running 2 VMs on separate hosts as Windows print servers to make a nice cluster. Those 2 VMs share a LUN on the NAS.
So each VM has one network card on the public LAN and one network card on the iSCSI LAN.
The problem we just discovered is that each time vCenter performs a vMotion on one of these VMs, each VM requests "master control" of the cluster.
It seems that during the vMotion the network connection is briefly cut, and so the cluster gets into trouble.
Does anyone know what I should modify in my vCenter topology to correct this "cut effect", or at least reduce it to the absolute minimum time?
All feedback is welcome.
Thx
Is there a way to allow VM-VM affinity rules to use the same SHOULD behavior that VM-Host rules can leverage rather than MUST behavior?
VM to Host DRS rules have the option to "should run on this host" or "must run on this host" (also should not and must not). VM to VM affinity rules appear to only allow the "must" operative.
Is there a way to override this behavior, whether by an advanced attribute or via the CLI, to allow the same "should" behavior for VM-VM affinity rules as for the VM-Host rules? The vSphere Resource Management documentation uses the language "DRS tries to keep" in reference to VM-VM affinity rules, but the observed behavior is that it will not allow a powered-off VM to power on if the DRS rule cannot be met. HA admission control is disabled for the cluster in question.
DRS cannot find a host to power on or migrate the virtual machine. This operation would violate a virtual machine affinity/anti-affinity rule.
I am having difficulty doing a vMotion migration, or, more accurately, a relocation via vMotion. I am using the vCenter web client. I am getting the error that the "migrations failed because the ESX hosts were not able to connect over the vMotion network."
From one ESX host (we'll call it ESX A), I can see that it can ping the target ESX host's (ESX B) vMotion IP, but only if I specify the vmk port for management, not vMotion, on ESX A.
Below, vmk1 is the vMotion port IP and vmk0 is the management port IP; vMotion is only enabled on vmk1. But since I CAN ping the target host's vMotion port IP (192.168.175.40) using vmk0, I'm thinking I might just be able to enable vMotion on the management port and have it work (and disable vMotion on the vMotion port). My main concern with this is upsetting the host if I enable vMotion on the management port, as I don't want to bring down this production system.
Or is there a way to correct the routing so that vmk1 can ping, and ultimately vMotion, over the vMotion network the way it should?
[root@ucs-c1-s1:~] esxcfg-route -l
VMkernel Routes:
Network Netmask Gateway Interface
192.168.142.0 255.255.255.0 Local Subnet vmk0
192.168.159.0 255.255.255.0 Local Subnet vmk1
default 0.0.0.0 192.168.142.1 vmk0
[root@ucs-c1-s1:~] vmkping -I vmk0 192.168.175.40
PING 192.168.175.40 (192.168.175.40): 56 data bytes
64 bytes from 192.168.175.40: icmp_seq=0 ttl=62 time=0.256 ms
64 bytes from 192.168.175.40: icmp_seq=1 ttl=62 time=0.254 ms
--- 192.168.175.40 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.254/0.255/0.256 ms
[root@ucs-c1-s1:~] vmkping -I vmk1 192.168.175.40
PING 192.168.175.40 (192.168.175.40): 56 data bytes
--- 192.168.175.40 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
[root@ucs-c1-s1:~]
Hi Community,
I hope you can help with some context on renaming VMDK files during Storage vMotion.
There is no mention of the advanced parameter "provisioning.relocate.enableRename" after the vSphere 5.x versions (in the advanced settings of vCenter Server). It defaulted to true, but it was still configurable in some 5.x versions.
In the 6.x versions, renaming of VMDK files during Storage vMotion is implicitly enabled, and 6.x doesn't expose this parameter in the advanced settings of vCenter. After trying the parameter as "config.provisioning.relocate.enableRename", I was allowed to enter it, but it has no influence on the renaming behavior.
What happened to this configuration parameter in the 6.x versions? I'm wondering whether there is any configuration option for renaming VMDKs during Storage vMotion in 6.x at all, or whether it is implicitly and mandatorily true.
Thank you in advance,
IK
Hello all. We are currently running a vCenter 5.5 U2 instance after upgrading from 5.1.
vMotion jobs, whether fully automated or manual, are failing with the following error: pbm.fault.pbm fault.summary
The only online fixes I have been able to find refer to this same error, but on the vCenter Appliance. We are running vCenter on Windows 2008 R2.
Has anyone seen this before and/or have a fix on hand? Thanks in advance.
Hello,
Is the High Availability feature available in the vSphere evaluation version?
Thanks,
Shelly
Is a live VMotion doable with Horizon 6.1 and vSphere 6.0 assuming both hosts have the same graphics card?
All,
I have an 8-host cluster; 6 of the hosts are 5.5 and 2 are 5.1 (temporary, until the network team can update a VM that only supports 5.1).
I noticed that DRS was not working. After investigating, I found a DRS rule that only included the two 5.1 hosts.
Is VMware smart enough to create that rule on its own? I certainly did not create it, and the other two admins are telling me they did not either.
All, we have customers who keep telling me vMotion is causing application downtime. They may or may not be correct.
I know that the vMotion penalty can vary from about 0.5 to 1.1 seconds. I would like to be able to test that in my lab and produce the results.
Is there any way that I can test this to show our application owners?
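One idea I had (just a sketch, not a validated test plan) is to ping the VM on a short interval during a vMotion and record the longest gap between replies as a rough measure of the cutover time. The target IP, interval, duration, and Linux-style ping flags below are placeholders:
# Rough sketch: measure the longest gap between ping replies to a VM while a
# vMotion is in progress (placeholder target/interval, Linux ping flags).
import subprocess
import time

TARGET = "192.168.1.50"   # placeholder: IP of the VM being vMotioned
INTERVAL = 0.1            # seconds between probes
DURATION = 120            # total measurement window in seconds

last_reply = None
worst_gap = 0.0

end = time.time() + DURATION
while time.time() < end:
    start = time.time()
    # One ping with a 1-second timeout; return code 0 means we got a reply.
    ok = subprocess.call(
        ["ping", "-c", "1", "-W", "1", TARGET],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0
    if ok:
        if last_reply is not None:
            worst_gap = max(worst_gap, start - last_reply)
        last_reply = start
    time.sleep(INTERVAL)

print("Worst gap between replies: %.3f seconds" % worst_gap)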
Thanks!
Is it possible to trigger memory reclamation in VMs manually? (I know it gets triggered automatically as soon as a particular threshold level is reached.)
When a disk IOPS limit is configured, vMotion of a VM becomes very slow, and when the limit is changed to unlimited it works faster. Can anyone help me understand this behavior?
I need some examples of DRS priority levels. I am aware that:
Must rules come under priority 1
Should rules come under priority 2
What are examples of priorities 3, 4, and 5? If anyone has examples, please share them with me.
Hello,
I'm in the process of learning VMware, and I'm setting up vMotion via a VLAN on my network.
I'm having issues where the vMotion of a VM stalls at 14%, and then eventually fails.
The thing I'm unsure about is: does my storage NAS (using NFS) have to have a port on the same VLAN as the vMotion-assigned ports on my ESXi servers?
I've created a dvSwitch (called dvNetMotion) with one port per server (3 servers total) assigned to it, its VLAN ID set to 200, and each server assigned an IP on a separate subnet from the rest of my network.
I thought that vMotion is only used between the servers, so I was thinking this should work. However, I'm now wondering whether vMotion also needs to communicate with the storage server (which is connected via the same path as the VM network; it's a lab, so I'm not too worried about bandwidth).
I'm trying to set up VLANs on my switch (Quanta LB4M), but it's a pain to get configured correctly.
Any help/suggestions are welcome, thanks!