Hi Matt
I implemented something similar to what was recommended here in our live environment yesterday, and so far it seems to work as we'd hoped.
Thanks for the help with this, really appreciate it
Cheers
Thanks.
So, upgrading the virtual machine hardware is only needed for new hardware features? It isn't any more secure?
Whatever licenses you are using, they will need to be upgraded in the VMware portal. E.g. the 5.8 vROps licenses will not work with 6.x vROps, and the 5.5 vCloud Suite license will not work with 6.x vROps.
6.x vROps will run with, and is fully supported on, vSphere 5.5.
Are you using Player 12.5.1? There were some issues with networking in 12.5.0.
Thanks for the suggestion, that is exactly what is happening. I logged in and the cluster (only the 1 node) was at "Failed to Start". I hit Bring Online, and now it is sitting at "Waiting for Analytics".
I have a 7-node cluster with ESXi 6.2 and vSAN enabled, all flash. Currently I have a storage policy with Fault Tolerance method RAID-1 (Mirroring) - Performance,
and I need to change it to RAID-5/6 (Erasure Coding). Can this be done live without affecting the VMs running in the cluster?
Yes, this is certainly possible, and common. One solution is found on Ulli's page - transparent bridge. Sanbarrow.com
Please post your vmx file. Make sure that shared folders are enabled, and defined.
I have my ESXi 6.0 server back up. Do you still want me to try the import on that server?
We fixed this issue with the help of the VMware support team!
Thanks,
Shan
I have 4 LUNs presented to my host and 1 VM that maps them as RDMs. Now I can't store the RDM with the virtual machine: the option is greyed out and only lets me select a local disk on the ESXi host. The only difference is that the other LUNs are 2TB in capacity, not 2.24TB.
Does anyone have any suggestions on what I might be able to do to map the other LUNs?
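For reference, here is roughly how I am mapping the LUNs that do work. A PowerCLI sketch only; the VM name and device path are placeholders, not my actual values:

    # Attach the LUN to the VM as a physical-mode RDM; the mapping file
    # lands on the datastore that holds the VM by default.
    $vm = Get-VM -Name "MyVM"
    New-HardDisk -VM $vm -DiskType RawPhysical `
        -DeviceName "/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx"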
Thanks!
Hey dude, I think something is missing: the Remove-VmwFile function is not attached.
VDS only
I feel like I've researched this before and never found a solution. I know about the `esxcli vsan cluster leave` command, but that only works if you have access to the host. What if you have an entirely dead machine that you need to force out of a cluster? Is there a process for that?
This is a DR site and I just realized the VMFS was version 3, so after upgrading it to 5 it works as expected.
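For anyone else hitting this, a quick PowerCLI check to spot old VMFS-3 datastores:

    # FileSystemVersion reads e.g. 3.xx for VMFS-3 and 5.xx for VMFS-5.
    Get-Datastore | Select-Object Name, FileSystemVersion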
Take a look here: VSAN Part 10 - Changing VM Storage Policy on-the-fly - CormacHogan.com
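In PowerCLI this boils down to building an erasure-coding policy and re-assigning it. A minimal sketch, assuming the standard vSAN capability names; the policy and VM names are placeholders:

    # Build a policy whose single rule sets the vSAN fault tolerance
    # method to RAID-5/6 erasure coding.
    $rule = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.replicaPreference") `
        -Value "RAID-5/6 (Erasure Coding) - Capacity"
    $policy = New-SpbmStoragePolicy -Name "vSAN-EC" -AnyOfRuleSets (New-SpbmRuleSet -AllOfRules $rule)
    # Re-assign the policy; vSAN rebuilds the objects in the background
    # while the VM keeps running.
    $vm = Get-VM -Name "MyVM"
    Set-SpbmEntityConfiguration -Configuration (Get-SpbmEntityConfiguration -VM $vm) -StoragePolicy $policy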
Have you tried deleting the snapshots and pointing the VM to those files? (See the sketch below the file list.)
ServerName_1.vmdk
ServerName_4.vmdk
ServerName_5.vmdk
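If it helps, a minimal PowerCLI sketch for removing all snapshots so the deltas consolidate back into those flat disks. This assumes the VM is named "ServerName" to match the vmdk files; adjust as needed:

    # Remove every snapshot on the VM; ESXi merges the delta files
    # back into the base ServerName_*.vmdk disks.
    Get-VM -Name "ServerName" | Get-Snapshot |
        Remove-Snapshot -RemoveChildren -Confirm:$false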
Due to budget constraints we are forced to do a staged upgrade of the SAN infrastructure that supports our vSphere 6 environment. We have just purchased a new host with 4 x 10Gb Ethernet cards, as well as additional 10Gb cards to bring another existing host up to 4 x 10Gb Ethernet. We have also purchased 2 Netgear 10Gb XS716T switches to replace our old 1Gb switches. We are planning to upgrade our existing HP P2000 MSA with 4 x 1Gb to a P2040 version with 2 x 10Gb, however that is not going to happen this budget cycle. My question relates to the fact that we will be operating in a mixed 1Gb and 10Gb environment for the time being. After spending a few hours researching this I have not come across an article that describes our scenario, and I have some concerns.
Here is what our current environment (after the upgrades) looks like:
6 x HP hosts (two hosts with 4 x 10Gb NICs; the other four hosts have only 1Gb NICs)
2 x Netgear 10Gb XS716T switches
1 x HP MSA P2000 G3 with dual controllers, each containing 4 x 1Gb NICs
We have configured two NICs per host in separate vSwitches that handle the iSCSI traffic: one NIC connects the host to the first switch and the other to the second switch. Our plan is to use 2 of the new 10Gb NICs in each of the upgraded hosts for iSCSI traffic, so we would be running a mixed environment of both 1Gb and 10Gb connections. I assume all the overhead will land on the switch, taking traffic coming in from the 10Gb hosts and choking it down to 1Gb on the way to the MSA.
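For what it's worth, here is roughly how the per-host iSCSI port binding looks today. A PowerCLI sketch only; the host name, adapter, and vmk names are placeholders for our setup:

    # Bind both iSCSI VMkernel ports (one per physical switch) to the
    # software iSCSI adapter so each path is used independently.
    $esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.local") -V2
    $esxcli.iscsi.networkportal.add.Invoke(@{ adapter = "vmhba33"; nic = "vmk1" })
    $esxcli.iscsi.networkportal.add.Invoke(@{ adapter = "vmhba33"; nic = "vmk2" })
    # List the bound ports to confirm both are present.
    $esxcli.iscsi.networkportal.list.Invoke()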
Would this scenario cause any unforeseen issues? Will it work at all?
Thanks in advance for any insight anyone can provide.
Did you really try the batch file I recommended?
The name "vmnetbridge" is not a service name as you would find in services.msc for example but you should find it in registry as HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\VMnetBridge.
If vmnetbridge is not present in your installation brdged network would have never worked.
So please open a cmd and enter "net start vmnetbridge"
You should get a reply like one of these:
- access is denied
- the service name is invalid
- the service has been started successfully
- the service is already running
If you get "access denied" try again with a cmd launched as administrator
Ulli