Channel: VMware Communities: Message List

Re: VM data consumption on vSAN Storage


Yes, your last sentence describes exactly what happened when I tried it.

 

vMotion does not work if the destination LUN has less free space than the space currently consumed by the VM on vSAN.

 

This should not be the case, as it should only need free space equal to the disk size as seen by the guest OS / the size as provisioned (plus some additional space for the swap file, etc.).


SCP between 4.1 and 6.7 no matching KexAlgo


So after many years I have built a new 6.7 machine to replace my 4.1 host, and I now need to migrate my VMs to the new host.

 

But SCP on 4.1 won't connect, failing with the error "No matching KexAlgo".

 

I have come across this before on some old distros, but I can't find how to fix it on vSphere.
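On old distros the usual workaround is to explicitly allow a legacy key-exchange algorithm, so presumably something similar is needed here. A rough sketch on the 6.7 side, assuming its sshd still accepts one of the old diffie-hellman kex algorithms (which may not hold):

# on the ESXi 6.7 host: add a legacy kex to /etc/ssh/sshd_config, e.g. a line such as
#   KexAlgorithms +diffie-hellman-group14-sha1
vi /etc/ssh/sshd_config
/etc/init.d/SSH restart   # restart the SSH service so the change takes effect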

 

Anyone got the solution?

 

Thanks

 

Rob

Re: SCP between 4.1 and 6.7 no matching KexAlgo


If I understand this correctly, you want to move the VMs from the 4.1 host's local datastore to the 6.7 host's local datastore? Or is it a shared datastore? That's one big leap from 4.1 to 6.7.

 

Cheers,

Supreet

failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x0 0x0.


Hello,

We have a dedicated server with NVMe drives.

We got the error from the vmkernel:

 

2018-08-25T07:57:55.549Z cpu7:2097662)ScsiDeviceIO: 3029: Cmd(0x459a40ef0440) 0x93, CmdSN 0x4bbe9 from world 2108876 to dev "t10.NVMe____INTEL_SSDPE2MX450G7_CVPF7453000Z450RGN__00000001" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x0 0x0.

 

After a while this caused damage, and every VM on this datastore crashed.

 

I checked the problem here, but I didn't understand what I should do.

Re: failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x0 0x0.


0x93 is the SCSI WRITE SAME (16) opcode, used for the Write Same (block zeroing) VAAI functionality. In this case, the controller (disk) is reporting a check condition when ESXi issues a write-zero command, so I am not sure this functionality is actually supported. You can run the command <esxcli storage core device vaai status get> to check whether the device supports the VAAI primitives. If the Zero Status shows as unsupported, you can disable the Write Same functionality using the command <esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit>. Having said that, I don't think the VMs would have crashed due to a non-functional VAAI primitive. Are you sure this is the cause?
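For reference, here is how those two commands would be run from the host's shell (the device identifier is simply the one from this thread, and the -d filter is optional):

# check VAAI primitive support for the affected device
esxcli storage core device vaai status get -d t10.NVMe____INTEL_SSDPE2MX450G7_CVPF7453000Z450RGN__00000001

# disable the Write Same (block zeroing) primitive if Zero Status is unsupported
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit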

 

More about Write Same functionality - WRITE SAME | Cody Hosterman / VMware Knowledge Base

 

Please consider marking this answer as "correct" or "helpful" if you think your questions have been answered.

 

Cheers,

Supreet

Re: SCP between 4.1 and 6.7 no matching KexAlgo


Local to Local over network.

 

It's a home system, so it has never really needed upgrading.

Re: failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x0 0x0.


They do support it; Zero status shows as supported on both devices:

 

t10.NVMe____INTEL_SSDPE2MX450G7_CVPF7453000Z450RGN__00000001
   VAAI Plugin Name: 
   ATS Status: unsupported
   Clone Status: unsupported
   Zero Status: supported
   Delete Status: supported
t10.NVMe____INTEL_SSDPE2MX450G7_BTPF807303DX450RGN__00000001
   VAAI Plugin Name: 
   ATS Status: unsupported
   Clone Status: unsupported
   Zero Status: supported
   Delete Status: supported

 

When the VMs crashed, I got these errors in the vmkernel log:

 

83)NMP: nmp_ThrottleLogForDevice:3618: last error status from device t10.NVMe____INTEL_SSDPE2MX450G7_CVPF7453000Z450RGN__00000001 repeated 160 times

7)nvme_ScsiCommand: queue:1 busy                                   
69)nvme_ScsiCommand: queue:2 busy                                   
99)nvme_ScsiCommand: queue:3 busy                                   
69)nvme_ScsiCommand: queue:0 busy                                   
69)nvme_ScsiCommand: queue:1 busy                                   
37)nvme_ScsiCommand: queue:2 busy                                   
37)nvme_ScsiCommand: queue:3 busy                                   
21)nvme_ScsiCommand: queue:0 busy                                   
44)nvme_ScsiCommand: queue:1 busy                                   
21)nvme_ScsiCommand: queue:2 busy                                   
21)nvme_ScsiCommand: queue:3 busy                                                                                                                   
21)nvme_ScsiCommand: queue:0 busy                                   
21)nvme_ScsiCommand: queue:1 busy

 

2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:140 qid:4                                                                   
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_SyncCompletion: no sync request                                                                          
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:141 qid:4                                                                   
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_SyncCompletion: no sync request                                                                          
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:142 qid:4                                                                   
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_SyncCompletion: no sync request                                                                          
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:143 qid:4                                                                   
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_SyncCompletion: no sync request                                                                          
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:144 qid:4    

 

2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:60 qid:4                                                                    
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:24 qid:4                                                                    
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:8 qid:4                                                                     
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:91 qid:4                                                                    
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:6 qid:4                                                                     
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:88 qid:4                                                                    
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:74 qid:4                                                                    
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:72 qid:4                                                                    
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:90 qid:4                                                                    
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:65 qid:4                                                                    
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:53 qid:4                                                                    
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:63 qid:4                                                                    
2018-08-25T08:08:59.255Z cpu0:2097454)nvme_DisableQueue: Cancelling I/O:50 qid:4  

 

2018-08-25T08:08:47.754Z cpu7:2097454)nvme_TaskMgmt: adapter:1 type:abort                                                                                                                       
2018-08-25T08:08:47.754Z cpu7:2097454)nvme_TaskMgmt: waiting on command SN:4c55a                                                                                                                
2018-08-25T08:08:47.755Z cpu0:2097183)ScsiDeviceIO: 3029: Cmd(0x459a898340c0) 0x28, CmdSN 0x4c55a from world 0 to dev "t10.NVMe____INTEL_SSDPE2MX450G7_CVPF7453000Z450RGN__00000001" failed H:0x
2018-08-25T08:08:47.761Z cpu1:2100036)HBX: 3033: 'datastore1': HB at offset 3407872 - Waiting for timed out HB:                                                                                 
2018-08-25T08:08:47.761Z cpu1:2100036)  [HB state abcdef02 offset 3407872 gen 345 stampUS 6247906187 uuid 5b80f5b9-95bf4dba-f27a-ac1f6b01063c jrnl <FB 9> drv 24.82 lockImpl 3 ip 145.239.3.79] 
2018-08-25T08:08:57.754Z cpu0:2097454)nvme_ResetController: adapter:1                                                                                                                           
2018-08-25T08:08:57.762Z cpu1:2100036)HBX: 3033: 'datastore1': HB at offset 3407872 - Waiting for timed out HB:                                                                                 
2018-08-25T08:08:57.762Z cpu1:2100036)  [HB state abcdef02 offset 3407872 gen 345 stampUS 6247906187 uuid 5b80f5b9-95bf4dba-f27a-ac1f6b01063c jrnl <FB 9> drv 24.82 lockImpl 3 ip 145.239.3.79] 
2018-08-25T08:08:59.254Z cpu0:2097454)nvme_ResetController: adapter:1 disabled, clear queues and restart

 

 

2018-08-25T08:08:44.386Z cpu0:2097183)NMP: nmp_ThrottleLogForDevice:3689: Cmd 0x28 (0x459a898340c0, 0) to dev "t10.NVMe____INTEL_SSDPE2MX450G7_CVPF7453000Z450RGN__00000001" on path "vmhba3:C0::T0:L0" Failed: H:0xc D:0x0 P:0x0 Invalid sense data: 0x0 0x0

2018-08-25T08:08:44.386Z cpu10:2097700)NMP: nmp_ResetDeviceLogThrottling:3519:: last error status from device t10.NVMe____INTEL_SSDPE2MX450G7_CVPF7453000Z450RGN__00000001 repeated 10678 times 

 



vSphere 6.5 + few NIC 10G = slow network speed


Hello, colleagues!

1. We have:

- VCSA 6.5.0.21000

- several ESXi hosts, 6.5.0, build 8935087

- 2 physical DELL FORCE10 S4810 switches (the switches are combined in a LAG)

- 2x 10G Intel X520 (82599) NICs on each host (each NIC is connected to a different physical switch)

- MTU 9000 enabled everywhere

 

2. On the dvSwitch, the following is configured:

- the NICs are combined into a LAG (active mode, load balancing: source and destination IP address, TCP/UDP port and VLAN)

- a Private VLAN port group; all the necessary settings have also been made on the physical switches.

 

3. Test machines running Windows Server 2012 R2 with vmxnet3 adapters (VM hardware version 13) and VMware-tools-10.2.5-8068406 have been prepared. All machines are located on the same subnet and in the same port group.

 

So, shouldn't I be getting a total speed between hosts of around 20G, and a similar speed between the virtual machines?

 

But:

- iperf between hosts: about 8G

- iperf between VMs hosted on the same host: 3-3.5G

- iperf between VMs hosted on different hosts: only 1-2G (multi-stream test sketched below)
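Note that a single TCP stream is normally hashed onto only one LAG uplink, so aggregate throughput only shows up with parallel streams. A minimal guest-to-guest test sketch (the server address is a placeholder):

iperf3 -s                           # on the receiving VM
iperf3 -c 192.168.1.10 -P 8 -t 30   # on the sending VM: 8 parallel streams for 30 seconds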

 

What could be the problem?



Re: SCP between 4.1 and 6.7 no matching KexAlgo


Perhaps a temporary FreeNAS box will be built.

Re: SCP between 4.1 and 6.7 no matching KexAlgo


Can you share the screenshot of the command syntax you are using and the error being reported?

 

Cheers,

Supreet

Re: failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x0 0x0.


Looks like the user below had a similar issue and got around it by disabling the VAAI primitives:

 

https://www.reddit.com/r/vmware/comments/7khhfy/esxi_65u1_psod_intelnvme/

 

You may want to give it a shot; see the sketch below for the relevant settings. Also, we see 'H:0xc' events, which indicate a transient error with the storage; in such scenarios the commands will be reissued. Updating the NVMe controller driver/firmware to the latest version is also an important step towards isolating the cause.
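For reference, the VAAI primitives are usually disabled through the advanced settings below (a sketch only; setting the values back to 1 re-enables them):

# block zeroing (WRITE SAME), full copy (XCOPY) and ATS locking, respectively
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove
esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking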

 

Please consider marking this answer as "correct" or "helpful" if you think your questions have been answered.

 

Cheers,

Supreet

Re: vCenter 6.7 for Windows unattended installation


Hello, thank you for the settings, they worked. I added the following settings. The attached vc.json does not even need the hostname specified; it uses the one configured in the system. The system has a static IP and is joined to an Active Directory prior to the vCenter installation.

"ceip": {        
     "ceip.enabled": false,
     "ceip.acknowledged": true
}

Cheers,

Thomas

Re: VAMI no longer up after update to 6.5.0.22000 Build Number 9451637


Follow up:

The patch update failed because the root password (or the admin password) needed to be updated.

You can see this in /storage/log/vmware/applmgmt/software-packages.log

So I updated the root and the admin passwords, installed the update via ISO, and now I am on version 6.5.0.22000 Build Number 9451637.

 

BUT:

 

VAMI was still not up after a reboot. All services looked OK in the VCSA console, but vami-lighttp was disabled in the OS. To start it, there are two options (recapped below):

1) go to the console/shell and run "service vami-lighttp start"; this is needed again every time you restart the VCSA, or

2) go to the console/shell and run "chkconfig vami-lighttp on" once, and reboot.
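In shell form, the two options above are:

# option 1: start the service for the current boot only
service vami-lighttp start

# option 2: enable it at boot, then reboot once
chkconfig vami-lighttp on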

 

I marked this as the correct answer as it is more complete, but I would have been unable to get here without the help from a.p.

Re: VAMI no longer up after update to 6.5.0.22000 Build Number 9451637


The main goal for a community is to get help, and to help others.

It is absolutely ok to mark your own reply with the solution as the correct answer.

The solution you've posted will definitely help others. Thanks for sharing it!

 

André


Re: vSphere 5.5 - Shared Value for VM


I found the problem.

 

It turned out to be the hardware controller and its Cache Ratio setting. In my case it was an HP P410 HW controller.

 

 

/opt/smartstorageadmin/ssacli/bin/ssacli ctrl all show config detail | head -40

Cache Board Present: True

  Cache Status: OK

  Cache Status Details: A cache error was detected. Run a diagnostic report for more information.

  Cache Ratio: 100% Read / 0% Write

  Drive Write Cache: Disabled

To fix it, the Cache Ratio must be forced to 50/50:

 

/opt/smartstorageadmin/ssacli/bin/ssacli ctrl slot=2 modify cr=50/50 forced

 

And then suddenly the error disappears, and we can change it to the optimal setting for us, for example 25% Read / 75% Write:

 

/opt/smartstorageadmin/ssacli/bin/ssacli ctrl slot=2 modify cr=25/75
/opt/smartstorageadmin/ssacli/bin/ssacli ctrl all show config detail | head -40

Smart Array P410 in Slot 2

   Bus Interface: PCI

   Slot: 2

[....]

   Cache Board Present: True

   Cache Status: OK

   Cache Ratio: 25% Read / 75% Write

   Drive Write Cache: Enabled

   Total Cache Size: 256 MB

   Total Cache Memory Available: 144 MB

 

dd if=/dev/zero of=/mnt/trash/test-file bs=999k count=10k; rm /mnt/trash/test-file 

10240+0 records in

10240+0 records out

10475274240 bytes (10 GB, 9,8 GiB) copied, 27,8135 s, 377 MB/s

 

 

And everything works!

 

VM+vdisks limitation for migrate


Dear all

Hi

 

Is there any limitation when migrating a VM + vdisks from one ESXi host to other ESXi hosts? For example a maximum number of vdisks, or ...

 

Br

Re: VM+vdisks limitation for migrate


I'm not aware of such a limit. Any VM configuration that's supported on both the source and the target host should work.


André

vmware vSphere


Hello!

 

I would like to try VMware vSphere Enterprise Plus edition; where can I find the link? As I understand it, vGPU is included in this Enterprise Plus edition.

My questions are: is this software able to remotely monitor a virtual machine created on vSphere from an Android smartphone? And can I work on it remotely from any device, whether a tablet or an Android or iOS smartphone?

Thanks, I look forward to your replies.

Re: VM data consumption on vSAN Storage


Hello andvm,

 

 

Verify which SP (Storage Policy) the VM has and that it is compliant with it (e.g. check that it doesn't have an FTT=0 SP and/or is non-compliant).

What datastore option are you using when performing the SvMotion? Do ensure you are applying something valid, and please share a screenshot of what this shows at the validation point.

Are you 100% positive that your 1.5TB SAN LUN actually has 1.5TB of space available (e.g. not thin/over-provisioned)? Are you getting a pre-check failure saying there is not enough space? If so, where? Please share it.

Are you using anything that isn't SPBM-aware, such as the C# client?

Are the vmdk objects attached to the VM all 1MB-aligned? (e.g. when you look at them under Edit Settings, do they show as 2TB or, for example, 2000.12823467GB?) See the sketch below for a quick way to check.
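A rough way to check this from an ESXi shell, assuming the vmdk descriptor is reachable under the VM's namespace folder (the path below is a placeholder): the RW line gives the size in 512-byte sectors, and a count divisible by 2048 means an exact multiple of 1MB.

grep "^RW " "/vmfs/volumes/vsanDatastore/MyVM/MyVM.vmdk" | \
  awk '{ if ($2 % 2048 == 0) print $2, "sectors: 1MB-aligned"; else print $2, "sectors: not 1MB-aligned" }'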

 

 

Bob
