[VMware] Huananzhi X79-8D ESXi: use RDM (Raw Device Mapping) to mount a local SATA NTFS disk

Running ESXi on a Huananzhi X79-8D, I sometimes want to pull data off an old hard drive, and RDM is the first thing that comes to mind, only to find that RDM cannot be used. Yet the BIOS (press Del at boot) clearly shows Intel VT-d enabled, so why does it still not work?

I found another approach that achieves the same RDM effect.

Workaround:
step01. SSH into the ESXi host.

step02. List the local disks and find the device name:
# ls /dev/disks
# esxcfg-mpath -l
# ls -al /vmfs/devices/disks
Look for a device-name string like "t10.ATA_____HD1000320AS_________________________________________XXXXXXXX".

step03. Create an RDM link.
Command: vmkfstools -r <source> <destination>
e.g.
vmkfstools -r /vmfs/devices/disks/t10.ATA_____HD1000320AS_________________________________________XXXXXXXX "/vmfs/volumes/<target VM folder>/RDM-disk.vmdk"

vmkfstools -z /vmfs/devices/disks/t10.ATA_____HD1000320AS_________________________________________XXXXXXXX "/vmfs/volumes/<target VM folder>/RDM-disk.vmdk" (I recommend -z because it is closer to the physical device)

(Note)
-r = --createrdm /vmfs/devices/disks/…
-z = --createrdmpassthru /vmfs/devices/disks/…
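
To double-check that the mapping file points at the right device, vmkfstools can query it; a minimal sketch reusing the example path above (if I recall correctly, -q / --queryrdm lists the RDM attributes):
# vmkfstools -q "/vmfs/volumes/<target VM folder>/RDM-disk.vmdk"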

step04. Edit the VM that will use the local SATA disk: right-click > "Edit Settings" > Add > HDD > Existing hard disk, and pick the newly created RDM-disk.vmdk.

step05. Open the VM console, then in Disk Management bring the disk online.
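
If you prefer the guest's command line over Disk Management, the built-in Storage cmdlets work too; a minimal sketch, assuming the RDM shows up as disk number 1 (check with Get-Disk first):
PS C:\> Get-Disk                              # find the disk number of the RDM
PS C:\> Set-Disk -Number 1 -IsOffline $false  # bring it online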

Done.

Reference:
1. osiutino's Blog – Mounting a physical disk directly to a VM with RDM on ESXi (works for HP MicroServer N36L)
2. www.vmwarearena – 2 Simple ways to Create Virtual Compatibility RDM Disks
3. 暉獲無度的步烙閣 – Adding a USB external drive as a datastore in ESXi 6.0
4. homecomputerlab – VMware SATA disk Raw Device Mapping (RDM)

[VMware] Huananzhi X79-8D: ESXi 6.7 U3 installation fails?

Now that I have this second-hand "e-waste" server, installing ESXi on it only seems right. In my excitement I downloaded the ESXi 6.7 U3 installer directly and used Rufus to create an ESXi USB boot drive, only to find that loading stops halfway with the screen below:

Error message :
Shutting down firmware services…
Page allocation error: Out of resources
Failed to shutdown the boot services.
unrecoverable error

Workaround:

Although this looked like a UEFI-mode problem, switching to Legacy mode made no difference.
Forums suggest a few approaches: update the BIOS / disable UEFI / legacy-only boot / disable VT-d (mandatory); none of them fit my environment or helped.
So let's try another way: install an older version first and then upgrade. X-)


Step01. Install ESXi 6.7 U2 (PS: the Huananzhi X79 onboard NIC is a Realtek 8168, so remember to build a custom image by hand!)

Step02. Apply the update patch 'ESXi670-201911001.zip / Build 15018017 / MD5 checksum 8d3ef79c9275bc97f9ce081b70e901c6'
PS: this problem is only resolved by this patch; please see the reference below.

> esxcli network firewall ruleset set -e true -r httpClient

> esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep -i ESXi-6.7.0-2019

> esxcli software profile update -p ESXi-6.7.0-20191104001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

Step03. After the update, reboot and check that the version is now ESXi 6.7 U3 + the latest patch
(the patch above is Build 15018017; before the update this host reported ESXi 6.7.0, Build 13006603)
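
A quick way to verify from the ESXi shell; both commands print the product version and build number:
> vmware -vl
> esxcli system version get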

Reference:
1. virtusolve.home.blog – ESXi Host cannot boot after upgrading to 6.7 U3 - Page allocation error: Out of resources
2. GitHub – "Multiboot buffer is too small." after upgrade to ESXi-6.7.0-20181002001-standard (Build 10302608) #1

[VMware] Huananzhi X79-8D onboard NIC (Realtek 8168): build a custom ESXi 6.7.x image

Preparation
Step01.
PS > Set-ExecutionPolicy -ExecutionPolicy RemoteSigned

Step02.
PS> Install-Module -Name VMware.PowerCLI

Step03.
Download " ESXi-Customizer-PS-v2.6.0.ps1 " , download link (至今是最新作者說是最後一版)

Step04.
Download the Realtek 8168 driver for ESXi, download link
Reference : https://vibsdepot.v-front.de/wiki/index.php/Net55-r8168#Direct_Download_links

Step05.
Download the VMware vSphere Hypervisor (ESXi) Offline Bundle (PS: not the .ISO installer boot CD!)
e.g. to install ESXi 6.7 Update 2 the filename is update-from-esxi6.7-6.7_update02.zip

Step06.
PS> Set-ExecutionPolicy Unrestricted

Step07.
PS> .\ESXi-Customizer-PS-v2.6.0.ps1
(If you want to suppress the PowerCLI CEIP warning message, run the command below first:)
PS> Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $true

Step08.
PS> ./ESXi-Customizer-PS-v2.6.0.ps1 -izip update-from-esxi6.7-6.7_update03.zip -dpt net55-r8168-8.045a-napi-offline_bundle.zip -load net55-r8168

------------------------------- Execution log ---------------------------------
PS D:\temp> ./ESXi-Customizer-PS-v2.6.0.ps1 -izip update-from-esxi6.7-6.7_update03.zip -dpt net55-r8168-8.045a-napi-offline_bundle.zip -load net55-r8168

Security warning
Run only scripts that you trust. While scripts from the internet can be useful, this script can potentially harm your computer. If you trust this script, use the
Unblock-File cmdlet to allow the script to run without this warning message. Do you want to run D:\temp\ESXi-Customizer-PS-v2.6.0.ps1?
[D] Do not run  [R] Run once  [S] Suspend  [?] Help (default is "D"): R

This is ESXi-Customizer-PS Version 2.6.0 (visit https://ESXi-Customizer-PS.v-front.de for more information!)
(Call with -help for instructions)

Logging to C:\Users\duke\AppData\Local\Temp\ESXi-Customizer-PS-3020.log ...

Running with PowerShell version 5.1 and VMware PowerCLI version 11.5.0.14899560

Adding base Offline bundle update-from-esxi6.7-6.7_update03.zip ... [OK]

Connecting additional depot net55-r8168-8.045a-napi-offline_bundle.zip ... [OK]

Getting Imageprofiles, please wait ... [OK]

Using Imageprofile ESXi-6.7.0-20190802001-standard ...
(dated 08/08/2019 09:57:28, AcceptanceLevel: PartnerSupported,
Updates ESXi 6.7 Image Profile-ESXi-6.7.0-20190802001-standard)

Load additional VIBs from Online depots ...
   Add VIB net55-r8168 8.045a-napi [New AcceptanceLevel: CommunitySupported] [OK, added]

Exporting the Imageprofile to 'D:\temp\ESXi-6.7.0-20190802001-standard-customized.iso'. Please be patient ...


All done.
-------------------------------   END  ---------------------------------
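
After installing from the customized ISO, you can confirm the Realtek driver actually made it onto the host from the ESXi shell; a quick check, assuming the VIB name net55-r8168 used above:
> esxcli software vib list | grep -i r8168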

Reference:
1. networkguy – Installing Realtek Driver on ESXi 6.7
2. 流水上面的一塊葉 – VMware ESXi 6.7: make your unsupported NIC work (installing on an unsupported network card)
3. S小魚仔S – Using ESXi-Customizer-PS to package Realtek and AHCI drivers into ESXi 6.5
4. 可丁丹尼 @ 一路往前走2.0 – Adding a Realtek network driver to ESXi
5. VMware Front Experience – ESXi-Customizer-PS

[VMware] Cluster with EVC enabled, but adding a host fails? Turns out the BIOS CPU Monitor / MWait setting was the culprit 0-o

To prepare for the customer's future host expansion, I enabled the EVC level matching the current CPU generation when creating the cluster. Unexpectedly, pulling a host into the cluster failed with the following error message:

" 主機的 CPU 硬件應支持群集當前的 Enhanced vMotion Compatibility 模式,但主機現在缺少某些必要的 CPU 功能。請檢查主機的 BIOS 配置,確保未禁用必要的功能(例如 Intel 的 XD、VT、AES 或 PCLMULQDQ,或者 AMD 的 NX)。有關詳細信息,請參見知識庫文章 1003212

The host is a Lenovo SR530.
If the EVC cluster baseline is confirmed to be "Intel 'Skylake' Generation", the problem is that MONITOR / MWAIT is disabled in the host's BIOS and must be enabled.
For best performance, Choose Operating Mode had been set to Maximum Performance, but that mode turns Monitor / MWait off; you have to switch to Custom to enable it, or the quickest fix is to go back to the factory-default operating mode.

If the BIOS has no such option, check whether a newer BIOS version adds it and update; otherwise lower the EVC level to a Nehalem (or earlier) EVC cluster baseline (see the PowerCLI sketch below).

BIOS (F1) > System Settings > Choose Operating Mode > Efficiency – Favor Performance enables MONITOR / MWAIT by default.
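
If lowering the EVC level is the route you take, it can also be done from PowerCLI; a minimal sketch, where the vCenter address and the cluster name "Cluster01" are placeholders for your environment:
PS> Connect-VIServer -Server vcenter.example.local
PS> Get-Cluster -Name "Cluster01" | Set-Cluster -EVCMode "intel-nehalem" -Confirm:$false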

Reference:
1. vMotion/EVC incompatibility issues due to AES/PCLMULQDQ (1034926)
2. Enhanced vMotion Compatibility (EVC) processor support (1003212)
3. Huawei document EKB1100006339 – Monitor/MWait not enabled in the BIOS causes an error when enabling the EVC feature in VMware

[VMware] Installing a branded Windows Server ROK (Reseller Option Kit) edition in a VM

Because the underlying layer is VMware, the virtual machine cannot read the host's BIOS information directly, so the vendor ROK activation check cannot recognize the hardware.

Resolution :

step01. modify vm.vmx

step02. Add: SMBIOS.reflectHost = "TRUE"

step03. vim-cmd vmsvc/reload {VMID of this VM}
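
The VMID needed in step03 can be looked up from the ESXi shell; for example (the grep pattern is just a placeholder for your VM's name):
# vim-cmd vmsvc/getallvms | grep -i <vm-name>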

Reference:
1. ESX Server 3.0.1, Patch ESX-1002095; Updates to VMware-esx-vmx and VMware-esx-tools; Support for OEM Windows SLP (1002095)
2. How to: VMWARE: pass mainboard BIOS to VM
3. 51CTO Blog – Activating OEM Windows on VMware vSphere ESXi 6.0

[VMware] vSphere vMotion stuck/failing at 33% with "Failed waiting for data. Error 195887179. Connection reset by peer"

Starting with vSphere 6.0, vMotion also checks whether the vmkernel MTU matches the MTU configured on the physical switch/router; if they don't match, the migration can easily fail.

Resolution:
1. Check the vSwitch / DSwitch MTU.
2. Check the vmkernel MTU (a quick way to verify both, plus the end-to-end path, is shown below).
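
From the ESXi shell the MTUs can be listed and a jumbo-frame path test run; the destination IP is a placeholder for the peer host's vMotion vmkernel address, and the 8972-byte payload assumes an MTU of 9000:
> esxcfg-vmknic -l                                # lists vmkernel NICs with their MTU
> esxcli network vswitch standard list            # shows the standard vSwitch MTU
> vmkping -d -s 8972 <destination vmkernel IP>    # -d sets don't-fragment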

Reference:
1. After upgrading to ESXi 6.x, vMotion fails with the error: Failed waiting for data. Error 195887179. Connection reset by peer (2120640)

[Brocade] Fabric switch SSH login message "Sorry! Max remote sessions for login:admin is 2"

During maintenance I found the admin account's maximum of two sessions fully occupied, so I couldn't get in for a while; the login shows the message above.

Step01. Log in as root (default password: fibranne); either the serial console or SSH will do.

Step02. > who  // list the current sessions

Step03. > killtelnet  // you will be prompted to pick which session to kill; enter its number and then press "y".
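
If stale sessions keep eating up the two slots, Brocade FOS also has (as far as I remember) a timeout command that sets the shell idle timeout in minutes, so abandoned sessions clean themselves up; new logins pick up the value:
> timeout       // show the current idle timeout
> timeout 10    // set the idle timeout to 10 minutes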

Done.

[Windows] Windows 10 1903: using Teaming & VLANs with an Intel I350-T4?

Intel's go-to teaming tool is ANS (Advanced Network Services), which lets you build a team (bonding) and create VLAN virtual adapters through a GUI. But since Windows 10 1709, ANS can no longer be used... Orz


After some Googling, a few articles and Intel's official docs show that the same result can still be achieved with PowerShell. The only downside is that the PowerShell commands take a very... very... long time to run.

Resolution :

Step01. Install the latest driver package, currently 24.2 (2019/8/16), "PROWinx64.exe".

Step02. Load the Intel PROSet PowerShell module
PS C:\> Import-Module -Name "C:\Program Files\Intel\Wired Networking\IntelNetCmdlets\IntelNetCmdlets"

Step03. Verify that the Intel PROSet PowerShell module was imported
PS C:\> Get-IntelNetAdapter

<Create Teaming (Bonding)>
Step04. I always use LACP (802.3ad) mode.
(Create)
PS C:\> New-IntelNetTeam -TeamMemberNames "Intel(R) Ethernet Server Adapter I350-T4","Intel(R) Ethernet Server Adapter I350-T4 #2" -TeamMode IEEE802_3adDynamicLinkAggregation -TeamName "BOND"

(Remove)
PS C:\> Remove-IntelNetTeam -TeamName "BOND"

(Modify)
PS C:\> Set-IntelNetTeam -TeamName "BOND" -NewTeamName "TEAM"

(Add member)
PS C:\> Add-IntelNetTeamMember -TeamName "BOND" -Name "Intel(R) Ethernet Server Adapter I350-T4 #3"
PS C:\> Add-IntelNetTeamMember -TeamName "BOND" -Name "Intel(R) Ethernet Server Adapter I350-T4 #4"
PS: the TeamMemberNames above are not the so-called alias/friendly names but the adapters' actual device names!

Intel TeamMode comes in six flavors:
* AdapterFaultTolerance (NIC active/standby)
* AdaptiveLoadBalancing
* IEEE802_3adDynamicLinkAggregation (LACP)
* StaticLinkAggregation (static LAG, no LACP negotiation with the switch)
* SwitchFaultTolerance (active/standby, intended for blade-server NICs)
* VirtualMachineLoadBalancing (intended for Hyper-V scenarios)
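
After creating or modifying a team (Step04 above), the result can be checked with the matching query cmdlet from the same IntelNetCmdlets module:
PS C:\> Get-IntelNetTeam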

<Create Teaming VLAN>
Step05.
** For a teaming environment **
(Single VLAN at a time)
PS C:\> Add-IntelNetVLAN -ParentName "BOND" -VLANID 11
PS C:\> Add-IntelNetVLAN -ParentName "BOND" -VLANID 12
PS C:\> Add-IntelNetVLAN -ParentName "BOND" -VLANID 13
(Multiple VLANs at once)
PS C:\> Add-IntelNetVLAN -ParentName "BOND" -VLANID (111..115)
PS C:\> Remove-IntelNetVLAN -ParentName "BOND" -VLANID (111..115)

** For a single Intel NIC **
(Create)
PS C:\> Add-IntelNetVLAN -ParentName "Intel(R) Ethernet Server Adapter I350-T4 #4" -VLANID 11
(Delete)
PS C:\> Remove-IntelNetVLAN -ParentName "Intel(R) Ethernet Server Adapter I350-T4 #4" -VLANID 11
(Modify)
PS C:\> Set-IntelNetVLAN -ParentName "Intel(R) Ethernet Server Adapter I350-T4 #4" -VLANID 11 -NewVLANID 81

Non-Intel PowerShell commands you'll also use along the way:
Get-NetIPConfiguration
Get-NetIPInterface
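
Once the team or a VLAN child adapter exists, IP settings are assigned with the standard built-in cmdlets; a minimal sketch, where the interface alias "TEAM: BOND - VLAN 11" and the addresses are placeholders for your environment:
PS C:\> Get-NetAdapter    # find the alias of the new interface
PS C:\> New-NetIPAddress -InterfaceAlias "TEAM: BOND - VLAN 11" -IPAddress 192.168.11.10 -PrefixLength 24 -DefaultGateway 192.168.11.254
PS C:\> Set-DnsClientServerAddress -InterfaceAlias "TEAM: BOND - VLAN 11" -ServerAddresses 8.8.8.8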

For help on the Intel PowerShell cmdlets, use Get-Help, e.g.:
> Get-Help Set-IntelNetVLAN

Reference:
1. How to Set up Teaming with an Intel® Ethernet Adapter in Windows® 10 1809?
2. Intel Adapter User Guide for Intel® Ethernet Adapters
3. Intel® PROSet for Windows PowerShell* software
4. Quirky Virtualization – Automating Intel Network Adapter VLAN configuration
5. Metonymical Deflection – Windows 10 VLAN interface configuration
6. How to set up teaming with an Intel® Ethernet Adapter in Windows® 10 1809? (zh-TW version)
7. How to configure VLANs with an Intel® Ethernet Adapter under Windows® 10 (build 1809)? (zh-TW version)

[NetApp] Data ONTAP 7-Mode Simulator 8.x archive links

http://mysupport.netapp.com/NOW/download/tools/simulator/ontap/8.2.1/vsim_netapp-7m.tgz

http://mysupport.netapp.com/NOW/download/tools/simulator/ontap/8.1.4/vsim_netapp-7m.tgz

https://mysupport.netapp.com/NOW/download/tools/simulator/ontap/8.1.2/vsim-DOT812-7m.tgz

https://mysupport.netapp.com/NOW/download/tools/simulator/ontap/8.0.1/vsim-DOT801-7m.zip