[VMware] vRealize Operations Manager 6.3 ~ 7.5 Enable SSH service

vRealize Operations Manager 6.x runs on SUSE Linux Enterprise 11 underneath. Unlike the vCenter Appliance, it is not built on VMware Photon OS, so you cannot press ALT + F2 the vCenter way to enable SSH.

step01. Open the vROps console

step02. ALT + F1

step03. Log in with the default account 'root' and a blank password; you are immediately prompted to set a new password.

step04. # service sshd start

step05. # chkconfig sshd on

step06. Test the connection with an SSH client such as PuTTY.

Reference :

VMware KB – Enabling SSH access in vRealize Operations Manager 6.x and later (210051

vGyan.in : vRealize Part 7 – Enable SSH on vROPS

[HPE] ProLiant DL380p Gen8 iLO4 GUI show “Embedded Flash/SD-CARD: Failed restart.."

A customer reported that the iLO4 was acting strangely; even after pulling AC power for a minute the symptoms persisted. It also seemed to hang frequently, although pinging the iLO4 address still got replies with no packet loss. The iLO4 login page often showed "iLO Self-Test report a problem with: Embedded Flash/SD-CARD: Failed restart.. view details on Diagnostics page." and "Connection with iLO cannot be established. If you recently made changes to the network configurations, you may need to refresh this page to re-negotiate an SSL connection."

HPE support recommended the solution below.

step01.
Upgrade the iLO4 firmware to 2.44 or newer (2.44 is the first version able to format the SD-Flash media {NAND}).

step02.
NAND Format Methods

* From the iLO 4 GUI (requires iLO 4 firmware version 2.61 or newer)
* From the Onboard Administrator (for servers in HPE BladeSystem c3000/c7000 Enclosures only)
* From Windows OS (using the HPQLOCFG.exe utility)
* From Windows PowerShell (using HPE iLO cmdlets for PowerShell)
* From Linux or VMware (using the hponcfg utility)

I chose to download the HPQLOCFG.exe utility from "Software – Lights-Out Management v5.0.0"; note that it has to be installed on your local workstation.

step03. Create the script force_format.xml (any file name works, as long as you pass the same .xml name to hpqlocfg.exe).

<RIBCL VERSION="2.0">
<LOGIN USER_LOGIN="iLO4-admin-account" PASSWORD="iLO-password">
<RIB_INFO MODE="write">
<FORCE_FORMAT VALUE="all" />
</RIB_INFO>
</LOGIN>
</RIBCL>

step04.
Before formatting: (1) upgrade the iLO4 to v2.53 or later, (2) reset the iLO, (3) keep the server connected to power but not powered on while formatting.

"c:\Program Files (x86)\Hewlett-Packard\HP Lights-Out Configuration Utility\HPQLOCFG.exe" -f force_format.xml -s <iLO-IP> -u <iLO-admin-account> -p <iLO-password>

**If it succeeds, the command prompt shows "Forcing a format of the partition after the iLO reset", and the iLO4 event log records "Embedded Flash/SD-CARD: One or more storage devices have been formatted."
To confirm the format worked, check iLO4 > Information > System Information > Firmware > Intelligent Provisioning (the version will now show N/A).

step05. Download the "HP Intelligent Provisioning Recovery Media" (or the multi-version "HPE IPRM").
(PS: version mapping is as follows:
Gen8 servers support Intelligent Provisioning 1.x
Gen9 servers support Intelligent Provisioning 2.x
Gen10 servers support Intelligent Provisioning 3.x
)

step06. During POST, press F11 for the Boot Menu.

step07. Once in the menu, choose the first entry, "Intelligent Provisioning Recovery Media".
You will see three stages in turn:
> Verifying system settings – this may take up to 30 seconds
> Running flash process – please wait until the process is complete.
> Update Complete – you must reboot to apply your changes.

step08. Restart the iLO4 (Information > Diagnostics > Reset).

step09. Verify that iLO health is no longer degraded, and that the Intelligent Provisioning firmware now shows a version instead of the N/A state.

Done.

(If none of the above helps, it is time to replace the system board... God bless you.)

Reference:
1. HPE ProLiant Gen8 Servers – How to Reinstall or Upgrade Intelligent Provisioning

2. HPE Document ID: c04996097 ,v10 – Advisory: (Revision) HPE Integrated Lights-Out 4 (iLO 4) – HPE Active Health System (AHS) Logs and HPE OneView Profiles May Be Unavailable Causing iLO Self-Test Error 8192, Embedded Media Manager and Other Errors

3. HPE Document ID: a00048622en_us ,v5 – Advisory: (Revision) HPE Integrated Lights-Out 4 (iLO 4) – How to Format the NAND Used to Store AHS logs, OneView Profiles, and Intelligent Provisioning

4. HPE Document ID: a00047494en_us ,v1 – Notice: HPE Integrated Lights Out (iLO) 4 – RESTful Command to Allow an Auxiliary Power-Cycle Is Available in Firmware Version 2.55 (and Later)

5. 狸貓先生愛廢話講堂 – HPE Server hands-on: Embedded Flash failure on a ProLiant Gen9 preventing Intelligent Provisioning from starting

[Windows] Continuous rapid ping, similar to Cisco's fast ping

Download the PsTools suite: https://bit.ly/2Kys8

Command > psping.exe -t -i 0 192.168.1.1

psping64.exe -t -i 0 192.168.1.1
(Note) The parameters must be exactly as above; change them and psping falls back to its default of only four pings!


Parameter notes:

-t > ping continuously until stopped with Ctrl+C.
-i 0 > zero-second interval between pings.
-nobanner > Do not display the startup banner and copyright message.

For mode-specific usage, run psping -? i (ICMP ping), psping -? t (TCP ping), psping -? l (latency test), or psping -? b (bandwidth test).

[VMware] ESXi 6.5.x migrate VM but available hosts missing one host ?

After some hardware changes on a customer's host, I brought it back online; but when the customer tried to migrate VMs back onto it, the host was inexplicably missing from the list of available hosts. Switching to the vSphere Web Client (Flash) made no difference.

From the KB, this appears to affect only the ESXi 6.5 series.

Workaround:

step01. In the vSphere Web Client, select the VM to be migrated > Launch Remote Console, then immediately close that VM console.

step02. Retry the migration.

Reference:
1.vMotion not showing all available hosts in the Cluster in vSphere 6.5 (57230)
2. Troubleshooting the migration compatibility error: The VMotion interface is not configured (or is misconfigured) on the destination host (1003827)
3. Understanding and troubleshooting vMotion (1003734)

[Nutanix] WinSCP connect CVM use SFTP protocol

Starting with AOS 5.5, SFTP port 2222 is closed by default.

step01. @CVM$ allssh modify_firewall -f -o open -i eth0 -p 2222 -a // open the firewall port

step02. In WinSCP choose SFTP > the cluster IP, port 2222 // credentials: admin / the Prism admin password

(Supplement)
* Linux / Mac *
$ sftp -P 2222 admin@cluster-vip:/container-name // connect
$ put test.vmdk
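For scripting, the Linux/macOS invocation above can be assembled by a tiny helper; `sftp_cmd`, the IP, and the container name here are my own placeholders, not a Nutanix tool:

```shell
# Hypothetical helper: build the sftp command line shown above.
# $1 = cluster VIP (or CVM IP), $2 = target container name.
sftp_cmd() {
  echo "sftp -P 2222 admin@$1:/$2"
}

sftp_cmd 10.0.0.50 default-container   # prints: sftp -P 2222 admin@10.0.0.50:/default-container
```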

[Nutanix] AHV decrease CVM memory

Resources are always scarce when running the Nutanix CE edition: on the commonly used 16 GB (16,384 MB) node, the CVM alone takes 12 GB, i.e. nearly 75% of the memory... Orz

So if you still want one VM on every node in this environment, the only options are adding AHV host memory or reducing CVM memory. With no spare resources, I had to pick the latter.
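The arithmetic behind that 75% figure, using the numbers from this post, as a quick sanity check:

```shell
# 12 GB CVM on a 16 GB node: percentage of host memory consumed by the CVM.
cvm_mb=12288     # 12 GB in MB
host_mb=16384    # 16 GB in MB
echo $(( cvm_mb * 100 / host_mb ))   # prints 75
```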

Resolution:

step01. @cvm$ cluster stop

step02. @cvm$ sudo shutdown -P now
Or
@ahv# virsh shutdown cvm-name

step03. @ahv# virsh list --all | grep -i cvm

step04. @ahv# virsh dumpxml {cvm-name} | egrep -i "cpu|memory" // memory is reported in KiB (10 GiB = 10485760 KiB)

step05. @ahv# virsh setmaxmem {cvm-name} --config --size 10GiB
@ahv# virsh setmem {cvm-name} --config --size 10GiB

step06. @ahv# virsh dumpxml {cvm-name} | egrep -i "cpu|memory" // confirm the value now reads 10 GiB
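As a sanity check on the dumpxml numbers: virsh reports memory in KiB, so a 10 GiB target should read 10485760. A small converter (my own helper, not part of virsh) makes this explicit:

```shell
# Convert a GiB value to the KiB figure virsh dumpxml reports (binary units,
# 1 GiB = 1024 * 1024 KiB).
gib_to_kib() {
  echo $(( $1 * 1024 * 1024 ))
}

gib_to_kib 10   # prints 10485760, matching the dumpxml output above
```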

step07. @ahv# virsh start {cvm-name}

step08. @cvm$ cluster status

step09. @cvm$ cluster start

Reference:
1. AHV 5.0 – Changing CVM Memory Configuration (AHV)

[VMware] vRealize Operations Manager 6.0 ~ 7.0 change IP

Changing the IP is not just a matter of swapping in a new address and restarting services.

Resolution :

step01. Guest shutdown vRealize Operations Manager machine

step02. Edit Settings > Options > vApp Options > Properties > enter the new IP / new gateway

step03. Power-on vRealize Operations Manager machine

step04. Run /opt/vmware/share/vami/vami_config_net and enter the new IP

step05. Guest-restart the vRealize Operations Manager machine

step06. service vmware-casa stop

step07. Edit /storage/db/casa/webapp/hsqldb/casa.db.script, replacing the old IP with the new one

step08. service vmware-casa start

step09. cd /usr/lib/vmware-vcopssuite/utilities/sliceConfiguration/bin

step10. $VMWARE_PYTHON_BIN ./vcopsConfigureRoles.py --adminCS=<new-IP>

step11. Replace every occurrence of the old IP with the new IP in the following files:
/usr/lib/vmware-vcopssuite/utilities/sliceConfiguration/data/roleState.properties

/usr/lib/vmware-vcops/user/conf/gemfire.properties

/usr/lib/vmware-vcops/user/conf/persistence/persistence.properties

/usr/lib/vmware-vcops/user/conf/gemfire.locator.properties

/usr/lib/vmware-vcops/user/conf/gemfire.native.properties
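Step 11 can be scripted with sed; the helper below is my own sketch (not from the VMware KB), the IPs in the example are placeholders, and each file is backed up before editing:

```shell
# Sketch: back up each file, then swap the old IP for the new one in place.
# Usage: replace_ip OLD_IP NEW_IP FILE...
replace_ip() {
  old=$1; new=$2; shift 2
  for f in "$@"; do
    cp "$f" "$f.bak" && sed -i "s/$old/$new/g" "$f"
  done
}

# On the vROps node you would run it against the files listed above, e.g.:
# replace_ip 10.0.0.5 10.0.0.6 \
#   /usr/lib/vmware-vcops/user/conf/gemfire.properties \
#   /usr/lib/vmware-vcops/user/conf/gemfire.locator.properties
```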

step12. service vmware-casa stop

step13. cp /usr/lib/vmware-vcops/user/conf/cis.properties /usr/lib/vmware-vcops/user/conf/cis.properties.bak

step14. vi /usr/lib/vmware-vcops/user/conf/cis.properties and replace the old IP with the new one

step15. vi /etc/hosts and replace the old IP with the new one

step16. service vmware-casa start

step17. Log in to vRealize Operations Manager admin UI as the local admin user

step18. Click Bring Online under Cluster Status

** vROps 6.1 and later **
vi /usr/lib/vmware-vcops/user/conf/cassandra/cassandra.yaml and replace the old IP with the new one

** vROps 6.7 and later **
vi /etc/apache2/listen.conf and replace the old IP with the new one

Reference :
1. VMware KB – Change the IP Address of a vRealize Operations Manager 6.x or later Single Node Deployment (2108696)

[DIY] Huananzhi X79-8D Burn / Monitor tools

Having bought this "e-waste" hardware, you still have to probe its limits; short-term testing can't prove long-term stability, but a few tools at least show whether the combination of parts is trustworthy.

Burn-in tools

  • Memtest86+ / Memtest86 : the first thing I tested was whether the RAM I bought works on the Huananzhi X79-8D dual-socket board; I run it for at least 48 hours without errors.
  • Prime95 : this really can push the CPU to 70°C and the memory to 95–100°C (with no case, though the CPU has a Cooler Master 410R cooler).
  • OCCT : a dual-socket board basically demands a 500 W, 80 Plus-certified PSU with 4-pin + 4-pin CPU power leads, so I use OCCT to check whether the power supply holds up under load.
  • BurnInTest : the trial version is enough for a quick pass over the whole system.

Monitoring tools

  • Core Temp : widely regarded as the most accurate CPU temperature monitor, and it can be set to launch at startup.
  • HWMonitor : from the same family as CPU-Z; in my comparison its readings match Core Temp, and it monitors far more sensors, e.g. CPU, VGA, memory, system board...

Hardware verification tools

  • CPU-Z : besides checking the BIOS at boot, I use this to verify the parts and run its benchmark.
  • HWiNFO : for inspecting every component's part number, production date, and serial number.

In the end, buying e-waste is always a gamble on your luck. Good luck...

Reference :
1. Intel – Diagnostic and performance tools for Intel® NUC
2. 滄者極限 – Is OCCT lying? Is your meter wrong? A must-read on power testing!
3. 簡單生活 – OCCT: power-supply stability testing for overclockers, download and tutorial (portable Chinese version)
4. Mobile01 moderator nichic – Prime95 CPU+RAM stability burn-in tutorial

[VMware] Windows 1903/1909 run VMware Workstation / Workstation Pro error “VMware Workstation and Device/Credential Guard are not compatible. VMware Workstation can be run after disabling Device/Credential Guard"

Recently, running a VM under VMware Workstation on Windows 10 1903/1909 pops up the error "VMware Workstation and Device/Credential Guard are not compatible. VMware Workstation can be run after disabling Device/Credential Guard".

The fix takes a few more steps than you might expect... Orz
It is not solved simply by switching the Windows boot menu Hyper-V entry to auto or off.

<Resolution>

step01. Win + R

step02. cmd.exe

step03. gpedit.msc

step04.
Computer Configuration > Administrative Templates > System > Device Guard > Turn on Virtualization Based Security (Disabled)

step05.
cmd.exe

step06.
mountvol X: /s

copy %WINDIR%\System32\SecConfig.efi X:\EFI\Microsoft\Boot\SecConfig.efi /Y

bcdedit /create {0cb3b571-2f2e-4343-a879-d86a476d7215} /d "DebugTool" /application osloader

bcdedit /set {0cb3b571-2f2e-4343-a879-d86a476d7215} path "\EFI\Microsoft\Boot\SecConfig.efi"

bcdedit /set {bootmgr} bootsequence {0cb3b571-2f2e-4343-a879-d86a476d7215}

bcdedit /set {0cb3b571-2f2e-4343-a879-d86a476d7215} loadoptions DISABLE-LSA-ISO,DISABLE-VBS

bcdedit /set {0cb3b571-2f2e-4343-a879-d86a476d7215} device partition=X:

mountvol X: /d

step07.
bcdedit /set hypervisorlaunchtype off

(Note) Run bcdedit /set hypervisorlaunchtype auto and reboot whenever you later need Hyper-V again.


step08. After rebooting, keep watching the screen; if you miss the prompt below, VMware Workstation still won't start!! You must press the Windows key or F3 for the opt-out to actually take effect.
Virtualization Based Security Opt-out Tool
Do you want to disable Virtualization based security?
Disabling this functionality changes the security configuration of Windows.
For the correct action in your organization, contact your administrator before disabling.

** Press the Windows key or F3 to disable protection. ESC to skip this step. **


Reference :
0. “VMware Workstation and Device/Credential Guard are not compatible" error in VMware Workstation on Windows 10 host (2146361)
1. Dixin’s Blog – Run Hyper-V and VMware virtual machines on Windows 10
2. Leo Yeh's Blog – Troubleshooting Windows 10 (2)
3. 程式前沿 – Resolving "VM and Device/Credential Guard are not compatible"; how to run VMs after disabling Device/Credential Guard
4. 小歐ou | 菜鳥自救會 – Setting up a boot option to choose between VMware and Hyper-V
5. ITREAD01 – VMware Workstation and Hyper-V are not compatible: solutions
6. 每日頭條 – VM and Device/Credential Guard solutions

[VMware] Huananzhi X79-8D ESXi use RDM (Raw Device Mapping) mount local SATA NTFS file type

Running ESXi on the Huananzhi X79-8D, I sometimes want to pull data off old disks, which naturally calls for RDM, yet RDM turned out to be unavailable. Strange, since the BIOS (press Del at boot) clearly has Intel VT-d enabled, so why not?

I found an alternative way to achieve the RDM effect.

Update 2019-12-20: per VMware KB 1017530, most local controllers do not meet the RDM hardware requirements, so RDM is disabled for them by default; it has nothing to do with Intel VT-d.

Workaround :
step01. ssh ESXi

step02. List the local disks with any of:
# ls /dev/disks
# esxcfg-mpath -l
# ls -al /vmfs/devices/disks
Look for a device name like "t10.ATA_____HD1000320AS_________________________________________XXXXXXXX".

step03. Create an RDM link.
command: vmkfstools -r <source> <destination>
e.g.
vmkfstools -r /vmfs/devices/disks/t10.ATA_____HD1000320AS_________________________________________XXXXXXXX "/vmfs/volumes/<target-VM-folder>/RDM-disk.vmdk"

vmkfstools -z /vmfs/devices/disks/t10.ATA_____HD1000320AS_________________________________________XXXXXXXX "/vmfs/volumes/<target-VM-folder>/RDM-disk.vmdk" (I recommend -z, as it behaves closer to the physical disk)

(Note)
-r --createrdm /vmfs/devices/disks/...
-z --createrdmpassthru /vmfs/devices/disks/...
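To make the argument order explicit, here is a small command builder; `make_rdm_cmd`, the device name, and the folder are hypothetical placeholders of mine, not part of ESXi:

```shell
# Assemble the vmkfstools RDM command from step03.
# $1 = device name under /vmfs/devices/disks, $2 = target VM folder on the
# datastore, $3 = compatibility flag (-z physical by default, -r virtual).
make_rdm_cmd() {
  disk=$1; vmdir=$2; mode=${3:--z}
  echo "vmkfstools $mode /vmfs/devices/disks/$disk \"/vmfs/volumes/$vmdir/RDM-disk.vmdk\""
}

make_rdm_cmd t10.ATA_____EXAMPLE_SERIAL my-vm-folder
```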

step04. Edit the VM that should use the local SATA disk: right-click "Edit Settings" > Add > HDD > Existing hard disk, then select the newly linked RDM-disk.vmdk.

step05. Open the VM console, then check Disk Management and bring the disk online.

Done.

Reference :
1. osiutino's Blog – Mounting a physical disk directly to a VM with RDM on ESXi (works on the HP MicroServer N36L)
2. www.vmwarearena – 2 Simple ways to Create Virtual Compatibility RDM Disks
3. 暉獲無度的步烙閣 – Adding a USB external drive as a datastore on ESXi 6.0
4. homecomputerlab – VMware SATA disk Raw Device Mapping (RDM)
5. GitHub , Hengjie – How to passthrough SATA drives directly on VMWare EXSI 6.5 as RDMs
6. VMware KB – Raw Device Mapping for local storage (1017530)
7. vClouds – How to build a 64Gb Low Power and Fast ESXi Home Lab