[NetApp] Simulate Data ONTAP 7-mode ver 8.x archive link





[NetApp] OCUM 6.x/7.x enable diag shell (SSH)

OnCommand Unified Manager (UM) 6.x
OnCommand Unified Manager (UM) 7.0, 7.1

step01. Choose 4) Support/Diagnostics

step02. Type erds

Remote diagnostic access is disabled.
Would you like to enable remote diagnostic access? (y/N) y


Enter new UNIX password: <enter password>
Retype new UNIX password: <enter password>
passwd: password updated successfully

Remote diagnostic access will be disabled after midnight UTC tomorrow (2019-03-08).

Press any key to continue.

step05. Use WinSCP or PuTTY to log in to OCUM with the account diag and the password just set.

[NetApp] 7-mode: multiple IPs on a single interface

ifconfig interface_name [-]alias address

ifconfig e0a alias x.x.x.x     // add an alias IP

ifconfig e0a -alias x.x.x.x    // remove an alias IP
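The alias / -alias semantics can be sketched with a minimal, hypothetical Python model (the Interface class below is purely illustrative, not NetApp code):

```python
# Minimal, hypothetical model of 7-mode "ifconfig <if> [-]alias <addr>"
# semantics -- purely illustrative, not NetApp code.
class Interface:
    def __init__(self, name, primary):
        self.name = name
        self.primary = primary   # the primary IP stays fixed
        self.aliases = set()     # extra IPs answered on the same interface

    def alias(self, addr):       # ifconfig e0a alias x.x.x.x
        self.aliases.add(addr)

    def unalias(self, addr):     # ifconfig e0a -alias x.x.x.x
        self.aliases.discard(addr)

    def addresses(self):
        return [self.primary, *sorted(self.aliases)]

e0a = Interface("e0a", "10.0.0.10")
e0a.alias("10.0.0.11")
e0a.alias("10.0.0.12")
e0a.unalias("10.0.0.11")
print(e0a.addresses())  # ['10.0.0.10', '10.0.0.12']
```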

** sample /etc/rc **
# Modify 2018-10-25 By xxxx
ifgrp create lacp bond0 -b rr e0a e0b e1a e1b
hostname Filer
ifconfig bond0 netmask mediatype auto mtusize 9000
ifconfig bond0 alias
route add default 1
routed on
options dns.enable on
options nis.enable off
setflag smb_enable_2_1 1 # set to 1 to enable SMB 2.1; 0 to disable
priv set diag; setflag smb_enable_2_1 0; priv set

wrfile /etc/rc , then press Ctrl+C

source /etc/rc


Reference : NetApp – Create and remove aliases

[NetApp] Clusters with more than 2 nodes: data LIF fails over to a node whose ports are down?

Most cases involve installing a single NetApp system; a colleague hit a case with two NetApp FAS8200 systems, i.e. 4 nodes, where during data LIF validation a LIF migrated to a port whose link was down, causing an outage.

Sample : node1 & node2 (pair) ; node3 & node4 (pair)

node1 – unplug NIC cables; LIF migrates to node3

node2 – unplug NIC cables; LIF migrates to node1 (fails immediately, because node1's cables are all unplugged)

Resolution :

::> net int modify -vserver {SVM} -lif {lif-name} -failover-policy broadcast-domain-wide // SVM data LIFs default to system-defined

** The five failover policies **

  • broadcast-domain-wide :
    This is the default setting for the cluster management LIF. You would not want to assign this policy to node management LIFs or cluster LIFs because in those cases the ports must be on the same node.
  • system-defined :
    This is the default setting for data LIFs. This setting enables you to keep two active data connections from two unique nodes when performing software updates. It allows for rolling upgrades, rebooting either odd-numbered or even-numbered nodes at the same time.
  • local-only:
    This is the default setting for cluster LIFs and node management LIFs.

    This value cannot be changed for cluster LIFs.

  • sfo-partner-only :
    Only those ports in the failover group that are on the LIF’s home node and its SFO (storage failover) partner node.
  • disabled:
    The LIF is not configured for failover.
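As a rough illustration of the five policies, here is a hypothetical Python sketch of which ports each policy would consider (the function and the port tuples are invented for illustration; real ONTAP failover-target selection is more involved):

```python
# Hypothetical sketch of which ports each failover policy considers,
# based on the descriptions above -- not actual ONTAP code.
def eligible_ports(policy, ports, home_node, sfo_partner):
    """ports: list of (node, port) tuples in the failover group."""
    if policy == "broadcast-domain-wide":
        return ports                                   # any port in the group
    if policy == "system-defined":
        # home node plus nodes other than the SFO partner
        return [p for p in ports
                if p[0] == home_node or p[0] not in (home_node, sfo_partner)]
    if policy == "local-only":
        return [p for p in ports if p[0] == home_node]
    if policy == "sfo-partner-only":
        return [p for p in ports if p[0] in (home_node, sfo_partner)]
    if policy == "disabled":
        return []                                      # no failover at all
    raise ValueError(policy)

ports = [("node1", "e0c"), ("node2", "e0c"), ("node3", "e0c"), ("node4", "e0c")]
print(eligible_ports("sfo-partner-only", ports, "node1", "node2"))
# [('node1', 'e0c'), ('node2', 'e0c')]
```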

Reference :
NetApp – Types of failover policies



[Storage] Increase the number of ICMP (ping) packets a NetApp DOT 7.x / 8.x / 9.x filer will accept

A customer's application sends 1000 ICMP packets per second to check whether the filer is alive. By default, ONTAP accepts only 150 ICMP packets per second from a single client to protect against DoS (denial-of-service) attacks, so the per-client limit must be raised to 1000 ICMP packets per second.

<< Resolution >>


[7-mode]
options ip.ping_throttle.drop_level <count> // default 150 ; maximum 4294967295 (about 4.29 billion)

[Clustered mode]
<ONTAP 8.x>
::> system run -node {nodename} -command "options ip.ping_throttle.drop_level <count>"
<ONTAP 9.x>
::> system run -node {nodename} -command "options ip.ping_throttle.drop.level <count>"

To remove the limit entirely, set it to '0' :
<ONTAP 8.x>
::> system run -node {nodename} -command "options ip.ping_throttle.drop_level 0"

<ONTAP 9.x>
::> system run -node {nodename} -command "options ip.ping_throttle.drop.level 0"

Checking the ping throttling threshold status
::> netstat -p icmp
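Conceptually, ip.ping_throttle.drop_level acts like a per-client, per-second packet counter; a minimal Python sketch of that behaviour (illustrative only, assuming a fixed 1-second window):

```python
# Conceptual sketch of ip.ping_throttle.drop_level: a per-client,
# per-second ICMP packet counter; packets above the level are dropped.
# Illustrative only -- not how ONTAP actually implements it.
from collections import defaultdict

class PingThrottle:
    def __init__(self, drop_level=150):        # ONTAP default is 150
        self.drop_level = drop_level
        self.window = None                     # current 1-second window
        self.counts = defaultdict(int)         # packets per client this window

    def accept(self, client_ip, now):
        if self.drop_level == 0:               # 0 disables throttling
            return True
        second = int(now)
        if second != self.window:              # new 1-second window: reset
            self.window, self.counts = second, defaultdict(int)
        self.counts[client_ip] += 1
        return self.counts[client_ip] <= self.drop_level

t = PingThrottle(drop_level=150)
accepted = sum(t.accept("10.0.0.5", 0.0) for _ in range(1000))
print(accepted)  # 150
```

This is why a client sending 1000 pings per second sees most of them dropped until drop_level is raised to 1000 (or set to 0).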


Reference :

1. NetApp – Increasing the ping throttling threshold value

2. NetApp Document ID : FA1394

[Storage] HDS storage: common abbreviations / default accounts and passwords

  • DKC (Disk Controller)
  • DKU (Disk Unit)
  • DKA (Disk Adapter)         // back-end; connects the internal disks
  • CHA (Channel Adapter)  // front-end; connects external hosts
  • SM (Share Memory)
  • CM (Cache Memory)
  • CP (PCB) // processor board
  • SVP (Service Processor) // system console; used to manage the array
  • HDU (Hard Disk Unit)
  • HDD (Hard Disk Drive)

  • VSP : maintenance / raid-mainte or etniam-diar , raid-maintenance
  • VSP G1000 , maintenance / raid-maintenance
  • USP Administrator
  • SVP / raid-login

  • Hi-Track , administrator / hds

[NetApp] ESXi NFS with Thin Provisioning

* NFSv3 must be enabled on the storage system
* NFSv4.1 is available only on ONTAP 9.0 and later

* VMware vSphere 5.0 or later must be available

1. Download the NetApp VAAI Plug-in ; download link: https://nt-ap.com/2HxiF4T

2. Install the NetApp VAAI Plug-in on the ESXi host
> esxcli software vib install -n NetAppNasPlugin -d /NetAppNasPlugin.zip

3. On the NetApp, run:
<Clustered ONTAP>
::> vserver nfs modify -vserver {SVM-name} -vstorage enabled
<7-Mode>
> options nfs.vstorage.enable on
<7-Mode CLI for vFiler units>
> vfiler run vfiler_name options nfs.vstorage.enable on

4. Verify the installation state
> esxcli software vib list | grep -i netapp

5. Verify VAAI is enabled (check whether the value is 1 (enabled); if not, go to step 6)
> esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
> esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit

6. enable vaai
> esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedInit
> esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove

7. (optional verification)
> vmkfstools -Ph /vmfs/volumes/onc_src/
> vmkfstools -Ph /vmfs/volumes/46db973f-cca15877


[NetApp] Microsoft Windows Server 2012 mounting a volume over NFSv3 fails with "Network error – 53"

A customer reported that mounting a NetApp FAS2552A volume over NFSv3 from Windows Server 2012 R2 failed with the error "Network error – 53".

Items confirmed at the time :

1. Client and storage can ping each other > OK

2. Confirmed the temporary LIF's role is 'data'

3. Confirmed the SVM's allowed-protocols includes 'nfs'

4. Confirmed the export-policy & export-policy rules are OK

5. Confirmed clustered ONTAP is newer than 8.3.x (PS: C-mode 8.3.1 supports only NFSv3)

All of the above were fine; it turned out a few parameters must be adjusted before a Windows NFSv3 mount will work.



::> set -privilege diagnostic


::*> vserver nfs show -vserver {SVM} -fields v3-ms-dos-client,enable-ejukebox,v3-connection-drop


v3-ms-dos-client (default: disabled)

enable-ejukebox (default: true)

v3-connection-drop (default: enabled)


::*> vserver nfs modify -vserver {SVM} -v3-ms-dos-client enabled -enable-ejukebox false -v3-connection-drop disabled
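A tiny, hypothetical Python helper summarizing the defaults versus the values used in this case (option names are those shown above; the helper itself is invented for illustration):

```python
# Hypothetical helper contrasting the ONTAP defaults with the values that
# made the Windows NFSv3 mount work in this case -- names from the doc.
DEFAULTS = {"v3-ms-dos-client":   "disabled",
            "enable-ejukebox":    "true",
            "v3-connection-drop": "enabled"}
REQUIRED = {"v3-ms-dos-client":   "enabled",
            "enable-ejukebox":    "false",
            "v3-connection-drop": "disabled"}

def needed_changes(current):
    """Return the option=value pairs that still have to be modified."""
    return {k: v for k, v in REQUIRED.items() if current.get(k) != v}

print(needed_changes(DEFAULTS))   # all three options need changing
```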


*Windows Client*

mount -o mtype=hard \\NetApp-NFS-LIF-IP\Volume Z:\


** 2019-10-17 addition **
To check the NFS client settings on the Windows machine: > nfsadmin client config

[NetApp] restrictions for anonymous users (IPC$)

Sometimes a vulnerability scan against NetApp IPC$ (PS: when null-session access is blocked), or abnormal IPC$ activity, causes the storage to classify the work under "Other" jobs, driving CPU usage high.

** Clustered-mode **

::> set -privilege advanced

::*> vserver cifs options modify -vserver {SVM} -restrict-anonymous no-access

no-restriction (default) / 0 (7-mode)
no-enumeration / 1 (7-mode)
no-access (fully restricted) / 2 (7-mode)
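The mapping between the clustered-mode values and the 7-mode numeric option can be summarized as a small table (illustrative Python, values taken from the list above):

```python
# Clustered-mode -restrict-anonymous values mapped to the 7-mode
# cifs.restrict_anonymous numeric option, as listed above.
RESTRICT_ANONYMOUS = {
    "no-restriction": 0,   # default: anonymous access unrestricted
    "no-enumeration": 1,   # anonymous users cannot enumerate shares/users
    "no-access":      2,   # fully restricted
}
print(RESTRICT_ANONYMOUS["no-access"])  # 2
```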

::*> vserver cifs options show -vserver {SVM}

::*> set -privilege admin


** 7-mode **

options cifs.restrict_anonymous 2

(Note) How to create a null session from Windows
C:\> net use \\IP_ADDRESS\ipc$ "" /user:""



IPC$ is a shared "named pipe" resource, opened to allow inter-process communication. By authenticating with a username and password a client obtains the corresponding permissions; it is used for remote computer management and for viewing a computer's shared resources.

Reference :

1. Configuring access restrictions for anonymous users (Clustered-mode)

2. Configuring access restrictions for anonymous users (7-mode)