ALUA Rule for DataCore

If you are using DataCore or storage from another ALUA-capable vendor in your VMware environment, you should check out this KB article:

ESXi 6.7 hosts with active/passive or ALUA based storage devices may see premature APD events during storage controller fail-over scenarios (67006)
https://kb.vmware.com/s/article/67006

To change the ALUA rule on hosts running VMware ESXi 6.5 / 6.7, here is the snippet:

# show the current DataCore rule
esxcli storage nmp satp rule list -s VMW_SATP_ALUA | grep DataCore
# remove the old rule
esxcli storage nmp satp rule remove -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c tpgs_on -P VMW_PSP_RR
# add the new rule with disable_action_OnRetryErrors
esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c tpgs_on -P VMW_PSP_RR -O iops=10 -o disable_action_OnRetryErrors
# verify
esxcli storage nmp satp rule list -s VMW_SATP_ALUA | grep DataCore
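
Keep in mind that SATP rules are only applied when a device gets claimed, so Virtual Disks that are already claimed keep their old settings until you reboot the host or reclaim the device (reclaiming only works while the device is not in use). To verify a single device afterwards, something like this should do it (the naa ID is a placeholder for one of your DataCore devices):

# reclaim one device so it picks up the new rule (or simply reboot the host)
esxcli storage core claiming reclaim -d naa.60030d90xxxxxxxx
# show the device configuration including SATP and PSP
esxcli storage nmp device list -d naa.60030d90xxxxxxxx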

But please check the current DataCore FAQ 1556 before using this setting:
The Host Server – VMware ESXi Configuration Guide

Hope that helps!

multipath.conf for DataCore and EMC

In 2017 I had a customer who uses DataCore as their storage system (still working great! ;)). In this case we needed to connect not only VMware ESXi servers to it, but also two TSM/ISP servers with shared storage running on SLES 11 SP3/4 (not sure anymore). With this post I want to share the working multipath.conf for DataCore. Please find the multipath.conf here (in the attachment, rename it from .txt to .conf):

defaults {
                polling_interval 60
}
blacklist {
        devnode "*"
}
blacklist_exceptions {
        device {
                vendor          "DataCore"
                product         "Virtual Disk"
                }
        device {
                vendor          "DGC"
                product         "VRAID"
                }
        devnode         "^sd[b-z]"
        devnode         "^sd[a-z][a-z]"
}

devices {
        device {
                vendor "DataCore"
                product "Virtual Disk"
                path_checker tur
                prio alua
                failback 10
                no_path_retry fail
                dev_loss_tmo infinity
                fast_io_fail_tmo 5
                rr_min_io_rq 100
                # Alternative option - see notes below
                # rr_min_io 100
                path_grouping_policy group_by_prio
                # Alternative policy - See notes below
                # path_grouping_policy failover
                # optional - See notes below
                # user_friendly_names yes
                }
        device {
                vendor "DGC"
                product "VRAID"
                path_checker tur
                prio alua
                failback 10
                no_path_retry fail
                dev_loss_tmo infinity
                fast_io_fail_tmo 5
                rr_min_io 1000
                path_grouping_policy group_by_prio
                }
}

multipaths {
        multipath {
                wwid                    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                alias                   XXXX-ISP-ActLog
        }
        multipath {
                wwid                    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                alias                   XXXX-ISP-ActLog-LibManager
        }
        multipath {
                wwid                    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                alias                   XXXX-ISP-ArchLog
        }
        multipath {
                wwid                    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                alias                   XXXX-ISP-ArchLog-LibManager
        }
        multipath {
                wwid                    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                alias                   XXXX-ISP-ClusterDB
        }
        multipath {
                wwid                    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                alias                   XXXX-ISP-ClusterQuorum
        }
        multipath {
                wwid                    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                alias                   XXXX-ISP-DB2
        }
        multipath {
                wwid                    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                alias                   XXXX-ISP-DB2-LibManager
        }
        multipath {
                wwid                    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                alias                   XXXX-ISP-InstHome
        }
        multipath {
                wwid                    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                alias                   XXXX-ISP-InstHome-LibManager
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX9473de447183e711
                alias                   XXX_L00
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX9673de447183e711
                alias                   XXX_L01
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX9873de447183e711
                alias                   XXX_L02
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX9a73de447183e711
                alias                   XXX_L03
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX9c73de447183e711
                alias                   XXX_L04
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX9e73de447183e711
                alias                   XXX_L05
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXXa073de447183e711
                alias                   XXX_L06
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXXa273de447183e711
                alias                   XXX_L07
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXXa473de447183e711
                alias                   XXX_L08
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXXa673de447183e711
                alias                   XXX_L09
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX5ea6eb5c64a4e711
                alias                   XXX_L10
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX60a6eb5c64a4e711
                alias                   XXX_L11
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX62a6eb5c64a4e711
                alias                   XXX_L12
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX64a6eb5c64a4e711
                alias                   XXX_L13
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX66a6eb5c64a4e711
                alias                   XXX_L14
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX68a6eb5c64a4e711
                alias                   XXX_L15
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX6aa6eb5c64a4e711
                alias                   XXX_L16
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX6ca6eb5c64a4e711
                alias                   XXX_L17
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX6ea6eb5c64a4e711
                alias                   XXX_L18
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX70a6eb5c64a4e711
                alias                   XXX_L19
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXXcdc727764a4e711
                alias                   XXX_L20
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXXedc727764a4e711
                alias                   XXX_L21
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX50dc727764a4e711
                alias                   XXX_L22
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX52dc727764a4e711
                alias                   XXX_L23
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX54dc727764a4e711
                alias                   XXX_L24
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX56dc727764a4e711
                alias                   XXX_L25
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX58dc727764a4e711
                alias                   XXX_L26
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX5adc727764a4e711
                alias                   XXX_L27
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX5cdc727764a4e711
                alias                   XXX_L28
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXX5edc727764a4e711
                alias                   XXX_L29
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXXe7fc4a9d64a4e711
                alias                   XXX_L30
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXXe9fc4a9d64a4e711
                alias                   XXX_L31
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXXebfc4a9d64a4e711
                alias                   XXX_L32
        }
        multipath {
                wwid                    XXXXXXXcXXXXXXXedfc4a9d64a4e711
                alias                   XXX_L33
        }
}
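
After dropping this into /etc/multipath.conf, multipathd still has to pick the changes up. On SLES 11 the following should do it (a sketch; an interactive multipathd shell via -k also works):

# make multipathd re-read /etc/multipath.conf
multipathd -k"reconfigure"
# verify the paths, priorities and the new aliases
multipath -ll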

HPE SSD BUG – RPM Installation

Like a lot of customers, one of mine is affected by the HPE SAS Solid State Drives firmware bug, where the disks die after 32,768 power-on hours. You will find more about the bug here. With this short post I want to show you how to install the firmware update under SLES 11.

First you need to find out which disks you have; on the mentioned website there are two different model lists (HPE SAS SSD models launched in Late 2015 and HPE SAS SSD models launched in Mid 2017). In this case we had the Late 2015 disks.

You need to check with the CLI, OneView, iLO or the SSA whether you have disks listed in the bulletin: (Revision) HPE SAS Solid State Drives – Critical Firmware Upgrade Required for Certain HPE SAS Solid State Drive Models to Prevent Drive Failure at 32,768 Hours of Operation

In my case I had the following disks in the Server:

Model VO0960JFDGU
Media Type SSD
Capacity 960 GB
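
If you are curious how many power-on hours such a disk already has on the clock, smartmontools can tell you. A sketch, assuming the disk sits behind a Smart Array controller; the cciss index 0 and /dev/sg0 are placeholders for your setup:

# query SMART data of the first physical disk behind the Smart Array controller
smartctl -a -d cciss,0 /dev/sg0 | grep -i 'power.on'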

So I downloaded the Online Flash Component for Linux – HPD8, uploaded it to the SLES 11 server, and installed the rpm with

rpm -ivh firmware-hdd-8ed8893abd-HPD8-1.1.x86_64.rpm

After the installation of the rpm you need to change to the folder /usr/lib/x86_64-linux-gnu/scexe-compat

cd /usr/lib/x86_64-linux-gnu/scexe-compat

and start the installation:

./CP042220.scexe

The installation of the patch is starting.

And that's it, we are done.
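
If you want to double-check from the CLI that the disks now report the fixed firmware revision (HPD8), something like this should do it (assuming the HPE CLI tooling is installed; on this SLES 11 generation it is usually called hpssacli, later renamed ssacli):

# list the firmware revisions of all drives behind the controllers
hpssacli ctrl all show config detail | grep -i 'firmware'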

Here you see OneView before the update:

and here after the update:

Enjoy, problem solved 😉

Short hint on Administrator@vsphere usage

I have some customers who use the local Administrator@vsphere account for different things like backup users or reporting. From my point of view I can't recommend this: please create service users for all the different topics, like one user for backup (in my example something like _svc_veeam_bck) or for Horizon _svc_vdi. Give the local Administrator a good and secure password, write it down, put it in KeePass or something similar, and use it only when it's really needed! This is your last resort to log in to your vCenter.

What do you think about using the Administrator@vsphere user?

Syslog HASHMAP

I have a customer who has several datacenters in vCenter, and each datacenter needs different syslog servers. With this script you should be able to set different syslog servers per datacenter.

# Map each datacenter to its syslog server(s)
$servermap = @{
    "DC1" = "tcp://syslog01.v-crew.int:514,tcp://syslog02.v-crew.int:514";
    "DC2" = "tcp://syslog03.v-crew.int:514,tcp://syslog04.v-crew.int:514";
    "DC3" = "tcp://syslog05.v-crew.int:514,tcp://syslog06.v-crew.int:514";
}

foreach ($vmhost in (Get-VMHost)) {

    # Look up the host's datacenter and the matching syslog target
    $DC = Get-Datacenter -VMHost $vmhost
    $syslog = $servermap.($DC.Name)

    Write-Host $vmhost.Name
    Write-Host $syslog

    # Set the syslog target on the host
    Get-AdvancedSetting -Entity $vmhost -Name "Syslog.global.logHost" | Set-AdvancedSetting -Value $syslog -Confirm:$false

    Write-Host "Restarting syslog daemon." -ForegroundColor Green
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $esxcli.system.syslog.reload.Invoke()

    Write-Host "Setting firewall to allow Syslog out of $($vmhost)" -ForegroundColor Green
    Get-VMHostFirewallException -VMHost $vmhost | Where-Object {$_.Name -eq 'syslog'} | Set-VMHostFirewallException -Enabled:$true
}

https://github.com/Vaiper/syslog-servermap/blob/master/syslog-servermap.ps1
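
To spot-check the result on all hosts afterwards, a PowerCLI one-liner like this should do it:

Get-VMHost | Select-Object Name, @{N='LogHost';E={($_ | Get-AdvancedSetting -Name 'Syslog.global.logHost').Value}}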

We hope it helps some of you.

Veeam Repository Configuration

For my lab, I was in the lucky position last year to get some awesome old hardware (HPE ProLiant DL380p Gen8 etc.), and with that I was able to build an awesome small lab.


On this server I have Windows Server 2016 running, and as you can see, the Veeam repo is configured under X:. Deduplication is configured on it as well (I like dedup! ;)).

Do you know the awesome https://www.veeambp.com/ site? There are so many great tips and tricks there. For my offsite backups I also have an external server: a Windows Server 2019 with ReFS, a 64 KB allocation unit size, and dedup configured as well.
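
For reference, setting up such a repo volume boils down to a few cmdlets. A sketch, assuming drive letter X: and the Default dedup usage type; adjust both to your setup:

# format the repo volume with ReFS and a 64 KB allocation unit size (this wipes the volume!)
Format-Volume -DriveLetter X -FileSystem ReFS -AllocationUnitSize 65536
# install the dedup feature and enable it on the volume
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "X:" -UsageType Default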

How do you configure your Veeam-Repos?

another IT blog

Hello everyone!

Yes, exactly: a new IT blog… once again. Oliver @opalz and yours truly @stimmermann are trying our luck, in Denglish :D.

The topics will roughly be: #Architektur #Design #Sizing #Performance #Administration #Projektleitung #Projektplanung #Beschaffung #DataCore #VMware #VDI #Linux #Windows #Veeam #DataCenter #Hardware #SDDC #DR #Network #Office365 #DSGVO #GoBD #CloudGate365 #Lizenzen #Backup