VMware vSphere: Install, Configure, Manage 5.5

------------------------------------------Module 1 : vSphere 5.5 ICM Course Intro------------------------------------------
vSphere is an infrastructure virtualization suite
vSphere editions: vSphere Standard, vSphere Enterprise, vSphere Enterprise Plus
This course focuses on the vSphere Enterprise edition; features available in Enterprise Plus are discussed in VMware vSphere: Optimize and Scale
More about training at vmwarelearningpaths.com

------------------------------------------Module 2 : Software-Defined Data Center------------------------------------------
vSphere provides:
-Pooled networking and security
-Pooled storage
-Pooled computing
Reasons to use virtual machines
Physical machine:
-Difficult to relocate
-Difficult to manage
-Hardware has limitations
Virtual machine:
-Easy to relocate
-Easy to manage
-Provides the ability to run legacy applications
CPU virtualization: physical architecture vs. virtual architecture; emulation != virtualization
Physical and virtualized host memory usage
Physical and virtual networking
Physical file systems and vSphere VMFS
iSCSI, FCoE, FC encapsulation - datastores (VMFS or NFS)
File-system layouts: Windows, Linux/UNIX, VMFS
VMware vCloud Director enables you to create a cloud
Third-party providers can host public or private clouds
Private cloud + public cloud = hybrid cloud
About private clouds:
-Self-service provisioning
-Elasticity of resources
-Rapid and simplified provisioning
-Secured multitenancy
-Improved use of IT resources
-Better control of the IT budget
About public clouds:
IT resources are provided as a service over the Internet
A public cloud is similar to a utility or an Internet service provider
Public clouds have all of the advantages of a private cloud:
-Rapid and flexible deployments
-Secure IT assets
-Efficient and cost-effective deployments
Customer companies no longer have IT as an ongoing overhead expense
About hybrid clouds:
Some cloud-based assets are accessible internally over an intranet
Some cloud-based assets are accessible externally over the Internet
VMware complete cloud infrastructure and management suite:
-Management and automation
-Compute - vSphere
-Network/security - NSX, vCloud Networking and Security
-Storage/availability - Virtual SAN

****Lesson 2 vSphere Client and vSphere Web Client
esxi1.vclass.local
vSphere Client Home->Administration
System logs:
Server log /var/log/hostd.log
Server log /var/log/vmkernel.log
vCenter agent log /var/log/vpxa.log
It is possible to set up a syslog server

****Lesson 3 ESXi
View ESXi settings:
-Processor and memory configuration
-Licensing
-NTP client
-DNS and routing
-Security profile
Identify user account best practices
High security:
-Memory hardening
-Kernel module integrity
-Trusted Platform Module
Small disk footprint
Installable on hard disks, SAN LUNs, USB devices, SD cards, or diskless hosts
vSphere API/SDK, vCLI (scripting), CIM (hardware management)
Configuring ESXi
The direct console user interface (DCUI) is similar to the BIOS of a computer, with keyboard-only input
ESXi works as an NTP client (see the sketch after the key points below)
DNS and routing: Configuration->Properties
-Hostname and domain
-DNS server address and search domains
-Default VMkernel gateway
Remote access settings: Security Profile
-Remote clients are prevented from accessing services on the host
-Local clients are prevented from accessing services on remote hosts
-Unless configured otherwise, daemons start and stop with the ESXi host: for example, DCUI or NTP server
Key points:
-Using virtual machines solves many data center problems
-VMs are hardware independent
-VMs share the physical resources of the ESXi host on which they reside
-vSphere abstracts CPU, memory, storage, and networking for virtual machine use
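As a concrete illustration of the NTP client settings above, here is a minimal pyVmomi sketch (pyVmomi is the Python binding of the vSphere API/SDK mentioned earlier). The host name and credentials are lab placeholders, not values from the course.

    # Minimal sketch (assumed lab host/credentials): configure the ESXi NTP client.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only; validate certificates in production
    si = SmartConnect(host='esxi1.vclass.local', user='root', pwd='password', sslContext=ctx)
    host = si.content.searchIndex.FindByDnsName(dnsName='esxi1.vclass.local', vmSearch=False)

    # Replace the host's NTP server list, then restart the ntpd service.
    ntp_cfg = vim.host.NtpConfig(server=['0.pool.ntp.org'])
    host.configManager.dateTimeSystem.UpdateDateTimeConfig(
        vim.host.DateTimeConfig(ntpConfig=ntp_cfg))
    host.configManager.serviceSystem.RestartService(id='ntpd')
    Disconnect(si)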
------------------------------------------Module 3 : Creating Virtual Machines------------------------------------------
Virtual machine concepts
File extensions in a VM folder:
.vmx - configuration file
.vswp and vmx-*.vswp - swap files
.nvram - BIOS file
vmware.log - log files
.vmtx - template file
-rdm.vmdk - raw device mapping file
.vmdk - disk descriptor file
-flat.vmdk - disk data file
.vmss - suspended state file
.vmsd - snapshot data file
.vmsn - snapshot state file
-delta.vmdk - snapshot disk file
Virtual machine hardware:
-Up to 4 serial/COM ports
-Up to 3 parallel ports
-AHCI controller
-Up to 4 SCSI adapters
-Up to 1 TB of RAM
-Up to 64 vCPUs
-Up to 10 NICs
Virtual disk
Disk provisioning policy:
-Thick provision lazy zeroed
-Thick provision eager zeroed
-Thin provision - consumes only the amount of capacity needed to hold the current files
It is possible to mix thick and thin formats
Thin provisioning gives more efficient storage utilization
Virtual disk allocation example (slide figure): 140 GB virtual disk allocation, 100 GB available datastore capacity, 80 GB used storage capacity
Virtual NICs
Flexible: can function as either a vlance or vmxnet adapter
-vlance: also called PCNet32
-vmxnet: provides better performance than vlance
e1000 and e1000e (emulations of Intel NICs)
-High-performance adapters available for only some guest operating systems
vmxnet, vmxnet2, and vmxnet3 are VMware drivers that are available only with VMware Tools
-vmxnet2 (Enhanced vmxnet): vmxnet with enhanced performance
-vmxnet3: builds on the vmxnet2 adapter
Whenever possible, choose vmxnet3
Other devices:
-CD/DVD drive
-Floppy drive
-Generic SCSI devices
-USB 3.0
-NVIDIA, AMD, Intel GPUs
Custom configuration of a VM
Other disk-provisioning settings:
-Virtual device node (for example, SCSI(0:0))
-Independent mode (persistent and nonpersistent)
Deploying OVF templates
Deploy any virtual machine or virtual appliance stored in OVF format
Available from the VMware Virtual Appliance Marketplace
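To make the thin-versus-thick provisioning policy above concrete, here is a minimal pyVmomi sketch that adds a thin-provisioned disk to an existing VM; the function name and size are illustrative assumptions.

    # Minimal sketch (assumed VM object): add a thin-provisioned 10 GB disk.
    from pyVmomi import vim

    def add_thin_disk(vm, size_gb=10):
        # Attach the new disk to the VM's existing SCSI controller.
        ctrl = next(d for d in vm.config.hardware.device
                    if isinstance(d, vim.vm.device.VirtualSCSIController))
        used = [d.unitNumber for d in vm.config.hardware.device
                if getattr(d, 'controllerKey', None) == ctrl.key]
        unit = next(u for u in range(16) if u != 7 and u not in used)  # 7 is reserved

        disk = vim.vm.device.VirtualDisk(
            controllerKey=ctrl.key, unitNumber=unit,
            capacityInKB=size_gb * 1024 * 1024,
            backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
                thinProvisioned=True,    # thin: blocks are allocated on demand
                diskMode='persistent'))
        spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
            device=disk)])
        return vm.ReconfigVM_Task(spec=spec)

Setting thinProvisioned=False (with eagerlyScrub=True for eager zeroing) would give the thick formats instead.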
------------------------------------------Module 4 : VMware vCenter Server------------------------------------------
vCenter Server architecture
vCenter Server is a service that acts as a central administration point for ESXi hosts and their virtual machines
Up to 1,000 hosts per vCenter Server instance
Up to 10,000 powered-on virtual machines per vCenter Server instance
Linked Mode shares information between vCenter Server instances
ESXi and vCenter Server communication:
vSphere Client <-TCP 443-> vCenter Server (vpxd) <-TCP 443/9443-> vSphere Web Client
vSphere Client <-TCP/UDP 902-> ESXi host (hostd<->vpxa)
ESXi host (vpxa process) <-TCP/UDP 902-> vCenter Server (vpxd)
vCenter Server components: database server, distributed services, SSO, user access control, VMware vSphere API, ESXi management
Additional services:
-Update Manager
-Orchestrator
Examples of additional module features:
-vSphere Update Manager
-vCenter Site Recovery Manager
These modules include a server component and a client component
Inventory object tagging:
-Search for objects by tag
-Ease of management
Deploying the vCenter Server Appliance
It includes:
-A 64-bit application running on SUSE Linux Enterprise Server 11
-An embedded database suitable for:
--Evaluating the appliance
--Running no more than 100 hosts and 3,000 VMs
-Support for an external Oracle database
-A web-based interface for vCenter Server Appliance configuration
-Support for centralized authentication and the vSphere Web Client
vCenter Server Appliance benefits:
-Simplified deployment and configuration:
--Import the appliance to an ESXi host
--Configure the time-zone settings
--Use the web interface to configure the appliance
-Lower total cost through elimination of the Windows dependency and associated licensing costs
-The embedded database supports larger environments than the embedded SQL Express database of vCenter Server installed on Windows
vCenter Server Appliance requirements
Disk space: min 7 GB, max 82 GB
Memory:
4 GB - 10 hosts, 100 VMs
8 GB - 100 hosts, 1,000 VMs
16 GB - 400 hosts, 4,000 VMs
CPU: 2 virtual CPUs (default)
https://appliance_name:5480 username: root, password: vmware
Accessing the vSphere Web Client
Install the Client Integration plug-in for console access
https://vcenter-ip:9443/vsphere-client
Using quick filters
Viewing truncated lists:
-This view shows only a specific number of objects from a list at a time
-The truncated list number is not configurable
-Scroll bars appear to show that the list is truncated
vCenter Single Sign-On
vCenter SSO allows vSphere software components to communicate with each other through a secure token mechanism
Benefits of vCenter SSO:
-Faster operations and a less complex authentication process
-Ability of vSphere solutions to trust each other without requiring authentication every time a solution is accessed
-An architecture that supports multi-instance and multisite configurations and provides single-solution authentication across the entire environment
Features of SSO:
-Supports open standards
-Support for multiple user repositories, including Active Directory and OpenLDAP
-Ability for users to see all vCenter Server instances for which they have permission
-No need to use vCenter Linked Mode for unified views of vCenter Server instances
How SSO works
When logging in to vSphere, authentication is passed to vCenter Single Sign-On. On successful authentication, a security token is used to access vSphere components (see the login sketch below).
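To make the login flow concrete, here is a minimal pyVmomi sketch; the token exchange with vCenter SSO happens inside SmartConnect, so the script only supplies credentials. The host name and password are placeholders; administrator@vsphere.local is the default SSO administrator from the notes above.

    # Minimal sketch (assumed credentials): log in to vCenter; SSO issues the session token.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    ctx = ssl._create_unverified_context()   # lab only
    si = SmartConnect(host='vcenter.vclass.local',
                      user='administrator@vsphere.local',   # default SSO domain
                      pwd='password', sslContext=ctx)
    print(si.content.about.fullName)          # for example "VMware vCenter Server 5.5.0"
    Disconnect(si)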
About identity sources and the default domain
Identity source:
-A repository for users and groups that vCenter Single Sign-On can use for user authentication
-Usually a directory service such as Active Directory or OpenLDAP
-Provides a means to attach one or more domains to vCenter SSO
Default domain:
-Used by vCenter SSO to authenticate users when a user logs in without a domain name
-One system identity source named vsphere.local is created when you install vCenter SSO
-vsphere.local is the default domain
Supported identity sources: localos, vsphere.local, OpenLDAP, Active Directory as an LDAP server, Active Directory (Integrated Windows Authentication) Windows 2003 and later
vCenter SSO architecture
vCenter Lookup Service -> vCenter Server, vCenter Orchestrator, vCloud Director
The Security Token Service plays an important role in SSO
SSO deployment modes:
-Basic
-Multiple vCenter SSO instances in the same location
-Multiple vCenter SSO instances in different locations
Basic deployment mode
You usually use the Simple Install option to deploy vCenter Server with vCenter SSO in basic mode
It is appropriate when:
-You have up to 1,000 hosts or 10,000 VMs
-You have geographically dispersed vCenter Server instances that are administered independently of each other
-You are using the vCenter Server Appliance
vSphere Web Client + vCenter Server + Inventory Service + SSO
Multiple vCenter SSO instances in the same location
Uses a network load balancer (it does not balance sessions), and the vCenter SSO instances synchronize the vmdir directory
-This deployment mode provides high availability for vCenter SSO
-Use this mode if you do not plan to use vSphere HA or vCenter Server Heartbeat
Multiple vCenter SSO instances in different locations
This mode is required when you have geographically dispersed vCenter Server systems and you must administer these instances in Linked Mode; the vCenter SSO instances are synchronized
Protecting vCenter SSO
Options (typical recovery time):
-Backup and restore - hours or days
-vSphere HA - minutes
-vCenter Server Heartbeat - minutes
-vCenter SSO HA - seconds
Installing vCenter SSO
-Use the Simple Install option to deploy basic mode
-Use the Custom Install option to install multisite or high-availability mode
During a custom install, select: Primary Node, High Availability, or Multisite
About the vCenter SSO administrator
After installation, the domain name is vsphere.local
administrator@vsphere.local has the following privileges:
-Member of the vCenter SSO group named Administrators
-Granted the vCenter Server Administrator role
A vCenter SSO administrator differs from a vCenter Server administrator in the following ways:
-A vCenter Server administrator is not allowed to perform vCenter SSO configuration tasks
-You must be a member of the vCenter SSO Administrators group to configure vCenter SSO
Configuring vCenter SSO
You can configure SSO from the vSphere Web Client
You can perform the following configuration tasks:
-Add identity sources
-Set the default domain
-Edit the password, lockout, and token policies
You must have vCenter SSO administrator privileges to perform these tasks
About vCenter SSO policies
The password policy is a set of rules and restrictions on the format and lifespan of vCenter SSO user passwords
The lockout policy specifies the conditions under which a user's vCenter SSO account is locked when the user attempts to log in with incorrect credentials
The token policy specifies the clock tolerance, renewal count, and other token properties; edit the token policy if you must conform to your company's security standards
Managing the vCenter Server inventory
Data center object
A vCenter Server instance can have multiple data center objects
Each data center has its own hosts, virtual machines, templates, datastores, and networks
You can clone a VM from one data center to another data center
Organizing inventory objects into folders
vCenter Server views: Hosts and Clusters, VMs and Templates
vCenter Server views: Storage and Networks
vCenter licensing overview
Licenses are managed and monitored from vCenter Server
A license consists of the following items: product, license key, asset

------------------------------------------Module 5 : Configuring and Managing Virtual Networks------------------------------------------
VMware ESXi networking features allow the following:
-VMs to communicate with other virtual and physical machines
-Management of the ESXi host
-The VMkernel to access IP-based storage and perform VMware vSphere vMotion migrations
Failure to properly configure ESXi networking can negatively affect virtual machine management and storage operations
A virtual switch allows the following connection types:
-VM port groups
-VMkernel ports:
--IP storage, vMotion migration, Fault Tolerance
--ESXi management network
Virtual switch connection examples
More than one network can coexist on the same virtual switch, or networks can exist on separate virtual switches
Physical network considerations:
-Number of physical switches
-Network bandwidth required
-802.3ad link aggregation (for NIC teaming)
-802.1Q (VLAN trunking)
-Network port security
-Cisco Discovery Protocol (CDP); operational modes: listen, broadcast, listen/broadcast, disabled
Network policies:
-Security
-Traffic shaping
-NIC teaming
Policies are defined:
-At the standard switch level
-At the port or port group level, overriding the default policies set at the standard switch level
Security policy:
-Promiscuous mode
-MAC address changes - by default allows VM MAC-address changes
-Forged transmits - by default allows spoofing of MAC addresses
Traffic shaping policy
Average rate, peak rate, and burst size are configurable
A standard switch shapes egress traffic only; a distributed switch shapes both egress and ingress traffic
burst size = bandwidth x burst time
Configuring traffic shaping
Disabled by default
Parameters apply to each virtual NIC in the standard switch
NIC teaming policy
NIC teaming settings:
-Load balancing (outbound only)
-Network failure detection
-Notify switches
-Failback
-Failover order
Load-balancing method: originating virtual port ID (based on the virtual port connected to the VM)
Load-balancing method: source MAC hash (based on the virtual NIC MAC address)
Load-balancing method: IP hash
Virtual port ID is simple and fast and does not require the VMkernel to examine the frame for necessary information
Source MAC hash has low overhead, but it might not spread traffic evenly across the physical NICs
IP hash requires more CPU overhead (see the teaming sketch below)
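The security and teaming policies above can also be set through the API. A minimal pyVmomi sketch follows, assuming a host object and the default vSwitch0; the policy string 'loadbalance_srcid' is the API name for the originating-virtual-port-ID method.

    # Minimal sketch (assumed names): harden the security policy and set the
    # teaming policy on a standard switch.
    def set_vswitch_policy(host):
        ns = host.configManager.networkSystem
        vsw = next(s for s in ns.networkInfo.vswitch if s.name == 'vSwitch0')
        spec = vsw.spec   # keep bridge/ports as-is, change only the policy
        spec.policy.nicTeaming.policy = 'loadbalance_srcid'  # originating virtual port ID
        spec.policy.security.macChanges = False              # reject MAC address changes
        spec.policy.security.forgedTransmits = False         # reject forged transmits
        ns.UpdateVirtualSwitch(vswitchName='vSwitch0', spec=spec)

The other teaming values are 'loadbalance_srcmac' (source MAC hash), 'loadbalance_ip' (IP hash), and 'failover_explicit'.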
------------------------------------------Module 6 : Configuring and Managing Virtual Storage------------------------------------------
Datastore types: VMFS and NFS
Storage technology: direct attached, Fibre Channel, FCoE, iSCSI, NAS
Storage protocol overview:
-Supports boot from SAN: Fibre Channel, FCoE, iSCSI
-Supports vSphere vMotion: Fibre Channel, FCoE, iSCSI, NFS, DAS, VMware VSAN
-Supports vSphere HA: Fibre Channel, FCoE, iSCSI, NFS, VMware VSAN
-Supports vSphere DRS: Fibre Channel, FCoE, iSCSI, NFS, VMware VSAN
-Supports raw device mapping: Fibre Channel, FCoE, iSCSI, DAS
Datastore
A datastore is a logical storage unit
VMFS-5:
-Allows concurrent access to shared storage
-Can be dynamically expanded
-Uses a 1 MB block size
-Uses subblock addressing (8 KB subblocks)
-Provides on-disk, block-level locking
NFS:
-Storage shared over the network at the file-system level
-Supports NFS version 3 over TCP/IP
-ESXi does not use the NFS locking protocol (it uses its own locking mechanism)
Raw device mapping
An RDM enables you to store virtual machine data directly on a logical unit number (LUN)
The mapping file is stored on a VMFS datastore and points to the raw LUN
Raw LUN (formatted NTFS or ext3, for example)
Storage device naming conventions
Storage devices are identified in several ways:
-SCSI ID: unique SCSI identifier
-Canonical name: the Network Address Authority (NAA) ID, a unique LUN identifier guaranteed to be persistent across reboots
--In addition to NAA IDs, devices can be identified with mpx or T10 identifiers
-Runtime name: uses the convention vmhbaN:C:T:L; this name is not persistent across reboots
Physical storage considerations
Topics to discuss with the storage administrator:
-LUN sizes
-I/O bandwidth
-I/O requests per second that a LUN is capable of
-Disk cache parameters
-Zoning and masking
-Identical LUN presentation to each ESXi host
-Active-active or active-passive arrays
-Export properties for NFS datastores
iSCSI components: iSCSI storage system, physical hard disks, LUNs, SPs (storage processors), TCP/IP network, servers with iSCSI initiators (hardware or software)
An initiator resides in the ESXi host; targets reside in the storage arrays supported by the ESXi host
iSCSI addressing
Target: iSCSI target name, iSCSI alias, IP address
Initiator: iSCSI initiator name, iSCSI alias, IP address
iSCSI initiators:
-Software iSCSI: VMkernel [iSCSI initiator | TCP/IP | NIC driver] -> NIC
-Dependent hardware iSCSI: VMkernel [iSCSI network configuration | NIC driver] -> NIC [iSCSI initiator | TCP/IP (TCP offload engine)]
-Independent hardware iSCSI: VMkernel [iSCSI HBA driver] -> HBA [iSCSI initiator | TCP/IP (TCP offload engine)]
ESXi network configuration for IP storage
A VMkernel port must be configured for ESXi to access software iSCSI
iSCSI target-discovery methods
Two discovery methods are supported: static and dynamic (also called SendTargets)
The SendTargets response returns the IQN and all available IP addresses
iSCSI security: CHAP
CHAP is not configured by default; unidirectional and bidirectional CHAP are supported
ESXi also supports per-target CHAP authentication
Multipathing with iSCSI storage
Use two or more hardware iSCSI adapters, or for software or dependent hardware iSCSI:
-Use multiple NICs
-Connect each NIC to a separate VMkernel port
-Associate the VMkernel ports with the iSCSI initiator
A sketch of enabling software iSCSI with dynamic discovery follows
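This minimal pyVmomi sketch enables the software iSCSI initiator and adds a SendTargets (dynamic discovery) address, matching the discovery method described above; the target IP is an assumed placeholder.

    # Minimal sketch (assumed host object and target address).
    from pyVmomi import vim

    def setup_software_iscsi(host, target_ip='192.168.100.10'):
        ss = host.configManager.storageSystem
        ss.UpdateSoftwareInternetScsiEnabled(enabled=True)
        # Find the software iSCSI HBA that the previous call enabled.
        hba = next(a for a in ss.storageDeviceInfo.hostBusAdapter
                   if isinstance(a, vim.host.InternetScsiHba))
        ss.AddInternetScsiSendTargets(
            iScsiHbaDevice=hba.device,
            targets=[vim.host.InternetScsiHba.SendTarget(address=target_ip, port=3260)])
        ss.RescanHba(hbaDevice=hba.device)   # discover the LUNs behind the new target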
NFS
An NFS datastore is located on a NAS system
Addressing and access control with NFS
The VMkernel port must be on the same network as the NFS server
ESXi accesses the NFS server as the root user
Typically, to protect NFS volumes from unauthorized access, the NFS administrator exports the volumes with the root_squash option turned on. When root_squash is on, the NFS server treats access by the root user as access by an unprivileged user, and the NFS server might refuse the ESXi host access to virtual machine files stored on the NFS volume.
The NFS administrator must use no_root_squash, which allows root on the client (the ESXi host) to be recognized as root on the NFS server, and must grant read and write privileges to the NFS datastore, if you are deploying virtual machines onto that datastore. Only with that configuration can the ESXi host deploy and manage virtual machines whose home directories are on the NFS datastore.
Configuring an NFS datastore
For better performance and security, separate NFS traffic from iSCSI traffic
Provide the following information:
-NFS server name (or IP address)
-Folder on the NFS server
-Host to create the datastore on
-Whether to mount the NFS file system read-only (the default is read/write)
-NFS datastore name
Multipathing and NFS storage
For NFS multipathing:
-Configure one VMkernel port
-Use adapters attached to the same physical switch to configure NIC teaming
-Configure the NFS server with multiple IP addresses
-To use multiple links, configure NIC teams with the IP-hash load-balancing policy
Using Fibre Channel with ESXi
ESXi supports the following protocols:
-16 Gbps Fibre Channel
-FCoE
FC is commonly used for VMFS datastores
ESXi hosts can boot from Fibre Channel SAN LUNs
Fibre Channel SAN components: storage system, physical hard disks, LUNs, SPs, FC switches, servers with HBAs
Fibre Channel addressing and access control:
-WWN (World Wide Name)
-Zoning
-LUN masking (done at the SP or server level; makes a LUN "invisible" when a target is scanned)
FCoE adapters
Hardware FCoE: ESXi host [network driver | FC driver] -> converged network adapter
Software FCoE: ESXi 5.0 host [network driver | software FC] -> NIC with FCoE support
Configuring software FCoE
The VLAN ID and the priority class are discovered during FCoE initialization
-The priority class is not configured in vSphere
-The IP address is not used in vSphere
ESXi supports a maximum of four network adapter ports used for software FCoE
Using a VMFS datastore with ESXi
-VMFS is optimized for storing and accessing large files
-A VMFS datastore can have a maximum volume size of 64 TB
-NFS datastores are also good for storing virtual machines, but some functions are not supported
Use RDMs if any of the following conditions are true of your virtual machine:
-It takes storage-array-level snapshots
-It is clustered to a physical machine
-It has large amounts of data that you do not want to convert into a virtual disk
Managing overcommitted datastores
A datastore becomes overcommitted when the total provisioned space of its thin-provisioned disks is greater than the size of the datastore
Actively monitor your datastore capacity; alarms assist through notifications:
-Datastore disk overallocation
-VM disk usage
Actively manage your datastore capacity: use Storage vMotion to mitigate space-usage issues on a particular datastore (a monitoring sketch follows below)
Increasing the size of a VMFS datastore
Two ways to dynamically increase the size of a VMFS datastore:
-Add an extent (LUN)
-Expand the datastore within its extent
You can expand, but you cannot shrink, a VMFS datastore
Comparing methods for increasing datastore size
Adding an extent to the datastore:
-VM power state: on
-SAN admin tasks: add more LUNs (extents)
-Limits: a datastore can have up to 32 extents (LUNs), up to 64 TB
Expanding the datastore within the extent:
-VM power state: on
-SAN admin tasks: increase the size of the LUN
-Limits: a LUN can be expanded any number of times, up to 64 TB
Before increasing the size of a VMFS datastore:
-Quiesce I/O on all disks involved
-Record the unique identifier (for example, the NAA ID) of the volume that you want to expand
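Relating to the overcommitted-datastore discussion above, this minimal pyVmomi sketch reports provisioned versus physical capacity per datastore; it assumes an existing connection object si and uses the uncommitted space that thin disks have reserved but not yet written.

    # Minimal sketch: flag thin-provisioning overcommitment per datastore.
    from pyVmomi import vim

    def report_overcommit(si):
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            s = ds.summary
            provisioned = s.capacity - s.freeSpace + (s.uncommitted or 0)
            flag = '  <-- overcommitted' if provisioned > s.capacity else ''
            print('%-24s capacity %7.1f GB, provisioned %7.1f GB%s' %
                  (s.name, s.capacity / 2.0**30, provisioned / 2.0**30, flag))
        view.Destroy()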
Multipathing algorithms
Arrays provide different features; some offer active-active storage processors (SPs), others offer active-passive SPs
vSphere 5.5 offers native path selection, load-balancing, and failover mechanisms
Third-party vendors can create their own software, installed on the ESXi host, that enables the host to properly interact with the storage arrays it uses
Configuring storage load balancing
Path selection policies exist for:
-Scalability:
--Round Robin: a multipathing policy that performs load balancing across paths
-Availability:
--Most Recently Used (MRU) and Fixed
Fixed: the host always uses the preferred path to the disk when that path is available; if the host cannot access the disk through the preferred path, it tries the alternative paths. Fixed is the default policy for active-active storage devices.
MRU: the host uses the most recent path to the disk until this path becomes unavailable, at which point a failover to a new path is performed; if the original path becomes available again, the host does not fail back to it. MRU is the default policy for active-passive storage devices and is required for those devices.
Round Robin: the host uses a path-selection algorithm that rotates through all available paths. In addition to path failover, the Round Robin policy supports load balancing across the paths. Before using this policy, check with your storage vendor to find out whether a Round Robin configuration is supported on the array.
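As an illustration of the path selection policies above, here is a hedged pyVmomi sketch that sets Round Robin on one LUN; it assumes the LUN key embeds the device's NAA canonical name, which holds for SCSI disks, and the policy strings are the NMP names (VMW_PSP_RR, VMW_PSP_MRU, VMW_PSP_FIXED).

    # Minimal sketch (assumed host object and NAA ID).
    from pyVmomi import vim

    def set_round_robin(host, naa_id):
        ss = host.configManager.storageSystem
        # Assumption: the logical-unit key contains the canonical name (naa.xxx).
        lun = next(l for l in ss.storageDeviceInfo.multipathInfo.lun
                   if naa_id in l.lun)
        ss.SetMultipathLunPolicy(
            lunId=lun.id,
            policy=vim.host.MultipathInfo.LogicalUnitPolicy(policy='VMW_PSP_RR'))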
Virtual SAN datastores
VMware Virtual SAN is a hybrid storage system that aggregates local solid-state drives (SSDs), used as cache, and local hard disk drives (HDDs) to provide a clustered datastore that can be used by virtual machines
Virtual SAN requirements:
-1 Gb/10 Gb NICs
-SAS/SATA RAID controller (must work in pass-through or HBA mode)
-At least one SAS/SATA SSD and one SAS/SATA HDD per contributing host
Not every node in a VMware VSAN cluster needs local storage; hosts with no local storage can still use the distributed datastore
Objects in VMware VSAN datastores
A VMware Virtual SAN cluster stores and manages data in the form of flexible data containers called objects. An object is a .vmdk file, a snapshot, or the virtual machine home folder (namespace). Think of an object as a logical volume that has its data and metadata distributed and accessed across the entire cluster. A single VMware Virtual SAN cluster can store and manage tens of thousands of objects. For each virtual machine provisioned on a VMware Virtual SAN datastore, an object is created for each of its virtual disks; in addition, a container object is created that holds a VMFS volume and stores all of the metadata files of the virtual machine.
Configuring a VMware VSAN datastore
Set up the VSAN network -> enable VSAN on the cluster -> select manual or automatic mode -> if manual, create disk groups
You can activate VSAN in the cluster properties (in the Web Client)
VSAN is supported only on vSphere 5.5
VSAN datastore
A single VSAN datastore is created, using storage from multiple hosts and multiple disks in the cluster
Using VMware VSAN
Capabilities and requirements -> create policies that contain the VM requirements
VM storage policies: capacity, availability, performance
-VM storage policies are built in advance of VM deployment to reflect the requirements of the application running in the virtual machine
-A policy is based on the VSAN capabilities
-The appropriate policy is selected for the VM at deployment time (based on the VM's requirements)
VM storage policy capabilities:
-Striping:
--Number of disk stripes per object
--Number of HDDs across which each replica of a storage object is distributed
-Mirroring:
--Number of failures to tolerate
--Number of host, network, and disk failures a storage object can tolerate
Key points:
-Use VMFS datastores to hold virtual machine files
-NFS datastores are useful as a repository for ISO images
-Shared storage is integral to vSphere features like vMotion, HA, and DRS
-VSAN enables low-end configurations to use vSphere HA, vMotion, and Storage vMotion without requiring external shared storage
-VSAN clusters direct-attached server disks to create shared storage designed for virtual machines

------------------------------------------Module 7 : Virtual Machine Management------------------------------------------
Using a template
A template is a master copy of a virtual machine; it typically includes a guest OS, a set of applications, and a specific virtual machine configuration
Clone to Virtual Machine | Clone to Template | Convert to Template (the VM must be powered off)
Viewing templates
Two ways to view templates:
-Use the VMs and Templates inventory view
-Use the Related Objects tab and the VM Templates link in the Hosts and Clusters inventory view
Updating a template:
1. Convert the template to a VM
2. Place the VM
3. Make the appropriate changes to the VM
4. Convert the VM back to a template
Cloning a VM
Cloning is an alternative to deploying a VM from a template (see the clone sketch below)
Deploying VMs across data centers
VM deployment is allowed across data centers:
-Clone a VM from one data center to another
-Deploy from a template in one data center to a VM in a different data center
Hot-pluggable devices
The guest OS must support hot-plug of CPU and memory for this to work
Creating an RDM
When you create a raw device mapping (RDM), vCenter Server creates a file in the specified VMware vSphere VMFS volume that points to the raw logical unit number (LUN)
Items to define when creating an RDM:
-Target LUN: the LUN that the RDM will map to
-Mapped datastore: store the RDM file with the VM or on a different datastore
-Compatibility mode: physical (pass-through) or virtual
-Virtual device node
Physical compatibility (pass-through) mode allows the guest operating system to access the hardware directly
Virtual compatibility mode allows the virtual machine to use VMware snapshots and other advanced features
Inflating a thin-provisioned disk
If you create a virtual disk in thin format, you can later inflate it to its full size
To inflate a thin-provisioned disk, right-click the virtual machine's .vmdk file and select Inflate
Virtual machine options
The VM's file names do not change when you rename the VM, but if you migrate the VM to another datastore the file names are changed to match
Boot delay is important for getting access to the VM console
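Here is the clone-from-template workflow above as a minimal pyVmomi sketch; template, folder, pool, and datastore are inventory objects assumed to have been looked up beforehand.

    # Minimal sketch (assumed inventory objects): deploy a VM from a template.
    from pyVmomi import vim

    def deploy_from_template(template, folder, pool, datastore, name):
        relocate = vim.vm.RelocateSpec(pool=pool, datastore=datastore)
        spec = vim.vm.CloneSpec(location=relocate, powerOn=False, template=False)
        return template.CloneVM_Task(folder=folder, name=name, spec=spec)

Setting template=True in the CloneSpec would produce the "Clone to Template" operation instead.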
Migrating VMs
Migration means moving a virtual machine from one host or datastore to another host or datastore
Types of migration: cold (VM powered off), suspended, vSphere vMotion, vSphere Storage vMotion
A maximum of 8 simultaneous vSphere vMotion, cloning, deployment, or vSphere Storage vMotion accesses to a single VMware vSphere VMFS-5 datastore is supported
Comparison of migration types (slide table; columns: migration type | VM power state | change host or datastore | across virtual data centers | shared storage required | CPU compatibility; rows: cold, suspended, vMotion, Storage vMotion, Enhanced vMotion)
vSphere vMotion migration
A vSphere vMotion migration moves a powered-on virtual machine from one host to another
VMware vSphere Distributed Resource Scheduler (DRS) uses vMotion to balance VMs across hosts
VM requirements for vMotion migration:
-The VM must not have a connection to a virtual device (such as a CD-ROM or floppy drive) with a local image mounted
-The VM must not have CPU affinity configured
-If the VM's swap file is not accessible to the destination host, vSphere vMotion must be able to create a swap file accessible to the destination host before migration can begin
-If the VM uses an RDM, the RDM must be accessible to the destination host
Host requirements for vMotion migration
Source and destination hosts must have these characteristics:
-Visibility to all storage used by the VM (128 concurrent vMotion migrations per VMFS datastore)
-4 concurrent vMotion migrations per host on 1 Gbps networking; 8 concurrent vMotion migrations per host on 10 Gbps
-Identically named VM port groups connected to the same physical networks
-Compatible CPUs:
--The CPU feature sets of the source and destination hosts must be compatible
--Some features can be hidden by using Enhanced vMotion Compatibility (EVC) or compatibility masks
CPU constraints on vMotion migration (slide table; columns: CPU constraint | exact match required | reason):
-Clock speeds, cache sizes, hyperthreading, and number of cores
-Manufacturer (Intel or AMD), family, and generation (Opteron, Intel Westmere)
-Presence or absence of SSE3, SSSE3, or SSE4.1 instructions
-Virtualization hardware assist
-Execution-disable (NX/XD bit)
Hiding or exposing NX/XD
For future CPU features, edit the mask at the bit level
Choose between NX/XD security features and the broadest vSphere vMotion compatibility
Identifying CPU characteristics
To identify a CPU, use the VMware CPU identification utility
Storage vMotion migration in action:
1. Initiate the migration
2. Use the VMkernel data mover or VMware vSphere Storage APIs - Array Integration (VAAI) to copy data
3. Start a new VM process
4. Mirror I/O calls to file blocks that have already been copied to the virtual disk on the destination datastore
5. Cut over to the destination VM process to begin accessing the virtual disk copy
vSphere Storage vMotion parallel disk migrations
vSphere Storage vMotion performs up to four parallel disk migrations per Storage vMotion operation
-In previous versions, Storage vMotion copied virtual disks serially
-The limit is two concurrent Storage vMotion operations per host
Storage vMotion guidelines and limitations
Guidelines:
-Ensure that the host has access to both the source and the target datastores
-Files moved by Storage vMotion are automatically renamed at the destination (suspended-state files are not renamed)
Limitations:
-VM disks must be in persistent mode or be RDMs
(A migration sketch follows)
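This minimal pyVmomi sketch shows the two task methods behind the migrations above; vm and the targets are assumed inventory objects.

    # Minimal sketch (assumed objects): vMotion to another host, then Storage
    # vMotion to another datastore.
    from pyVmomi import vim

    def vmotion(vm, target_host):
        # Host change only; the VM keeps its current datastore.
        return vm.MigrateVM_Task(
            host=target_host,
            priority=vim.VirtualMachine.MovePriority.defaultPriority)

    def storage_vmotion(vm, target_datastore):
        # Datastore change only; the VM keeps its current host.
        return vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target_datastore))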
Enhanced vMotion migration
Enhanced vMotion (not to be confused with Enhanced vMotion Compatibility):
-Combines vMotion and Storage vMotion into a single operation
-Migrates between hosts and clusters without shared storage
Enhanced vMotion considerations
Single migration to change both host and datastore:
-Hosts must be part of the same data center
-Hosts must be on the same layer-2 network (and on the same switch if a VDS is used)
Operational considerations:
-Enhanced vMotion is a manual process
-Maximum of two concurrent Enhanced vMotion migrations per host
-Enhanced vMotion migrations use multiple NICs when available
Virtual machine snapshots
Snapshots enable you to preserve the state of a VM so that you can return to the same state repeatedly
A snapshot consists of a set of files: the memory state file (.vmsn), the description file (-00000#.vmdk), and the delta file (-00000#-delta.vmdk)
The snapshot list file (.vmsd) keeps track of the VM's snapshots
A snapshot captures the state of the VM: memory state, settings state, and disk state
Worked example: base disk (5 GB) <- snapshot01 delta <- snapshot02 delta (2 GB)
-Deleting snapshot01 commits its delta into the base: the disk becomes base (5 GB) + snapshot01 data
-Deleting snapshot02 (2 GB delta) merges it into the previous delta: the current snapshot becomes snapshot01 delta + snapshot02 delta (2 GB)
-Deleting all snapshots commits everything: the final disk is base (5 GB) + snapshot01 data + snapshot02 data (2 GB)
Snapshot consolidation
A method used to commit a chain of snapshots to the original VM when the Snapshot Manager shows that no snapshots exist but the delta files still remain on the datastore
Snapshot consolidation is intended to resolve known issues with snapshot management:
-The snapshot descriptor file is committed correctly, but the Snapshot Manager incorrectly shows that all snapshots are deleted
-The snapshot files (-delta.vmdk) are still part of the VM
-Snapshot files continue to expand until the VM runs out of datastore space
Snapshots are not backups!
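A minimal pyVmomi sketch of the snapshot and consolidation operations above; the snapshot name and the workflow are illustrative assumptions.

    # Minimal sketch (assumed VM object): take a memory-state snapshot, then
    # consolidate if orphaned delta disks remain.
    from pyVim.task import WaitForTask

    def snapshot_and_cleanup(vm):
        WaitForTask(vm.CreateSnapshot_Task(
            name='pre-change', description='before maintenance',
            memory=True,      # capture memory state (.vmsn)
            quiesce=False))   # no guest quiescing
        # Snapshot Manager can show no snapshots while -delta.vmdk files remain;
        # consolidationNeeded flags that condition and consolidation commits them.
        if vm.runtime.consolidationNeeded:
            WaitForTask(vm.ConsolidateVMDisks_Task())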
Removing a VM:
-Remove the VM from the inventory
-Delete the VM from disk
Managing VMs with a vApp
A vApp has these features:
-It is a container for one or more VMs
-It can be used to package and manage related applications
-It is an object in the vCenter Server inventory
vApp characteristics
You can configure a vApp's:
-CPU and memory allocation
-IP allocation policy
-Advanced settings
You can also configure the VM startup and shutdown order

------------------------------------------Module 8 : Access and Authentication Control------------------------------------------
Host system properties: System->Security Profile
You can check which daemons are running, with options to start, stop, or restart them
Firewall: incoming and outgoing connections, including an option to allow connections from any IP address
Enable lockdown mode - the host is accessible only through the local console or an authorized centralized management application (vCenter Server); it is not possible to run scripts in this mode
Integrating ESXi with Active Directory
You can join the host to a domain, or use local accounts to give users permissions
Roles
Roles are collections of privileges:
-They allow users to perform tasks
-They are grouped into categories
Roles include system roles, sample roles, and custom-built roles
Objects
Objects are entities on which actions are performed
-Objects include data centers, folders, resource pools, clusters, hosts, datastores, networks, and virtual machines
All objects have a Permissions tab
-This tab shows which user or group and role are associated with the selected object
Assigning permissions: Object->Manage->Permissions ("Users and Groups", "Assigned Role"); see the sketch below
Viewing roles and assignments
The Roles pane shows which users are assigned the selected role on a particular object
When a user is a member of multiple groups with permissions on the same object:
-The user is assigned the union of the privileges assigned to the groups for that object
-For each object on which the group has permissions, the same permissions apply as if they were granted directly to the user
Permissions defined explicitly for the user on an object take precedence over all group permissions on that same object
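This minimal pyVmomi sketch mirrors the Object->Manage->Permissions assignment above; the principal name and role name are assumed placeholders (the role name matches one of vCenter's built-in sample roles).

    # Minimal sketch (assumed principal/role): grant a user a role on an object.
    from pyVmomi import vim

    def grant_role(si, obj, principal='VCLASS\\operator',
                   role_name='Virtual machine power user (sample)'):
        authz = si.RetrieveContent().authorizationManager
        role = next(r for r in authz.roleList if r.name == role_name)
        perm = vim.AuthorizationManager.Permission(
            principal=principal, group=False,
            roleId=role.roleId, propagate=True)   # propagate down the hierarchy
        authz.SetEntityPermissions(entity=obj, permission=[perm])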
------------------------------------------Module 9 : Resource Management and Monitoring------------------------------------------
Virtual CPU and memory concepts
Memory virtualization basics
VMware vSphere has three layers of memory:
-Guest OS virtual memory, presented to applications by the guest OS
-Guest OS physical memory, presented to the VM by the VMkernel
-Host machine memory, managed by the VMkernel, which provides a contiguous, addressable memory space used by the VM
VM memory overcommitment
Allows a host to configure more memory for its VMs than it physically has:
-Memory overhead is stored in a swap file (.vswp)
The host manages memory allocation:
-Stores overcommitted memory in a swap file
-Reallocates memory to other VMs based on VM requests
vmx-*.vswp
Memory reclamation techniques:
-Economize use of physical memory pages (transparent page sharing)
-Deallocate memory from one VM for another (ballooning mechanism)
-Memory compression - attempts to reclaim some memory performance when memory contention is high
-Host-level SSD swapping - use an SSD on the host for a host-cache swap file
-Page VM memory out to disk - use of VMkernel swap space is the last resort
Virtual symmetric multiprocessing (vSMP): enables a single virtual machine to use multiple virtual CPUs
Hyperthreading
Hyperthreading enables a core to execute two threads, or sets of instructions, at the same time
To enable hyperthreading: enable it in the BIOS and ensure it is turned on for the ESXi host
LCPU - logical CPU; PCPU - physical CPU
CPU load balancing: the VMkernel scheduler distributes vCPUs across the available logical CPUs and migrates them as needed to keep the load balanced
Shares, limits, and reservations
A VM powers on only if its reservation can be guaranteed
How VMs compete for resources: when resources are contended, shares determine each VM's relative priority; reservations and limits bound its allocation
Systems for optimizing VM resource use (resource - managed by VMkernel | configured by VM creator | adjustable by administrator):
-CPU cycles - hyperthreading, load balancing, nonuniform memory access | vSMP | limit, reservation, share allocation
-RAM - transparent page sharing, vmmemctl, memory compression, VMkernel swap files for VMs | available memory | limit, reservation, share allocation
-Disk bandwidth - thin provisioning | VM file location | multipathing, Storage I/O Control
-Network bandwidth - (none) | NIC teaming | traffic shaping, Network I/O Control
About resource pools
A resource pool is a logical abstraction for hierarchically managing CPU and memory resources. It is used with vSphere Distributed Resource Scheduler (DRS) and provides resources for VMs and child pools.
Resource pool attributes:
-Shares: Low, Normal, High, Custom
-Reservations: in MHz or GHz, MB or GB
-Limits: in MHz or GHz, MB or GB
-Reservation type: expandable or nonexpandable
Reasons to use resource pools:
-Isolation between pools and sharing within pools
-Access control and delegation
-Separation of resources from hardware
-Ability to prioritize virtual machine workloads
-Management of sets of VMs running a multitier service
Expandable reservation
Borrowing resources occurs recursively from the ancestors of the current resource pool:
-The expandable reservation option must be enabled
-This option offers more flexibility but less protection
Expandable reservations are not released until the VM that caused the expansion is shut down or its reservation is reduced
*A mismanaged or mis-sized expandable reservation might claim all unreserved capacity
A VM does not start if its reservation requirements cannot be met
Admission control for CPU and memory reservations
Power on a VM -> can this pool satisfy the reservation? If not, is the reservation expandable? If yes, go to the parent pool and repeat
Scheduling changes to resource settings
You can schedule a task to change the resource settings of a resource pool or a virtual machine
Performance-tuning methodology
Assess performance:
-Use appropriate monitoring tools
-Record a numerical benchmark before changes
Identify the limiting resource
Make more resources available:
-Allocate more
-Reduce competition
-Log your changes
Benchmark again
Resource-monitoring tools
Inside the guest OS: Perfmon DLL, Iometer, Task Manager
Outside the guest OS: vCenter performance charts, vCenter Operations Manager, VMware vCenter Hyperic, vSphere/ESXi system logs, resxtop and esxtop
Guest operating system monitoring tools: Task Manager and Iometer
Using Perfmon to monitor VM resources
The Perfmon DLL in VMware Tools provides VM processor and memory objects to access host statistics from inside a virtual machine
vCenter Server performance charts: overview charts and advanced charts (see the statistics sketch below)
CPU-constrained virtual machine: CPU usage is continuously high
Memory-constrained virtual machine
Check the VM's ballooning activity:
-If ballooning activity is high, this might not be a problem, provided all VMs have sufficient memory
-If ballooning activity is high and the guest operating system is swapping, the VM is memory-constrained
Memory-constrained host
If there is active host-level swapping, host memory is overcommitted
Monitoring active memory of a virtual machine
Monitor for increases in active memory on the host:
-Host active memory refers to the active physical memory used by virtual machines and the VMkernel
-If the amount of active memory is high, this situation can lead to memory-constrained VMs
Disk-constrained virtual machines
Disk-intensive applications can saturate the storage or the path
If you suspect that a VM is constrained by disk access:
-Measure the throughput and latency between the VM and storage
-Use the advanced performance charts to monitor:
--Read rate and write rate
--Read latency and write latency
Monitoring disk latency
To determine disk performance problems, monitor two disk latency data counters:
-Kernel command latency:
--The average time spent in the VMkernel per SCSI command
--High numbers (greater than 2-3 ms) represent either an overworked array or an overworked host
-Physical device command latency:
--The average time the physical device takes to complete a SCSI command
--High numbers (greater than 15-20 ms) represent a slow or overworked array
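The performance charts draw from the vCenter statistics engine, which is also queryable through the API. A minimal pyVmomi sketch, assuming an existing connection si and a VM object, follows; the cpu.usage.average counter name is standard.

    # Minimal sketch: query real-time CPU usage samples for a VM.
    from pyVmomi import vim

    def cpu_usage_samples(si, vm):
        pm = si.RetrieveContent().perfManager
        # Map "group.name.rollup" counter names to numeric counter IDs.
        counters = {'%s.%s.%s' % (c.groupInfo.key, c.nameInfo.key, c.rollupType): c.key
                    for c in pm.perfCounter}
        metric = vim.PerformanceManager.MetricId(
            counterId=counters['cpu.usage.average'], instance='')
        query = vim.PerformanceManager.QuerySpec(
            entity=vm, metricId=[metric],
            intervalId=20,     # real-time statistics: 20-second samples
            maxSample=15)      # the last 5 minutes
        result = pm.QueryPerf(querySpec=[query])
        # cpu.usage is reported in hundredths of a percent.
        return [v / 100.0 for v in result[0].value[0].value]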
Network-constrained VMs
Network-intensive applications often bottleneck on path segments outside the ESXi host
If you suspect that a VM is constrained by the network:
-Confirm that VMware Tools is installed (enhanced network drivers are available)
-Measure the effective bandwidth between the VM and its peer system
-Check for dropped receive packets and dropped transmit packets
Advanced performance charts
Chart options: real-time and historical
vCenter Server stores statistics at different granularities; for example, real-time (past hour) statistics have a data frequency of 20 seconds and 180 samples
Chart types: line, stacked, bar, pie
Objects and counters
Objects are instances or aggregations of devices
Examples: vCPU0, vCPU1, vmhba1:1:2, aggregate over all NICs
Counters identify which statistics to collect
Examples: CPU (used time, ready time, usage %), network (packets received), memory (memory swapped)
Statistics type
The statistics type is the unit of measurement used during the statistical interval:
-Rate - value over the current interval, for example CPU usage (MHz)
-Delta - change from the previous interval, for example CPU ready time
-Absolute - absolute value, independent of the interval, for example memory active
Rollup
Rollup is the conversion function between statistics intervals:
-5 minutes of past-hour statistics are converted to 1 past-day value (fifteen 20-second statistics are rolled up into a single value)
-30 minutes of past-day statistics are converted to 1 past-week value (six 5-minute statistics are rolled up into a single value)
Rollup types:
-Average - average of the data points, for example CPU usage (average)
-Summation - sum of the data points, for example CPU ready time (milliseconds)
-Latest - last data point, for example uptime (days)
-Other rollup types: minimum, maximum
The minimum and maximum values are collected and displayed only at collection level 4; the minimum and maximum rollup types are used to capture peaks in data during the interval
Setting log levels
Setting log levels enables you to control the quantity and type of information logged
Examples of when to set levels:
-When troubleshooting complex issues: set the log level to verbose or trivia, troubleshoot, then set it back to info
-When controlling the amount of information being stored in the log files
Options: none | error (errors only) | info (normal logging) | verbose | trivia (extended verbose)
Saving charts
You can save charts in PNG, JPEG, and CSV formats
About alarms
An alarm is a notification that occurs in response to selected events or conditions that occur on an object in the inventory
Default alarms exist for various inventory objects; many default alarms exist for hosts and VMs
You can create custom alarms for a wide range of inventory objects: VMs, hosts, clusters, data centers, datastores, networks, distributed switches, and distributed port groups
Alarm triggers
An alarm requires a trigger. Types of triggers:
-Condition or state trigger: monitors the current condition or state, for example a host using 90 percent of its total memory
-Event trigger: monitors events, for example a host leaving the vNetwork distributed switch, or a license expiring in the data center
(An alarm-creation sketch follows)
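As a hedged illustration of the state triggers above, this minimal pyVmomi sketch creates a custom alarm that turns red when a VM powers off; the alarm name and the chosen state path are assumptions for the example.

    # Minimal sketch (assumed entity, e.g. a data center or cluster object).
    from pyVmomi import vim

    def create_poweroff_alarm(si, entity):
        spec = vim.alarm.AlarmSpec(
            name='VM powered off (custom)',
            description='Alerts when a VM is powered off',
            enabled=True,
            expression=vim.alarm.StateAlarmExpression(
                type=vim.VirtualMachine,
                statePath='runtime.powerState',
                operator='isEqual',        # StateAlarmOperator value
                red='poweredOff'))
        return si.RetrieveContent().alarmManager.CreateAlarm(entity=entity, spec=spec)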
Configuring condition triggers: for example, a condition trigger for a VM
Configuring event triggers: for example, an event trigger for a host
Configuring actions
Every alarm type has these actions: send a notification email, send a notification trap, or run a command
Virtual machine alarms and host alarms have more actions
An action is executed on state transitions (once or repeated)
Configuring vCenter Server notifications: vCenter Server->Settings->Manage
Viewing and acknowledging triggered alarms: vCenter Server->Monitor->Issues
About vCenter Operations Manager
vCenter Operations Manager collects performance data from each object at every level of the virtual environment:
-It stores and analyzes the data
-It provides near real-time information about issues or potential issues
-It provides analysis at a deeper level than vCenter Server provides
vCenter Operations Manager works with vSphere components to provide these functions:
-Combination of key metrics into a single score to determine the health, efficiency, and potential risk of the environment
-Information about changes in the hierarchy of your virtual environment
-Graphs and charts depicting the current and historical state of your environment
vCenter Operations Management Suite editions: Foundation, Standard, Advanced, Enterprise
Logging in to vCenter Operations Manager
The components of vCenter Operations Manager include:
-A user interface virtual machine
-An analytics virtual machine
Open a web browser to https://User_Interface_IP/vcops-vsphere
Log in using separate vCenter Operations Manager credentials

------------------------------------------Module 10 : High Availability and Fault Tolerance------------------------------------------
VMware: protection at every level
Protection against hardware failures, planned maintenance with zero downtime, and protection against unplanned downtime and disasters:
-High availability and fault tolerance: vMotion, DRS, NIC teaming, storage multipathing, Storage vMotion
-Data protection: vSphere Replication, third-party backup solutions, vSphere Data Protection
-Disaster recovery: vCenter Site Recovery Manager
vCenter Server availability: recommendations
vCenter Server database:
-Create a cluster for the database
Authentication identity source (for example, SSO and Active Directory):
-Set up with multiple redundant servers
Methods for making vCenter Server available:
-Use vSphere High Availability to protect the vCenter Server VM
-Use vCenter Server Heartbeat
High availability
A highly available system is one that is continuously operational for an optimal length of time
vSphere HA:
-Minimal downtime
-Works with all supported guest operating systems
-Works with all supported ESXi hardware
-512 VMs per host, 32 hosts per cluster, 4,000 VMs per cluster
vSphere HA failure scenarios
vSphere HA protects against these failures:
-ESXi host failures
-VM or guest OS failures
-Application failures
Other scenarios are discussed in lesson 3:
-Management network failures:
--Network partition
--Network isolation
ESXi host failure
When a host fails, vSphere HA restarts the affected virtual machines on other hosts
Guest operating system failure
When a VM stops sending heartbeats or the VM process (vmx) crashes, vSphere HA resets the VM; this requires VMware Tools to be installed
vSphere HA scenario: application failure
When an application fails, vSphere HA restarts the affected VM on the same host; this requires VMware Tools to be installed
Importance of redundant heartbeat networks
In a vSphere HA cluster, heartbeats have these characteristics:
-They are sent between the master host and the slave hosts
-They are used to determine whether a master or slave host has failed
-They are sent over a heartbeat network
Heartbeat network:
-Implemented by using a VMkernel port marked for management
Redundant heartbeat networks:
-Allow for the reliable detection of failures
Redundancy using NIC teaming
You can use NIC teaming to create a redundant heartbeat network on ESXi hosts
The ports or port groups used must be VMkernel ports
Redundancy using additional networks
You can also create redundancy by configuring more heartbeat networks: on each ESXi host, create a second VMkernel port on a separate virtual switch with its own physical adapter
About clusters
A cluster is a collection of ESXi hosts and their associated VMs, configured to share their resources
vCenter Server manages cluster resources as a single pool of resources
Features such as vSphere HA and vSphere Distributed Resource Scheduler (DRS) are configured on a cluster
Enabling vSphere HA
Cluster Settings->vSphere HA->Turn ON (see the sketch below)
Cluster Settings->vSphere HA->Enable Host Monitoring | Admission Control
vSphere HA settings: host monitoring
Cluster Settings->vSphere HA->Host Monitoring
Host Monitoring Status - Host Monitoring
VM options - VM restart priority [Disabled | Low | Medium | High] and host isolation response
If VM restart priority is Disabled, the VM is not restarted on another host. This option does not affect virtual machine monitoring: if a guest operating system or application fails on a host that is functioning properly, the virtual machine is restarted on that same host.
VM restart priority can be overridden for individual VMs
The host isolation response determines what happens when a host in a vSphere HA cluster loses its management network connection (ESXi VMkernel port) but continues to run. This setting can be configured at either the cluster level or the virtual machine level. The values for the host isolation response setting are Leave powered on, Power off, Shut down, and Use cluster setting.
Disabling Host Monitoring is useful during maintenance activities, such as network maintenance, that might otherwise trigger host isolation responses and cause the cluster to perform failover operations.
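A minimal pyVmomi sketch of enabling vSphere HA on a cluster, with host monitoring on and the percentage-based admission control policy covered next; the 25 percent values are assumed placeholders.

    # Minimal sketch (assumed cluster object): turn on vSphere HA.
    from pyVmomi import vim

    def enable_ha(cluster):
        das = vim.cluster.DasConfigInfo(
            enabled=True,
            hostMonitoring='enabled',
            admissionControlEnabled=True,
            admissionControlPolicy=vim.cluster.FailoverResourcesAdmissionControlPolicy(
                cpuFailoverResourcesPercent=25,
                memoryFailoverResourcesPercent=25))
        spec = vim.cluster.ConfigSpecEx(dasConfig=das)
        return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)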
vSphere HA settings: admission control
Admission control is a policy used by vSphere HA to ensure failover capacity within a cluster
Cluster Settings->vSphere HA->Admission Control
Define failover capacity by reserving a percentage of the cluster resources
You can enable or disable admission control by selecting from the following options:
-Enabled: do not power on virtual machines that violate availability constraints. Enforcing availability constraints preserves failover capacity; if an attempt to power on a virtual machine violates availability constraints, a message informs you that the operation is not permitted. With this option selected (the default), the following operations are also not allowed if they violate admission control:
--Reverting a powered-off virtual machine to a powered-on snapshot
--Powering on a virtual machine
--Migrating a virtual machine into the cluster
--Reconfiguring a virtual machine to increase its CPU or memory reservation
-Disabled: power on virtual machines even if they violate availability constraints
Admission control policy choices:
-Define failover capacity by a static number of hosts
-Define failover capacity by reserving a percentage of the cluster resources (the default)
-Use dedicated failover hosts
-Do not reserve failover capacity
vSphere HA settings: VM monitoring
VM monitoring settings:
-VM Monitoring Status [Disabled | VM Monitoring Only | VM and Application Monitoring]
-Monitoring sensitivity [Preset | Custom]
vSphere HA settings: datastore heartbeating
vSphere HA uses datastores to monitor hosts and VMs when the management network has failed
vCenter Server selects two datastores for each host, using the policy and datastore preferences
Heartbeat datastore selection policy:
-Automatically select datastores accessible from the host
-Use datastores only from the specified list
-Use datastores from the specified list and complement automatically if needed
vSphere HA settings: advanced options
Set the default (minimum) slot size:
-das.vmCpuMinMHz
-das.vmMemoryMinMB
Set the maximum slot size:
-das.slotCpuInMHz
-das.slotMemInMB
Configuring VM overrides
Configure options at the cluster level or per VM: automation level | VM restart priority | host isolation response | VM monitoring
Network configuration and maintenance
Before changing the networking configuration on ESXi hosts, perform these steps:
-Deselect Enable Host Monitoring
-Place the host in maintenance mode
These steps prevent unwanted attempts to fail over VMs
Cluster Resource Allocation tab: Cluster->Monitor->Resource Allocation->CPU | Memory | Storage
Monitoring cluster status: Cluster->Monitor->vSphere HA->Summary
vSphere HA architecture: agent communication
vCenter Server (vpxd) connects to vpxa on each host and to the FDM agent on the master host; the master host's FDM replicates state to the FDM agents on the slave hosts
Hosts cannot participate in a fault domain while in maintenance mode, in standby mode, or disconnected from vCenter Server
Master election:
-The host with access to the highest number of datastores wins
-Tie breaker: the managed object ID (MOID) assigned by vCenter Server (the highest number wins)
A new master is elected within about 15 seconds, for example when the current master fails or enters maintenance mode
Heartbeats are sent over the management network to the slaves; if the network fails, slaves switch to another management interface
vSphere HA architecture: datastore heartbeats
Cluster->Settings->Datastore Heartbeating
Datastores are used as a backup communication channel to detect virtual machine and host heartbeats; the heartbeat datastore is used to distinguish between a failed host and an isolated or partitioned host
vSphere HA tries to restart virtual machines only in these situations:
-A host has failed (no network heartbeats, no ping, no datastore heartbeats), or
-A host has become isolated and the cluster's configured host isolation response is Power off or Shut down
Additional vSphere HA failure scenarios:
-Slave host failure
-Master host failure
-Host isolation
-Management network failures:
--Network partition
--Network isolation
Failed slave host
When a slave host does not respond to the network heartbeat issued by the master host, the master vSphere HA agent attempts to identify the cause.
The master host must determine whether the slave host has crashed or is not responding because of a network problem, for example a misconfigured firewall rule or a component failure. The type of failure dictates how vSphere HA responds.
When heartbeats cannot be obtained over the network, the master host must determine whether the slave host has had a network failure or whether the system has crashed. The master host checks both for responses to pings and for datastore heartbeats; both must be absent for the host to be declared dead. The absence of both network and datastore heartbeats indicates full host failure.
For VMFS, a heartbeat region on the datastore is read to find out whether the host is still heartbeating to it. For NFS/NAS storage, vSphere HA creates a file named host-<number>-hb, which is locked by the ESXi host accessing the datastore. The file guarantees that the VMkernel is heartbeating to the datastore, and the host periodically updates the lock file. The lock file's time stamp is used by the master host to determine whether the slave host suffers from a network failure or a host failure.
In both storage examples, vCenter Server selects a small subset of datastores for hosts to heartbeat to; the datastores that are accessed by the greatest number of hosts are selected as candidates.
Failed master host
When the master host is placed in maintenance mode or crashes, the slave hosts detect that the master host is no longer issuing heartbeats, and an election must take place to determine a new master. The host with access to the greatest number of datastores is elected master. If all slave hosts have equal datastore access, the election selects the host with the highest managed object ID (MOID), assigned by vCenter Server when the host was added to the inventory.
If the master host fails, the slaves participate in a new master election. When a new master is elected, it reads the MAC and IP addresses of the hosts and virtual machines from a host list stored on a datastore. The host list is used to determine whether the master should accept a connection from a slave.
Isolated host
If a host observes no election traffic on the management network and cannot ping its isolation addresses, the host is isolated
Design considerations
Host isolation events can be minimized through good design:
-Implement redundant heartbeat networks
-Implement redundant isolation addresses
If host isolation events do occur, good design enables vSphere HA to determine whether the isolated host is still alive:
-Implement datastores so that they are separated from the management network, using one or both of the following approaches:
--Fibre Channel over fiber optic cable
--Physically separating your IP storage network from the management network
Network partition
Only one master host communicates with vCenter Server
Failure of the management network can create a condition called network partitioning. Network partitioning occurs when the hosts in a cluster are unintentionally split into two or more groups of two or more hosts; these groups are called partitions.
vSphere FT:
-Designed so that a backup VM can immediately take over with no loss of service when an unplanned outage occurs
--Provides a higher level of business continuity than vSphere HA
--Provides zero downtime and zero data loss for applications
-Designed to be used for any application that needs to be available at all times
The backup VM is called a secondary VM.
FT Characteristics
Works with all supported guest operating systems.
For best performance, VMware recommends enabling ESXi Host Monitoring. If Host Monitoring is disabled and a failure is detected, Fault Tolerance uses the secondary virtual machine to recover from the failure of the primary virtual machine. However, if Host Monitoring is disabled, no new secondary is created after the first failure, so no further failures can be tolerated.
vLockstep technology synchronizes instructions from the primary to the secondary VM.
FT Guidelines
Check the requirements and limitations of vSphere FT.
Ensure enough ESXi hosts for fault-tolerant VMs:
-No more than four fault-tolerant VMs (primaries or secondaries) on any single host
Store ISO images on shared storage for continuous access:
-Especially if used for important operations
Disable BIOS-based power management:
-Prevents the secondary VM from having insufficient CPU resources
Enabling vSphere FT on a VM
All vCenter Actions > Fault Tolerance > Turn On Fault Tolerance
After the secondary virtual machine is created and running, only the primary virtual machine is displayed in the vSphere Web Client inventory.
Secondary VM Lag Time indicates the latency between the primary and the secondary virtual machines.
Log Bandwidth indicates the amount of network bandwidth used for sending Fault Tolerance log information from the primary virtual machine's host to the secondary virtual machine's host.
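The Turn On Fault Tolerance action corresponds, in the vSphere API, to creating the secondary VM. A hedged pyVmomi sketch, reusing si from the earlier connection sketch; the VM name is a placeholder:

from pyVim.task import WaitForTask
from pyVmomi import vim

# Find the VM to protect.
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'app01')
view.DestroyView()

# Creates and powers on the secondary VM; with no host argument,
# vCenter chooses the placement.
WaitForTask(vm.CreateSecondaryVM_Task())

# To disable FT again: vm.TurnOffFaultToleranceForVM_Task()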
vSphere Replication
Copies configuration and disk files to another host.
Virtual machines can be replicated between any type of storage platform: replicate between VMFS and NFS, or from iSCSI to local disk. Because vSphere Replication works above the storage layer, it can replicate independently of the file systems. It does not, however, work with physical raw device mappings.
vSphere Replication creates a "shadow virtual machine" at the recovery site, then populates the virtual machine's data through replication of changed data.
vSphere Replication can keep multiple points in time so that it can return to a known good point after failover.
Replication Appliance
Standard virtual appliance, delivered with the vSphere platform and bundled with most vSphere editions.
The vSphere Replication appliance scales up to 500 replications per vCenter Server instance, with up to 10 vSphere Replication servers per vCenter instance. The OVA specifies 2 CPUs, 4GB RAM, and a 10GB and a 2GB hard drive (thick or thin) for the virtual machine on which it runs.
Fully integrated with the vSphere Web Client.
How Replication Works
Deploy and configure VR components -> Pair with a destination -> Configure VR for a single VM
You must define the RPO, target datastore, and target folder or resource pool.
Replication of Only Changed Blocks
After ensuring that data is consistent on both sites -> the VR agent tracks all changing blocks through a vSCSI filter -> changed blocks are replicated as per the RPO
Disks are always consistent.
Quiescing Applications with vSphere Replication
Integrates with VSS and application writers for application-consistent replicas:
-VSS writer integration
-Works with VMware Tools
Quiescing methods: None | Microsoft Shadow Copy Services (VSS)
Creates quiescent copies of VMs, including applications. VSS works with Windows 2008/2012.
Single-Site vSphere Replication Architecture
If vSphere Replication is asked to use VSS, it synchronizes its creation of the lightweight delta (LWD) with the request to flush writers and quiesce the application and operating system. This synchronization ensures full application consistency.
Replication Limitations
-Powered-on VMDKs only
-Replication works at the virtual device layer, above the VMDK (independent of disk format and snapshots)
-FT, linked clones, and VM templates are not supported with vSphere Replication
-Virtual hardware 7 or later is required for VMs to be protected by VR
-15 minutes is the most aggressive recovery point objective
vSphere Replication can replicate to a different format than its primary disk, so you can replicate a thick-provisioned disk to a thin-provisioned replica.
Virtual machines can be replicated with a recovery point objective (RPO) of at least 15 minutes and at most 24 hours. A recovery of a replicated virtual machine can therefore lose up to the configured RPO of recent data (at best, up to 15 minutes).
The vSphere Replication agent on the vSphere 5.1 or 5.5 host that holds the running virtual machine tracks changes to disk as they are being written and, in accordance with the configured RPO, sends the changed blocks to the vSphere Replication appliance. The appliance passes the changed-block bundle through NFC to an ESXi host, which writes the blocks to the replica VMDK.
Remote Offices Replicating with a Single vCenter
Administrators can deploy additional vSphere Replication servers (not the full vSphere Replication appliance; there is only one appliance per vCenter) to isolate incoming replication traffic or to adjust for scale.
The vSphere Replication server is the same as the vSphere Replication appliance, and both are deployed in the same way. But if used only as a vSphere Replication server, the appliance is simply not paired with a vCenter Server.
Four Steps for Full Recovery
1.Right-click and select Recover
2.Select a target folder
3.Select a target resource
4.Click Finish
vSphere Replication and vCenter Site Recovery Manager
Choice of replication options for vCenter Site Recovery Manager:
-SRM users can choose to use array replication and vSphere Replication
-If vSphere Replication is already installed and configured, SRM uses it when it is installed
-Alternatively, you can install vSphere Replication as part of the SRM installation
Site Recovery Manager (SRM)
vSphere Replication can be installed independently of vCenter Site Recovery Manager as a feature of the vSphere platform, but it is also included with the vCenter Site Recovery Manager installation package for ease of deployment through either mechanism.
Building a Foundation for Disaster Recovery with SRM
vSphere Replication by itself provides only protection.
vCenter Site Recovery Manager is disaster recovery.
Common functionality: replication engine, application quiescence
VR unique functions: next-generation Web client
SRM-specific functions: full DR orchestration, recovery planning, grouping of protected VMs, full-site or partial-site failover, and so on
Use vCenter Site Recovery Manager on top of vSphere Replication to gain full, automated, orchestrated application migration and site recovery.
------------------------------------------Module 11: Host Scalability------------------------------------------
DRS Cluster Prerequisites
DRS works best if the VMs meet vSphere vMotion migration requirements.
To use DRS for load balancing, the hosts in the cluster must be part of a vMotion migration network:
-If not, DRS can still make initial placement recommendations
To use shared storage, configure all hosts in the cluster:
-Volumes must be accessible by all hosts
-Volumes must be large enough to store all virtual disks for your virtual machines
Cluster -> Edit Cluster Settings -> Turn ON vSphere DRS
Automation Level [Manual | Partially Automated | Fully Automated]
Migration Threshold [Conservative ... Aggressive]
Virtual Machine Automation - Enable individual VM automation levels
Other Cluster Settings: EVC for DRS
EVC is a cluster feature that prevents vSphere vMotion migrations from failing because of incompatible CPUs.
Cluster -> EVC Mode [Disable EVC | Enable EVC for AMD Hosts | Enable EVC for Intel Hosts]
CPU Baselines for an EVC Cluster
EVC works at the cluster level, using CPU baselines to configure all processors in the cluster that are enabled for EVC.
A baseline is a set of CPU features supported by every host in the cluster.
EVC Cluster Requirements
All hosts in the cluster must meet the following requirements:
-Use CPUs from a single vendor (either AMD or Intel)
--Use Intel CPUs with Core 2 microarchitecture and newer
--Use AMD first-generation Opteron CPUs and newer
-Be enabled for hardware virtualization (AMD-V or Intel VT)
-Be enabled for execute-disable technology (AMD No eXecute (NX) or Intel eXecute Disable (XD))
-Be configured for vSphere vMotion migration
Applications in VMs must be CPUID compatible.
To enable EVC on an existing cluster, you must power off all VMs; alternatively, create a new EVC cluster and move the hosts into it.
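Before enabling EVC, it can help to confirm which baseline each host supports. A read-only pyVmomi sketch, with cluster obtained as in the earlier lookup:

# currentEVCModeKey is None when EVC is disabled on the cluster;
# maxEVCModeKey is the highest baseline each host's CPU can support.
print('Cluster EVC mode:', cluster.summary.currentEVCModeKey)
for host in cluster.host:
    print(host.name, '->', host.summary.maxEVCModeKey)

The lowest common maxEVCModeKey across the hosts is the highest baseline the cluster can be configured for.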
Other Cluster Settings: Swap File Location for DRS
Virtual machine directory | Datastore specified by host
VMware recommends that you store the swap file in the same directory as the virtual machine.
DRS Cluster Settings: DRS Rules for Virtual Machine Affinity
DRS affinity rules specify either that selected virtual machines be placed on the same host (affinity) or that they be placed on different hosts (anti-affinity). (See the sketch at the end of this section.)
Options: [Keep Virtual Machines Together | Separate Virtual Machines | Virtual Machines to Hosts]
-Affinity rules
-Anti-affinity rules
DRS Cluster Settings: DRS Groups
-A group of virtual machines
-A group of hosts
A virtual machine can belong to multiple virtual machine DRS groups.
A host can belong to multiple host DRS groups.
DRS Cluster Settings: Virtual Machines to Hosts Affinity Rules
A Virtual Machines-to-Hosts affinity rule specifies an affinity relationship between a VM DRS group and a host DRS group:
Type: Virtual Machines to Hosts
VM Group | Host Group
Options: Must run on hosts in group, Must not run on hosts in group, Should run on hosts in group, Should not run on hosts in group
VMs and hosts must reside in the same cluster.
Virtual Machines to Hosts Affinity Rule: Preferential
A preferential rule is softly enforced and can be violated if necessary.
Virtual Machines to Hosts Affinity Rule: Required
A required rule is strictly enforced and can never be violated.
Example: enforce host-based ISV licensing.
DRS Cluster Settings: Automation at the Virtual Machine Level (Optional)
Set the automation level per virtual machine:
Cluster -> Add VM Overrides
Virtual Machine Automation -> Enable individual virtual machine automation levels
Automation Level [Use Cluster Settings | ...]
VM restart priority [Use Cluster Settings | ...]
Host isolation response [Use Cluster Settings | ...]
VM Monitoring [Use Cluster Settings | ...]
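The affinity options above map to rule objects in the cluster configuration. A minimal pyVmomi sketch that adds a Separate Virtual Machines (anti-affinity) rule; vm_a and vm_b are vim.VirtualMachine objects looked up as in the earlier sketches, and the rule name is arbitrary:

from pyVmomi import vim

# Anti-affinity: keep the two VMs on different hosts.
rule = vim.cluster.AntiAffinityRuleSpec(
    name='separate-web-nodes', enabled=True, vm=[vm_a, vm_b])
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation='add', info=rule)])
cluster.ReconfigureComputeResource_Task(spec, modify=True)

A Keep Virtual Machines Together rule uses vim.cluster.AffinityRuleSpec in the same way.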
Adding a Host to a Cluster
When adding or moving a host into a DRS cluster, you can keep the resource pool hierarchy of the existing host:
-If DRS is not enabled, the host's resource pools are lost
When adding the host, you can choose to create a resource pool for the host's virtual machines and resource pools.
Viewing DRS Cluster Information
The cluster Summary tab also provides information specific to DRS.
Viewing DRS Recommendations
The Override DRS recommendations option applies only a subset of the recommendations.
Monitoring Cluster Status
View the inventory hierarchy for the cluster state.
Maintenance Mode and Standby Mode
If you place a host in maintenance mode:
-VMs should be migrated to another host or shut down
-You cannot power on VMs or migrate VMs to a host in maintenance mode
-You cannot deploy VMs to a host in maintenance mode
When a host is placed in standby mode, it is powered off:
-This mode is used by VMware vSphere Distributed Power Management to optimize power usage
(A programmatic maintenance-mode sketch appears at the end of this module.)
Removing a Host from the DRS Cluster
Before removing a host from a DRS cluster, consider the following issues:
-The resource pool hierarchy remains with the cluster
-Because the host must be in maintenance mode, all VMs running on that host are powered off
-The resources available to the cluster decrease
Improving Virtual Machine Performance
Some methods for improving VM performance:
-Use network traffic shaping
-Modify the VM's CPU and memory reservations
-Modify the resource pool's CPU and memory limits and reservations
-Use NIC teaming
-Use storage multipathing
-Use a DRS cluster
Using vSphere HA and DRS Together
Reasons why vSphere HA might not be able to fail over VMs:
-vSphere HA admission control is disabled
-A required Virtual Machines-to-Hosts affinity rule prevents vSphere HA from failing over
-Sufficient aggregate resources exist, but they are fragmented across hosts
In such cases, vSphere HA uses DRS to try to adjust the cluster by migrating VMs to defragment the resources.
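As noted under Maintenance Mode and Standby Mode, hosts must be evacuated before disruptive work. A pyVmomi sketch of the round trip; host is a vim.HostSystem from a lookup like the earlier ones, and with DRS in fully automated mode the running VMs are migrated off automatically:

from pyVim.task import WaitForTask

# timeout=0 means wait indefinitely for the host to finish evacuating.
WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
# ... perform maintenance, for example remediation or hardware work ...
WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))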
------------------------------------------Module 12: Patch Management------------------------------------------
vSphere Update Manager
Enables centralized, automated patch and version management for VMware ESXi hosts, VM hardware, VMware Tools, and virtual appliances.
vSphere Update Manager reduces security risks:
-Reduces the number of vulnerabilities
-Eliminates many security breaches that exploit older vulnerabilities
vSphere Update Manager reduces the diversity of systems in an environment:
-Makes management easier
-Reduces security risks
vSphere Update Manager keeps machines running more smoothly:
-Patches include bug fixes
-Makes troubleshooting easier
vSphere Update Manager Capabilities
Enables cross-platform upgrades from VMware ESX to ESXi.
Automated patch downloading:
-Begins with information-only downloading
-Is scheduled at regular, configurable intervals
-Contacts the following sources for ESXi host patches:
--For VMware patches: https://hostupdate.vmware.com
--For third-party patches: the URL of the third-party source
Creation of baselines and baseline groups.
Scanning: inventory systems are scanned for baseline compliance.
Remediation: inventory systems that are not current can be automatically patched.
Reduces the number of reboots required after VMware Tools updates.
Update Manager Components
The vSphere Update Manager server can be installed directly on the VMware vCenter Server system or on a separate system. The system can be either a physical or a virtual machine, and the operating system must be Windows 2008 or newer. vSphere Update Manager 5.5 can be installed only on a 64-bit operating system.
vSphere Update Manager plug-in: this plug-in runs on the same system on which VMware vSphere Client is installed. You can install vSphere Client with the vSphere Update Manager 5.5 plug-in on both 32-bit and 64-bit operating systems, but the client must be the same version as the vSphere Update Manager server.
Guest agents: guest agents are installed into virtual machines from the vSphere Update Manager server and are used in the scanning and remediation operations.
The Update Manager Download Service (UMDS) is an optional module of vSphere Update Manager, used on a download server to download patches. With UMDS in vSphere Update Manager 5.5, you can also:
-Configure multiple download URLs
-Restrict downloads to the product versions and types that are relevant to your environment
NOTE: UMDS 5.5 can be installed only on 64-bit Windows operating systems.
Installing vSphere Update Manager
Update Manager must be installed on a 64-bit Windows machine.
To install, start the VMware vCenter Installer and click VMware vSphere Update Manager.
Information needed during the installation:
-vCenter hostname and username/password
-Choice of database: use the default or an existing database
-Port settings: hostname, ports, proxy settings (if necessary)
-Destination folder and location for downloading patches
To install the vSphere Update Manager client, install the vSphere Update Manager Extension plug-in into vSphere Client.
Configuring vSphere Update Manager Settings
Home -> Solutions and Applications -> Update Manager -> vCenter
By default, all patch sources are enabled. Additional patch sources can be added if necessary.
Baselines and Baseline Groups
A baseline consists of one or more patches, extensions, or upgrades.
Five types of baselines:
-Host patch
-Host extension
-Host upgrade
-Virtual machine upgrade for hardware or VMware Tools
-Virtual appliance upgrade
vSphere Update Manager includes a number of default baselines.
A baseline group consists of multiple baselines:
-Can contain one upgrade baseline per type and one or more patch and extension baselines
Creating a Baseline
1.Select Create, and specify a name and description
2.Choose a baseline type
3.For a patch baseline, select a patch option: Fixed or Dynamic
4.Select patches to add to the baseline
Attaching a Baseline
To view compliance information and remediate inventory objects, first attach a baseline or baseline group to an object.
For improved efficiency, attach a baseline to a container object instead of to an individual object.
Scanning for Updates
Scanning evaluates the inventory object against the baseline or baseline group. A scan can be performed manually or automatically, using a scheduled task.
Viewing Compliance
After a scan, patches and updates can be staged first and then remediated at a later time.
Remediating Objects
You can remediate virtual machines, templates, virtual appliances, and hosts.
You can perform the remediation immediately or schedule it for a later date.
Maintenance Mode and Remediation
Power off or suspend virtual machines.
Options for PXE-booted ESXi 5.0: to keep new software and patches on stateless hosts after a reboot, use a PXE boot image that contains the updates.
Remediation Options for a Cluster
When remediating hosts in a cluster, you must temporarily disable certain cluster features: VMware vSphere Distributed Power Management, VMware vSphere High Availability, and VMware vSphere Fault Tolerance.
You can generate a report that identifies problems before remediation occurs.
Patch Recall Notification
At regular intervals, vSphere Update Manager contacts VMware to download notifications about patch recalls, new fixes, and alerts:
-Notification Check Schedule is selected by default
On receiving patch recall notifications, vSphere Update Manager:
-Generates a notification on the Notifications tab
-No longer applies the recalled patch to any host:
--The patch is flagged as recalled in the database
-Deletes the patch binaries from its patch repository
-Does not uninstall recalled patches from ESXi hosts:
--Instead, it waits for a newer patch and applies that to make a host compliant
Using the vSphere Web Client
DRS moves VMs to the available hosts, Update Manager patches the host and exits maintenance mode, and DRS moves the VMs back, per rule.
------------------------------------------Module 13: Installing VMware vSphere Components------------------------------------------
ESXi Hardware Prerequisites
Processor: 64-bit x86 CPU
-Requires at least two cores
-ESXi supports a broad range of x64 multicore processors
Memory: 4GB RAM minimum
One or more Ethernet controllers:
-Gigabit, 10 Gigabit, and 40 Gigabit Ethernet controllers are supported
-For best performance and security, use separate Ethernet controllers for the management network and the VM networks
Disk storage:
-A SCSI adapter, Fibre Channel adapter, converged network adapter, iSCSI adapter, or internal RAID controller
-A SCSI disk, Fibre Channel logical unit number (LUN), iSCSI disk, or RAID LUN with unpartitioned space: SATA, SCSI, or Serial Attached SCSI
Information for Installing ESXi
Host name, install location (must be at least 5GB), keyboard layout, VLAN ID, IP address, subnet mask, gateway, primary DNS (a secondary can also be defined), root password (must contain 6 to 64 characters)
All are optional except the first three.
Installing ESXi
Make sure that you select a disk that is not formatted with VMware vSphere VMFS.
Booting from SAN
ESXi can be booted from SAN:
-Supported for Fibre Channel SAN
-Supported for iSCSI and Fibre Channel over Ethernet for certain qualified storage adapters
SAN connections must be made through a switched topology unless the array is certified for direct connect.
The ESXi host must have exclusive access to its own boot LUN.
Use different LUNs for VMFS datastores and boot partitions.
vCenter Server Deployment Options
Deployed on a physical host or a VM and installed on a supported version of Windows.
Reasons to use the Windows-based vCenter Server instead of the vCenter Server Appliance:
-Support staff trained only on the Windows OS
-Applications that depend on a specific Windows version
-You prefer to use a physical host
Deployed as a virtual appliance that runs the SuSE Linux operating system:
-No operating system license required
-Simple configuration through a Web browser
-Offers the same user experience as the Windows-based version
Single-Server Solution or Distributed Solution
Components (on one virtual machine or distributed): Single Sign-On server, Inventory Service, database server, vCenter Server, VMware vSphere Web Client, VMware vSphere Update Manager
vCenter Single Sign-On
Two installation modes are available:
-Simple Install
-Individual component installation
Single Sign-On Installation Wizard
Single Sign-On deployment type - install or join an existing installation
Select node type - one node or create a multinode installation
Password for the administrator account (Administrator@vsphere.local)
FQDN
Service account information
Destination folder
Port settings
vCenter Inventory Service
Stores vCenter Server application and inventory data:
-Enables you to search and access inventory objects across linked vCenter Server systems
Required with vCenter Server 5.x:
-Supports login through vCenter Single Sign-On
Used by the vSphere Web Client.
Can be deployed on the same host as vCenter Server or on a separate host.
Part of the vSphere Simple Install or installed as a separate component.
The Inventory Service is included with the vCenter Server Appliance.
vCenter Server Hardware and Software Requirements
Hardware requirements (physical or virtual machine):
-Number of CPUs: two 64-bit CPUs or one 64-bit dual-core processor
-Processor: 2.0GHz or faster Intel or AMD processor*
-Memory: 4GB RAM minimum*
-Disk storage: 4GB minimum*
-Networking: Gigabit connection recommended
*Higher if the database, SSO, and Inventory Service run on the same machine
Software requirements:
-A 64-bit operating system is required
-See "vSphere Compatibility Matrixes"
vCenter Database Requirements
Each vCenter Server instance must have a connection to a database to organize all the configuration data.
Supported databases:
-Microsoft SQL Server 2005 SP3
-Microsoft SQL Server 2008
-Microsoft SQL Server 2008 R2 Express
-Oracle 10g R2 and 11g
Default database: Microsoft SQL Server 2008 R2 Express
-Included with vCenter Server
-Used for product evaluations and demonstrations
-Also used for small deployments (up to 5 hosts and 50 VMs)
Before Installing vCenter Server
Ensure that the hardware and software requirements are met.
Ensure that the vCenter Server system belongs to a domain rather than a workgroup.
vCenter Server Installation Wizard
The installation wizard asks for the following data: user name and organization, license key, database information, SYSTEM account information, destination folder, standalone or join a Linked Mode group, ports, JVM memory, and ephemeral port configuration.
vCenter Server Services
Several services start on reboot and can be managed from the Windows Control Panel.
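A quick way to verify that a new vCenter Server installation is answering requests is to read its About info through the API. A pyVmomi sketch with placeholder connection details:

from pyVim.connect import SmartConnect, Disconnect
import ssl

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.vclass.local',
                  user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
about = si.content.about
print(about.fullName)                   # product name and build
print(about.apiType, about.apiVersion)  # e.g. VirtualCenter 5.5
Disconnect(si)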