Nutanix AOS 5.11, Prism Central 5.11 and AHV update released
These Nutanix Acropolis Family release notes describe features and updates for Nutanix Core software: AOS 5.11, Prism Central 5.11, and AHV-2017083-301.
New and Updated Features in AOS 5.11
Added ESXi Support for Nutanix Guest Tools
This release provides ESXi support to install and upgrade NGT on multiple VMs simultaneously in Prism Central.
Storage Quality of Service (QoS) to Set Throttle Limits on a VM
Storage QoS gives administrators granular control over the performance of virtual machines and helps the system deliver consistent performance for all workloads. You can use a controllable knob to limit the IOPS that the storage layer serves for an individual virtual machine (IOPS is the number of requests the storage layer can serve in a second). You can set throttle limits on a VM to prevent noisy VMs from over-utilizing system resources.
Support for Up to 120 TiB Per Node
AOS 5.11 with Foundation 4.4 and later now supports up to 120 tebibytes (TiB) of storage per node.
- For these larger capacity nodes, each Controller VM requires a minimum of 36 GB of memory.
- Foundation software provisions the default memory and number of vCPUs allocated to each Controller VM according to your platform model or storage capacity.
- SRM Support for NearSync – You can now create a NearSync schedule on a protection domain for SRA replication. The NearSync schedule can coexist with an asynchronous schedule.
- Network Segmentation for Services – You can now secure traffic associated with a service by confining its traffic to a separate vNIC on the Controller VM and using a dedicated virtual network that has its own physical NICs. This type of segmentation offers true physical separation for service-specific traffic.
AOS 5.11 now supports network segmentation for Nutanix Volumes.
New and Updated Features | Prism Central 5.11
Storage Quality of Service (QoS) to Set Throttle Limits on a VM
Storage QoS gives administrators granular control over the performance of virtual machines and helps the system deliver consistent performance for all workloads. You can use a controllable knob to limit the IOPS that the storage layer serves for an individual virtual machine (IOPS is the number of requests the storage layer can serve in a second). You can set throttle limits on a VM to prevent noisy VMs from over-utilizing system resources.
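Throttle limits are applied per VM from Prism Central. Purely as an illustration of what a scripted equivalent could look like (the endpoint path and the throttled_iops field below are assumptions, not confirmed by these notes), a sketch follows; consult the Prism Central REST API Explorer for the actual Storage QoS call.

```python
# Illustrative sketch only: the endpoint and payload field names below are
# assumptions for demonstration; check the Prism Central API Explorer for
# the actual Storage QoS call before using this.
import requests

PC = "https://prism-central.example.com:9440"     # placeholder Prism Central address
AUTH = ("admin", "password")                      # use real credentials or a session token
VM_UUID = "1234abcd-0000-0000-0000-000000000000"  # UUID of the VM to throttle

payload = {
    "throttled_iops": 500,  # cap the VM at 500 IOPS (assumed field name)
}

resp = requests.put(
    f"{PC}/api/nutanix/v3/vms/{VM_UUID}/qos_policy",  # hypothetical QoS endpoint
    json=payload,
    auth=AUTH,
    verify=False,  # lab setting only; use proper certificates in production
)
resp.raise_for_status()
print("Throttle limit request accepted:", resp.status_code)
```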
Codeless Task Automation (X-Play)
You can automate routine administrative tasks through Prism Central by using the X-Play feature. X-Play is an easy-to-use automation tool that helps you automate routine administrative tasks and auto-remediate issues that occur in your system. You achieve this automation by creating Playbooks.
- X-Play requires a Prism Pro license.
- X-Play requires only Prism Central to be on 5.11; upgrading Prism Element to 5.11 is not required.
For details, see Codeless Task Automation (X-Play) in the Prism Central Guide.
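Playbooks are built in the Prism Central UI from a trigger (such as an alert) and a sequence of actions, one of which can call an external REST endpoint. As a hedged illustration only (the receiving service below, including its port, path, and payload fields, is an assumption and not part of the product), a minimal webhook receiver that a Playbook REST action could notify might look like this:

```python
# Minimal sketch of an external webhook receiver that an X-Play REST action
# could call. Everything here (port, path, payload fields) is an assumption
# for illustration; define the actual payload in your Playbook's REST action.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PlaybookWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        # Log whatever the Playbook action sends, e.g. the alert name and VM.
        print("Playbook notification:", body.get("alert"), body.get("vm_name"))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PlaybookWebhook).serve_forever()
```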
Syslog Monitoring
You can configure syslog monitoring to forward system logs (API Audit, Audit, and Flow logs) of the registered clusters to an external syslog server.
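Before pointing the registered clusters at an external syslog server, it can help to confirm that the server is reachable on the expected port and protocol. The short sketch below sends a test message using Python's standard SysLogHandler; the host name and port are placeholders, and this does not configure Prism Central itself, which is done through the Prism Central settings.

```python
# Send a test message to the external syslog server that Prism Central will
# forward to. The host and port below are placeholders; match them to your
# syslog server and to the protocol (UDP or TCP) you plan to configure.
import logging
import logging.handlers
import socket

handler = logging.handlers.SysLogHandler(
    address=("syslog.example.com", 514),  # placeholder syslog server
    socktype=socket.SOCK_DGRAM,           # use socket.SOCK_STREAM for TCP
)
logger = logging.getLogger("prism-syslog-test")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("Test message: verifying reachability before enabling forwarding")
```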
Export and Import Security Policy
Prism Central allows you to export and import security policies. For details, see Exporting and Importing Security Policies in the Prism Central Guide.
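The export and import workflow is driven from the Prism Central UI. If you also want to keep a scripted snapshot of your Flow policies, a rough equivalent is to list them through the v3 API and save the result as JSON. This is only a sketch of one possible approach, not the mechanism or file format the UI export uses, and it assumes the v3 network_security_rules list call is available in your environment.

```python
# Sketch: dump Flow security policies to a JSON file via the v3 API.
# This approximates an export for record keeping; it is not the same file
# format that the Prism Central export/import feature produces.
import json
import requests

PC = "https://prism-central.example.com:9440"  # placeholder Prism Central address
AUTH = ("admin", "password")

resp = requests.post(
    f"{PC}/api/nutanix/v3/network_security_rules/list",
    json={"kind": "network_security_rule", "length": 500},
    auth=AUTH,
    verify=False,  # lab setting only
)
resp.raise_for_status()

entities = resp.json().get("entities", [])
with open("security_policies.json", "w") as f:
    json.dump(entities, f, indent=2)
print("Saved", len(entities), "policies")
```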
Image Placement Policies
In Prism Central, you can configure policies that govern which clusters receive the images that you upload. These policies, called image placement policies, map images to target clusters through the use of categories associated with both those entities.
With image placement policies, you can also specify how strictly you want the policy to be enforced.
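Placement policies are created in the Prism Central UI by pairing an image category with a cluster category and choosing how strictly to enforce the mapping. For readers scripting against the v3 API, the hedged sketch below shows what such a policy spec could look like; the endpoint path and field names are assumptions made to illustrate the category-to-cluster mapping, so verify them against the Prism Central REST API Explorer.

```python
# Sketch of creating an image placement policy through the v3 API.
# The endpoint path and the exact spec fields are assumptions made for
# illustration; confirm them in the Prism Central REST API Explorer.
import requests

PC = "https://prism-central.example.com:9440"  # placeholder
AUTH = ("admin", "password")

policy = {
    "spec": {
        "name": "iso-images-to-dev-clusters",
        "resources": {
            # Images carrying this category...
            "image_entity_filter": {
                "type": "CATEGORIES_MATCH_ANY",
                "params": {"ImageType": ["ISO"]},
            },
            # ...are placed on clusters carrying this category.
            "cluster_entity_filter": {
                "type": "CATEGORIES_MATCH_ANY",
                "params": {"Environment": ["Dev"]},
            },
            "placement_type": "AT_LEAST",  # soft enforcement; "EXACTLY" is strict
        },
    },
    "metadata": {"kind": "image_placement_policy"},
}

resp = requests.post(
    f"{PC}/api/nutanix/v3/images/placement_policies",  # assumed endpoint path
    json=policy,
    auth=AUTH,
    verify=False,  # lab setting only
)
resp.raise_for_status()
print("Policy create request accepted:", resp.status_code)
```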
Added ESXi Support for Nutanix Guest Tools
This release provides ESXi support to install and upgrade NGT on multiple VMs simultaneously in Prism Central.
Flow Logging Support for Policy Hit
The Policy Hit Logs option that is available at the time of creating security policies allows you to log traffic flow hits on the policy rules.
Nutanix Networking Service (Xi Cloud Services)
Nutanix Networking Service allows you to set up a VPN solution between your on-prem datacenter and Xi Cloud Services. If you select this option, you do not need to use your own VPN solution to connect to Xi Cloud Services.
Nutanix Networking Service creates a VPN gateway VM that runs on your on-prem cluster, connects to your network, and establishes an IPSec tunnel with the VPN gateway VM that is running in the Xi Cloud.
See the VPN Configuration topic in the Leap Administration Guide for more information.
Support for Nutanix Guest Tools (NGT) in Xi Cloud Services
You can mount NGT, manage NGT applications, and upgrade NGT on multiple VMs running on Xi Cloud Services.
Note: You can mount NGT on multiple VMs at the same time. However, you must manually install NGT on each VM.
See the Nutanix Guest Tools (NGT) Management chapter in the Xi Infrastructure Service Administration Guide for more information.
Category Support for ESXi
Support for categories has been extended to ESXi (in addition to AHV).
Enhanced Monitoring of Entity Metrics
The metric charts in an entity details page have been enhanced to include viewing options that more clearly show the observed patterns, behavior band, and anomalies.
Category-based RBAC for VM Management
You can now manage VMs in an AHV cluster with role-based access control (RBAC) specified using categories.
Multi-cardinality Support for Categories
Categories now support multi-cardinality, which means you can assign multiple category values to the same entity.
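In the v3 REST API, multi-cardinality surfaces as a mapping from a category key to a list of values in the entity's metadata. The fragment below is a hedged sketch showing a VM update that attaches two values of the same category; the categories_mapping and use_categories_mapping field names are shown as assumptions, so verify them in the API Explorer for your release.

```python
# Sketch: assign multiple values of the same category to one VM via the
# v3 API. The categories_mapping / use_categories_mapping fields are shown
# as an assumption; confirm them against the API Explorer for your release.
import requests

PC = "https://prism-central.example.com:9440"  # placeholder
AUTH = ("admin", "password")
VM_UUID = "1234abcd-0000-0000-0000-000000000000"

# Read the current VM spec first (a v3 PUT expects the full spec back).
vm = requests.get(f"{PC}/api/nutanix/v3/vms/{VM_UUID}", auth=AUTH, verify=False).json()

vm["metadata"]["use_categories_mapping"] = True
vm["metadata"]["categories_mapping"] = {
    "Department": ["Engineering", "Finance"],  # two values of the same category
}
vm.pop("status", None)  # PUT bodies carry spec + metadata, not status

resp = requests.put(f"{PC}/api/nutanix/v3/vms/{VM_UUID}", json=vm, auth=AUTH, verify=False)
resp.raise_for_status()
print("Update accepted:", resp.status_code)
```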
New and Updated Features | AHV
AHV-2017083-301
Support to Update the br0 Uplink Configuration
The default NIC-teaming policy of the bond br0-up of the default bridge br0 in your AHV cluster is Active-Backup. You can now change the NIC-teaming policy of br0-up to Active-Active, Active-Active with MAC pinning, or retain the default Active-Backup policy. If you select the Active-Active policy, you must manually enable link aggregation (LAG) and LACP on the corresponding top-of-rack (ToR) switch for each node in the cluster.
By default, br0-up aggregates all the physical interfaces available on the node. You can now modify this default NIC configuration by selecting the interfaces that belong to br0-up: only the 10G interfaces, only the 1G interfaces, or the default setting, that is, all available interfaces aggregated into br0-up.
See the Uplink Configuration topic in the Prism Web Console Guide for more information.
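If you manage hosts from scripts rather than the Prism web console, the traditional command-line path has been the manage_ovs utility run on a Controller VM. The wrapper below is only a hedged sketch: the flag names and bond-mode values reflect common manage_ovs usage and are shown as assumptions, so check manage_ovs --help on your CVM and prefer the Prism uplink configuration page where possible.

```python
# Hedged sketch: drive the manage_ovs utility on a Controller VM to switch
# the br0-up teaming policy. The flags and bond-mode values below are
# assumptions based on common manage_ovs usage; run "manage_ovs --help" on
# your CVM and prefer the Prism uplink configuration page where possible.
import subprocess

# Mapping of the policies described above to OVS bond modes:
#   Active-Backup                  -> active-backup (default)
#   Active-Active with MAC pinning -> balance-slb
#   Active-Active (requires LACP)  -> balance-tcp
BOND_MODE = "balance-slb"
INTERFACES = "10g"  # keep only the 10G interfaces in br0-up

subprocess.run(
    [
        "manage_ovs",
        "--bridge_name", "br0",
        "--bond_name", "br0-up",
        "--interfaces", INTERFACES,
        "--bond_mode", BOND_MODE,
        "update_uplinks",
    ],
    check=True,
)
```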
Support for Compute-Only Nodes on AHV Clusters
A compute-only (CO) node allows you to seamlessly and efficiently expand the computing capacity (CPU and memory) of your AHV cluster. The Nutanix cluster uses the resources (CPUs and memory) of a CO node exclusively for computing purposes.
CO nodes enable you to achieve more control and value from restrictively licensed software such as Oracle. A CO node is part of a Nutanix hyperconverged (HC) cluster, but no CVM runs on the CO node (VMs use the CVMs running on the HC nodes to access disks). As a result, the licensed cores on the CO node are used only for application VMs.
Applications or databases that are licensed on a per-CPU-core basis require the entire node to be licensed, including the cores on which the CVM runs. With CO nodes, you get a much higher ROI on the purchase of your database licenses (such as Oracle and Microsoft SQL Server) because the CVM does not consume any compute resources on the node.
You can use a supported server or an existing HC node as a CO node. To use a node as CO, you must first image the node as CO by using Foundation and then add that node to the cluster by using the Prism Element web console.
See the Compute-Only Node Configuration (AHV Only) topic in the Prism Web Console Guide for more information.
UEFI Support for VMs on AHV and Hyper-V clusters
Nutanix fully supports booting VMs with UEFI firmware in an AHV cluster. Nutanix also provides limited support for VMs migrated from a Hyper-V cluster. You can create or update VMs with UEFI firmware by using aCLI commands, the Prism web console, or the Prism Central UI.
See the UEFI Support for VM topic in the AHV Admin Guide for more information.
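As a hedged illustration of the Prism Central API route (the boot_config fields are shown as recalled from the v3 VM spec and are assumptions; verify the exact spec and the equivalent aCLI syntax in the AHV Admin Guide and the API Explorer), a VM create request with UEFI firmware could look like this:

```python
# Sketch: create an AHV VM that boots with UEFI firmware through the v3 API.
# The boot_config/boot_type fields are an assumption for illustration;
# verify the spec in the Prism Central API Explorer before relying on it.
import requests

PC = "https://prism-central.example.com:9440"  # placeholder
AUTH = ("admin", "password")

vm_spec = {
    "spec": {
        "name": "uefi-test-vm",
        "resources": {
            "num_sockets": 2,
            "memory_size_mib": 4096,
            "boot_config": {"boot_type": "UEFI"},  # "LEGACY" is the BIOS default
            "disk_list": [],
            "nic_list": [],
        },
        "cluster_reference": {"kind": "cluster", "uuid": "<cluster-uuid>"},  # replace with your cluster UUID
    },
    "metadata": {"kind": "vm"},
}

resp = requests.post(f"{PC}/api/nutanix/v3/vms", json=vm_spec, auth=AUTH, verify=False)
resp.raise_for_status()
print("Create task submitted:", resp.status_code)
```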