Hyper-Convergence Meets Data Protection and High Availability
Guest blog post by Dr. Mark Campbell, CTO of Unitrends
In 2014, there were 28 million IT professionals worldwide. IDC expects this number to grow to 36 million by 2020. The trouble here is that the amount of data for which these professionals are held responsible is growing at a much higher rate. Today, each IT professional on average is responsible for 230GB; by 2020, that number will explode to 1,231GB. This massive spike is made even more challenging by the increasing richness of data sources and formats, driven by everything from new applications to the Internet of Things.
This is why productivity, doing more with less, has never been more important for the engineers creating and building next-generation data centers. It is also one reason hyper-converged storage has become increasingly popular. Hyper-converged infrastructure systems offer a software-defined, scale-out architecture that integrates compute, networking, and storage via virtualization. At the end of the day, what these systems offer the overburdened IT professional is a more productive way of creating and managing IT infrastructure.
The future of backup lies in hyper-convergence as well. The basic architecture of a backup appliance (a server, networking, storage, operating system, and backup software) will need to take advantage of more and faster processors and cores, memory, backplane, and I/O performance. Flash-enabled architectures will increasingly become mandatory, not just to enable faster backup and recovery, but also because of the broader functionality the backup appliance must support, including high availability. Techniques such as on-appliance virtualized instant recovery of virtual and physical environments, as well as off-appliance support of virtual environments, will present very different I/O loads that can only be reconciled with tiered flash and rotational storage.
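The tiering decision described above can be sketched as a simple routing policy. This is a hypothetical illustration, not Unitrends code: random, latency-sensitive I/O (such as instant-recovery reads) lands on flash, while bulk sequential backup streams go to rotational disk.

```python
from dataclasses import dataclass

@dataclass
class IORequest:
    size_bytes: int
    random: bool             # random-access (instant recovery) vs. sequential (backup stream)
    latency_sensitive: bool

def select_tier(req: IORequest) -> str:
    """Route latency-sensitive or random I/O to flash; bulk sequential streams to rotational disk."""
    if req.latency_sensitive or req.random:
        return "flash"
    return "rotational"

# An instant-recovery read is small, random, and latency-sensitive:
print(select_tier(IORequest(size_bytes=4096, random=True, latency_sensitive=True)))        # flash
# A backup ingest stream is large and sequential:
print(select_tier(IORequest(size_bytes=8 * 2**20, random=False, latency_sensitive=False)))  # rotational
```

Real appliances make this decision deeper in the storage stack, but the principle is the same: the two workloads have opposite access patterns, which is why a single storage tier cannot serve both well.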
The core functions of data protection (backup, deduplication, private replication, and recovery) must be augmented. Instant recovery will become ever more “instant” and as such will place demands on the hardware and software far beyond those supported today. In addition, instant recovery will be used continually in recovery assurance architectures that constantly test the recovery of not just virtual machines and servers but entire infrastructures.
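To make the deduplication function concrete, here is a minimal fixed-size-block sketch (a hypothetical illustration; production appliances typically use variable-size, content-defined chunking): identical blocks are stored once and referenced by their hash.

```python
import hashlib

def dedup_store(stream: bytes, store: dict, chunk_size: int = 4096) -> list:
    """Fixed-size block dedup: keep each unique chunk once, keyed by its SHA-256 digest.
    Returns the ordered list of digests needed to reassemble the stream."""
    recipe = []
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # identical chunks are stored only once
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    return b"".join(store[d] for d in recipe)

store = {}
data = b"A" * 8192 + b"B" * 4096          # two identical "A" chunks, one "B" chunk
recipe = dedup_store(data, store)
assert restore(recipe, store) == data
print(len(recipe), "chunks referenced,", len(store), "chunks stored")  # 3 chunks referenced, 2 chunks stored
```

The gap between chunks referenced and chunks stored is the deduplication ratio, and it is what makes disk-based retention of many restore points affordable.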
More capable and flexible near-line retention and archiving capabilities that handle D2D2x (Disk-to-Disk-to-Any, where “Any” can be rotational tape, rotational disk, fixed NAS, fixed SAN, or object-based cloud storage) are needed. Tape will not go away: from a cost perspective, it is still the lowest-cost option for archival storage, despite the penny-per-gigabyte-per-month pricing of off-line storage in the cloud. Near-line retention and archiving on premises will continue to be important.
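The “Any” in D2D2x amounts to putting every long-term target behind one interface, so the second-stage copy does not care whether it lands on tape, NAS, SAN, or cloud object storage. A minimal sketch, with all class and function names hypothetical:

```python
from abc import ABC, abstractmethod

class ArchiveTarget(ABC):
    """The 'x' in D2D2x: any long-term target behind one interface."""
    @abstractmethod
    def write(self, name: str, data: bytes) -> None: ...

class CloudObjectTarget(ArchiveTarget):
    def __init__(self):
        self.objects = {}          # stand-in for an object-store bucket
    def write(self, name, data):
        self.objects[name] = data

class NASTarget(ArchiveTarget):
    def __init__(self):
        self.files = {}            # stand-in for a mounted NAS share
    def write(self, name, data):
        self.files[name] = data

def archive(backup_name: str, data: bytes, target: ArchiveTarget) -> None:
    """Second-stage copy: disk-resident backup -> chosen archive target."""
    target.write(backup_name, data)

cloud = CloudObjectTarget()
archive("weekly-full-2015-01", b"...backup bytes...", cloud)
```

Adding tape or SAN support then means adding one more `ArchiveTarget` subclass, not changing the archiving pipeline.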
The cloud, whether private or public, single-tenant or multi-tenant, will continue to grow in importance not only as a storage target but also as a disaster-recovery-as-a-service (DRaaS) target. The recovery assurance technology described above that works on the backup appliance must work seamlessly with DRaaS in the cloud as well. Backup appliance vendors will increasingly offer virtual appliances tailored for use in both private and public clouds, as well as software-only editions that run directly on the operating system and within alternative virtualization paradigms, such as containers.
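Recovery assurance of the kind described here boils down to a loop that boots each backup in an isolated sandbox and runs health checks against it, whether the sandbox lives on the appliance or in a DRaaS cloud. A toy sketch with hypothetical stand-ins for the boot and health-check steps:

```python
def recovery_assurance(backups: dict, boot, health_check) -> dict:
    """Boot each backup image in an isolated sandbox and run a health check,
    so recoverability is verified continually rather than assumed."""
    results = {}
    for name, image in backups.items():
        vm = boot(image)                 # instant-recovery boot, sandboxed
        results[name] = health_check(vm)
    return results

# Hypothetical stand-ins: a "bootable" image is simply one that is non-empty.
backups = {"app-server": b"image-bytes", "db-server": b""}
report = recovery_assurance(backups,
                            boot=lambda img: {"booted": bool(img)},
                            health_check=lambda vm: vm["booted"])
print(report)  # {'app-server': True, 'db-server': False}
```

The value is in running this loop on a schedule: a backup that cannot boot is discovered during routine testing, not during a disaster.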
At Unitrends, we’re proud that we started the backup appliance movement over a decade ago; the new family of appliances we launched in January 2015 marks the third major generation of purpose-built backup appliances. Check out all the upgrades.
The best way to predict the future is to create it. [Abraham Lincoln]