
Nutanix Federal Government

 

Overview:

In today’s budget-constrained environment, the U.S. federal government has attempted to make computing more efficient and less expensive by reducing the size of its data center portfolio. Since the Federal Data Center Consolidation Initiative (FDCCI) was first launched in 2010, efforts to close hundreds of data centers and streamline government IT operations have fallen short of the goal set by the Office of Management and Budget (OMB). This has forced government officials to look at data center consolidation in a different way – changing the focus from reducing the overall number of government data centers to reducing the cost of data center ownership. This represents a fundamental shift in focus from data center consolidation to transformation.

The Nutanix Solution for the Federal Government

Nutanix offers a perfect solution to meet OMB’s new requirements for the government’s NextGen data centers. The Nutanix Xtreme Computing Platform can transform any data center from an unwieldy, expensive, overcomplicated IT infrastructure into an efficient, cost-effective virtualized environment, enabling government agencies to successfully meet their missions.

Nutanix enables the federal government to operate more efficiently and cost-effectively by providing hyper-efficient, massively scalable and elegantly simple data center infrastructure solutions.

Nutanix solutions are easy to implement, delivering faster time to value and near-immediate return on investment.

Nutanix is a pioneer of converged infrastructure, eliminating the need for complex storage networks and central storage systems.

Nutanix can reinvent the government data center by leveraging many of the advanced software technologies that power leading Web-scale and cloud infrastructures, such as those of Google, Facebook, and Amazon.

Nutanix has 60+ government customers worldwide, indicating experience and a strong understanding of the government environment.

Designed for Secure Operation

Information Assurance is a high priority, and earning the coveted Authority to Operate needed to be considered “ready” for government use is a 9- to 12-month process. Complex multi-vendor integrated solutions take enormous time and effort, and require third parties, adding additional time and cost.

All of this prevents access to innovation and forces technology to be frozen, unable to adapt to changes in the modern world of cyber threats. Government agencies require a simpler approach, one in which the technology they acquire supplies defense in depth, automated hardening, proactive security patching, and published documentation as part of its design.

Security process is part of the Nutanix development DNA. Nutanix combines powerful features, such as two-factor authentication and data-at-rest encryption, with a security development lifecycle that is integrated into product development to help customers meet the most stringent security requirements.

Nutanix hyperconverged systems are certified across a broad set of evaluation programs to ensure compliance with the strictest standards. With innovations such as developing security technical implementation guides (STIGs) in the open XCCDF.xml format to support the Security Content Automation Protocol (SCAP) standard, Nutanix can speed up deployments while meeting requirements for DIACAP/DIARMF assessment and authorization.

Top 10 Pitfalls to Consider for Federal Government:


Space

For 80% of customers considering virtualized initiatives, rack space is a concern. Why? Because datacenter space is expensive in both real cost and opportunity cost. It is more expensive per square foot than standard office space (cubes, offices, break rooms, etc.) because it requires more power, cooling and security. As for opportunity cost, space used for a datacenter means that much less space for cubicles, offices, or break rooms.

If a datacenter becomes filled, it must be emptied or expanded if new initiatives are to be hosted. What fills a datacenter? Racks! What fills racks? Infrastructure! The servers, switches, storage, and security components that host the application components. Thus the more infrastructure required for hosting each of your initiatives, the sooner you will run out of datacenter space. And if you’re already low on datacenter space when you start your initiative, you need to be particularly cognizant of the amount of rack space required to host the infrastructure components required for your initiative.

Nutanix’s hyperconverged architecture is second to none in maximizing compute, storage, and IOPS per rack unit (RU). This is why space-challenged prospects strongly consider Nutanix to host their virtualized initiatives.

Weight

Most customers pay little attention to the weight of the infrastructure required to host their virtualized initiative. But for some, weight is critical. In the tactical DoD, where initiatives need to be hosted and accessed in forward-deployed locations, weight is a factor in shipping components overseas as well as a factor as the components must be hand-carried and configured in challenging terrain and weather conditions.

By using Nutanix, customers concerned with weight combine server, storage, and switch components into a single platform that is lighter and far less cumbersome than traditional architectures. Nutanix DoD customers have reduced the number and size of ruggedized cases required in-theater, and in many cases have gone from a multi-person carry to a two-person carry.

Power

The “big four” in delaying virtualized initiatives are power, cooling, cost, and complexity (more on the latter three later). Power shortcomings can delay any virtualization effort by months. Why? Because the infrastructure required to run VMware, Citrix, or KVM draws massive amounts of power. For example, an HP C7000, which holds sixteen dual-socket blades, has six 2400-watt power supplies. But for some of VMware’s most important features (HA, vMotion, DRS) to work, servers aren’t enough: a SAN is required as well. So add two 800-watt power supplies to run a pair of NetApp controllers if you want to use vSphere Standard or higher. Then add another few power supplies for the disk shelves and the switches that connect the servers to the storage. You’re looking at some serious power draw to host that virtualized initiative! So much power, in fact, that many customers decide they are going to buy VMware, but delay their purchase by 3-6 months while they await the installation of new power circuits in their datacenters by the local power company.

Nutanix is the choice for customers who have limited power in their datacenters, or who are concerned with the cost or ecological impact of unnecessary power use. That same 16,000+ watt infrastructure from HP and NetApp could be replaced by four Nutanix blocks drawing a maximum of 1400 watts each (including their redundant power supplies) — thus a total of 5600 watts required as compared to 16,000+ watts. And Nutanix customers can get started tomorrow, without the need to delay their project by months while awaiting power increases.
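As a rough illustration, the comparison above can be worked out directly from the wattages quoted in this section. Note these are nameplate power-supply ratings, not measured draw, and the traditional-stack total deliberately excludes the disk shelves and switches mentioned earlier:

```python
# Power-draw comparison using the figures quoted in this section.
# Traditional stack: HP C7000 blade chassis plus NetApp controllers.
c7000_watts = 6 * 2400          # six 2400-watt chassis power supplies
netapp_watts = 2 * 800          # two 800-watt controller power supplies
traditional_watts = c7000_watts + netapp_watts  # shelves/switches excluded

# Hyperconverged alternative: four Nutanix blocks at 1400 W max each,
# a figure that already includes their redundant power supplies.
nutanix_watts = 4 * 1400

print(traditional_watts)                             # 16000
print(nutanix_watts)                                 # 5600
print(round(traditional_watts / nutanix_watts, 1))   # 2.9
```

The roughly 3x difference here is the same reduction referenced in the “Cooling” section below, since cooling load tracks power draw.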

Performance

For most virtualized initiatives, performance is critical. There are many measurements used to attempt to predict the speed of each component of the infrastructure. Gigahertz and number of sockets and cores measure CPU speeds in servers. For storage, disk RPMs and number of spindles are frequently discussed relative to spinning disk, IOPs are common measures applied to flash storage, and controller quantities and speed have profound influence. As for the network fabric between the servers and storage, bandwidth and protocol are commonly viewed as the best predictor of its performance.

This complexity in attempting to measure expected performance is eliminated by simplifying the infrastructure. Rather than hosting the virtualized initiative on multiple components (each with a differing role), host on a single, converged virtualization platform. Eliminate the need for any network fabric by attaching the CPUs, RAM, flash, and spinning disk to the same motherboard. Have all components communicate at bus speeds. Keep “hot” data in the fastest tier and “cold” data in the slower tier. But most importantly, combine three tiers into one, and eliminate the need to balance performance tweaks of one variable against those of another. Host on a single appliance. Measure once. As your initiative grows, add another appliance.

Politics

In today’s workplace, politics are a given. Even in “team environments”, human nature has all of us competing as individuals to ascend org charts and pay scales, and fighting to keep ourselves relevant. In the datacenter, this manifests itself in the form of “rice bowls” or “fiefdoms” — the server team, storage guy, network admin, desktop lead, …you name it, all want their opinion heard, their role to be critical, and ideally to be the hero of a successfully delivered initiative.

Converging servers, storage, and network fabric onto a single virtualization platform that cannot be categorized as any one of the three eliminates a significant breeding ground for politics. Individuals have fewer areas to “hole up” or draw lines in the sand. Without all of the moving parts and complicated decision trees, IT staff can focus on productive endeavors that move the initiative toward success in a far shorter timeframe.

Cooling

There is a direct correlation between power and cooling requirements — and for this reason, cooling lurks in the background as a threat to any new virtualization initiative. Power seems to carry more glamour, and is asked about far more often as IT personnel consider their infrastructure. But a lack of adequate cooling in the datacenter can be just as much of a threat to a customer’s deployment timeline.

Heat is a byproduct of power consumption, and infrastructure components can only tolerate a certain temperature before they will begin to break down. A 3x reduction in power draw (as mentioned above under “Power”) means a proportional reduction in cooling required. This reduction can translate into dollar savings, reduced ecological impact, or time savings in the form of not waiting three months for a contract to be awarded to a cooling contractor for improved air conditioning in the datacenter.

Cost

Cost is the number one killer of virtualization initiatives. In the earliest phases of any IT effort, ROI assessments are a given. If investment outweighs return, there is considerable likelihood of cancellation before a single server or desktop is virtualized. For the past ten years, the single greatest cost of any virtualized initiative has been storage. Close behind are the server and network costs. Manpower and training to manage the storage, server and network infrastructure are also major contributors to the “investment” column that must be offset by “return” if an initiative is to see the light of day.

Nutanix customers cut their CAPEX and OPEX costs by 60% or more when compared to traditional infrastructure. The converged infrastructure means fewer components are required, significantly reducing hardware costs. Nutanix “Heat Optimized Tiering” automatically moves IO-hungry virtual machines into flash storage and idle virtual machines into spinning storage, so customers get the performance of flash when needed, yet avoid the cost of hosting “cold” data on expensive flash disk.

Complexity

Picture yourself in a data center, with multiple components of your virtualization pilot arriving from different vendors, on different days, with missing parts, and a ten-page bill of materials. If it’s going to take eight weeks to even test drive the solution, you can pretty much forget about a green light for the project.

Speculation

Unlike traditional architectures, Nutanix doesn’t require that you guess or speculate. Because of Nutanix’s modular scaling characteristics and automatic node clustering, customers can start small, with a few nodes supporting a pilot-sized initial deployment, and then scale simply and without risk, growing to massive enterprise deployments in increments of as little as one node at a time. This enables customers to invest in additional infrastructure only when needed, and purchasing decisions can be made on facts and experience rather than guesses and speculation. At Nutanix, we don’t ask you to cross your fingers and hope for the best.

Scaling

Agencies are typically given two undesirable alternatives: risk buying all infrastructure up front (ignoring the definition of “pilot”), or pilot on less expensive, non-production infrastructure, then rip and replace (moving to untested production hardware) when the pilot runs out of horsepower. More often than not, they choose neither.

Why are agencies forced to ponder two poor options? Because traditional infrastructures don’t scale well. Because they don’t scale well, their vendors offer the soft drink model — Small, Medium, Large, and XL — and they do their best to sell you the Large or XL well in advance of you ever needing it. They’d love for you to pay no mind to the fact that you will receive zero ROI for most of the lifecycle of their device, if you ever grow into it at all. And if you choose wrong and need to change? Bust out your budget for a rip and replace.

How does Nutanix help?

Nutanix doesn’t ask you to choose between two terrible alternatives. We don’t need to, because linear scaling is at the heart of the Nutanix Virtual Computing Platform. Nutanix clusters start with as few as three nodes and scale to hundreds of nodes through automatic clustering and aggregation of resources. Start small and scale using facts, only when you achieve increments of success. Don’t be forced into ridiculous choices because of traditional platforms’ shortcomings.

Federal Use Cases:


VDI/Telecommute (BYOD)

With an ever-increasing number of government initiatives for BYOD and telecommuting, and executive mandates for security, more and more federal organizations are using VDI to provide mission-critical IT services. Check out the Nutanix case studies to see how Nutanix can help your VDI initiative succeed.

Big Data

The federal government faces greater, more complex challenges than at any time in history. And when unanticipated events occur, it frequently seems that hindsight shows that predictive data was there but unnoticed. Big Data initiatives in nearly every agency acknowledge that data can help guide the way to our best and safest future if it can just be properly analyzed and evaluated — the sooner the better.

Six departments and agencies helped launch the Federal Big Data Research and Development Initiative, and since its launch many others have initiated or expanded their own Big Data efforts.

The intelligence community has embraced Nutanix for its ability to host and manage Big Data software, ranging from small to massive initiatives. Because of its simplicity, Nutanix considerably reduces time to success — which means data can be harnessed quickly and critical decisions can be made sooner.

Datacenter Consolidation, Cloud, & Server Virtualization

Server virtualization was becoming prevalent well before the Federal CIO established the Federal Data Center Consolidation Initiative (FDCCI) in February 2010. But the memo initiating the FDCCI made clear the benefits that server virtualization, cloud, and datacenter consolidation could bring. The number of federal datacenters had nearly tripled in ten years, to 1,100, and federal datacenters consumed 12 billion kWh of electricity in 2012. The focus of the FDCCI was to:

Promote the use of Green IT by reducing the overall energy and real estate footprint of government data centers;

Reduce the cost of data center hardware, software and operations;

Increase the overall IT security posture of the government; and

Shift IT investments to more efficient computing platforms and technologies.

Virtualizing on Nutanix helps with all of the above. Nutanix requires less rack space, power, and cooling, and weighs less, while reducing CAPEX and OPEX by 50% or more over traditional server-and-SAN infrastructures.

COOP Disaster Recovery

All Federal departments and agencies are required to have viable and executable contingency plans for the continuity of operations (COOP). COOP planning facilitates the performance of department/agency essential functions during any emergency or situation that may disrupt normal operations. Secondary COOP datacenter locations are typically deliberately distanced from primary datacenters, making them an inconvenient place to visit for hands-on work. This means that IT infrastructure for COOP should be simple, “set it and forget it” hardware: there when you need it, easily managed remotely, and inexpensive to acquire and operate. A complicated blend of servers, storage, controllers, and SAN fabric is the last thing you want to deal with during a crisis, yet virtualization software requires ‘shared storage.’

Dozens of Federal customers use Nutanix for their COOP infrastructure because it provides the simplicity, reliability, low cost, and remote management you want for your COOP site. A single appliance provides all of the compute and SAN capabilities in as little as 2U of rack space, eliminating all of the variables while still supporting all of VMware’s highest-end features.