It’s almost inevitable that you’ll have to upgrade your servers at some point. When the time comes, make sure you’re getting the CPU and memory you need to get the most from your investment, especially if you’re considering virtualization. This exclusive article from Techcloudlink.com will guide you to the right decision.

HARDWARE SPECS YOU NEED IN A SERVER FOR VIRTUALIZATION

When it’s time to buy a new server, make sure you’re getting the CPU and memory you need to maximize consolidation.

Virtual machines reside in memory, so more memory supports additional consolidation. There should be at least enough DDR3 memory to support the number of workloads you expect to run on the system. For example, a server with two 10-core CPUs provides 40 threads (20 cores with two threads each), or 40 potential workloads.

If each workload uses an average of 2 GB, the server would need at least 80 GB, though many organizations would select the next closest “binary” amount of 96 GB, or even 128 GB. More memory than that would simply waste money, while less memory would compromise consolidation or performance. Remember that memory resilience features, such as memory sparing or memory mirroring, require additional memory modules that do not add to the available memory pool, so reserve those features for servers running mission-critical workloads.
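As a back-of-the-envelope check, the sizing math above is easy to script. The sketch below simply restates the example figures (two 10-core CPUs, two threads per core, 2 GB per workload); all inputs are assumptions to be replaced with your own hardware and workload data.

```python
# Rough server sizing sketch based on the example figures above.
# All inputs are assumptions; substitute your own hardware and workload data.

sockets = 2            # CPUs in the server
cores_per_socket = 10  # 10-core CPUs
threads_per_core = 2   # e.g., with simultaneous multithreading enabled
gb_per_workload = 2    # assumed average memory per workload

potential_workloads = sockets * cores_per_socket * threads_per_core  # 40
minimum_memory_gb = potential_workloads * gb_per_workload            # 80

# Round up to the next common "binary" memory configuration.
common_sizes_gb = [64, 96, 128, 192, 256]
recommended_gb = next(s for s in common_sizes_gb if s >= minimum_memory_gb)

print(f"Potential workloads: {potential_workloads}")
print(f"Minimum memory:      {minimum_memory_gb} GB")
print(f"Recommended size:    {recommended_gb} GB")  # 96 GB in this example
```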

Every workload needs network access, so be sure there is adequate bandwidth available on any server intended for virtualization. For example, the single Gigabit Ethernet (GbE) network interface card (NIC) common on stock servers will almost certainly be inadequate for a modern virtualized server. Consider upgrading to a dual-port or quad-port NIC, or even a 10 GbE NIC if workload demands justify it.
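A quick estimate makes the point. The sketch below assumes the 40 consolidated workloads from the earlier example and an average of 50 Mbit/s of sustained traffic per workload; both figures are illustrative placeholders, not measurements.

```python
# Rough NIC sizing sketch; the traffic figures are illustrative assumptions.

workloads = 40        # from the consolidation example above
avg_mbps_per_vm = 50  # assumed average sustained traffic per workload

required_mbps = workloads * avg_mbps_per_vm  # 2,000 Mbit/s aggregate

nic_options_mbps = {
    "1x 1 GbE":  1_000,
    "4x 1 GbE":  4_000,
    "1x 10 GbE": 10_000,
}

for name, capacity in nic_options_mbps.items():
    verdict = "OK" if capacity >= required_mbps else "insufficient"
    print(f"{name:10s} {capacity:>6} Mbit/s -> {verdict}")
```

On these assumptions, the stock single GbE port falls well short, while a quad-port GbE or 10 GbE NIC leaves headroom.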

Servers represent a significant capital investment, so bargain hard with your prospective server vendors. Be sure to bring a server into your data center for evaluation, where you can test performance at full consolidation. This is an excellent opportunity to identify oversights in system requirements and refine your specifications before making the actual purchase. Vendors recognize that it is in their best interest to assist customers with specifications and evaluation units.

When it comes to timing your purchase, the choice is usually more of a business decision than a technical one. It is certainly possible to purchase and upgrade the entire server fleet at the same time, and the biggest purchases often net volume discounts. However, this approach requires the largest capital outlay and poses the greatest risk of disruption to the production environment.

RECOVERY TOOLS AND TIPS FOR YOUR VIRTUAL SERVER BACKUP STRATEGY

The measure of good backup software can often be how powerful the recovery tools are. Make sure to avoid these mistakes when backing up virtual servers.

With all of the options admins have today, it can be easy to forget about virtual server backups, especially with replication in use in the data center. However, replication doesn’t cover everything. The primary purpose of a backup is to create a copy of important data that is offline and out of reach of hackers, and that protects against software issues or the occasional mistakes that system admins make. Clouds and virtualized environments bring their own challenges when it comes to performing backups: VMs are transient, and data is in constant motion. System admins need a virtual server backup strategy in place to ensure that every backup is handled correctly.

The focus in handling virtual systems is to impose a data management structure. It’s important to figure out what data needs to be saved and where the primary copy exists. This information then has to be overlaid with backup frequency based on recovery point objective (RPO) policies, which will likely differ from data set to data set. Here’s where replication does have an impact: done properly, with geodiversity across multiple zones, it negates many failure modes, such as hardware or power problems.
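One way to keep per-data-set RPO policies explicit is to encode them alongside the data inventory. The sketch below is a minimal illustration of that idea; the data set names, locations and RPO values are all hypothetical.

```python
from datetime import timedelta

# Hypothetical inventory: each data set mapped to its primary location
# and its recovery point objective (RPO). All values are illustrative.
rpo_policies = {
    "orders_db":  {"primary": "san-volume-01", "rpo": timedelta(minutes=15)},
    "user_files": {"primary": "nas-share-02",  "rpo": timedelta(hours=4)},
    "app_logs":   {"primary": "vm-local",      "rpo": timedelta(hours=24)},
}

for name, policy in rpo_policies.items():
    # A backup interval at or below the RPO bounds the worst-case data loss.
    interval = policy["rpo"]
    print(f"{name:12s} primary={policy['primary']:14s} back up every {interval}")
```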

Where data management is less structured, as is the case when many tenants access the VM pool, networked storage backup lacks the visibility to handle the fragmented data map. Here, the best option is to resort to virtual machine backup. There are two options for this: one is to back up a set of selected files on each machine; the alternative is to back up the whole VM. Often, the latter is the choice, simply because it is easier to set up, manage and, just as importantly, restore.
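The difference between the two approaches is easy to see in miniature. The sketch below contrasts them using plain file archiving; the paths are hypothetical, and a real VM backup would go through the hypervisor’s snapshot APIs rather than copying live disk images.

```python
import tarfile
from pathlib import Path

# Hypothetical layout: each VM has a directory holding its disk image
# and configuration. Real tools would quiesce/snapshot the VM first.
VM_ROOT = Path("/var/lib/vms/web-01")

def backup_selected_files(dest: str, patterns: list[str]) -> None:
    """File-level backup: archive only the files that match the policy."""
    with tarfile.open(dest, "w:gz") as tar:
        for pattern in patterns:
            for path in VM_ROOT.glob(pattern):
                tar.add(path, arcname=str(path.relative_to(VM_ROOT)))

def backup_whole_vm(dest: str) -> None:
    """Whole-VM backup: archive everything under the VM directory."""
    with tarfile.open(dest, "w:gz") as tar:
        tar.add(VM_ROOT, arcname=VM_ROOT.name)

# File-level: smaller archives, but you must know which files matter.
backup_selected_files("web-01-files.tar.gz", ["etc/**/*.conf", "data/**/*"])

# Whole-VM: bigger, but setup and restore are much simpler.
backup_whole_vm("web-01-full.tar.gz")
```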

There are many tools that support VM backup. The large cloud providers have their own offerings, as do hypervisor vendors. Third-party tools take advantage of the API sets and offer their own approaches, especially in the recovery area.

For private clouds and simpler virtualized clusters, local backup is the near-term answer, with an unintegrated transfer of data to a public cloud as an option. The move toward hybrid clouds, however, opens up in-cloud storage, with all its fringe benefits in geodiversity and ease of use. Ultimately, cloud storage has too many benefits to ignore, and it will likely end the use of local storage mechanisms and tape libraries. These will be replaced by cloud backup gateways, likely themselves running in virtual machines, with recent backups cached locally for a while, since recent backups account for most restores.
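The caching behavior described above can be sketched in a few lines. The cache layout and function names below are hypothetical; a real gateway would integrate with a specific cloud provider’s storage API.

```python
from pathlib import Path

# Hypothetical local cache kept by a cloud backup gateway. Recent backups
# are served from here; older ones fall through to cloud object storage.
CACHE_DIR = Path("/var/cache/backup-gateway")

def fetch_from_cloud(backup_id: str) -> bytes:
    """Placeholder for a provider-specific object storage download."""
    raise NotImplementedError("wire this to your cloud storage API")

def restore(backup_id: str) -> bytes:
    cached = CACHE_DIR / f"{backup_id}.tar.gz"
    if cached.exists():
        # Most restores hit recent backups, so this path is the common case.
        return cached.read_bytes()
    # Cache miss: pull the older backup from cloud storage.
    return fetch_from_cloud(backup_id)
```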

This exclusive e-guide from our virtualization experts will set you on the right path to choosing the best server upgrade for your virtualization initiative. Learn about the specs to consider, as well as why backup should play a large part in your strategy.

To read the full whitepaper, download:
The Key Specs You’ll Need in a Server for Virtualization
