October 22, 2013, 12:30 PM — The enterprise adoption of cloud computing resources has taken a precarious path. Many organizations start by running small workloads in the public cloud, reluctant to trust the platform with bigger, mission-critical workloads.
But once they get comfortable with, say, a test-and-development use case in the cloud, or an outsourced e-mail platform, CIOs and CTOs may warm up to the idea of using outsourced cloud resources for more jobs.
At a recent panel of cloud users, one thing became clear, though: Managing a public cloud deployment at small scale is relatively straightforward. The problem comes when that deployment has to scale up. "It gets very complex," says IDC analyst Mary Turner, who advises companies on cloud management strategies. "In the early stages of cloud we had a lot of test and development, single-purpose, ad-hoc use case. We're getting to the point where people realize the agility cloud can bring, and now they have to scale it."
And doing so can be tough. The panelists at the recent Massachusetts Technology Leadership Council Cloud Summit had some tips and tricks for users, though. Here are five.
-Consolidate account management
Unfortunately, cloud usage in an enterprise often starts when various departments spin up public cloud resources behind the back of the IT department. Known as "shadow IT," this can create a scenario where multiple departments each hold their own accounts with a public cloud provider, like Amazon Web Services. When the IT department attempts to take control of these services, the IT manager is suddenly juggling multiple accounts.
Instead of managing each of these separately, Amazon Web Services allows users to consolidate them into a single administrative account. By doing this, usage statistics are aggregated into a single billing stream, and users can re-allocate resources among the various accounts, with some limitations. Jason Fuller, head of cloud service delivery at software company Pegasystems, says that's an immensely helpful feature when managing multiple accounts within the same organization. It helps not only from a technical standpoint, by providing oversight across all the accounts, but from a financial one too, because of the aggregated and streamlined billing.
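The aggregation idea can be sketched in a few lines. This is illustrative only, not AWS's actual billing mechanics: the account names, services and charges below are invented, and the function simply rolls per-account line items into one org-wide view, the way consolidated billing presents linked accounts as a single stream.

```python
# Illustrative sketch: roll per-account (service, cost) line items into one
# org-wide bill, mimicking what a consolidated billing view shows.
# Account names and dollar amounts are made up for the example.
from collections import defaultdict

def consolidate_bills(line_items):
    """Aggregate [(account, service, cost), ...] into {service: total}."""
    totals = defaultdict(float)
    for account, service, cost in line_items:
        totals[service] += cost
    return dict(totals)

items = [
    ("marketing", "EC2", 120.0),   # hypothetical shadow-IT account
    ("dev-team",  "EC2",  80.0),
    ("dev-team",  "S3",   15.0),
]
bill = consolidate_bills(items)
print(bill)  # {'EC2': 200.0, 'S3': 15.0}
```

One payment stream also makes reallocations visible: the IT manager sees total EC2 spend, not three fragments.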
-Turn the lights off
"Sometimes when I wake up in the morning I go downstairs and my kids have left the lights on all night," says John O'Keefe, senior director of operations at Acquia, a company that supports the open source Drupal project, and one of the panelists at the Mass TLC event. He worries about the same thing with his developers using Amazon's cloud. The beauty of self-service public cloud resources is that they're incredibly easy to spin up: customers just swipe a credit card and click a few buttons. The problem is that those resources don't always get shut off when users are done with them. To prevent this, O'Keefe takes an inventory daily, if not more frequently, to ensure that only the resources that are actively used are "on." De-provisioning resources is just as easy as spinning them up; someone just needs to remember to do it. AWS has a variety of tools to help customers monitor this, including CloudWatch.
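A daily "lights on" check boils down to scanning utilization metrics for instances that never do any work. The sketch below is a simplified stand-in for that process: the CPU datapoints resemble what a monitoring service like CloudWatch might report, but the instance IDs, numbers and 2% threshold are all assumptions for the example.

```python
# Minimal sketch of the "turn the lights off" inventory: flag instances whose
# average-CPU datapoints never rise above a small threshold, suggesting they
# were left running. IDs, samples and the threshold are illustrative.

def find_idle_instances(cpu_by_instance, threshold=2.0):
    """Return instance IDs whose every CPU datapoint (%) is below threshold."""
    return sorted(
        iid for iid, points in cpu_by_instance.items()
        if points and max(points) < threshold
    )

samples = {
    "i-webserver": [35.0, 41.2, 28.9],   # doing real work
    "i-forgotten": [0.3, 0.1, 0.4],      # lights left on all night
}
print(find_idle_instances(samples))  # ['i-forgotten']
```

In practice the flagged list would feed a stop/terminate step, or at least an email to the instance's owner.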
-Right-size your resources
IaaS providers nowadays commonly offer an a la carte menu of virtual machine instance sizes and storage platforms. Customers should take care when choosing exactly which resources to use; if they don't, there can be significant waste.
Usually it's pretty straightforward to decide among the three main flavors of storage from AWS: Elastic Block Storage (EBS), Simple Storage Service (S3) and Glacier. EBS provides block storage: volumes that attach directly to the compute instances that use them. S3, on the other hand, is a massive object store that can be used for granular storage of small items and scaled way up to larger files too. Glacier is a long-term storage platform with extremely high durability and low costs, but very (comparatively) long wait times for retrieving data. Within EBS there are also tiers of storage. Customers should make sure the options they choose match their performance, reliability and scalability requirements; if they don't, they may end up overpaying for services they don't need.
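The decision among the three flavors can be captured as a simple rule of thumb. The helper below is a deliberate oversimplification, not AWS guidance: the four-hour retrieval cutoff is an assumption chosen for illustration, and real choices involve cost, durability and access patterns.

```python
# Illustrative rule of thumb for the three storage flavors described above.
# The 4-hour cutoff is an invented threshold, not an AWS figure.

def pick_storage(attached_to_instance, retrieval_wait_hours):
    """Crudely map an access pattern to EBS, Glacier or S3."""
    if attached_to_instance:
        return "EBS"       # block volume attached to a compute instance
    if retrieval_wait_hours >= 4:
        return "Glacier"   # cheap archival, slow retrieval is acceptable
    return "S3"            # general-purpose object storage

print(pick_storage(True, 0))    # EBS
print(pick_storage(False, 24))  # Glacier
print(pick_storage(False, 0))   # S3
```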
Another key is ensuring virtual machines are right-sized for your workloads. AWS has a catalog of more than a dozen types of virtual machines, from high-input/output VMs to high-memory ones. Evaluate what your application is and what kind of resources it needs, and get the right size VM for it. A variety of third-party AWS monitoring tools can help users make the right decisions. Other companies, like ProfitBricks and CloudSigma, allow customers to set their own VM instance sizes (and pay for them by the minute, instead of by the hour). These features let customers customize their VMs at a granular level, as opposed to choosing from a menu of options from AWS.
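Right-sizing against a menu of fixed instance types amounts to: measure peak demand, then pick the cheapest type that still covers it. The mini-catalog below is entirely made up; real AWS instance names, specs and prices differ, and this is only a sketch of the selection logic.

```python
# Sketch of right-sizing: choose the cheapest catalog entry whose vCPU and
# memory cover observed peak demand. The catalog is invented for illustration.

CATALOG = [  # (name, vcpus, memory_gib, hourly_usd) -- hypothetical values
    ("small",  1,  2.0, 0.02),
    ("medium", 2,  4.0, 0.04),
    ("large",  4, 16.0, 0.10),
]

def right_size(peak_vcpus, peak_mem_gib):
    """Return the cheapest instance name that fits the peak, or None."""
    fits = [t for t in CATALOG if t[1] >= peak_vcpus and t[2] >= peak_mem_gib]
    return min(fits, key=lambda t: t[3])[0] if fits else None

print(right_size(2, 3.5))  # medium
print(right_size(3, 8.0))  # large
```

A per-minute, custom-sized provider like those mentioned above skips the catalog step entirely: you ask for exactly the peak figures.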
-Beware of noisy neighbors
When using a public cloud, you're typically sharing infrastructure with a lot of other users. That's part of why these IaaS clouds are so cheap: providers can pack many customers' virtual machines onto the same physical servers. You may be sharing a virtualized server with other companies. For some applications and workloads that may not be a problem. But for others that are performance-sensitive, it can be an issue.
While AWS says it takes steps to avoid this by hard-partitioning resources, users still worry about it. Panelist Greg Arnette, CTO of cloud data archiving company Sonian, says this used to be a bigger issue a few years ago, but network volatility is less common nowadays. Still, customers who are concerned can pay extra for dedicated resources: isolated areas of the AWS cloud reserved for individual customers. There is also AWS Virtual Private Cloud, now the default setting in EC2, which uses a hardware VPN and allows customers to configure their own virtual networks. The best way to avoid noisy neighbors, though, is to right-size VMs to make sure they have enough capacity for the application they're running. If the VMs don't deliver their advertised performance, that can be a breach of the service-level agreement (SLA).
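Spotting an SLA-relevant shortfall starts with comparing measured performance against the advertised figure. The check below is a toy version of that comparison: the throughput numbers and the 95% tolerance are assumptions for the example, not terms from any actual AWS SLA.

```python
# Illustrative SLA check: report measurement samples that fall below a
# tolerance fraction of the advertised figure. Numbers are invented.

def sla_breaches(samples_mbps, advertised_mbps, tolerance=0.95):
    """Return throughput samples below tolerance * advertised throughput."""
    floor = advertised_mbps * tolerance
    return [s for s in samples_mbps if s < floor]

measured = [460, 510, 390, 505]      # hypothetical Mbps benchmark samples
print(sla_breaches(measured, 500))   # [460, 390]
```

A run of such breaches, with timestamps attached, is the kind of evidence a customer would bring to a provider.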
-Find efficiencies where you can
Living in the cloud makes techies think differently about how they run their shop. Say, for example, you have hundreds or thousands of files to store in S3 on a daily or weekly basis. AWS makes it easy to upload and download those files one at a time. But each of those uploads is an API call, which AWS users are charged for. Instead, users should bundle their jobs and load files up in blocks through a single API call to reduce API surcharges, Arnette says. Steps like these can be the difference between running an efficient, right-sized cloud and being nickel-and-dimed by your cloud provider.
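The bundling trick is easy to demonstrate with the standard library. This sketch packs many small files into one in-memory tar archive; the actual S3 upload of that archive (a single PUT, e.g. via the AWS SDK) is deliberately left out so the example runs anywhere without credentials. File names and contents are invented.

```python
# Self-contained sketch of the bundling idea: instead of one upload call per
# file, pack them all into a single archive and upload that with one call.
# The S3 upload itself is omitted so the example needs no AWS account.
import io
import tarfile

def bundle(files):
    """Pack {name: bytes} into one in-memory tar archive; returns its bytes."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

files = {f"log-{i}.txt": b"entry" for i in range(100)}  # 100 small files
archive = bundle(files)
print(len(files), "files ->", 1, "upload call")  # 100 files -> 1 upload call
```

One hundred per-file PUTs become a single PUT of the archive; the receiving side unpacks the tar, and the per-request charges shrink accordingly.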