Tuesday, July 26, 2011

Why is cloud computing so hard to understand?

Why is cloud computing so hard to understand? One could just as fairly ask why today’s Information Technology is so hard to understand. The answer is that IT covers the entire range of business requirements, from back-office enterprise systems to the various ways such systems can be implemented. Cloud computing covers an equal breadth of both technology and, equally important, business requirements. As a result, many different definitions are acceptable and fall within the overall topic.
But why use the term "cloud computing" at all? The term grew out of the effort to develop easy-to-use consumer IT (Web 2.0) and to distinguish it from existing, difficult-to-use enterprise IT systems.
A Web 2.0 site allows its users to interact with other users or to change content, in contrast to non-interactive Web 1.0 sites where users are limited to the passive viewing of information. Although the term Web 2.0 suggests a new version of the World Wide Web, it does not refer to new technology but rather to cumulative changes in the ways software developers and end-users use the Web.

Saturday, July 23, 2011

Hybrid cloud deployment

A hybrid cloud is a mix of both private and public clouds. This allows the organization to gain the benefits of cloud computing only where it makes sense. A common scenario is for an organization to keep its data in its own data center and then use a public cloud service to perform whatever computation tasks are required. A hybrid cloud allows an organization to leverage its current investments in compute infrastructure and augment it with a cloud-based service. Because this model allows organizations to migrate to cloud computing at their own pace, hybrid clouds will be the path most will choose to begin their cloud deployment.
At first glance, it appears that hybrid clouds provide the perfect mix of private and public, but that’s not necessarily the case. Ideally, in a hybrid environment, a portable computing resource, or workload, would be able to move seamlessly between the organization’s private and public cloud service. But the network will need to play a key role to secure and optimize this movement as it traverses the Internet. To do this, the network needs to provide the necessary security as well as QoS.

Deploying private clouds

A private cloud -- also known as an internal cloud -- is the model in which an organization builds a data center using cloud computing principles. The private cloud is really virtualization technology reaching its full potential and vision. A data center designed as a private cloud would look like a public cloud from the perspective of applications and services. The compute resources are “pooled” to create on-demand access to applications and services. The only difference is that the compute resource pools are available from the organization’s own data center rather than over the public Internet. This means that the compute resources need to be virtual and portable, allowing them to move across the network in real time.
Private clouds are highly disruptive to the data center network and break down most traditional network best practices. Traditional network design principles follow a common three-layer topology: an access layer, aggregation layer, and core. In addition, many networks are built at Layer 3 (IP Layer) because this provides the most flexibility. This type of architecture, while ideal for current data centers, will not allow organizations to migrate to a private cloud model.
The three-tier architecture typically has servers connected to a top-of-rack switch, which is then aggregated in an end-of-row switch and connected back to a core switch. In a private cloud, the compute resources need to be fluid and move across a rack, across a data center or even between data centers in as short a time as possible. The multiple hops that the virtual workload would make add a degree of latency that degrades the performance of the private cloud.

Friday, July 22, 2011

Public cloud deployment: Planning for the network impact

Cloud computing promises to be one of the most dramatic transformations in technology since IT shifted from mainframes to client/server systems. However, cloud computing comes in different shapes and sizes, and the type of architecture used will have an impact on the network. Network managers must be involved with the decision about which cloud architecture is used and must understand the impact that private, public or hybrid clouds will have on the network. For the majority of cloud computing deployments, network optimization will be the key determining factor for the performance of “the cloud.”

Public clouds
A public cloud is a compute model where a cloud service provider makes resources such as applications, computing power and storage available to organizations over the Internet. With public clouds, the compute resource is located in the network, and the deploying organization pays only for what it consumes. In this model, if an organization needs more compute power for a short period of time, it can order more from the service provider, utilize the service for as long as necessary and then return to normal operations.
This model appears to be a simple one, but even the most basic of cloud services can fail if the key network considerations are not taken into account along with the deployment. There are two main areas of focus for the network when it comes to public cloud deployments: bandwidth and security.
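On the bandwidth side, a back-of-the-envelope estimate of how long a bulk data transfer will take over the Internet link is a useful planning step. The sketch below is purely illustrative; the data size, link speed and efficiency factor are assumed values, not recommendations:

```python
# Rough transfer-time estimate for moving a workload's data to a public cloud.
# The figures below (data size, link speed, protocol efficiency) are
# illustrative assumptions, not measurements.

def transfer_hours(data_gb, link_mbps, efficiency=0.7):
    """Hours to move `data_gb` over a `link_mbps` WAN link at the given
    effective utilization (accounting for TCP overhead and contention)."""
    bits = data_gb * 8 * 1000**3                      # decimal GB to bits
    seconds = bits / (link_mbps * 1000**2 * efficiency)
    return seconds / 3600

# Example: 500 GB of data over a 100 Mbps Internet link
print(f"{transfer_hours(500, 100):.1f} hours")  # about 15.9 hours
```

Even this crude arithmetic makes the point: moving a few hundred gigabytes over a typical 2011-era business link is measured in hours or days, which is why bandwidth belongs in the deployment plan from the start.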

VMware vCloud Director 1.5: Small but definitive step forward

Along with the release of vSphere 5 -- widely acclaimed as a technical success and widely decried as a potential licensing crisis -- VMware has unveiled vCloud Director 1.5. Neither is available yet, but the details have been released. Users hail vCloud Director 1.5 as a great start toward getting a viable private cloud from VMware.
Overall, however, vCD deployment remains very low, with lighthouse cases in some enterprise test and development environments and most traction at service providers. Private-organization adoption is sparse; one person listed by VMware as a vCloud Director customer and contacted for this story did not know whether their organization was using the product, and thought a former graduate student might have experimented with it at some point.
That might begin to change since, most importantly, deployment options for vCloud Director (vCD) have changed. It still needs to be deployed to a dedicated server host running RHEL, but it now supports Microsoft SQL 2005 and 2008 databases for a backend, and VMware promises more database support to come.
"To be honest, the biggest thing that stopped us going forward was the Oracle licensing," said Gurusimran Khalsa, systems administrator for the Human Services Division of the State of New Mexico. He said his agency had already endured several years of consolidation and virtualization and vCloud Director looked attractive. HSD even bought a few licenses to experiment with but never used them because the requirement for Oracle was something the division had successfully dodged in the past and wasn’t about to sacrifice for vCD.

A really useless Cloud Computing Index

Investment firm First Trust Portfolios' Cloud Computing Index is an interesting sign of the times in cloud computing and, according to many analysts, the surest sign that we're in a cloud bubble.

The bubble hype is neither here nor there, but looking at the description of the fund, the key question is whether its top 10 holdings appropriately reflect leaders and bellwethers of the cloud computing landscape. Here's the list as of July 21:
Google, Inc.
Amazon.com, Inc.
Open Text Corporation
VMware, Inc.
Salesforce.com, Inc.
Blackboard, Inc.
Netflix, Inc.
Teradata Corporation
TIBCO Software Inc.
F5 Networks, Inc.
A good number of companies here do represent trends in cloud computing, but others don't exactly fit the bill.

HP CloudSystem: What exactly is it?

HP claims to have put private cloud, hybrid cloud and possibly public cloud in its pocket to sell to enterprises and service providers. But CloudSystem, as it is known, isn't so much a platform as a collection of intersecting HP products and roadmaps to get cloud capabilities -- elastic, self-service provisioning, storage and metered use -- into your data center.
There's CloudSystem Matrix, CloudSystem Enterprise, CloudStart Solution, Cloud Service Automation, Cloud Service Delivery, CloudMaps, Cloud Matrix Operating Environment, CloudSystem Security, CloudAgile, and on and on. HP suddenly has a lot of stuff stamped "cloud." What's more, there's no shrink-wrap; you pick bits here and there, and HP helps you install and tune it. Fortunately, there's at least one example in the wild to see what actually constitutes an "HP cloud."
"Yes, they do have an awful lot to look at," said Christian Teeft, VP of engineering for data center operator and services provider Latisys. Teeft said Latisys may well have the first live CloudSystem environment at an HP customer. Latisys is using the system to sell cloud infrastructure services, which come in "private" and "semi-private" options; dedicated clouds for customers, as it were.
Teeft said Latisys deliberately leaned away from startups and smaller cloud platform vendors, talking to enterprise vendors and other service providers like NewScale, Joyent, BMC and others about automation, but HP had the country club marquee customers and Teeft liked the integration with HP's hardware.
"There are synergies with HP around hardware blade systems and ISS," he said.
Industry Standard Server (ISS) Technology Communications is HP's way of distributing technology guides and techniques to users, a bit like Microsoft's TechNet.

Thursday, July 21, 2011

What 'rogue' cloud usage can teach IT

The use of cloud computing to work around IT shines a bright light on what needs to change

Many people in enterprises use cloud computing -- not because it's a more innovative way to do storage, compute, and development, but because it's a way to work around the IT bureaucracy that exists in their enterprise. This has led to more than a few confrontations. But it could also lead to more productivity.
One of the techniques I've used in the past is to lead through ambition. If I believed things were moving too slowly, rather than bring down the hammer, I took on some of the work myself. Doing so shows others that the work both has value and can be done quickly. Typically, they mimic the ambition.
The same thing is occurring in employee usage of cloud services to get around IT. As divisions within companies learn they can allocate their own IT resources in the cloud and, thus, avoid endless meetings and pushback from internal IT, they are showing IT that it's possible to move fast and quickly deliver value to the business. They can even do so with less risk.
Although many in traditional IT shops view this as a clear threat and in many cases reprimand the "rogue" parties, they should reflect on their own inefficiencies that have taken hold over the years. Moreover, the use of cloud computing shines a brighter light on how much more easily IT could do things in the cloud. It becomes a clear gauge of the difference between what IT can do now and what technology itself can achieve when not held back.

Hybrid clouds: Three routes to implementation

Hybrid clouds come into play when traditional storage systems or internal cloud storage are supplemented with public cloud storage. To make it work, however, certain key requirements must be met. First and foremost, the hybrid storage cloud must behave like homogeneous storage. Except for maybe a small delay when accessing data on the public cloud, it should otherwise be transparent. Mechanisms have to be in place that keep active and frequently accessed data on-premises and push inactive data into the cloud. Hybrid clouds usually depend on nimble policy engines to define the circumstances when data gets moved into or pulled back from the cloud.
There are currently three routes you can take to implement hybrid clouds:
  • Via cloud storage software that straddles on-premises and public cloud storage
  • Via cloud storage gateways
  • Through application integration
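The policy-driven data movement described above can be sketched in a few lines. Everything here is hypothetical -- the 90-day inactivity threshold and the catalog entries are stand-ins for whatever rules a real policy engine would apply:

```python
from datetime import datetime, timedelta

# Minimal sketch of a hybrid-cloud tiering policy: keep recently accessed
# data on-premises, push inactive data to public cloud storage. The 90-day
# threshold is an illustrative assumption, not a product default.

INACTIVE_AFTER = timedelta(days=90)

def tier_for(last_access, now):
    """Return 'on_premises' for active data, 'public_cloud' otherwise."""
    return "public_cloud" if now - last_access > INACTIVE_AFTER else "on_premises"

def apply_policy(catalog, now):
    """catalog: {path: last-access datetime} -> {path: target tier}"""
    return {path: tier_for(ts, now) for path, ts in catalog.items()}

now = datetime(2011, 7, 22)
catalog = {
    "/finance/q2-report.xls": datetime(2011, 7, 1),    # recently accessed
    "/archive/2009-logs.tgz": datetime(2009, 12, 31),  # long inactive
}
print(apply_policy(catalog, now))
```

A real policy engine would also weigh access frequency, data sensitivity and retrieval cost, but the shape is the same: a rule that classifies data, and a mover that acts on the classification.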

Wednesday, July 20, 2011

Why Microsoft's 'cloud bribes' are the right idea

Although many view cloud migration fees as payola, the charges are exactly what's needed from cloud computing companies

InfoWorld's Woody Leonhard uncovered the fact that Microsoft is paying some organizations to adopt its Office 365 cloud service, mostly in funds that Microsoft earmarks for their customers' migration costs and other required consulting. Although this raised the eyebrows of some bloggers -- and I'm sure Google wasn't thrilled -- I think this is both smart and ethical. Here's why.
Those who sell cloud services, which now includes everyone, often don't consider the path from the existing as-is state to the cloud. Even when executed properly, that migration involves a huge amount of work and cost, which can erase much of the monetary advantage of moving to the cloud.
Microsoft benefits from subsidizing the switch because it can capture a customer that will use that product for many years. Thus, the money spent to support migration costs will come back 20- or 30-fold over the life of the product. This is the cloud computing equivalent to free installation of a cable TV service.
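As a back-of-the-envelope illustration of that payback (all dollar figures here are invented, not Microsoft's actual numbers):

```python
# Hypothetical payback on a one-time migration subsidy. The point is only
# that a subsidy is small next to multi-year subscription revenue.

subsidy = 15_000                    # one-time migration funds for a customer
seats, per_seat_month = 500, 10     # assumed subscription pricing
years = 5                           # assumed customer lifetime

lifetime_revenue = seats * per_seat_month * 12 * years
print(lifetime_revenue / subsidy)   # 20.0 with these assumptions
```

Adjust any of the assumptions and the multiple moves, but the asymmetry remains: a one-time payment buys years of recurring revenue.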

Beware the oversimplification of cloud migration

Many cloud vendors' oversimplified approaches are more likely to hurt than help

I get a pitch a day from [enter any PR firm here]. The details vary, but the core idea is the same: "We have defined the steps to migrate to the cloud, so follow us."
To be honest with you, I often bite. If nothing else, I'll take a look at the press release, white paper, or website. Often there is a list of fairly obvious things to do, such as "define the business objectives" and "define security," but the details are nowhere to be found. Why? Because the details are hard and complex, and the vendors would rather make their steps seem approachable to the IT rank and file than ensure they will actually work.
Moving applications, systems, and even entire architectures to cloud-based platforms is no easy feat. In many cases it requires a core understanding of data, processes, APIs, services, and other aspects about the existing state of IT before planning the move. Yes, the details.

Building an IT infrastructure roadmap to the cloud

Infrastructure is where the public cloud model can offer the biggest benefit for startups. Public clouds allow startups to spin up their businesses quickly without draining limited cash flow to build their own internal IT infrastructure.
Some employees may be using the cloud right now without even knowing it. This represents a hidden "iceberg" of cost, where employees may be spending their organization’s money on virtual machines (VMs) with providers like Amazon EC2. This circumvents not only many organizations' standard operating procedures but also their accounting and auditing systems.
But there is an unhealthy assumption that by merely adopting a public cloud strategy, organizations will automatically save money. This is not necessarily true.
For two major reasons -- to guarantee that commercially sensitive content remains private and to ensure the application of proper accountancy practices -- it may be time for organizations to legitimize public cloud for certain types of work. Costs will be incurred, but they should not be hidden from the business like a dirty little secret.

Tools to unlock private cloud's potential

Many enterprises either already have a private cloud, plan to build one or at least have considered in-house cloud as an option. If you're on the private cloud bandwagon but remain unfamiliar with how to extract its full benefits, you're not alone.
This tutorial looks at private cloud computing tools that unleash the power of automation and orchestration, monitoring and service catalogs. While these features are important, they're also not yet fully understood in the context of virtualized, or private cloud, environments.
Enabling orchestration and automation
Although automation and orchestration are often used interchangeably, there is a subtle difference between the two terms. Automation is usually associated with a single task, and orchestration is associated with a process that involves workflow around several automated tasks. If you're looking to better understand the value and importance of automation (and orchestration) in a private cloud environment, one of the best ways is to contrast server provisioning in a traditional data center with virtual server provisioning in a virtualized environment.
Server virtualization can reduce the time needed to provision servers, but it does not decrease the time associated with installation. IT staff use labor-intensive management tools and manual scripts to control and manage the infrastructure, and they cannot keep up with the continuous stream of configuration changes needed to maintain access and security in step with a private cloud's dynamic provisioning and virtual machine (VM) movement. This is why automating these processes is an important element of a private cloud.
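The automation/orchestration distinction can be sketched in a few lines: each function below stands in for a single automated task, and the provisioning function is the orchestration workflow that sequences them (all names are hypothetical):

```python
# Sketch: automation = one scripted task; orchestration = a workflow that
# sequences several automated tasks. Each task below is a placeholder for
# a real provisioning step.

def clone_vm(name):            return f"{name}: VM cloned from template"
def configure_network(name):   return f"{name}: VLAN and IP assigned"
def register_monitoring(name): return f"{name}: added to monitoring"

def provision_server(name):
    """Orchestration: run the automated tasks in order."""
    steps = [clone_vm, configure_network, register_monitoring]
    return [step(name) for step in steps]

for line in provision_server("web-01"):
    print(line)
```

In a real private cloud the workflow would also handle approvals, error handling and rollback, but the pattern is the same: individual automated tasks composed into a repeatable process.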

Improving cloud interoperability with application migration tools

Cloud interoperability goes beyond matters like cloud application programming interfaces (APIs) and virtual machine image format conversion. It is primarily about application migration: moving applications back and forth between private and public clouds, or from one public cloud to another. Application migration among clouds lets users select best-of-breed cloud technology and avoid lock-in, but it isn't possible without tools that facilitate communication between different cloud vendors and services.
Each cloud provider decides which hypervisors, storage models, network models, management tools and processes they are going to work with. This means limited control of the environment for developing and deploying applications; the decisions made by a cloud provider affect what you can and cannot do in a cloud.
Even if there were an open, standard cloud API that all vendors used, only part of the problem would be solved. Only relatively simple applications can be migrated to a target cloud without some difficulty. Most depend on services such as a directory infrastructure, identity management and databases, and every component of the application must be identified and reproduced or replaced in the target cloud. This is true for all application dependencies.
But there are some good options available to reduce the issues associated with cloud interoperability and application migration. CloudSwitch, Racemi DynaCenter 4.0 and Citrix NetScaler Cloud Bridge are three tools that help move applications among clouds. These tools do not require modifications to the applications, and they allow applications to be managed as though they were still running in a private cloud.