Thursday, December 15, 2011

Comparing open source cloud platforms: OpenStack versus Eucalyptus

Rackspace and NASA recently launched OpenStack, a cloud software stack that has already generated significant buzz in open source and cloud computing circles. What it offers, in a nutshell, is an entryway for hosting providers that want to provide a cloud service to their customers, much like Parallels Virtuozzo opened up virtual private servers to Web hosting companies.

OpenStack offers the promise of do-it-yourself clouds in a secure, private test lab before moving to either a private cloud or a public cloud, along with insight into the real security issues behind cloud computing and Infrastructure as a Service (IaaS). OpenStack has been hailed as the most significant development in cloud computing to date. What doesn't it offer?
A tie-in to the number-one cloud provider, Amazon, for one. For that, you'd have to turn to Eucalyptus, the other open source cloud computing product on the market. Eucalyptus has been around for nearly three years, a long time in terms of IaaS products. It grew out of a research project in the Computer Science Department at the University of California, Santa Barbara, and became a for-profit business in 2009.

What admins need to know about Windows Azure services


Microsoft is famous for saying it’s "all in" with cloud computing, and for developers, there’s been a lot of information, code and features to sink their teeth into. But as an IT administrator, the cloud represents something outside of your boundaries -- something your company pays for, but you don’t directly control.
The truth is, there hasn’t been a lot of really good messaging around what Windows Azure means for IT professionals. So let’s take a look at the Azure lifecycle from an IT perspective and try to piece together how the worlds of cloud computing and IT departments come together.
First, understand the essentials of Azure
Windows Azure, put very succinctly, is an environment run by Microsoft that lets developers create applications that will run anywhere without worrying about things like specifying hardware, dealing with demand, or acquiring management teams to take care of the bells and whistles. Azure basically abstracts the layers of service provisioning and computer management from the developer, so he or she can write an application to the Azure platform but not be concerned with resources, machines, state, and so on.
Companies supposedly benefit because they pay only for what they use (i.e., the time Microsoft's resources are used on their behalf), which minimizes the capital costs of running their Internet-connected applications.
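As a rough illustration of that pay-for-what-you-use argument, here is a minimal sketch in Python that compares provisioning for peak load against paying only for the hours actually consumed. The hourly rate and usage figures are hypothetical placeholders, not actual Azure prices.

```python
# Rough illustration of fixed-capacity vs. pay-per-use costs.
# The rate and usage figures below are hypothetical, not actual Azure pricing.

HOURLY_RATE_PER_INSTANCE = 0.12   # assumed $/hour for one compute instance
HOURS_PER_MONTH = 730

def fixed_capacity_cost(peak_instances):
    """Cost of keeping peak capacity running around the clock."""
    return peak_instances * HOURLY_RATE_PER_INSTANCE * HOURS_PER_MONTH

def pay_per_use_cost(instance_hours_used):
    """Cost when billed only for the instance-hours actually consumed."""
    return instance_hours_used * HOURLY_RATE_PER_INSTANCE

# Example: an app that needs 10 instances at peak but averages 3.
print(fixed_capacity_cost(10))                 # always-on peak capacity
print(pay_per_use_cost(3 * HOURS_PER_MONTH))   # average usage, billed as consumed
```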

Microsoft sets Azure free; Appliance flutters to Fujitsu


Almost a full year after a giddy, typically early Microsoft announcement, Redmond's Platform as a Service technology has left Microsoft's data centers and is running at giant IT supplier Fujitsu.
Microsoft Azure is available on Fujitsu hardware in the company's Tokyo data center, and customers can interact with it just like Microsoft's U.S.-based Azure services. The Fujitsu Global Cloud Platform/A5 (FGCP/A5 or Fujitsu Azure) will be live in August; users will have to sign up with Fujitsu to access the service, and pricing is expected to be the same as current Microsoft Azure pricing.

Users are by and large enthused, since this announcement cuts the Gordian Knot of enterprise IT security and public cloud services. Fujitsu says that, as a longstanding colocation and hosting provider, it can assure users their data is somewhere familiar, although that does not address concerns about multi-tenancy. Fujitsu Azure users will share data space on the platform within Fujitsu's environment, just as Microsoft Azure users do in Microsoft's data centers.
"This latest development is a significant milestone in making Windows Azure a ubiquitous enterprise solution," said Brian Fino, an IT consultant specializing in Microsoft development.
He said there was an intense dichotomy in the enterprise at the moment, between the almost-absurd ease with which Azure could be used and the need for IT shops to maintain control over infrastructure and data, still by far the biggest roadblock to cloud.

Hybrid cloud: The best of public and private clouds


While the term “private cloud” means custom cloud technology built for an enterprise, most enterprises believe their own data centers already provide private cloud services. Since these companies also expect to adopt at least some public cloud services, the next step clearly is to build a hybrid cloud.
But if hybridization isn’t a partnership between a public cloud and a private cloud built on common technologies, how does it happen? Companies expect workers’ application experiences to be independent of where an application runs, which means either the experiences or the applications must be integrated in a hybrid cloud regardless of how the “private” portion is created.
A hybrid cloud’s success begins by selecting the right integration method.
Building a hybrid cloud with a front-end application
The dominant strategy in creating a hybrid cloud that ties traditional data centers with public cloud services involves the use of a front-end application. Most companies have created Web-based front-end applications that give customers access to order entry and account management functions, for example. Many companies have also used front-end application technologies from vendors like Citrix Systems Inc. to assemble the elements of several applications into a single custom display for end users. You can use either of these front-end methods to create a hybrid cloud.
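To make the front-end approach concrete, here is a minimal sketch (Python standard library only; the URLs, endpoints and data fields are hypothetical) of a front end that presents one view to the user while pulling account data from an on-premises system and order history from a public cloud service.

```python
# Minimal sketch of a front-end application that composes data from an
# on-premises backend and a public cloud backend into a single response.
# The URLs and endpoints below are hypothetical placeholders.
import json
from urllib.request import urlopen

ON_PREM_BASE = "http://internal.example.local/api"   # private data center
CLOUD_BASE = "https://cloud.example.com/api"         # public cloud service

def fetch_json(url):
    with urlopen(url, timeout=5) as resp:
        return json.load(resp)

def customer_view(customer_id):
    """Combine on-premises account data with cloud-hosted order history."""
    account = fetch_json(f"{ON_PREM_BASE}/accounts/{customer_id}")
    orders = fetch_json(f"{CLOUD_BASE}/orders?customer={customer_id}")
    return {"account": account, "orders": orders}
```

The user sees one application; where each piece of data actually lives is hidden behind the front end, which is the point of this integration method.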

Cloud predictions for enterprise IT in 2012


Ask around and you’ll hear it over and over again: Enterprise IT shops are worried about using the cloud. They’re concerned about the security, privacy and governance risks of moving their data and applications outside their four walls. Cloud vendor lock-in and job security are also major worries for these guys, though few IT pros will admit to the latter.  
Next year will be no different. But instead of keeping the cloud at arm's length, which many enterprise IT shops have done this year, 2012 will be the year for taking the plunge. If you’ve been sitting on the fence, here are some thoughts on what to expect next year that might push you over the edge.

The following are my predictions based on current trends and where I see them leading us. Just some observations; I have no crystal ball.
Cloud hype ends, disillusion begins
After several years of megalomania, cloud vendors finally settle down and stop trying to prove cloud computing is real. Enterprise IT gets it: We’re looking at the next model for IT delivery and consumption. The adoption process becomes a long-term “roadmap” and a slow and safe transition touching every part of the business. It’s going to be a bumpy road. Watch out for the vendors with professional services arms who are rubbing their hands together with glee, claiming to show you the way forward. And channel guys: Your time is now!

Checklist: Managing applications in the cloud


Moving applications to the cloud is not for the faint of heart. Issues can crop up that may force you to re-architect the application, compliance requirements can create roadblocks, and bandwidth problems can occur if your cloud provider does not support low-level networking services such as multicasting.
After you’ve assessed which applications can run in a public cloud, there are other factors to consider -- configuration, data migration and monitoring. What are some of the most common configuration tasks you need to keep in mind when migrating an app to the cloud? This checklist outlines key points:

Tuesday, November 29, 2011

Mashup (web application hybrid)

In Web development, a mashup is a Web page or application that uses and combines data, presentation or functionality from two or more sources to create new services. The term implies easy, fast integration, frequently using open APIs and data sources to produce enriched results that were not necessarily the original reason for producing the raw source data.
The main characteristics of a mashup are combination, visualization, and aggregation; the point is to make existing data more useful, for both personal and professional purposes.
To be able to access the data of other services at all times, mashups are generally client applications or hosted online. Since 2010, two major mashup vendors have added support for hosted deployment based on cloud computing -- Internet-based computing in which shared resources, software, and information are provided to computers and other devices on demand, much like the electricity grid.
In recent years, more and more Web applications have published APIs that enable software developers to easily integrate data and functions instead of building them themselves. Mashups can be considered to have an active role in the evolution of social software and Web 2.0. Mashup composition tools are usually simple enough to be used by end users. They generally do not require programming skills and instead support visual wiring of GUI widgets, services and components together. These tools therefore contribute to a new vision of the Web, in which users are able to contribute.
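In practice, a bare-bones mashup can be a few lines of glue code. The sketch below (Python; the two data sources are hypothetical stand-ins for real open APIs) combines records from two services into one enriched result, which is the combination-and-aggregation idea in miniature.

```python
# Bare-bones mashup: combine data from two (hypothetical) open APIs
# into one enriched result -- combination and aggregation in miniature.
import json
from urllib.request import urlopen

def get(url):
    with urlopen(url, timeout=5) as resp:
        return json.load(resp)

def city_mashup(city):
    # Hypothetical endpoints standing in for real open data sources.
    weather = get(f"https://api.weather.example/v1/current?city={city}")
    events = get(f"https://api.events.example/v1/upcoming?city={city}")
    # Aggregate both sources under a single view of the city.
    return {
        "city": city,
        "forecast": weather.get("summary"),
        "events": [e["name"] for e in events.get("items", [])],
    }
```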
The term mashup is also used to describe a remix of digital data.
SOURCE: Wikipedia

Tuesday, October 18, 2011

Near field communication

Near field communication, or NFC, allows for simplified transactions, data exchange, and wireless connections between two devices in close proximity to each other, usually by no more than a few centimeters. It is expected to become a widely used system for making payments by smartphone in the United States. Many smartphones currently on the market already contain embedded NFC chips that can send encrypted data a short distance ("near field") to a reader located, for instance, next to a retail cash register. Shoppers who have their credit card information stored in their NFC smartphones can pay for purchases by waving their smartphones near or tapping them on the reader, rather than using the actual credit card. Co-invented by NXP Semiconductors and Sony in 2002, NFC technology is being added to a growing number of mobile handsets to enable mobile payments, as well as many other applications.
The Near Field Communication Forum (NFC Forum) formed in 2004 promotes sharing, pairing, and transactions between NFC devices and develops and certifies device compliance with NFC standards. A smartphone or tablet with an NFC chip could make a credit card payment or serve as keycard or ID card. NFC devices can read NFC tags on a museum or retail display to get more information or an audio or video presentation. NFC can share a contact, photo, song, application, or video or pair Bluetooth devices.
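For a sense of what such a tag exchange carries at the byte level, here is a rough Python sketch that parses a single short-record NDEF text record, the format commonly stored on NFC tags. It handles only the common short-record case and simplifies the full NDEF specification.

```python
# Parse one short-record NDEF text record (the common case for NFC text tags).
# Simplified: assumes the short-record (SR) flag is set and no ID field is present.
def parse_ndef_text_record(data: bytes) -> str:
    flags = data[0]
    tnf = flags & 0x07               # Type Name Format (0x01 = NFC well-known type)
    short_record = bool(flags & 0x10)
    if not short_record:
        raise ValueError("sketch handles short records only")
    type_length = data[1]
    payload_length = data[2]
    type_field = data[3:3 + type_length]
    payload = data[3 + type_length:3 + type_length + payload_length]
    if tnf != 0x01 or type_field != b"T":
        raise ValueError("not a well-known text record")
    status = payload[0]
    lang_len = status & 0x3F         # low 6 bits: language-code length
    encoding = "utf-16" if status & 0x80 else "utf-8"
    return payload[1 + lang_len:].decode(encoding)

# Example: a UTF-8 "en" text record containing "Hi"
print(parse_ndef_text_record(bytes([0xD1, 0x01, 0x05, 0x54, 0x02]) + b"enHi"))
```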
The 140 NFC Forum members include LG, Nokia, Huawei, HTC, Motorola, NEC, RIM, Samsung, Sony Ericsson, Toshiba, AT&T, Sprint, Rogers, SK, Google, Microsoft, PayPal, Visa, Mastercard, American Express, Intel, TI, Qualcomm, and NXP.
SOURCE: Wikipedia

Monday, September 19, 2011

Developments in the Azure and Windows Server 8 pairing

The shroud of secrecy surrounding Windows Server 8 and Azure has been lifted. What lies behind it is greater symmetry between the cloud computing and virtualized HA infrastructures, improved storage and an Azure toolkit that promises to help enterprises easily develop an Azure service and deploy it to end users.
This means DevOps teams will need to gain expertise with Windows Server 8's new features to obtain maximum return on investment with public cloud computing as well as with private and hybrid clouds.
Windows Server 8 ultimately will provide the underpinnings of the Windows Azure platform, with the intent of democratizing high-availability (HA) clusters and pushing the "scale-up envelope" with features previously reserved for the high-performance computing editions of Windows Server 2008 R2.
Windows Server 8 will include new alternative disk storage architectures called Storage Pools and Spaces, Satya Nadella, president of Microsoft's servers and tools business, said here in his BUILD conference keynote. Storage Pools aggregate commodity disk drives into isolated JBOD (Just a Bunch of Disks) units and attach them to Windows for simplified management; Storage Spaces do the same for virtual machines.

Microsoft sharpens Windows Server 8, Azure cloud story

If IT managers in Windows shops had any doubts about Windows Server 8 and Azure serving as fundamental building blocks for their company’s cloud strategy, they don’t have any now.
From dawn to dusk at the BUILD conference here this week, Microsoft executives pounded home the message that Server 8 and Azure will have a hand-in-glove relationship that will anchor most of the company’s enterprise platforms and applications.
They also consistently pushed the cloud platform as a scalable, highly available option for application developers and IT shops, emphasizing the company is also building features into the server product that are informed by its experience engineering and managing the hosting service.

Thursday, August 4, 2011

Securing virtual machines in the cloud

Choosing protection for a virtual infrastructure is a lot like buying an antivirus product for the Mac OS: most people would wonder why you bothered. Nonetheless, as more IT shops migrate their servers to virtual machines and cloud-based environments, it is only a matter of time before protecting these resources becomes considerably more important.
However, you can't just install your firewall or antivirus software on a cloud-based virtual machine (VM). Physical firewalls aren't designed to inspect and filter the vast amount of traffic originating from a hypervisor running 10 virtualized servers. Because VMs can start, stop and move from hypervisor to hypervisor at the click of a button, whatever protection you've chosen has to handle these activities with ease. Plus, as the number of VMs increases in the data center, it becomes harder to account for, manage and protect them. And if unauthorized people gain access to the hypervisor, they can take advantage of the lack of controls and modify all the VMs housed there.
As enterprises move toward virtualizing more of their servers and data center infrastructure, they need specialized protective technologies that match this environment. Luckily, numerous vendors have stepped up to this challenge, although the level of protection is still nowhere near the depth and breadth available in protective products for physical servers.

Three questions to ask when considering Microsoft Azure

Cloud computing is, for some, a means of escaping from the clutches of traditional computer and software vendors. Most enterprises realize that the value of cloud will depend on how well services integrate with their own IT commitments and investments.
Because Microsoft is so much a part of internal IT, Microsoft's cloud approach is especially important to users. Many will find it compelling; others may decide it's impossible to adopt. Which camp are you in?
The foundation of Microsoft Azure's value proposition is the notion that users must design their enterprise IT infrastructure for peak load and highly reliable operation, both of which waste a lot of budget dollars. The Azure solution is to draw on cloud computing to fill processing needs that exceed the long-term average. It also backs up application resources to achieve the necessary levels of availability if normal data center elements can't provide those levels.
This means that Azure, unlike most cloud architectures, has to be based on elastic workload sharing between the enterprise and the cloud. Microsoft accomplishes this by adopting many service-oriented architecture (SOA) concepts, including workflow management (the Azure Service Bus).
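A hedged sketch of that elastic workload-sharing idea is below (Python; the dispatch functions and capacity figure are hypothetical placeholders, not part of Azure itself): keep work on-premises until local capacity is saturated, then spill the overflow to the cloud.

```python
# Sketch of elastic workload sharing ("cloudbursting"): run work on-premises
# until local capacity is saturated, then spill the overflow to the cloud.
# The dispatch functions and capacity figure are hypothetical placeholders.

ON_PREM_CAPACITY = 100   # assumed number of jobs the data center can run at once

def dispatch_on_prem(job):
    print(f"running {job} in the data center")

def dispatch_to_cloud(job):
    print(f"running {job} in the public cloud")

def schedule(jobs, current_on_prem_load):
    for job in jobs:
        if current_on_prem_load < ON_PREM_CAPACITY:
            dispatch_on_prem(job)
            current_on_prem_load += 1
        else:
            dispatch_to_cloud(job)   # demand beyond the long-term average

schedule([f"job-{i}" for i in range(5)], current_on_prem_load=98)
```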

Tuesday, July 26, 2011

Why is cloud computing so hard to understand?

Why is cloud computing so hard to understand? It would be an equally fair question to ask why today’s Information Technology is so hard to understand. The answer would be because it covers the entire range of business requirements, from back-office enterprise systems to various ways such systems can be implemented. Cloud computing covers an equal breadth of both technology and, equally important, business requirements. Therefore, many different definitions are acceptable and fall within the overall topic.
But why use the term "cloud computing" at all? It originates from the work to develop easy-to-use consumer IT (Web 2.0) and its differences from existing difficult-to-use enterprise IT systems.
A Web 2.0 site allows its users to interact with other users or to change content, in contrast to non-interactive Web 1.0 sites where users are limited to the passive viewing of information. Although the term Web 2.0 suggests a new version of the World Wide Web, it does not refer to new technology but rather to cumulative changes in the ways software developers and end-users use the Web.

Saturday, July 23, 2011

Hybrid cloud deployment

A hybrid cloud is a mix of both private and public clouds. This allows the organization to gain the benefits of cloud computing only where it makes sense. A common scenario is for an organization to keep its data in its own data center and then use a public cloud service to perform whatever computation tasks are required. A hybrid cloud allows an organization to leverage its current investments in compute infrastructure and augment it with a cloud-based service. Because this model allows organizations to migrate to cloud computing at their own pace, hybrid clouds will be the path most will choose to begin their cloud deployment.
At first glance, it appears that hybrid clouds provide the perfect mix of private and public, but that’s not necessarily the case. Ideally, in a hybrid environment, a portable computing resource, or workload, would be able to move seamlessly between the organization’s private and public cloud service. But the network will need to play a key role to secure and optimize this movement as it traverses the Internet. To do this, the network needs to provide the necessary security as well as QoS.

Deploying private clouds

A private cloud -- also known as an internal cloud -- is the model in which an organization builds a data center using cloud computing principles. The private cloud is really virtualization technology reaching its full potential and vision. A data center designed as a private cloud would look like a public cloud from the perspective of applications and services. The compute resources are “pooled” to create on-demand access to applications and services. The only difference is that the compute resource pools are available from the organization’s own data center rather than over the public Internet. This means that the compute resources need to be virtual and portable, allowing them to move across the network in real time.
Private clouds are highly disruptive to the data center network and break down most traditional network best practices. Traditional network design principles follow a common three-layer topology: an access layer, aggregation layer, and core. In addition, many networks are built at Layer 3 (IP Layer) because this provides the most flexibility. This type of architecture, while ideal for current data centers, will not allow organizations to migrate to a private cloud model.
The three-tier architecture typically has servers connected to a top-of-rack switch, which is then aggregated in an end-of-row switch and connected back to a core switch. In a private cloud, the compute resources need to be fluid and move across a rack, across a data center or even between data centers in as short a time as possible. The multiple hops that the virtual workload would make add a degree of latency that degrades the performance of the private cloud.

Friday, July 22, 2011

Public cloud deployment: Planning for the network impact

Cloud computing promises to be one of the most dramatic transformations in technology since IT shifted from mainframes to client server systems. However, cloud computing comes in different shapes and sizes, and the type of architecture used will have an impact on the network. Network managers must be involved with the decision about which cloud architecture is used and must understand the impact that private, public or hybrid clouds will have on the network. For the majority of cloud computing deployments, network optimization will be the key determining factor for the performance of “the cloud.”

Public clouds
A public cloud is a compute model where a cloud service provider makes resources such as applications, computing power and storage available to organizations over the Internet. With public clouds, the compute resource is located in the network, and the deploying organization pays only for what it consumes. In this model, if an organization needs more compute power for a short period of time, it can order more from the service provider, utilize the service for as long as necessary and then return to normal operations.
This model appears to be a simple one, but even the most basic of cloud services can fail if the key network considerations are not taken into account along with the deployment. There are two main areas of focus for the network when it comes to public cloud deployments: bandwidth and security.
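As a quick illustration of why bandwidth deserves attention before any public cloud deployment, the sketch below (Python; the data size, link speed and utilization figures are hypothetical) estimates how long an initial data migration would take over a given WAN link.

```python
# Back-of-the-envelope estimate of how long an initial data migration to a
# public cloud would take over a WAN link. All figures are hypothetical.

def transfer_days(data_gb, link_mbps, utilization=0.7):
    """Days needed to move data_gb over a link_mbps connection, assuming
    only `utilization` of the link can be devoted to the migration."""
    bits = data_gb * 8 * 1024 ** 3
    seconds = bits / (link_mbps * 1_000_000 * utilization)
    return seconds / 86_400

# Example: 5 TB of data over a 100 Mbps link at 70% utilization.
print(f"{transfer_days(5 * 1024, 100):.1f} days")
```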

VMware vCloud Director 1.5: Small but definitive step forward

Along with the release of vSphere 5, widely acclaimed as a technical success and a potential licensing crisis, VMware has unveiled vCloud Director 1.5. Neither is available yet, but the details have been released. Users hail it as a great start to getting a viable private cloud from VMware.
Overall, however, vCD deployment remains very low, with lighthouse cases in some enterprise test and dev environments and most traction at service providers. Private organization adoption is sparse; one person listed as a vCloud Director customer by VMware and contacted for this story did not know if they were using the product and thought a former graduate student may have experimented with it at some point.
That might begin to change since, most importantly, deployment options for vCloud Director (vCD) have changed. It still needs to be deployed to a dedicated server host running RHEL, but it now supports Microsoft SQL 2005 and 2008 databases for a backend, and VMware promises more database support to come.
"To be honest, the biggest thing that stopped us going forward was the Oracle licensing," said Gurusimran Khalsa, systems administrator for the Human Services Division of the State of New Mexico. He said his agency had already endured several years of consolidation and virtualization and vCloud Director looked attractive. HSD even bought a few licenses to experiment with but never used them because the requirement for Oracle was something the division had successfully dodged in the past and wasn’t about to sacrifice for vCD.

A really useless Cloud Computing Index

Investment firm First Trust Portfolios' Cloud Computing Index is an interesting sign of the times in cloud computing and, according to many analysts, the surest sign that we're in a cloud bubble.

The bubble hype is neither here nor there, but looking at the description of the fund, the key question is whether its top 10 holdings appropriately reflect leaders and bellwethers of the cloud computing landscape. Here's the list as of July 21:
Google, Inc.
Amazon.com, Inc.
Open Text Corporation
VMware, Inc.
Salesforce.com, Inc.
Blackboard, Inc.
Netflix, Inc.
Teradata Corporation
TIBCO Software Inc.
F5 Networks, Inc.
A good number of companies here do represent trends in cloud computing, but others don't exactly fit the bill.

HP CloudSystem: What exactly is it?

HP claims to have put private cloud, hybrid cloud and possibly public cloud in its pocket to sell to enterprises and service providers. But CloudSystem, as it is known, isn't so much a platform as a collection of intersecting HP products and roadmaps to get cloud capabilities -- elastic, self-service provisioning, storage and metered use -- into your data center.
There's CloudSystem Matrix, CloudSystem Enterprise, CloudStart Solution, Cloud Service Automation, Cloud Service Delivery, CloudMaps, Cloud Matrix Operating Environment, CloudSystem Security, CloudAgile, and on and on. HP suddenly has a lot of stuff stamped "cloud." What's more, there's no shrink-wrap; you pick bits here and there, and HP helps you install and tune it. Fortunately, there's at least one example in the wild to see what actually constitutes an "HP cloud."
"Yes, they do have an awful lot to look at," said Christian Teeft, VP of engineering for data center operator and services provider Latisys. Teeft said Latisys may well have the first live CloudSystem environment at an HP customer. Latisys is using the system to sell cloud infrastructure services, which come in "private" and "semi-private" options; dedicated clouds for customers, as it were.
Teeft said Latisys deliberately leaned away from startups and smaller cloud platform vendors, talking to enterprise vendors and other service providers like NewScale, Joyent, BMC and others about automation, but HP had the country club marquee customers and Teeft liked the integration with HP's hardware.
"There are synergies with HP around hardware blade systems and ISS," he said.
Industry Standard Server Technology Communications (ISS technology communications) is HP's way of distributing technology guides and techniques to users, a bit like Microsoft's TechNet.

Thursday, July 21, 2011

What 'rogue' cloud usage can teach IT

The use of cloud computing to work around IT shines a bright light on what needs to change

Many people in enterprises use cloud computing -- not because it's a more innovative way to do storage, compute, and development, but because it's a way to work around the IT bureaucracy that exists in their enterprise. This has led to more than a few confrontations. But it could also lead to more productivity.
One of the techniques I've used in the past is to lead through ambition. If I believed things were moving too slowly, rather than bring the hammer down, I took on some of the work myself. Doing so shows others that the work both has value and can be done quickly. Typically, they mimic the ambition.
The same thing is occurring in employee usage of cloud services to get around IT. As divisions within companies learn they can allocate their own IT resources in the cloud and, thus, avoid endless meetings and pushback from internal IT, they are showing IT that it's possible to move fast and quickly deliver value to the business. They can even do so with less risk.
Although many in traditional IT shops view this as a clear threat and in many cases reprimand the "rogue" parties, they should reflect on their own inefficiencies that have taken hold over the years. Moreover, the use of cloud computing shines a brighter light on how much more easily IT could do things in the cloud. It becomes a clear gauge of the difference between what IT can do now and what technology itself can achieve when not held back.

Hybrid clouds: Three routes to implementation

Hybrid clouds come into play when traditional storage systems or internal cloud storage are supplemented with public cloud storage. To make it work, however, certain key requirements must be met. First and foremost, the hybrid storage cloud must behave like homogeneous storage. Except for maybe a small delay when accessing data on the public cloud, it should otherwise be transparent. Mechanisms have to be in place that keep active and frequently accessed data on-premises and push inactive data into the cloud. Hybrid clouds usually depend on nimble policy engines to define the circumstances when data gets moved into or pulled back from the cloud.
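A minimal policy engine of the kind described might look like the sketch below (Python; the idle-age threshold and the tiering call are hypothetical stand-ins for what a real gateway or cloud storage API would do): recently used files stay on-premises, files idle past the threshold are pushed to the cloud.

```python
# Minimal sketch of a hybrid-storage policy engine: keep recently used files
# on-premises, push files idle longer than a threshold out to cloud storage.
# The threshold and the tiering call are hypothetical placeholders.
import os
import time

IDLE_DAYS_BEFORE_TIERING = 90

def push_to_cloud(path):
    print(f"tiering {path} to public cloud storage")   # stand-in for a real API call

def apply_policy(directory):
    cutoff = time.time() - IDLE_DAYS_BEFORE_TIERING * 86_400
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getatime(path) < cutoff:
            push_to_cloud(path)       # inactive data leaves the data center
        # active data stays on-premises and remains transparently accessible
```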
There are currently three routes you can take to implement hybrid clouds:
  • Via cloud storage software that straddles on-premises and public cloud storage
  • Via cloud storage gateways
  • Through application integration

Wednesday, July 20, 2011

Why Microsoft's 'cloud bribes' are the right idea

Although many view cloud migration fees as payola, the charges are exactly what's needed from cloud computing companies

InfoWorld's Woody Leonhard uncovered the fact that Microsoft is paying some organizations to adopt its Office 365 cloud service, mostly in funds that Microsoft earmarks for their customers' migration costs and other required consulting. Although this raised the eyebrows of some bloggers -- and I'm sure Google wasn't thrilled -- I think this is both smart and ethical. Here's why.
Those who sell cloud services, which now includes everyone, often don't consider the path from the existing as-is state to the cloud. Even when executed properly, that migration involves a huge amount of work and a huge cost, which can erase much of the monetary advantage of moving to the cloud.
Microsoft benefits from subsidizing the switch because it can capture a customer that will use that product for many years. Thus, the money spent to support migration costs will come back 20- or 30-fold over the life of the product. This is the cloud computing equivalent to free installation of a cable TV service.

Beware the oversimplification of cloud migration

Many cloud vendors' oversimplified approaches are more likely to hurt than help

I get a pitch a day from [enter any PR firm here]. The details vary, but the core idea is the same: "We have defined the steps to migrate to the cloud, so follow us."
To be honest with you, I often bite. If nothing else, I'll take a look at the press release, white paper, or website. Often there is a list of pretty obvious things to do, such as "define the business objectives" and "define security," but the details are nowhere to be found. Why? Because the details are hard and complex, and the vendors would rather make their steps seem approachable to the IT rank and file than ensure they will actually work.
Moving applications, systems, and even entire architectures to cloud-based platforms is no easy feat. In many cases it requires a core understanding of data, processes, APIs, services, and other aspects of the existing state of IT before planning the move. Yes, the details.

Building an IT infrastructure roadmap to the cloud

Infrastructure is where the public cloud model can offer the biggest benefit for startups. Public clouds allow startups to spin up their businesses quickly without eating into smaller cash flows by wasting dollars on building their own internal IT infrastructures.
Some employees may be using the cloud right now without even knowing it. This represents a hidden "iceberg" of cost, where employees may be spending their organization’s money on virtual machines (VMs) with providers like Amazon EC2. This circumvents not only the standard operating procedures of many organizations but also their accounting and auditing systems.
But there is an unhealthy assumption that by merely adopting a public cloud strategy, organizations will automatically save money. This is not necessarily true.
For two major reasons -- to guarantee that commercially sensitive content remains private and to ensure the application of proper accountancy practices -- it may be time for organizations to legitimize public cloud for certain types of work. Costs will be incurred, but they should not be hidden from the business like a dirty little secret.

Tools to unlock private cloud's potential

Many enterprises either already have a private cloud, plan to build one or at least have considered in-house cloud as an option. If you're on the private cloud bandwagon but remain unfamiliar with how to extract its full benefits, you're not alone.
This tutorial looks at private cloud computing tools that unleash the power of automation and orchestration, monitoring and service catalogs. While these features are important, they're also not yet fully understood in the context of virtualized, or private cloud, environments.
Enabling orchestration and automation
Although automation and orchestration are often used interchangeably, there is a subtle difference between the two terms. Automation is usually associated with a single task, and orchestration is associated with a process that involves workflow around several automated tasks. If you're looking to better understand the value and importance of automation (and orchestration) in a private cloud environment, one of the best ways is to contrast server provisioning in a traditional data center with virtual server provisioning in a virtualized environment.
Server virtualization can reduce the time it takes to provision servers, but it does not decrease the time associated with installation. IT staff use labor-intensive management tools and manual scripts to control and manage the infrastructure, and they will not be able to keep up with the continuous stream of configuration changes needed to maintain access and security in conjunction with a private cloud's dynamic provisioning and virtual machine (VM) movement. This is why automation of these processes is an important element of a private cloud.
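To illustrate the distinction in code, the sketch below (Python; the task names and bodies are hypothetical) treats each automated step as a single function and orchestration as the workflow that sequences those steps to provision a workload.

```python
# Automation = a single repeatable task; orchestration = the workflow that
# strings several automated tasks together. Task bodies are hypothetical.

def create_vm(name):             # automated task
    print(f"creating VM {name}")

def configure_network(name):     # automated task
    print(f"attaching {name} to the network and applying security policy")

def register_monitoring(name):   # automated task
    print(f"adding {name} to the monitoring system")

def provision_workload(name):
    """Orchestration: the ordered workflow around the automated tasks."""
    create_vm(name)
    configure_network(name)
    register_monitoring(name)

provision_workload("app-server-01")
```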

Improving cloud interoperability with application migration tools

Cloud interoperability goes beyond matters like cloud application programming interfaces and virtual machine image format conversion. It is primarily about application migration: moving applications back and forth between private and public clouds, or from one public cloud to another. Application migration among clouds allows users to select best-of-breed cloud technology and avoid lock-in, but it's not possible without tools that facilitate communication between different cloud vendors and services.
Each cloud provider decides which hypervisors, storage models, network models, management tools and processes they are going to work with. This means limited control of the environment for developing and deploying applications; the decisions made by a cloud provider affect what you can and cannot do in a cloud.
Even if there were an open standard cloud API that all vendors used, only part of the problem would be solved. Only relatively simple applications can be migrated to a target cloud without some difficulty. Most applications depend on services such as a directory infrastructure, identity management and databases, and every component of the application must be identified and reproduced or replaced in the target cloud. This is true for all application dependencies.
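One way to surface those dependencies before a migration is a simple reachability check run from the target environment, along the lines of the sketch below (Python; the dependency list is a hypothetical example).

```python
# Sketch of a pre-migration dependency check: confirm that every service an
# application relies on is reachable from the target cloud environment.
# The dependency list below is a hypothetical example.
import socket

DEPENDENCIES = [
    ("ldap.corp.example.com", 636),    # directory / identity service
    ("db01.corp.example.com", 1433),   # database
    ("smtp.corp.example.com", 25),     # mail relay
]

def check_dependencies(deps, timeout=3):
    for host, port in deps:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                print(f"OK    {host}:{port}")
        except OSError:
            print(f"FAIL  {host}:{port} -- must be reproduced or replaced in the target cloud")

check_dependencies(DEPENDENCIES)
```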
But there are some potentially good options available to reduce the issues associated with cloud interoperability and application migration. CloudSwitch, Racemi DynaCenter 4.0 and Citrix NetScaler Cloud Bridge are three tools that help move applications among clouds. These tools do not require application modifications, and they allow applications to be managed as though they were still running in a private cloud.

Sunday, June 26, 2011

How to define cloud computing terms and technology

A senior exec from a major IT vendor recently commented in a Freeform Dynamics industry analyst briefing that it was amazing how cloud computing was touching pretty much every part of their business and portfolio. Our response was that this is not particularly surprising, given that they had to define cloud computing in such a broad and fluffy manner. It might sound a bit harsh, but it was true.
In fact, over the course of a few hours, we heard numerous takes on how to define cloud computing from this one vendor alone. If we were new to the space, we probably would have ended up quite confused.
It would be unfair to name this vendor and single them out, as jumping on the cloud bandwagon and defining what cloud means to fit with what you have to sell is common behaviour across the vendor community at the moment. The fact that cloud is really just an umbrella term for a collection of evolutionary trends and developments, rather than some single specific revolutionary concept, actually encourages this and provides plenty of room for abuse.
So how do we make sense of everything that is said, claimed and offered about cloud? Well, the key is to be clear on the perspective or dimension being discussed, and there are three that we find useful when doing this:
  • Technology versus services
  • Architectural stack perspective
  • Functional service taxonomy