Cloud Computing Offers the Ability to Store and Retrieve Mass Data at Little Cost
Businesses can subscribe to specific applications in "the cloud" (such as e-mail), but there is growing momentum to provide raw processing and storage capability so that any application can run remotely without typical constraints. The number and diversity of applications moving online is increasing, and to support this demand the underlying infrastructure and business tools for hosting online applications are maturing. The two trends feed off each other, accelerating overall adoption. Cloud computing allows code and storage to exist on the Internet ("the cloud") as a service running on a series of devices that, by design, appear as a single device. This abstracts software from hardware concerns. Cloud computing has existed for some time for research purposes, but general-purpose, business-grade clouds - services that include a service level agreement (SLA) - are fairly recent, notably with Amazon entering the market in 2006 with its Elastic Compute Cloud (EC2), still in beta.
Most applications in use revolve around storage capabilities more than processing - although processing is what most businesses really need in order to be dynamic. Systems include real-time offline backups and disaster recovery, massive image storage, and audio/video streaming such as Amazon Unbox.
Performance-hungry applications such as financial number-crunching and design rendering also use cloud computing, and eventually the same Web applications found in data centers today will run in the cloud. Key features include:
- Dynamic Capacity - dynamically allocate computer resources (up or down) on the fly, even by the software itself
- Dynamic Instance Sizing - virtually build an instance with any virtual hardware configuration
- Reliability - dependably managing and backing up thousands of servers online requires a business to have the highest level of controls and standards
- Network Portability - hardware abstraction removes or reduces network constraints such as hard-coded IP addresses
- Geographical Redundancy - any provider with the resources to offer a cloud computing service at scale operates facilities in multiple locations, so data and services are geographically redundant by default
- Great Price - partially due to providers promoting a new service, but mostly due to massive economies of scale, making the price almost impossible to replicate. Additional savings come from no longer having to build systems to handle maximum load.
- Convenience - enter a credit card number and a system is set up in a couple of minutes. This appeals to anyone who has worked with a data center to configure a massive disk system, or anything beyond a couple of servers in a simple configuration, and knows how time-consuming and complex it is.
- No concerns about correctly sizing hardware to maximum loads, which are frequently caused by unpredictable business cycles and macroeconomic forces
- Easily and correctly determine how to allocate spending within a single device among options such as memory, disk and processing power
- Offsite storage gives business customers the ability to increase redundancy by remotely locating data
- It may be harder for some OS/programming stacks to fully convert to cloud computing. Amazon only announced in late 2008 that Windows will be supported, even though commercial cloud systems supporting Linux have been available since 2006. That doesn't restrict other OS/programming stacks from utilizing services provided in the cloud, but it does prevent those services from running in the cloud.
- There is less control of the hardware environment than is currently available in a data center. The subscriber must rely entirely on the provider to physically secure the hardware and control access to it. Unlike in a traditional data center, it is impossible to augment the hardware to add additional physical and logical layers of security.
- Controls will have to be put in place to mitigate bad code: unlike in a traditional hardware environment, bad code is not physically constrained to a single machine and could consume vast amounts of resources if the system is configured to expand on demand.
- Although the idea is to isolate developers from hardware constraints and concerns, most developers don't grasp what is required to efficiently scale a system to a large processing footprint.
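The "dynamic capacity" idea above - software allocating resources for itself, up or down - can be sketched in a few lines. This is an illustrative simulation only; the `Autoscaler` class, its parameters, and the capacity numbers are invented for this example and do not correspond to any real provider's API.

```python
import math

class Autoscaler:
    """Toy model of an application sizing its own fleet from observed load."""

    def __init__(self, min_instances=1, max_instances=20, target_utilization=0.7):
        self.min_instances = min_instances
        self.max_instances = max_instances
        self.target_utilization = target_utilization
        self.instances = min_instances

    def observe(self, requests_per_sec, capacity_per_instance=100):
        # How many instances keep utilization near the target, clamped
        # to the configured floor and ceiling?
        needed = math.ceil(
            requests_per_sec / (capacity_per_instance * self.target_utilization)
        )
        self.instances = max(self.min_instances, min(self.max_instances, needed))
        return self.instances

scaler = Autoscaler()
scaler.observe(50)    # light traffic: stays at the 1-instance floor
scaler.observe(1000)  # spike: scales out
scaler.observe(120)   # traffic falls: scales back in
```

In a real cloud deployment the `observe` step would call the provider's provisioning service instead of just updating a counter; the point is that capacity decisions live in code, not in a purchase order.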
Why Technologists Care
- Hardware capabilities and their cost directly influence programming paradigms
- Infrastructure support staff will have another option to the traditional data center
- No hassling with complex infrastructure to scale: easy to set up, pay-as-you-go, high availability, and no long-term commitments
- Allows different distributed providers to do what they do best (division of labor). It is possible to have one system run the code and another remote system store the data.
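The division-of-labor point above - code running on one system while data lives on another - can be sketched as an application talking to a remote blob store. The `RemoteBlobStore` class below is a stand-in, not a real storage API; in practice its dictionary would be replaced by network calls to a hosted service.

```python
class RemoteBlobStore:
    """Stand-in for a hosted storage service run by a separate provider."""

    def __init__(self):
        self._blobs = {}  # simulates data living on the remote system

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def process_order(store: RemoteBlobStore, order_id: str, payload: bytes) -> int:
    """Application logic runs locally; durable state lives in the remote store."""
    store.put(f"orders/{order_id}", payload)
    # Read back from the store to confirm the round trip; return stored size.
    return len(store.get(f"orders/{order_id}"))
```

The design choice being illustrated: because the application only sees `put`/`get`, the storage provider can be swapped, relocated, or replicated without touching the code that runs the business logic.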
Alternatives in the Marketplace
Although only a few companies (Google, Microsoft, Amazon) have the vast resources and expertise required to construct the underlying infrastructure, solutions exist that provide subsets of the benefits:
- Virtual machines (also referred to as VMs) are being adopted rapidly as the technology matures. Xen, Microsoft, and VMware are great for running multiple environments on one machine but can't span multiple machines. They do, however, allow for a great deal of hardware abstraction, most notably demonstrated by the ease with which an IT administrator can move a system from machine to machine (in some cases even while users are attached to a running application).
- Supercomputers solve some of the world's most complex problems, such as modeling weather, but they require special coding and typically run only a few programs at a time. Supercomputers are mostly used by the military and research facilities.
- Volunteer peer-to-peer networks such as SETI@home demonstrate massive distributed computational power, but volunteer networks are hard to provision and control, so they are mostly used for research.
John Basso's Bold Claim
Initially, offline storage and special-purpose applications will drive the industry, but cloud computing will be a viable and cost-effective alternative to traditional data centers once billing, provisioning, and increased support for existing programming stacks are in place. The transformation will be evidenced by traditional data-center providers such as Rackspace - with its purchase of Slicehost - altering their offerings and pricing to counter this new competitive threat.
It changes the game: just as businesses can weigh, in real time, the cost and benefit of the traffic driven to a site through Google Pay-Per-Click, businesses will be able to weigh the cost and benefit of increasing the speed of their systems in real time. For example, many large e-commerce sites know the correlation between their response time and conversion metrics. In a cloud computing system it would be possible for the code to allocate itself more resources to maintain a certain performance metric, for a cost. ****
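The cost/benefit trade described above can be made concrete with a small decision function: add capacity only while the expected revenue from faster responses exceeds the price of another instance. Every number and name below (`uplift_per_100ms`, the hourly instance cost, the speedup estimate) is an invented assumption for illustration, not real pricing or conversion data.

```python
def worth_scaling(response_ms, conversions_per_hour, revenue_per_conversion,
                  uplift_per_100ms=0.01, instance_cost_per_hour=0.40,
                  expected_speedup_ms=100, target_ms=200):
    """Return True if one more instance is expected to pay for itself.

    Assumes each 100 ms of speedup lifts the conversion rate by
    `uplift_per_100ms` (a made-up figure standing in for the site's
    own measured response-time/conversion correlation).
    """
    if response_ms <= target_ms:
        return False  # already fast enough; extra spend buys nothing
    extra_conversions = (expected_speedup_ms / 100) * uplift_per_100ms \
        * conversions_per_hour
    extra_revenue = extra_conversions * revenue_per_conversion
    return extra_revenue > instance_cost_per_hour

# A busy site at 450 ms with 200 conversions/hour at $50 each scales out;
# the same site already at 150 ms does not.
worth_scaling(450, 200, 50)
worth_scaling(150, 200, 50)
```

The application would run a check like this on a loop, so capacity spend tracks the business value of speed rather than a fixed hardware budget.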
About Amadeus Consulting
Amadeus Consulting is a custom software development company dedicated to creating intelligent technology solutions with successful business results. We are a Microsoft Gold Certified Partner, a winner of the Microsoft Office XP Challenge, and hold Microsoft Partner Competencies in Custom Software and Data Management Solutions. Amadeus Consulting is an expert in custom software applications such as content management systems, e-commerce, surveys, social networking sites, data collection and management, browser plug-ins and many more.
**** This statement reflects an opinion and Amadeus is not liable for any decisions or conclusions made as a result of this statement.