Cloud computing is Internet-based system delivery in which large, scalable computing resources are provided “as a service” over the Internet to users.
The concept of cloud computing incorporates web infrastructure, platform as a service (PaaS), software as a service (SaaS), Web 2.0 and other emerging technologies, and has attracted growing attention from industry and the research community.
Cloud computing is making it possible to separate the process of building an infrastructure for service provisioning from the business of providing end user services. The main idea is to make applications available on flexible execution environments primarily located in the Internet.
Cloud computing describes computation, software, data access, and storage services that do not require end-user knowledge of the physical location and configuration of the system that delivers the services. Parallels to this concept can be drawn with the electricity grid where end-users consume power resources without any necessary understanding of the component devices in the grid required to provide said service.
Cloud computing is a natural evolution of the widespread adoption of virtualization, service-oriented architecture, autonomic and utility computing. Details are abstracted from end-users, who no longer have need for expertise in, or control over, the technology infrastructure “in the cloud” that supports them.
Cloud computing describes a new supplement, consumption, and delivery model for IT services based on Internet protocols, and it typically involves provisioning of dynamically scalable and often virtualized resources.
It is a by-product and consequence of the ease-of-access to remote computing sites provided by the Internet. This frequently takes the form of web-based tools or applications that users can access and use through a web browser as if it were a program installed locally on their own computer.
“Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
- Agility improves with users’ ability to rapidly and inexpensively re-provision technological infrastructure resources.
- Application Programming Interface (API) accessibility to software that enables machines to interact with cloud software in the same way the user interface facilitates interaction between humans and computers. Cloud computing systems typically expose REST-based APIs.
- Cost is claimed to be greatly reduced and in a public cloud delivery model capital expenditure is converted to operational expenditure. This ostensibly lowers barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are required for implementation (in-house).
- Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
- Multi-tenancy enables sharing of resources and costs across a large pool of users thus allowing for:
- Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
- Peak-load capacity increases (users need not engineer for highest possible load-levels)
- Utilization and efficiency improvements for systems that are often only 10–20% utilized.
- Reliability is improved if multiple redundant sites are used, which makes well designed cloud computing suitable for business continuity and disaster recovery. Nonetheless, many major cloud computing services have suffered outages, and IT and business managers can at times do little when they are affected.
- Scalability via dynamic (“on-demand”) provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads. Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.
- Maintenance of cloud computing applications is easier, since they don’t have to be installed on each user’s computer. They are easier to support and to improve since the changes reach the clients instantly.
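The REST-based API characteristic above can be made concrete with a short sketch. This is not any particular provider's API; the endpoint, payload fields, and token below are invented for illustration, and the request is only composed, not sent.

```python
import json
import urllib.request

# Hypothetical endpoint -- illustrative only; real providers (AWS, GAE,
# Azure) each define their own REST API and authentication scheme.
API_BASE = "https://cloud.example.com/v1"

def build_provision_request(instance_type, count):
    """Compose a REST request asking the provider to provision `count`
    instances of `instance_type` (on-demand, self-service)."""
    body = json.dumps({"instanceType": instance_type, "count": count}).encode()
    return urllib.request.Request(
        url=API_BASE + "/instances",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <api-token>"},  # placeholder credential
    )

req = build_provision_request("small", 2)
print(req.get_method(), req.full_url)
```

The same machine-readable interface that a console or CLI uses is what lets tooling re-provision resources programmatically, which is where the agility and scalability characteristics above come from.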
History of Cloud Computing:
Cloud Computing has evolved through a number of phases which include grid and utility computing, application service provision (ASP), and Software as a Service (SaaS).
But the overarching concept of delivering computing resources through a global network is rooted in the sixties.
The idea of an “intergalactic computer network” was introduced in the sixties by J.C.R. Licklider, who was responsible for enabling the development of ARPANET (Advanced Research Projects Agency Network) in 1969.
His vision was for everyone on the globe to be interconnected and accessing programs and data at any site, from anywhere, explained Margaret Lewis, product marketing director at AMD: “It is a vision that sounds a lot like what we are calling cloud computing.”
Other experts attribute the cloud concept to computer scientist John McCarthy who proposed the idea of computation being delivered as a public utility, similar to the service bureaus which date back to the sixties.
Since the sixties, cloud computing has developed along a number of lines, with Web 2.0 being the most recent evolution. However, since the internet only started to offer significant bandwidth in the nineties, cloud computing for the masses has been something of a late developer.
One of the first milestones for cloud computing was the arrival of Salesforce.com in 1999, which pioneered the concept of delivering enterprise applications via a simple website. The services firm paved the way for both specialist and mainstream software firms to deliver applications over the internet.
The next development was Amazon Web Services in 2002, which provided a suite of cloud-based services including storage, computation and even human intelligence through the Amazon Mechanical Turk.
Then in 2006, Amazon launched its Elastic Compute cloud (EC2) as a commercial web service that allows small companies and individuals to rent computers on which to run their own computer applications.
“Amazon EC2/S3 was the first widely accessible cloud computing infrastructure service,” said Jeremy Allaire, CEO of Brightcove, which provides its SaaS online video platform to UK TV stations and newspapers.
Another big milestone came in 2009, as Web 2.0 hit its stride, and Google and others started to offer browser-based enterprise applications through services such as Google Apps.
Five Key Events in the History of Cloud Computing:
Launch of Amazon Web Services in July 2002
The initial version of AWS in 2002 was focused more on making information from Amazon available to partners through a web services model, with programmatic and developer support, and was very focused on Amazon as a retailer. While this set the stage for the next steps, the launch of S3 was the true step towards building a cloud platform.
S3 Launches in March 2006
The real breakthrough, however, was the pricing model for S3, which defined the ‘pay-per-use’ model that has now become the de facto standard for cloud pricing. The launch of S3 also marked Amazon’s shift from being just a retailer to a strong player in the technology space.
EC2 Launches in August 2006
EC2 had a much quieter launch in August 2006, but I would argue it had the bigger impact by making core computing infrastructure available. This completed the loop, making a more complete cloud infrastructure available. In fact, at the time analysts had some difficulty understanding what the big deal was, and thought it looked similar to other hosting services available online, only with a different pricing model.
The Amazon development model involves building Xen virtual machine images that EC2 runs in the cloud. That means you build your own Linux/Unix or Windows operating system image and upload it to be run in EC2. AWS has many preconfigured images that you can start with and customize to your needs. There are web service APIs (via WSDL) for the additional support services like S3, SimpleDB, and SQS. Because you are building self-contained OS images, you are responsible for your own development lifecycle and deployment tools.
AWS is the most mature of the cloud computing options. Applications that require the processing of huge amounts of data can make effective use of on-demand EC2 instances managed by Hadoop.
If you have previous virtual machine experience (e.g. with Microsoft Virtual PC 2007 or VirtualBox), one of the main differences when working with EC2 images is that they do not provide persistent storage. The EC2 instances have anywhere from 160 GB to 1.7 TB of attached storage, but it disappears as soon as the instance is shut down. If you want to save data you have to use S3, SimpleDB, or your own remote storage server. It seems to me that having to manage OS images along with application development could be burdensome. On the other hand, having complete control over your operating environment gives you maximum flexibility.
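The ephemeral-storage point above is worth making concrete. The sketch below is not the real S3 API; `ObjectStore` is a stand-in for a remote object store, used only to show the pattern: anything on the instance's local disk must be copied out before shutdown or it is lost.

```python
# Why persistence matters on EC2-era instances: local ("instance")
# storage vanishes at shutdown, so durable state must be copied out to
# an object store such as S3 first. ObjectStore is a stand-in here,
# not the real S3 client API.

class ObjectStore:
    """Minimal stand-in for a remote object store (e.g. S3)."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data):
        self._objects[key] = data
    def get(self, key):
        return self._objects[key]

class Instance:
    """A compute instance with ephemeral local storage."""
    def __init__(self, store):
        self.local_disk = {}   # wiped on shutdown
        self.store = store     # outlives the instance
    def shutdown(self, persist_keys=()):
        for key in persist_keys:        # copy durable state out first
            self.store.put(key, self.local_disk[key])
        self.local_disk.clear()         # ephemeral storage is gone

s3 = ObjectStore()
vm = Instance(s3)
vm.local_disk["results.csv"] = "a,b\n1,2"
vm.shutdown(persist_keys=["results.csv"])
print(s3.get("results.csv"))  # the persisted copy survives; local_disk is empty
```

Forgetting the explicit persist step is exactly the mistake the text warns about: the instance's attached storage, however large, is not a durable home for data.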
Launch of Google App Engine in April 2008
The launch of Google App Engine in 2008 was the entry of the first pure-play technology company into the cloud computing market. Google, a dominant Internet company, entering this market was clearly a major step towards widespread adoption of cloud computing. As with their other products, they introduced radical pricing models, with a free entry-level plan and extremely low-cost computing and storage services that are currently among the lowest in the market.
GAE allows you to run Python/Django web applications in the cloud. Google provides a set of development tools for this purpose: you can develop your application within the GAE runtime environment on your local system and deploy it once it has been debugged and works the way you want.
Google provides entity-based, SQL-like (GQL) back-end data storage on their scalable infrastructure (BigTable) that supports very large data sets. Integration with Google Accounts allows for simplified user authentication.
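At its core, a GAE Python application is an ordinary web request handler. Here is a minimal sketch of such a handler using only the standard library's WSGI tooling; the real GAE runtime layers its own webapp framework, datastore API, and Google Accounts integration on top of this, so treat this as the shape of the thing, not the GAE SDK itself.

```python
# Minimal WSGI application of the kind GAE hosts, using only the
# standard library. The real GAE runtime adds its own framework,
# GQL/BigTable datastore, and Google Accounts integration on top.
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    """Handle one web request; GAE routes requests to handlers like this."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from the cloud"]

# Exercise the app in-process, as a local dev server would.
environ = {}
setup_testing_defaults(environ)
captured = []
body = application(environ, lambda status, headers: captured.append(status))
print(captured[0], b"".join(body).decode())
```

The appeal of the PaaS model is visible even in this sketch: the developer writes only the handler, and the platform owns the server processes, scaling, and storage underneath it.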
Windows Azure launches Beta in Nov 2009
The entry of Microsoft into cloud computing is a clear indication of the growth of the space. For a long time Microsoft did not accept the Internet and the web as a significant emerging market, and continued to focus on the desktop market all these years. I think this is a realization that a clear shift is taking place. The launch of Azure is a key event in the history of cloud computing, with the largest software company making a small but significant shift to the web.
Azure is essentially a Windows OS running in the cloud. You are effectively uploading and running your ASP.NET (IIS7) or .NET (3.5) application. Microsoft provides tight integration of Azure development directly into Visual Studio 2008.
For enterprise Microsoft developers the .NET Services and SQL Data Services (SDS) will make Azure a very attractive option. The Live Framework provides a resource model that includes access to the Microsoft Live Mesh services.
Bottom line for Azure: if you’re already a .NET programmer, Microsoft is creating a very comfortable path for you to migrate to their cloud.
The Key Concepts & Categories:
IT services provided through the cloud are grouped into three categories: infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS) and software-as-a-service (SaaS).
IaaS provides the processing environment (servers, storage, load balancers, firewalls). These services can be implemented through different technologies, virtualization being the most common one, but there are implementations that use grid technologies or clusters.
PaaS provides an environment for developing and running applications. Authentication, authorization, session management and metadata are also part of this service.
SaaS is the most advanced and complex cloud model. The software services provide functionalities that solve user problems, whether the user is an individual or an employee of a company. Some examples of solutions now offered under the SaaS model include business intelligence, web conferencing, e-mail, office automation suites and sales force automation.
Security of Cloud Computing
Security is necessary for cloud adoption, and lack of security is often a cloud adoption show-stopper. However, with increasing security policy and compliance complexity, IT complexity, and IT agility, the task of translating security policies into security implementation gets more time-consuming, repetitive, expensive, and error-prone and can easily amount to the bulk of security effort for end-user organizations. Automation can help end-user organizations (and cloud providers) reduce that effort and improve policy implementation accuracy.
This article focuses on application security policy automation as a service, of which there are only a few early-adopter deployments today. ObjectSecurity integrated its OpenPMF model-driven security policy automation product with a private platform-as-a-service (PaaS) cloud (Intalio Cloud with Intalio BPMS), which offers seamless policy automation for cloud mashups. It involves policy-as-a-service for virtualized IT services in a high-assurance environment.
The model-driven security component
To achieve automation, some “algorithm” needs to be able to understand the security policy requirements and everything the policy relates to (users, applications, application interconnections, and application workflows), and automatically generate the matching technical security policy implementation. Model-driven security facilitates this required level of security policy automation by applying the reasoning behind model-driven software development approaches to security and compliance policy management.
In essence, model-driven security can automatically generate technical authorization (and other) rules by analyzing the application with all its interactions, and applying generic security requirements to it. Model-driven security is a tool-supported process that involves modeling security requirements at a high level of abstraction, and using other information sources available about the system, especially the applications’ functional models (produced by other stakeholders), to automatically generate fine-grained, contextual technical authorization (and other) rules. The inputs into model-driven security are expressed in domain-specific languages (DSLs), using generic modeling languages (such as the Unified Modeling Language, UML) or enterprise architecture frameworks (for example, the Department of Defense Architecture Framework (DoDAF), the Ministry of Defence Architecture Framework (MODAF), and the NATO Architecture Framework (NAF)).
Capturing security requirements does not have to be done using a graphical editor; it can also be done using a textual model editor (for example, in a modeling tool such as Eclipse). These are then automatically transformed into enforceable security rules (access control and monitoring rules) by analyzing information about the applications with all their interactions.
The model-driven security runtime supports the runtime enforcement of the security policy across all protected IT applications, automatic policy updates, and the integrated monitoring of policy violations. In the first step of model-driven security, regulations and governance standards are modeled (or selected from pre-built templates) as a high-level security policy in a model-driven security tool. This security policy model is then applied to the functional models of constituent systems by mapping security policy model actors to system actors and constraining the behavior of those system actors in accordance with the security policy model.
From a technical perspective, these (security and functional) models are automatically translated into low-level, fine-grained, contextual security policies and enforced across the entire cloud orchestration, cloud mashup, or SOA environment (for example, through local enforcement points integrated into the middleware or at a domain boundary). The local enforcement points also deal with the monitoring of security compliance relevant events. And whenever the SOA application (or its interaction configuration) changes, model-driven security can automatically update security enforcement and monitoring.
In summary, the model-driven security process can be broken down into the following steps: policy modeling, automatic policy generation, policy enforcement, policy auditing, and automatic update. Let’s examine how each of those steps works in the context of cloud applications.
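The automatic policy generation step can be sketched in miniature. The roles, services, and rule shapes below are invented for illustration and are far simpler than what a product like OpenPMF works with, but the idea is the same: cross a high-level policy model with the application's functional model (who calls whom, and what the call means) to emit enforceable technical rules.

```python
# Sketch of the "automatic policy generation" step of model-driven
# security. All names and rule shapes are invented for illustration.

# High-level policy model: roles and the abstract actions they may perform.
policy_model = {
    "clerk":   {"read"},
    "manager": {"read", "approve"},
}

# Functional model: service interactions discovered from the application
# (e.g. a BPMS orchestration), tagged with the abstract action each implies.
functional_model = [
    {"caller": "portal", "callee": "orders", "action": "read"},
    {"caller": "portal", "callee": "orders", "action": "approve"},
]

def generate_rules(policy, interactions):
    """Cross the policy model with the interaction model to emit
    fine-grained allow-rules for each policy enforcement point (PEP)."""
    rules = []
    for role, allowed_actions in policy.items():
        for link in interactions:
            if link["action"] in allowed_actions:
                rules.append({"pep": link["callee"], "role": role,
                              "caller": link["caller"],
                              "action": link["action"]})
    return rules

rules = generate_rules(policy_model, functional_model)
print(len(rules))  # clerk matches one interaction, manager matches both
```

The point of automating this step is the one the text makes: when the application or its interconnections change, the functional model changes, and rerunning the generator updates the technical rules without anyone hand-editing them.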
Model-driven security architecture
The diagram in Figure 1 illustrates the basic architecture: The top left shows the cloud-based development and mashup environment (Business Process Management System (BPMS)-orchestrated web services). It also shows that model-driven security components need to be installed (by the cloud provider) into the development/mashup toolchain to automate the policy generation.
The top right shows the cloud security service that provides the model-driven security add-on to the PaaS development tools, with regular policy updates in a generic form; it also shows the runtime management (monitoring/analysis/reporting) functionality offered by the cloud security service.
The bottom shows some cloud services deployed on application servers (and the rest of the cloud runtime stack) with policy enforcement points (PEPs) and monitoring points that need to be installed (by the cloud provider) with the runtime stack.
When an application is developed and integrated, model-driven security automatically analyzes the application and the policy models and generates technical rules that are automatically pushed into the relevant PEP/Monitoring points. Whenever a message passes between any of the protected services, model-driven security automatically evaluates and enforces the technical policy, and — if needed — pushes an incident alert to the runtime security policy management feature of the cloud security service.
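The enforcement side described above can be sketched just as briefly. Again the rule tuples and service names are illustrative, not OpenPMF's actual format: a PEP checks each message against the generated rules, and a denied call both blocks the message and raises an incident for the monitoring feature.

```python
# Sketch of a policy enforcement point (PEP): every message between
# protected services is checked against the generated rules; a denied
# call is blocked and reported as an incident. Names are illustrative.

rules = {
    ("orders", "manager", "approve"),
    ("orders", "clerk", "read"),
}
incidents = []  # fed to the runtime monitoring/reporting feature

def enforce(callee, role, action):
    """Allow the message if a matching rule exists; otherwise record
    an incident alert and block it."""
    if (callee, role, action) in rules:
        return True
    incidents.append({"pep": callee, "role": role, "action": action})
    return False

print(enforce("orders", "manager", "approve"))  # allowed
print(enforce("orders", "clerk", "approve"))    # blocked, alert raised
```

Because the rules are regenerated from the models, the PEPs themselves stay dumb and fast: they only match tuples, while all the policy reasoning happens at generation time.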
From practical experience, even after only a short while in operation, model-driven security can greatly reduce the effort and cost of protecting the system and improve security and safety compared to traditional, manual policy definition and management.
IaaS and PaaS to combine
The separation between IaaS and PaaS is somewhat arbitrary. It just so happens that Amazon got into the IaaS business as a way to re-sell hardware capacity that sits idle for much of the year (most of the hardware exists to absorb the load of the Christmas shopping period). Google got into the PaaS business as a sensible extension of their skills in building highly scalable web applications on commodity hardware and open-source software. Microsoft… Well, Microsoft got into PaaS as a way to ensure that their technology (.NET) remains competitive as people move to the cloud.
But cloud computing is now maturing. And as it matures, people request more flexibility, somewhat blurring the lines between IaaS, PaaS and general application delivery services. Take a look at Amazon’s recent offerings (the last two years): CDN, SimpleDB, DNS and email services. These are not just solutions to deploy more server hardware; rather, they are ancillary services that help applications work more effectively in the cloud. In fact, the most recent announcement from Amazon is Elastic Beanstalk, a solution built specifically to help developers migrate their Java applications to the cloud — sounds like PaaS, no?
In short, by 2012 you will see that cloud computing becomes a mishmash of hardware and application services, and that the unnatural boundaries of IaaS and PaaS will disappear.
While all this is happening, we are moving into what I believe is the third stage of the Internet. Call it Web 3.0 or whatever you wish, but cloud computing is perhaps the most important technology. In fact, I believe that cloud mobile computing is the key enabling technology for this next technological wave and the next phase in the evolution of the Internet.
Back in the mid to late 1990s, companies were just concerned with getting websites up so they could have a presence on the Internet. It was all about providing very basic information to the public. But soon the so-called e-commerce trend arose and business was being conducted on the Internet. Then Web 2.0 came into play and users realized that they could share their ideas, create content, and collaborate online. We are now well into the next phase of the evolution, where the enabling technologies will be cloud, analytics, mobile, video, and semantic capabilities. This so-called Web 3.0 phase will provide applications that are much more immersive, social, and collaborative in nature. Combine that with an explosion of networked sensors and advanced predictive analytics, and all the Smarter Planet initiatives will become a reality. But the most important enabler will be the combination of private and public cloud computing infrastructures that will be the ‘engine’ of the future Internet.