



An overview of Cloud Computing

Gopinath Taget – Autodesk Inc.




CP2524 Learn what cloud computing is all about, what kind of applications can be written for and run on the cloud, and where it is suitable to use them and where it is not. Learn about the popular commercial cloud service providers, such as Amazon Web Services™ (AWS), Microsoft® Azure™ and Google App Engine, and how to use them. We will discuss the similarities and differences among the services they provide, the advantages of using one over the others, and the coverage and sophistication of the APIs provided to use the cloud services. The class will include code and user interface demonstrations.

Learning Objectives


At the end of this class, you will be able to:

  • Get started on programming for cloud computing

  • Describe the kinds of cloud infrastructure and services available

  • List what APIs are available and explain how to use them

  • Explain the differences among the major cloud infrastructure providers and services they provide


About the Speaker


Gopinath is a member of the Autodesk Developer Technical Services Team. He has more than seven years of experience developing and supporting AutoCAD® APIs, including ObjectARX®, Microsoft® .NET, VBA, and LISP. Gopinath also has several years of experience in software development on other CAD platforms, including MicroStation®, SolidWorks®, and CATIA, mainly using C++ and technologies such as MFC and COM. Gopinath was also involved in the development of web-based applications for Autodesk® MapGuide® and AutoCAD Map. Gopinath has master's degrees in civil engineering and software systems. Currently, Gopinath is investigating the exciting new trends in cloud computing and how they impact the CAD industry.


gopinath.taget@autodesk.com

Introduction to cloud computing


Cloud computing is a set of evolving software and hardware technologies that help you build and use software services over the internet efficiently and economically. A typical cloud computing solution consists of a web page (or a set of web pages) and/or web services as the front end. Now, web pages and web services have been around for about two decades and are not unique to the cloud. What’s different about the cloud is the way it helps you deliver those web pages and web services. Before the concept of cloud computing came along, an organization that wanted to publish web pages and services on the web needed to do the following:

  • Create web-based applications

  • Acquire hardware, owned or rented, to build the web server infrastructure

  • Acquire operating system and web server software, plus other supporting software such as databases

  • Use the acquired hardware and software to publish the web-based applications to the internet

  • Maintain, or hire a third party to maintain, the hardware and supporting software

All this has often led not only to large startup costs but also to recurring IT support responsibilities and the ongoing cost of upgrading and maintaining the hardware, supporting software, and networking infrastructure.

Cloud computing allows you to simplify the above tasks significantly. When you use the cloud, your tasks can be summed up as follows:



  • Create web-based applications

  • Publish the web-based applications to the internet using a cloud provider

The cloud provider (we discuss cloud providers in greater detail below) takes care of everything else, including acquiring the necessary hardware and support software and establishing an IT infrastructure to maintain them. The costs of acquiring, maintaining, and upgrading the infrastructure are rolled up into standardized, recurring (often monthly) charges. Usage is billed per unit of each kind of infrastructure consumed (compute hours, storage, bytes transferred over the network, and so on). This style of billing and service has allowed cloud users to treat hardware and software infrastructure as a “utility” (like electricity or water) that you pay for based on how much you use and nothing more. As a side note, this has also forced software vendors to reevaluate the way they deliver and license their software: vendors of traditional desktop software that charged a one-time licensing fee have begun moving toward pay-per-use models for their desktop applications.
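To make the “utility” idea concrete, here is a small, purely illustrative calculation of a monthly bill under the pay-for-what-you-use model; the rates are hypothetical placeholders, not any provider’s actual prices:

# Hypothetical, illustrative rates only -- real provider pricing varies by
# region, instance type, and time; check the provider's pricing pages.
COMPUTE_RATE_PER_HOUR = 0.10   # $ per instance-hour (assumed)
STORAGE_RATE_PER_GB   = 0.12   # $ per GB-month (assumed)
TRANSFER_RATE_PER_GB  = 0.15   # $ per GB transferred out (assumed)

def monthly_cost(instance_hours, storage_gb, transfer_gb):
    """Roll up usage into a single utility-style monthly bill."""
    return (instance_hours * COMPUTE_RATE_PER_HOUR
            + storage_gb * STORAGE_RATE_PER_GB
            + transfer_gb * TRANSFER_RATE_PER_GB)

# A small web application: one instance running half the month,
# 50 GB of stored data, 20 GB of outbound traffic.
print(monthly_cost(instance_hours=360, storage_gb=50, transfer_gb=20))  # 45.0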

This new way of thinking about hardware and software as infrastructure has several advantages:



  • It allows you to experiment with new technologies or ideas without significant startup costs

  • It allows small organizations, or organizations reluctant to invest heavily in new software development, to scale infrastructure use up as demand and success grow

  • It allows you to abandon or significantly change your software direction without unwanted hardware or software left behind

  • It charges you only for the hardware and software infrastructure you actually use

Examples of Cloud providers include:

  • Amazon Web Services (http://aws.amazon.com)

  • Microsoft Azure (http://www.azure.com)

  • Google App Engine (http://code.google.com/appengine/)

Amazon was one of the first companies to standardize the provisioning and billing of cloud infrastructure and, in that sense, is a true pioneer of cloud computing.

There are several other cloud infrastructure providers. The differences between the providers are in the level of sophistication, abstraction, comprehensiveness and variety of services they provide.

There are several advantages of the standardization that cloud computing has brought in:


  • The scale of cloud usage has brought down the per-user cost of high-performance hardware infrastructure, IT support, and supporting software, because cloud providers purchase hardware and software at massive scale.

  • It makes traditionally expensive software requirements more affordable, for example:

    • High availability of web applications and data

    • Quick and consistent recovery from failures

    • Data and database redundancy

There are also a few challenges with cloud computing:

  • Potentially high network traffic that did not exist in your non-cloud applications, which can increase costs

  • May require a shift in thinking about software architectures

  • May require a shift in thinking about the licensing approach

  • Architecting applications to maintain proximity between data and applications

  • Performance degradation because of network bottlenecks (that did not exist in a desktop scenario)

  • Securing data and applications to protect IP (Intellectual Property)

As you explore cloud computing technologies more deeply, however, you will see that all of the above challenges can be addressed easily and elegantly, and your application made more robust, using different architectural and software engineering techniques.


Hardware and Software as a Software Service


There are several levels of service that commercial cloud providers offer today; they differ mainly in the level of sophistication and control they give you over the cloud infrastructure. The cloud infrastructure itself can be broadly categorized into hardware and software infrastructure services. This is what makes the cloud unique: it provides all infrastructure, including hardware, as a software service. What do I mean by this? I mean that hardware infrastructure usage is programmable. You can write scripts and applications that create and manage multiple running “instances” of virtual machines, the operating systems they run, the power of the processors and the number of cores they use, the amount of memory and storage they use, and many other hardware parameters we have come to know and love as programmers.
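As a concrete illustration of hardware being programmable, here is a minimal sketch using the AWS SDK for Python (boto3); the AMI ID is a placeholder, and the snippet assumes the SDK and credentials are already set up:

# A minimal sketch of "hardware as a software service" with boto3 and EC2.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Programmatically request a virtual machine: choose the machine image
# (operating system), the instance type (CPU/memory), and how many to start.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Started instance:", instance_id)

# When the hardware is no longer needed, release it just as programmatically.
ec2.terminate_instances(InstanceIds=[instance_id])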

This approach to thinking of hardware as a software service is truly a paradigm shift and is what makes cloud computing a very powerful software development and delivery tool. Scaling is not a dirty word anymore:



  1. You can instantaneously (and programmatically) request access to more hardware or reduce the amount of hardware you use

  2. You can instantaneously request more powerful or less powerful hardware

  3. You can automate hardware scaling based on real-time usage metrics such as network traffic, number of customer requests, and the amount of memory and storage used

  4. There is no more guesswork in planning hardware acquisition, which is usually done months or even years in advance

They even have a word for this unique ability of the cloud: elasticity. Elasticity is what lets you scale your hardware usage up or down quickly and easily. This is the single most powerful concept that cloud computing brings to the table.
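Here is a minimal sketch of elasticity in practice, again with boto3, assuming an Auto Scaling group with the hypothetical name "web-tier" already exists:

# Scale a fleet of instances up and down with a single call.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale out ahead of an expected traffic spike...
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-tier",
    DesiredCapacity=10,
)

# ...and scale back in when demand drops, paying only for what was used.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-tier",
    DesiredCapacity=2,
)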

There are a few other concepts that have evolved in cloud computing and are also arguably unique to the cloud. Before we dig very deep into these concepts, we need to understand what constitutes cloud computing.


Data Storage Service


Data storage is a fundamental service provided by all cloud providers. It is persistent storage accessible over the internet, so it can be reached from anywhere as long as you are connected. The amount of storage is virtually unlimited, and you pay for the amount you use, typically measured in gigabytes. Storage can be used for different purposes:

  1. The simplest use is to store, retrieve and organize data files such as design documents, music or anything that can be stored in a file system on the local computer.

  2. Use it along with the Compute Service (discussed below) to store state information

  3. Use it to distribute content quickly and reliably across multiple geographies. This is possible because cloud providers have data centers at multiple locations across the globe

Because these uses have very little in common, they are often provided as separate services, with hardware and software optimized for each specific purpose.

For instance, Amazon Web Services provides two kinds of Persistent storage services (definitions in italics from the aws.amazon.com site):



  1. Simple Storage Service (S3): Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, secure, fast, inexpensive infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers.

  2. Elastic Block Storage (EBS): Amazon EBS provides block level storage volumes for use with Amazon EC2 instances (We will discuss EC2 when we discuss Compute Service). Amazon EBS volumes are off-instance storage that persists independently from the life of an instance. Amazon Elastic Block Store provides highly available, highly reliable storage volumes that can be attached to a running Amazon EC2 instance and exposed as a device within the instance. Amazon EBS is particularly suited for applications that require a database, file system, or access to raw block level storage.

Apart from this, AWS also provides a content distribution service called Amazon CloudFront. It is a web service for content delivery. It integrates with other Amazon Web Services to give developers and businesses an easy way to distribute content to end users with low latency, high data transfer speeds, and no commitments.
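For example, here is a minimal sketch of storing and retrieving a file with S3 via the boto3 SDK; the bucket name is a placeholder and must be globally unique in practice:

# Store a local design file as an object in S3, then fetch it back.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "my-design-documents-example"   # hypothetical bucket name

s3.create_bucket(Bucket=bucket)

# Upload a file from the local disk, then download it again from anywhere
# with an internet connection.
s3.upload_file("floorplan.dwg", bucket, "projects/floorplan.dwg")
s3.download_file(bucket, "projects/floorplan.dwg", "floorplan_copy.dwg")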

Windows Azure also has different types of storage (Binary Large Object (BLOB) Service, Table Service, Windows Azure Drive) and content distribution (Windows Azure Content Delivery Network (CDN)) services. You will find more information about them here: http://www.microsoft.com/windowsazure/storage/.


Relational Database Services


Though in very simplistic terms a relational database can be viewed as “storage”, it deserves a separate categorization as a cloud service because of its complexity (as anyone who has performed database administration would understand). This service grew out of the need to migrate all parts of an enterprise application to the cloud. A typical web-based application uses a relational database for needs such as maintaining customer information, order information, inventory state, or anything else the application needs stored. When cloud computing first arrived on the horizon, there was no cloud service that let you store relational data; as a cloud developer, you had to do all the work of setting up the software infrastructure (either in a private data center or on the cloud) needed for the database server. This prompted cloud providers like Amazon Web Services and Microsoft Azure to design services that make a relational database accessible as a service, like everything else on the cloud. Here are a couple of examples of relational database services:

  • Amazon SimpleDB (though this is technically non-relational, it lets you store and query for data similar to a relational DB)

  • Amazon Relational Database Service (Amazon RDS)

  • Azure SQL

Google App Engine does not have a relational database service, but it does provide a data store service: The App Engine datastore provides robust, scalable storage for your web application, with an emphasis on read and query performance. An application creates entities, with data values stored as properties of an entity. The app can perform queries over entities. All queries are pre-indexed for fast results over very large data sets (http://code.google.com/appengine/docs/java/datastore/overview.html).
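As an illustration, here is a minimal sketch of provisioning a managed MySQL database with Amazon RDS via boto3; every identifier and credential below is a placeholder, and instance classes and engine versions change over time:

# Provision a relational database as a service; no database server to run.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="orders-db",        # hypothetical name
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder only
    AllocatedStorage=20,                     # GB
)

# Once the instance becomes available, look up its endpoint and connect
# with any standard MySQL client or driver.
endpoint = rds.describe_db_instances(
    DBInstanceIdentifier="orders-db"
)["DBInstances"][0]["Endpoint"]["Address"]
print("Connect to:", endpoint)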

Compute Services


This is the service around which every other cloud service is designed. The compute service allows you to request compute hardware, such as CPUs, CPU cores, memory, and disk space, and software, such as operating systems and web servers.

The compute service provided by Amazon Web Services is called Elastic Compute Cloud (EC2). Please check Appendix A in this handout for more information on the types of compute hardware available with AWS.

You will find a lot more information about EC2, such as pricing and the types of operating systems supported, at this site: http://aws.amazon.com/ec2/.

Information about Azure compute can be found here: http://www.microsoft.com/windowsazure/compute/
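The compute catalog itself can also be queried programmatically. Here is a minimal sketch with boto3 that lists the cores and memory of a few instance types; the type names are current examples, not the ones listed in Appendix A:

# Query the compute catalog to see what hardware each instance type offers.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_instance_types(
    InstanceTypes=["t3.micro", "m5.large", "c5.xlarge"]
)
for itype in response["InstanceTypes"]:
    print(
        itype["InstanceType"],
        itype["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
        itype["MemoryInfo"]["SizeInMiB"], "MiB memory",
    )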


Other Cloud Services


Apart from the three types of services mentioned above, cloud providers offer a number of other services that build on these fundamental ones. For instance, both AWS and Azure provide messaging services that allow one cloud application to store persistent messages that can later be retrieved by another.
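Here is a minimal sketch of that messaging pattern using Amazon SQS via boto3; the queue name is a placeholder, and Azure offers a comparable queue service:

# One application enqueues a persistent message; another dequeues it later.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="render-jobs")["QueueUrl"]

# The producer drops a message on the queue...
sqs.send_message(QueueUrl=queue_url, MessageBody="render drawing 42")

# ...and a consumer, possibly on a different machine, picks it up later.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in messages.get("Messages", []):
    print("Received:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])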

Another type of service lets you monitor the usage of your compute services to determine their health and to scale up or down based on the usage statistics.
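As an example of reading such usage metrics, here is a minimal sketch with Amazon CloudWatch via boto3; the instance ID is a placeholder:

# Read the average CPU utilization of an instance over the last hour --
# the kind of signal a scaling rule would act on.
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,                 # 5-minute buckets
    Statistics=["Average"],
)

for point in stats["Datapoints"]:
    print(point["Timestamp"], point["Average"])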

You can check out the AWS and Azure web pages for a sampling of information on all the services provided by a typical Cloud Services provider.

Networking and affinity


Networking infrastructure and software are the backbone of all cloud-based systems. You need fast and reliable network connectivity to:

  1. Connect client applications to cloud applications

  2. Connect cloud applications and services with each other to exchange data and messages

  3. Connect client applications to data store and Relational DB

  4. Connect cloud applications and services to data store and Relational DB

There are a couple of considerations to keep in mind when designing applications for the cloud. Depending on the amount of data being transferred between cloud applications and services, and on the locations of the data source and the data client, cloud applications can suffer from data latency and behave sluggishly. So it is always desirable to design applications such that the data source and data client are as close to each other as possible and connected by a fast, reliable network.

Cloud providers address this need with the concept of affinity groups. You can define an affinity group and add data stores and cloud services to it. This ensures that the services in the group not only run on hardware that is physically close together, either in the same data center or in data centers near each other, but are also connected by fast networking hardware.
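Affinity groups are an Azure concept; AWS expresses a similar “keep these resources close together” intent through placement groups. Here is a minimal sketch with boto3; the group name and AMI ID are placeholders:

# A "cluster" placement group packs instances onto nearby, low-latency hardware.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_placement_group(GroupName="render-farm", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="c5.large",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "render-farm"},
)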


Data Centers


Commercial cloud providers already have several data centers spread around the world (and they continue to expand) to allow cloud developers to build and deploy applications that serve customers in specific geographic locations. For instance, AWS has three data centers in the USA, located in the eastern, central, and western regions. It also has several locations outside the USA, including Europe and Asia.

Here is a map of Azure data centers (obtained from http://www.zdnet.com/blog/microsoft/where-in-the-world-are-microsofts-datacenters/5700):

[Image: map of Microsoft Azure data center locations]

Developing for the Cloud


Cloud providers like AWS, Azure and Google App Engine provide very sophisticated SDKs with APIs to build applications that can programmatically manage all the cloud services and can even be easily deployed on the cloud.

The cloud providers also offer different flavors of APIs. For instance, AWS provides SDKs that can be used in Java, Python, Ruby, or .NET environments. GAE provides SDKs for Java and Python environments.



Microsoft Azure provides an extensive .NET API that can be used as a standalone SDK or as one integrated with Visual Studio 2010. The SDK integrated with Visual Studio 2010 not only provides APIs to use Azure services but also lets the developer test cloud applications locally and publish them to the Azure cloud directly from VS 2010. Please visit the cloud providers' websites for more information, samples, and videos on how to use the SDKs.
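Whatever SDK you pick, the first step is pointing it at your account and a region. Here is a minimal sketch with boto3; the credentials are obvious placeholders, and in practice they usually come from environment variables or configuration files rather than source code:

# Configure a session for a specific account and region, then make a call.
import boto3

session = boto3.session.Session(
    aws_access_key_id="AKIA-EXAMPLE-KEY",        # placeholder
    aws_secret_access_key="EXAMPLE-SECRET-KEY",  # placeholder
    region_name="eu-west-1",
)

s3 = session.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])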

Appendix A


Instance Types

Standard Instances


Instances of this family are well suited for most applications.

  • Small Instance (Default) 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit platform

  • Large Instance 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of local instance storage, 64-bit platform

  • Extra Large Instance 15 GB of memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform

Micro Instances


Instances of this family provide a small amount of consistent CPU resources and allow you to burst CPU capacity when additional cycles are available. They are well suited for lower throughput applications and web sites that consume significant compute cycles periodically.

  • Micro Instance 613 MB of memory, up to 2 ECUs (for short periodic bursts), EBS storage only, 32-bit or 64-bit platform

High-Memory Instances


Instances of this family offer large memory sizes for high throughput applications, including database and memory caching applications.

  • High-Memory Extra Large Instance 17.1 GB memory, 6.5 ECU (2 virtual cores with 3.25 EC2 Compute Units each), 420 GB of local instance storage, 64-bit platform

  • High-Memory Double Extra Large Instance 34.2 GB of memory, 13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each), 850 GB of local instance storage, 64-bit platform

  • High-Memory Quadruple Extra Large Instance 68.4 GB of memory, 26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform

High-CPU Instances


Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.

  • High-CPU Medium Instance 1.7 GB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each), 350 GB of local instance storage, 32-bit platform

  • High-CPU Extra Large Instance 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform

Cluster Compute Instances


Instances of this family provide proportionally high CPU with increased network performance and are well suited for High Performance Compute (HPC) applications and other demanding network-bound applications. Learn more about use of this instance type for HPC applications.

  • Cluster Compute Quadruple Extra Large 23 GB memory, 33.5 EC2 Compute Units, 1690 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet

Cluster GPU Instances


Instances of this family provide general-purpose graphics processing units (GPUs) with proportionally high CPU and increased network performance for applications benefitting from highly parallelized processing, including HPC, rendering and media processing applications. While Cluster Compute Instances provide the ability to create clusters of instances connected by a low latency, high throughput network, Cluster GPU Instances provide an additional option for applications that can benefit from the efficiency gains of the parallel computing power of GPUs over what can be achieved with traditional processors. Learn more about use of this instance type for HPC applications.

  • Cluster GPU Quadruple Extra Large 22 GB memory, 33.5 EC2 Compute Units, 2 x NVIDIA Tesla “Fermi” M2050 GPUs, 1690 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet

EC2 Compute Unit (ECU) – One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.

See Amazon EC2 Pricing for details on costs for each instance type.



See Amazon EC2 Instance Types for a more detailed description of the differences between the available instance types, as well as a complete description of an EC2 Compute Unit.
