Cost Effective Selection of Data Center in Cloud Environment



Manoranjan Dash1, Amitav Mahapatra2 & Narayan Ranjan Chakraborty3

1Institute of Business & Computer Studies, Siksha O Anusandhan University, Bhubaneswar, Odisha

2College of Engineering and Technology, Bhubaneswar, Odisha

3Department of CSE, Daffodil International University, Bangladesh

E-mail: manoranjanibcs@gmail.com1










Abstract Cloud computing has been rising as both a new technology and a new business model. The growing number of cloud computing services offers clients great opportunities to find the best service at the finest price, but it also raises a new challenge: how to select the best service out of a huge pool of providers. Cloud computing employs a variety of computing resources to facilitate the execution of large-scale tasks, so selecting an appropriate node for executing a task can enhance the performance of a large-scale cloud computing environment. It is time-consuming for consumers to collect the necessary information and analyse all service providers before making a decision, and it is also computationally demanding, because the same computations may be repeated by multiple consumers with similar requirements. Load balancing is the process of distributing load among the various nodes of a distributed system to improve both resource utilization and job response time; it ensures that every processor or node in the network does approximately an equal amount of work at any instant of time. CloudAnalyst is a tool that helps developers simulate large-scale cloud applications in order to understand their performance under various deployment configurations. The simulation results presented in this paper are based on the Throttled load balancing policy across VMs in a single data center and are compared with the Round Robin scheduling algorithm to estimate response time and processing time.

Keywords Cloud computing, round robin and throttled algorithms, load balancing.

i. Introduction

Cloud computing is a distributed computing paradigm that focuses on providing a wide range of users with distributed access to scalable, virtualized hardware and/or software infrastructure over the Internet. Despite this technical definition, cloud computing is in essence an economic model: a different way to acquire and manage IT resources. An organization needs to weigh the costs, benefits and risks of cloud computing when deciding whether to adopt it as an IT strategy. The availability of advanced processors and communication technology has led to the use of multiple interconnected hosts instead of a single high-speed processor, which underlies cloud computing. In cloud computing, services can be drawn from diverse and widespread resources rather than from remote servers or local machines alone. There is no single standard definition of cloud computing; generally it consists of a collection of distributed servers, known as masters, providing demanded services and resources to different clients in a network, with the scalability and reliability of a data center.

Cloud computing is revolutionizing the way IT resources are managed and provisioned. It is an on-demand service in which shared resources, information, software and other devices are provided according to the client's requirements at a specific time. As an evolving paradigm with changing definitions, it can be described as a virtual infrastructure that provides shared information and communication technology services via the Internet, i.e. the cloud. It gives a computer user access to Information Technology (IT) services (applications, servers, data storage) without requiring an understanding of the technology or ownership of the infrastructure. Cloud computing is advancing day by day, and cloud service providers are willing to offer services in large-scale cloud environments cost-effectively. Popular large-scale applications such as social networking and e-commerce can use cloud computing to minimize their costs. Cloud computing is modeled to provide a service rather than a product: services such as computation, software, data access and storage are provided to users without their knowledge of the physical location and configuration of the servers that provide them. The cloud works on the principle of virtualization of resources with an on-demand, pay-as-you-go model.



Fig. 1 : View of the Cloud Computing Environment



ii. Load Balancing in Cloud Computing

Load balancing in clouds is a mechanism that distributes the excess dynamic local workload evenly across all the nodes. It is used to achieve a high user satisfaction and resource utilization ratio, making sure that no single node is overwhelmed, hence improving the overall performance of the system. Proper load balancing can help in utilizing the available resources optimally, thereby minimizing the resource consumption. It also helps in implementing fail-over, enabling scalability, avoiding bottlenecks and over-provisioning, reducing response time etc.



Fig. 2 : Load Balancing in Cloud Computing

Load balancing is the process of distributing the load among the various resources of a system. In a cloud-based architecture the load needs to be distributed over the resources so that each resource does approximately the same amount of work at any point of time. The basic need is for techniques that balance requests so that applications are served faster. Cloud vendors offer automatic load balancing services, which allow clients to increase the number of CPUs or the amount of memory allocated to their resources so they scale with increased demand; this service is optional and depends on the client's business needs. Load balancing therefore serves two important needs: primarily to promote the availability of cloud resources, and secondarily to promote performance. In order to balance requests over the resources, it is important to recognize a few major goals of load balancing algorithms:

a) Cost effectiveness: the primary aim is to achieve an overall improvement in system performance at a reasonable cost.

b) Scalability and flexibility: the distributed system in which the algorithm is implemented may change in size or topology. So the algorithm must be scalable and flexible enough to allow such changes to be handled easily.

c) Priority: resources or jobs need to be prioritized beforehand by the algorithm itself, so that important or high-priority jobs receive better service rather than all jobs being served equally regardless of their origin.



iii. Distributed Load Balancing Algorithm For Cloud

A. Round Robin Algorithm

The Round Robin algorithm distributes requests to the available nodes in a fixed circular order, without considering the current load on each node. Because the assignment ignores load, some servers may end up heavily loaded while others remain lightly loaded.
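
As an illustration of the round robin policy, the following minimal Java sketch (the class and method names are our own, not part of CloudAnalyst) hands each incoming request the next virtual machine id in circular order, ignoring how busy each machine currently is.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch (hypothetical names, not the CloudAnalyst classes): a round
// robin allocator hands each incoming request the next VM id in circular order,
// without looking at the current load on any machine.
public class RoundRobinAllocator {
    private final List<Integer> vmIds; // ids of the available virtual machines
    private int next = 0;              // index of the VM that serves the next request

    public RoundRobinAllocator(List<Integer> vmIds) {
        this.vmIds = vmIds;
    }

    // Returns the id of the VM that should serve the next request.
    public synchronized int allocate() {
        int vmId = vmIds.get(next);
        next = (next + 1) % vmIds.size(); // advance the pointer circularly
        return vmId;
    }

    public static void main(String[] args) {
        RoundRobinAllocator rr = new RoundRobinAllocator(Arrays.asList(0, 1, 2));
        // Requests are spread 0, 1, 2, 0, 1, 2, ... regardless of how busy each VM is.
        for (int request = 0; request < 6; request++) {
            System.out.println("request " + request + " -> VM " + rr.allocate());
        }
    }
}
```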



B. Throttled Load Balancing Algorithm

The Throttled algorithm is based entirely on virtual machines. The client first requests the load balancer to find a suitable virtual machine that can take the load and perform the operations requested by the client; the load balancer tracks the availability of each virtual machine and assigns the request only to a machine that can currently accept it.
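
The sketch below illustrates this behaviour, together with the queueing that the CloudAnalyst throttled policy applies (described later in section iv): an availability table is kept per virtual machine, a request goes to the first machine below the throttling threshold, and otherwise it waits in a queue. The class name, the threshold parameter and the request ids are hypothetical and only meant to illustrate the idea.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Illustrative sketch (hypothetical names, not the CloudAnalyst classes): a throttled
// allocator keeps an availability table of virtual machines. A request is assigned to
// the first VM whose count of in-flight requests is below the throttling threshold;
// if every VM is saturated, the request waits in a queue until a VM frees up.
public class ThrottledAllocator {
    private final Map<Integer, Integer> activeRequests = new HashMap<>(); // vmId -> in-flight requests
    private final Queue<Integer> waiting = new ArrayDeque<>();            // queued request ids
    private final int threshold;                                          // max requests per VM

    public ThrottledAllocator(Iterable<Integer> vmIds, int threshold) {
        this.threshold = threshold;
        for (int id : vmIds) {
            activeRequests.put(id, 0);
        }
    }

    // Returns the id of a VM below the threshold, or -1 if the request must wait.
    public synchronized int allocate(int requestId) {
        for (Map.Entry<Integer, Integer> e : activeRequests.entrySet()) {
            if (e.getValue() < threshold) {
                e.setValue(e.getValue() + 1);
                return e.getKey();
            }
        }
        waiting.add(requestId); // all VMs saturated: queue the request
        return -1;
    }

    // Called when a VM finishes a request; if work is queued, the freed slot is
    // immediately reused for the head of the queue (returned to the caller), else null.
    public synchronized Integer release(int vmId) {
        Integer queued = waiting.poll();
        if (queued == null) {
            activeRequests.merge(vmId, -1, Integer::sum);
        }
        return queued;
    }
}
```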



C. VectorDot

Singh et al. proposed a novel load balancing algorithm called VectorDot. It handles the hierarchical complexity of the data center and the multidimensionality of resource loads across servers, network switches, and storage in an agile data center that has integrated server and storage virtualization technologies. VectorDot uses dot products to distinguish nodes based on item requirements and helps in removing overloads on servers, switches and storage nodes.
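
A minimal sketch of the dot-product scoring idea follows; the resource dimensions and numbers are made up for illustration and this is not the authors' implementation. Each node carries a load vector over resource dimensions (here CPU, memory, I/O) and each item a requirement vector; the node whose load vector has the smallest dot product with the item's requirements is the least conflicting placement.

```java
// Illustrative sketch of the dot-product scoring idea behind VectorDot (made-up
// numbers, not the authors' implementation). A lower dot product means the item's
// demand falls mostly on dimensions the node still has free, so the lowest-scoring
// node is preferred for placement.
public final class VectorDotScore {

    // Dot product of a node's fractional load vector and an item's demand vector.
    static double score(double[] nodeLoad, double[] itemDemand) {
        double s = 0.0;
        for (int i = 0; i < nodeLoad.length; i++) {
            s += nodeLoad[i] * itemDemand[i];
        }
        return s;
    }

    public static void main(String[] args) {
        double[] cpuHeavyNode = {0.9, 0.2, 0.3}; // node already loaded on CPU
        double[] idleNode     = {0.2, 0.2, 0.2}; // lightly loaded node
        double[] cpuHeavyItem = {0.5, 0.1, 0.1}; // CPU-heavy workload to place
        // The lightly loaded node gets the lower score and wins the placement.
        System.out.println("busy node: " + score(cpuHeavyNode, cpuHeavyItem));
        System.out.println("idle node: " + score(idleNode, cpuHeavyItem));
    }
}
```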



D. Compare and Balance

Y. Zhao et al. addressed the problem of intra-cloud load balancing amongst physical hosts by adaptive live migration of virtual machines. A load balancing model is designed and implemented to reduce virtual machines' migration time through shared storage, to balance load amongst servers according to their processor or I/O usage, and to keep virtual machines at zero downtime during the process. A distributed load balancing algorithm, COMPARE AND BALANCE, is also proposed; it is based on sampling and reaches equilibrium very fast. The algorithm ensures that the migration of VMs is always from high-cost physical hosts to low-cost hosts, but it assumes that each physical host has enough memory, which is a weak assumption.
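
The following sketch captures the sampling idea in simplified form (the cost model and the probability rule are our assumptions, not the exact formulation of Y. Zhao et al.): a host compares its own cost with that of a randomly sampled peer and migrates a virtual machine only when its cost is higher, with a probability that grows with the imbalance, so migrations always flow from high-cost to low-cost hosts.

```java
import java.util.Random;

// Illustrative sketch of the compare-and-balance idea (simplifying assumptions, not
// the paper's exact formulation): a host samples one random peer and, only if its
// own cost is higher, migrates a VM to that peer with a probability proportional
// to the load imbalance.
public class CompareAndBalance {
    private final Random rng = new Random();

    // Returns true if the current host should migrate one VM to the sampled peer.
    public boolean shouldMigrate(double myCost, double peerCost) {
        if (myCost <= peerCost) {
            return false;                        // migration always flows high -> low cost
        }
        double p = (myCost - peerCost) / myCost; // larger imbalance -> higher probability
        return rng.nextDouble() < p;
    }
}
```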



E. Event-driven

V. Nae et al. presented an event-driven load balancing algorithm for real-time Massively Multiplayer Online Games (MMOG). After receiving capacity events as input, the algorithm analyzes their components in the context of the resources and the global state of the game session, thereby generating the game-session load balancing actions. It is capable of scaling a game session up and down on multiple resources according to the variable user load, but suffers occasional QoS breaches.



Metrics for Load Balancing In Clouds

Various metrics considered in existing load balancing techniques for cloud computing are discussed below; a short sketch of how two of them can be computed from simulation output follows the list.



Throughput is used to calculate the number of tasks whose execution has been completed. It should be high to improve the performance of the system.

Overhead Associated determines the amount of overhead involved while implementing a load-balancing algorithm. It is composed of overhead due to movement of tasks, inter-processor and inter-process communication. This should be minimized so that a load balancing technique can work efficiently.

Fault Tolerance is the ability of an algorithm to perform uniform load balancing in spite of arbitrary node or link failure. The load balancing should be a good fault-tolerant technique.

Migration time is the time to migrate the jobs or resources from one node to other. It should be minimized in order to enhance the performance of the system.

Response Time is the amount of time taken to respond by a particular load balancing algorithm in a distributed system. This parameter should be minimized.

Resource Utilization is used to check the utilization of resources. It should be optimized for an efficient load balancing.

Performance is used to check the efficiency of the system. This has to be improved at a reasonable cost, e.g., reduce task response time while keeping acceptable delays.
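
As a small illustration of how two of these metrics can be obtained from simulation output, the sketch below computes throughput and average response time from a list of completed-task records; the record and field names are hypothetical and do not correspond to CloudAnalyst classes.

```java
import java.util.List;

// Illustrative sketch (hypothetical record and class names, not CloudAnalyst code):
// computing throughput and average response time from completed task records.
public class LoadBalancingMetrics {

    // A completed task: when its request was submitted and when its response finished.
    public record TaskRecord(double submitTime, double finishTime) {}

    // Throughput = number of completed tasks per unit of simulated time.
    public static double throughput(List<TaskRecord> tasks, double simulationDuration) {
        return tasks.size() / simulationDuration;
    }

    // Average response time = mean of (finish - submit) over all completed tasks.
    public static double averageResponseTime(List<TaskRecord> tasks) {
        if (tasks.isEmpty()) {
            return 0.0;
        }
        double total = 0.0;
        for (TaskRecord t : tasks) {
            total += t.finishTime() - t.submitTime();
        }
        return total / tasks.size();
    }
}
```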

iv. Introduction To Cloud Analyst

CloudAnalyst is composed of a number of components; the main ones are described below.

GUI Package - This component is responsible for the graphical user interface and acts as the front-end controller for the application, managing screen transitions and other UI activities.

Simulation - This component is responsible for holding the simulation parameters, creating and executing the simulation.

UserBase - This component models a user base and generates traffic representing the users.

DataCenterController - This component controls the data center activities.

Internet - This component models the Internet and implements the traffic routing behavior.

InternetCharacteristics - This component maintains the characteristics of the Internet during the simulation, including the latencies and available bandwidths between regions, the current traffic levels, and current performance level information for the data centers.

VmLoadBalancer - This component models the load balancing policy used by data centers when serving allocation requests. The default policy uses a round robin algorithm, which allocates all incoming requests to the available virtual machines in round robin fashion without considering the current load on each virtual machine. Alternatively, a throttled load balancing policy can be applied, which limits the number of requests being processed in each virtual machine to a throttling threshold. If incoming requests would exceed this threshold on all available virtual machines, the requests are queued until a virtual machine becomes available.

CloudAppServiceBroker - This component models the service brokers that handle traffic routing between user bases and data centers. The default traffic routing policy routes traffic to the data center that is closest to the source user base in terms of network latency.
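
The following sketch illustrates the default closest-data-center broker policy described above (the class and map names are hypothetical, not the CloudAnalyst implementation): given a latency table per region, the broker simply routes a user base's traffic to the data center with the lowest latency from that region.

```java
import java.util.Map;

// Illustrative sketch (hypothetical names, not the CloudAnalyst classes): the default
// service broker policy routes a user base's traffic to the data center with the
// lowest network latency from the user base's region.
public class ClosestDataCenterBroker {
    // latency.get(region).get(dataCenterId) = network latency in milliseconds
    private final Map<Integer, Map<String, Double>> latency;

    public ClosestDataCenterBroker(Map<Integer, Map<String, Double>> latency) {
        this.latency = latency;
    }

    // Picks the data center with the smallest latency from the given user base region.
    public String route(int userBaseRegion) {
        String best = null;
        double bestLatency = Double.MAX_VALUE;
        for (Map.Entry<String, Double> e : latency.get(userBaseRegion).entrySet()) {
            if (e.getValue() < bestLatency) {
                bestLatency = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }
}
```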

v. Performance Analysis

We used the CloudAnalyst tool to evaluate the Round Robin and Throttled algorithms for the closest-data-center case, using user bases (1-3) in different regions and data centers (1-4) with different virtual machine monitors.


A. User Base

The design model uses a user base to represent a single user, but ideally a user base should represent a large number of users for the efficiency of the simulation.



Fig. 3 : Simulation Configuration



B. Datacenter

The Datacenter component manages data management activities, such as virtual machine creation and destruction, and routes the user requests received from user bases via the Internet to the virtual machines.

After performing the simulation, the results computed by CloudAnalyst are shown in the figures below. The configuration defined above was used for each load balancing policy in turn.



vi. Result





(Response Time and Processing Cost Using Round Robin)







(Response Time and Processing Cost Using Throttled)



Fig. 4 : Comparison of Load Balancing Algorithms

Comparing the tables and graphs, the overall response time and data center processing time are improved with the Throttled policy. It is also seen that the virtual machine cost and data transfer time of the Round Robin algorithm are much better than those of the Throttled algorithm.

vii. Conclusions

Reducing response time and data transfer cost is a challenge for every cloud engineer who wants to build products that increase business performance in sectors adopting the cloud. Several strategies lack efficient scheduling and load balancing resource allocation techniques, leading to increased operational cost and customer dissatisfaction. This paper tries to give a bird's eye view of enhanced strategies based on improved job scheduling and load balancing resource allocation techniques.



viii. References

[1] Anthony T. Velte, Toby J. Velte, Robert Elsenpeter, Cloud Computing: A Practical Approach, Tata McGraw-Hill Edition, 2010.

[2] Bhathiya Wickremasinghe, CloudAnalyst: A CloudSim-based Visual Modeller for Analysing Cloud Computing Environments and Applications.

[3] Chhabra, G. Singh, Qualitative Parametric Comparison of Load Balancing Algorithms in Distributed Computing Environment, 14th International Conference on Advanced Computing and Communication, IEEE, July 2006, pp. 58-61.

[4] Fang Y., Wang F. and Ge J. (2010) Lecture Notes in Computer Science, 6318, 271-277.

[5] Hu J., Gu J., Sun G. and Zhao T. (2010) 3rd International Symposium on Parallel Architectures, Algorithms and Programming, 89-96.

[6] Mehta H., Kanungo P. and Chandwani M. (2011) International Conference Workshop on Emerging Trends in Technology, 370-375.

[7] Nae V., Prodan R. and Fahringer T. (2010) 11th IEEE/ACM International Conference on Grid Computing (Grid), 9-17.

[8] Ram Prasad Padhy (107CS046), P. Goutam Prasad Rao (107CS039), Load Balancing in Cloud Computing Systems, Department of Computer Science and Engineering, National Institute of Technology, Rourkela, Rourkela-769008, Orissa, India, May 2011.

[9] Randles M., Lamb D. and Taleb-Bendiab A. (2010) 24th International Conference on Advanced Information Networking and Applications Workshops, 551-556.

[10] T. R. Gopalakrishnan Nair, Vaidehi M., Suma V., Improved Strategies for Enhanced Business Performance in Cloud based IT Industries, Research & Industry Incubation Centre, DSI, Bangalore, India.

[11] Wenhong Tian, Yong Zhao, Yuanliang Zhong, Minxian Xu, Chen Jing (2011), A Dynamic and Integrated Load Balancing Scheduling Algorithm for Cloud Datacenters, University of Electronic Science and Technology.

[12] Zhao Y. and Huang W. (2009) 5th International Joint Conference on INC, IMS and IDC, 170-175.







