Dynamic Algorithm for Load Balancing in a Hierarchical Cloud Computing Network

Pinki 1, Nida 2

1, 2 M.Tech (CSE), School of Computing Science and Engineering, Galgotias University, Greater Noida, India

Email: {1 pinkilko.2911@gmail.com, 2 nidamtech@gmail.com}




Abstract— Local cloud deployments are now widely used, yet organizations still pay for the power consumed by resources that sit idle. Reducing power utilization has become a necessary requirement for cloud environments, not only to cut operating cost but also to improve system reliability. Energy-aware computing aims not just to make algorithms run as fast as feasible, but also to minimize the energy required for computation. This paper discusses the existing OLB scheduling technique for load balancing in cloud computing and develops a simulated dynamic load balancing algorithm to enhance cloud performance and efficiency.
The foundation of this study is to attain load balancing with the LBMM scheduling algorithm, which minimizes the execution time of every job on its node and thereby minimizes the overall completion time. When load balancing is applied in a three-level cloud computing network, results computed at the lower level can be integrated at the second-level nodes before being returned to the manager. In this way the objectives of load balance and improved resource management can be attained. In the general case, however, a cloud computing network is not static but dynamic.
We treat this as an optimization problem: an efficient load balancer should adapt its strategy to the changing environment and to the types of jobs. This paper proposes a new load balancing approach based on a Genetic Algorithm (GA). The algorithm balances the load across the cloud infrastructure while trying to minimize the makespan of a given set of jobs. The proposed load balancing approach has been simulated using the CloudSim simulator. Furthermore, the proposed method is extended to operate when the nodes form a three-level hierarchical cloud computing network.
Index Terms— Load Balancing, Cloud Computing, LBMM scheduling algorithm, Genetic Algorithm, Dynamic load balancing, Hierarchical Cloud Network


  1. INTRODUCTION

Load balancing is one of the main issues in cloud computing: it distributes the dynamic workload across the nodes to ensure that no single resource is either overwhelmed or underutilized. Cloud computing can be viewed as a new paradigm for dynamically provisioning computing services, supported by data centers that generally use virtual machine (VM) technology for consolidation [4]. It delivers infrastructure, platform, and software as subscription-based services made accessible to customers on demand. Clouds use virtualization technology in distributed data centers to allocate resources to clients as they require them. In general, clouds offer clients three levels of access: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). The jobs submitted can vary significantly from one client to another.


Load balancing is a method that distributes the dynamic local workload evenly across all the nodes in the cloud to avoid situations where some nodes are heavily loaded while others are idle or doing little work. It helps to attain high user satisfaction and a high resource utilization ratio, thereby improving the overall performance and resource efficiency of the system.
Load balancing methods can be broadly categorized as centralized or decentralized, dynamic or static, and periodic or non-periodic. There has been some research on load balancing techniques in the cloud computing environment [6]. One common heuristic is Minimum Execution Time (MET), which assigns each job, in arbitrary order, to the node on which it is expected to execute fastest, regardless of the existing load on that node. Established scheduling techniques such as Round Robin, Min-Min, and FCFS have also been used for load balancing. An intelligent technique for load balancing has been proposed by Yang Xu et al. [5]; it introduces a new model that balances data allocation to improve cloud computing performance in data-intensive applications such as decentralized data mining.
In this paper we use the Genetic Algorithm (GA), a soft computing method based on the mechanism of natural selection. The CloudSim simulation toolkit has been used for simulation and analysis of the algorithm. The performance of the algorithm is compared with our baseline load balancing technique, the OLB scheduling approach [7].


  2. RELATED WORKS

Numerous heuristic algorithms have been developed to achieve efficient task scheduling in distributed environments. A brief summary of the related work is given below.


Min-Min, Max-Min, and Sufferage [9] are three key techniques that have been employed for scheduling workflow jobs in vGrADS [10] and Pegasus. The techniques are based on performance estimates for job execution and I/O data communication. In each iteration, the scheduler computes the estimated completion time (ECT) of every job on each of its accessible resources, obtains the minimum estimated completion time (MCT) for each job, and selects the job with the smallest MCT over all jobs to be scheduled in that iteration. In Max-Min, by contrast, the job with the largest minimum completion time is scheduled first.
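As an illustration of the selection loop just described, the following minimal Python sketch implements Min-Min over an assumed exec_time[job][resource] estimate table; the names and data are illustrative and not taken from vGrADS or Pegasus.

```python
# Min-Min scheduling sketch: repeatedly pick the job whose minimum
# estimated completion time (MCT) across all resources is smallest,
# and assign it to that resource. exec_time[j][r] is an assumed
# estimate of job j's execution time on resource r.

def min_min(exec_time):
    n_jobs = len(exec_time)
    n_res = len(exec_time[0])
    ready = [0.0] * n_res             # time at which each resource becomes free
    unscheduled = set(range(n_jobs))
    schedule = {}                     # job -> resource

    while unscheduled:
        best = None                   # (completion_time, job, resource)
        for j in unscheduled:
            # MCT of job j = earliest completion time over all resources
            ct, r = min((ready[r] + exec_time[j][r], r) for r in range(n_res))
            if best is None or ct < best[0]:
                best = (ct, j, r)
        ct, j, r = best
        schedule[j] = r
        ready[r] = ct                 # resource r is busy until job j finishes
        unscheduled.remove(j)
    return schedule

# Example: 3 jobs on 2 resources
print(min_min([[4, 6], [3, 5], [8, 2]]))
```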
F. Dong et al. have proposed a QoS priority grouping scheduling [11] that considers the deadline and acceptance rate of jobs and gives comparatively better results. E. Ullah Munir et al. [12] consider network bandwidth and schedule tasks based on their bandwidth requirement, as the QoS guided Min-min algorithm does. Compared with Max-min, QoS guided Min-min, Min-min, and QoS Sufferage, the QoS priority grouping algorithm obtains a smaller makespan. Amit Agarwal and Padam Kumar [13] attempt to minimize the number of task replications without affecting the overall makespan of the meta-task submitted to the grid. They propose two workflow scheduling algorithms, Reduced Duplication (RD) for homogeneous systems and Heterogeneous Economical Duplication (HED) for heterogeneous systems, and improve overall processor utilization by removing duplicated tasks whose elimination does not hurt the makespan; this creates scheduling holes in the system, which can in turn be used to schedule other distributed applications in the grid.
Braun et al. compared the performance of batch-queuing techniques, tabu search, genetic algorithms, and simulated annealing for minimizing the makespan [14]; the genetic algorithm achieved the best results among the compared techniques. Hongbo Liu et al. proposed a fuzzy particle swarm optimization (PSO) algorithm for scheduling jobs on a computational grid with the minimization of makespan as the main criterion [15]. They empirically showed that their method outperforms the genetic algorithm and simulated annealing approaches.


  3. LOAD BALANCING IN THE CLOUD

Load balancing is the process of redistributing the total load among the individual nodes of the collective system to make resource utilization efficient and to improve the response time of jobs, while simultaneously removing conditions in which some nodes are overloaded while others are underloaded. Server load balancing addresses several requirements that are becoming increasingly significant in networks:



  • Increased scalability

  • High performance

  • High availability and failure recovery

First, requests or jobs arriving from users are stored in a job pool in the inner middleware; these jobs are then partitioned, and replicas of the partitioned jobs are placed in the local middleware of each node. Adding or removing a node therefore does not affect the rest of the system, and the replication strategy for the partitioned jobs provides fault tolerance through the internal communication among nodes: if any node fails, the system as a whole is not affected. The job queue in each middleware updates the job status when a task is assigned and again when its execution is completed.



Figure 1: Load balancing in the cloud


  4. EXISTING LOAD BALANCING TECHNIQUES IN CLOUD COMPUTING

The following load balancing techniques, which we use in our research, are currently prevalent in clouds.



In this study, the Load Balance Min-Min (LBMM) scheduling algorithm is adopted, taking the Min-Min scheduling algorithm as its foundation. Min-Min is designed to reduce the completion time of all jobs. Its major limitation, however, is that it considers only the completion time of each task on a node and not the current workload of each node. Consequently, some nodes may be continually loaded with tasks while others remain idle. The LBMM algorithm improves the load imbalance of Min-Min and effectively decreases the execution time on each node.
Additionally, a multi-level hierarchical network topology can reduce data storage cost [1]; however, adding more levels increases the cost of network management. Thus, a three-level hierarchical structure is used in our research. The third level consists of the service nodes that execute subtasks. The second level consists of the service managers that partition a task into several independent subtasks. The first-level request manager allocates each task to an appropriate service manager.



Figure 2: Three-level framework
In order to attain load balance and decrease the execution time of each node in the three-level cloud computing network, the OLB and LBMM scheduling algorithms are integrated in our study. The operation of the LBMM scheduling algorithm is shown in Figure 2. The idea of LBMM is that each service manager splits its assigned task into subtasks to be executed on appropriate service nodes. LBMM considers the execution time of each subtask on each service node: an agent estimates, for every subtask, its execution time on the different service nodes (N1, N2, ...). Based on the information gathered by the agent, each service manager chooses the service node with the shortest execution time for each subtask and records it in the Min-time array.
In the end, the Min-time array records, for each subtask, the minimum execution time attainable on a specific service node. The service manager then chooses service nodes from the Min-time array: if node y offers the shortest time for subtask a, subtask a is dispatched to service node y. Because the subtask has been assigned to node y for execution, it is removed from the subtask queue. Meanwhile, the Min-time array is rearranged and service node y is repositioned in it [2].
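The following simplified Python sketch illustrates this Min-time selection and dispatch step, assuming the agent has already reported an exec_time[subtask][node] table; all identifiers are illustrative, and the tie-breaking and node-availability bookkeeping are our own assumptions.

```python
# LBMM-style dispatch sketch: for every queued subtask, record the service
# node with the shortest reported execution time (the "Min-time array"),
# then dispatch subtasks one by one and update node availability.
# exec_time[s][n] is the agent-reported execution time of subtask s on node n.

def lbmm_dispatch(exec_time):
    n_sub = len(exec_time)
    n_nodes = len(exec_time[0])
    node_ready = [0.0] * n_nodes           # when each service node becomes free
    assignment = {}                        # subtask -> service node
    queue = list(range(n_sub))             # subtask queue

    while queue:
        # Min-time array: (completion time, best node) for each queued subtask
        min_time = {s: min((node_ready[n] + exec_time[s][n], n)
                           for n in range(n_nodes))
                    for s in queue}
        # pick the subtask whose best completion time is smallest
        s = min(queue, key=lambda s: min_time[s][0])
        finish, node = min_time[s]
        assignment[s] = node
        node_ready[node] = finish          # node is occupied until the subtask ends
        queue.remove(s)                    # subtask leaves the queue; the Min-time
                                           # array is rebuilt on the next pass
    return assignment

print(lbmm_dispatch([[5, 9, 4], [2, 3, 8], [6, 1, 7]]))
```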
In this research work, a scheduling algorithm that combines OLB and LBMM in a three-level cloud computing network is chosen as the basis, in order to overcome the shortcomings of the existing techniques. According to the attributes of the proposed integrated scheduling algorithm, the load balance and the execution time of each node are the main parameters measured. A cloud computing system contains numerous heterogeneous nodes, and each node has a different capability to execute tasks; therefore, considering only the remaining CPU capacity of a node is not adequate when choosing a node to perform a task. How to choose a capable node to execute a task is consequently very important in cloud computing.
Different tasks submitted by users may have different characteristics and therefore demand particular resources; for instance, executing genome sequence assembly may place a large constraint on the remaining memory [3]. In order to execute each task as efficiently as possible, the expected properties of the task are turned into decision variables, which are set according to the resource constraints of the task.
In this scheduling algorithm, an agent collects the relevant information of each node participating in the cloud computing system, such as remaining CPU capability, remaining memory, and transmission rate. After these data are collected, they are supplied to the manager to support it in maintaining the load balance of the system. The factors are defined as follows:

V1 = the remaining CPU capability;

V2 = the remaining memory; and

V3 = the transmission rate.


To help the manager choose suitable nodes efficiently, all of the nodes in the system, both service managers and service nodes, are evaluated against thresholds derived from the resources required to accomplish the task. A service manager that passes the "threshold of service manager" is considered efficient and becomes a candidate node for the manager. A service node that exceeds the "threshold of service node" is considered efficient and becomes a candidate node for the service manager.

  4.1. Threshold of service manager

The cloud computing environment is composed of heterogeneous nodes, and the resources of each node may differ significantly: the computing capability provided by the CPU, the available memory size, and the transmission rate are all diverse. Moreover, because cloud computing consumes the resources of each node, the available resources of a node may also change, for example in an emergency condition. From the perspective of task completion time, the available CPU capacity, the available memory size, and the transmission rate are the three decisive attributes for the duration of execution. Thus, in this technique, these three attributes are taken as the thresholds for evaluating a service manager.


An example of the thresholds is given below [3] (a small check against these values is sketched after the list):

  1. Remaining CPU capability ≥ 490 MB/s

  2. Remaining memory ≥ 203698 KB

  3. Transmission rate ≥ 7.09 MB/s
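The following small Python check is a sketch, not part of the original algorithm, showing how the three factors V1 to V3 reported by the agent could be compared against these thresholds; the node records, field names, and example values are assumptions.

```python
# Threshold check sketch: a node qualifies as a candidate only if its
# remaining CPU capability (V1), remaining memory (V2) and transmission
# rate (V3) all meet or exceed the thresholds derived from the task's demand.

THRESHOLD = {"v1_cpu": 490.0,        # remaining CPU capability, MB/s
             "v2_memory": 203698.0,  # remaining memory, KB
             "v3_rate": 7.09}        # transmission rate, MB/s

def passes_threshold(node, threshold=THRESHOLD):
    return (node["v1_cpu"] >= threshold["v1_cpu"] and
            node["v2_memory"] >= threshold["v2_memory"] and
            node["v3_rate"] >= threshold["v3_rate"])

# Example node reports gathered by the agent (values are made up)
nodes = [
    {"name": "N1", "v1_cpu": 512.0, "v2_memory": 250000.0, "v3_rate": 8.0},
    {"name": "N2", "v1_cpu": 400.0, "v2_memory": 300000.0, "v3_rate": 9.5},
]
candidates = [n["name"] for n in nodes if passes_threshold(n)]
print(candidates)   # only N1 meets all three thresholds
```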




  4.2. Threshold of service node

When a service manager crosses the "threshold of service manager" for the constraints of a task, the service nodes managed by that service manager become qualified to execute the task. However, in a cloud computing environment the composition of nodes is dynamic; any node may enter an emergency state at any time, which increases the execution time of the task and lowers performance. Thus, the "threshold of service node" is used to decide which service node is better. The evaluation is divided into four steps, as follows (a sketch of this filter is given after the steps):

Step 1: Calculate the average execution time of each subtask.

Step 2: Compare the required execution time of each subtask with the calculated average execution time. If the required execution time of a subtask is equal to or less than the average, the subtask is executed normally.

Step 3: If the required execution time of a subtask is greater than the average execution time, the execution time is set to ∞ (the execution time is too long to be considered). The nodes that have finished executing re-enter the system to participate in the execution of the remaining subtasks.

Step 4: Repeat Steps 1 to 3 until all subtasks have been completely executed.
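A minimal Python sketch of this four-step filter is given below, assuming an exec_time[subtask][node] table of estimates; it only marks over-average times as infinite and leaves the actual dispatch to the logic sketched earlier.

```python
# Sketch of the four-step "threshold of service node" filter: times above
# the per-subtask average are treated as infinite so that slow nodes are
# excluded from the current round. exec_time[s][n] is assumed to hold the
# estimated execution time of subtask s on service node n.
import math

def filter_by_average(exec_time):
    filtered = []
    for times in exec_time:                        # Step 1: per-subtask average
        avg = sum(times) / len(times)
        row = []
        for t in times:
            if t <= avg:                           # Step 2: within the average,
                row.append(t)                      # the node executes normally
            else:                                  # Step 3: too slow this round,
                row.append(math.inf)               # mark the time as infinite
        filtered.append(row)
    return filtered                                # Step 4: repeat each scheduling round

print(filter_by_average([[4.0, 10.0, 5.0], [3.0, 3.5, 9.0]]))
```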


In this base study, tasks can be distributed and executed rapidly by this scheduling algorithm, and the effective service nodes can be selected by the thresholds in a three-level cloud computing network. The existing two-phase scheduling algorithm combines OLB and LBMM to support the selection of efficient service nodes. It starts from a queue that stores the tasks to be handled by the manager (N0); the OLB scheduling algorithm, constrained by the "threshold of service manager", is then used to allocate tasks to the service managers in the second layer (N1, N2, N3, N4, N5).
However, each task to be carried out has different attributes, so the criteria for node selection also differ. An agent is used to collect the relevant information about each node. According to the requirements of each task, the threshold value for each node is evaluated and a service node is assigned. In addition, to avoid the execution time of some subtasks becoming too long and affecting system performance, the "threshold of service node" is used to select the appropriate service node to execute each subtask. A sketch of the first phase of this dispatch is given below.
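The following Python sketch illustrates only the first, OLB phase under simplifying assumptions: tasks are handed to the threshold-qualified service managers in a simple opportunistic (round-robin) order regardless of their expected finish time, and the second phase (LBMM with node thresholds) is assumed to run inside each manager as sketched earlier. All names and data are illustrative.

```python
# Two-phase dispatch sketch, phase one: OLB-style assignment hands each task
# from the request manager's queue to the next qualified service manager,
# without considering the expected completion time on that manager.
from collections import deque
from itertools import cycle

def olb_phase(task_queue, qualified_managers):
    """Assign queued tasks to service managers that passed the threshold."""
    assignment = {}                          # task -> service manager
    managers = cycle(qualified_managers)     # opportunistic, load-agnostic order
    queue = deque(task_queue)
    while queue:
        task = queue.popleft()
        assignment[task] = next(managers)
    return assignment

# Example: tasks T1..T4 dispatched to managers N1, N3, N4 that passed the check
print(olb_phase(["T1", "T2", "T3", "T4"], ["N1", "N3", "N4"]))
```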


  5. PROPOSED METHOD FOR LOAD BALANCING

The basic scheduling technique described above works under idle, static working conditions of the cloud, whereas the cloud computing environment is dynamic. At any particular instant the load balancing problem can be stated as assigning N jobs created by cloud users to M processing units in the cloud. Each processing unit has a processing unit vector (PUV) representing the current status of its consumption. This vector comprises MIPS, the number of millions of instructions the unit can execute per second; α, the cost of executing an instruction; and the delay cost L. The delay cost is an estimate of the penalty that the cloud service provider must pay the user if the job finishes later than the deadline promised by the service provider.



PUV = f (MIPS, α, L) (1)
In the same way, each job created by a cloud user can be represented by a job unit vector (JUV); the attributes of a job are given by equation (2).

JUV = f (t, NIC, AT, wc) (2)
where t represents the type of service required by the job: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), or Software-as-a-Service (SaaS); NIC is the number of instructions in the job, as counted by the processor; the arrival time (AT) specifies the wall-clock time at which the job enters the system; and the worst-case completion time (wc) is the time required to complete the job on a processing unit in the worst case.
The cloud service provider needs to allocate these N jobs among the M processing units such that the cost function ζ in equation (3) is minimized.

ζ = w1 ∗ α ∗ (NIC / MIPS) + w2 ∗ L (3)


where w1 and w2 are predefined weights. It is difficult to decide or optimize these weights; one convention is that the more important a factor is, the larger its weight, reflecting the priority the user gives to one factor over the other. Here a simulation-based approach has been used, and the simulation is performed on a given set of weights, chosen as w1 = 0.8 and w2 = 0.2 so that they sum to 1. The load balancing problem is therefore composite and can be regarded as a computationally intractable problem.
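The following Python sketch evaluates the cost function of equation (3) with the weights fixed at w1 = 0.8 and w2 = 0.2; the data classes mirror the PUV and JUV of equations (1) and (2), and all concrete field names and example values are illustrative assumptions.

```python
# Cost function sketch for equation (3): zeta = w1 * alpha * (NIC / MIPS) + w2 * L,
# with the weights fixed at w1 = 0.8 and w2 = 0.2 as in the simulation.
from dataclasses import dataclass

W1, W2 = 0.8, 0.2   # predefined weights, summing to 1

@dataclass
class ProcessingUnit:       # PUV = f(MIPS, alpha, L), equation (1)
    mips: float             # millions of instructions executed per second
    alpha: float            # cost per instruction executed
    delay_cost: float       # penalty L for finishing after the deadline

@dataclass
class Job:                  # JUV = f(t, NIC, AT, wc), equation (2)
    service_type: str       # t: IaaS, PaaS or SaaS
    nic: float              # NIC: number of instructions in the job
    arrival_time: float     # AT: wall-clock arrival time
    worst_case: float       # wc: worst-case completion time

def cost(job: Job, pu: ProcessingUnit) -> float:
    """Cost of running `job` on `pu` according to equation (3)."""
    return W1 * pu.alpha * (job.nic / pu.mips) + W2 * pu.delay_cost

pu = ProcessingUnit(mips=4000.0, alpha=0.002, delay_cost=1.5)
job = Job("IaaS", nic=2.0e6, arrival_time=0.0, worst_case=10.0)
print(cost(job, pu))   # 0.8 * 0.002 * 500 + 0.2 * 1.5 = 1.1
```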
Such a problem cannot be solved by linear programming, and it is difficult to find the globally optimal solution using deterministic polynomial-time algorithms or rules. GAs are regarded as one of the most widely used artificial intelligence techniques, applied principally to efficient search and optimization. A GA is a stochastic search algorithm based on the mechanisms of natural selection and genetics. GAs have proven to be very effective and stable in finding globally optimal solutions, particularly in complex and/or huge search spaces. In this paper, a GA is proposed as a load balancing technique for cloud computing that finds globally optimal processors for the jobs in a cloud. Job arrival is assumed to be sequential, and rescheduling of jobs is not considered, since the obtained solution is globally optimal in nature.
A simple GA is composed of three operations:

  • Selection,

  • Genetic operation, and

  • Replacement.

The advantages of this method are that:

  • It can handle a vast search space,

  • It is suitable for complex objective functions, and

  • It can avoid being trapped in locally optimal solutions.

The working principle of the GA used for load balancing in cloud computing is described in the following steps.


  1. Initial population generation: The GA works on a fixed-length bit-string representation of individual solutions, so all possible solutions in the solution space are encoded as binary strings. From these, an initial population of ten (10) chromosomes is chosen at random.

  2. Crossover: The purpose of this step is to select, most of the time, the fittest pair of individuals for crossover. The fitness value of each individual chromosome is calculated using the fitness function. This pool of chromosomes then undergoes a random single-point crossover: depending on the crossover point, the portion lying on one side of the crossover site is exchanged with that of the other individual, generating a new pair of individuals.

  3. Mutation: A very small value (0.05) is chosen as the mutation probability. With this probability, bits of the chromosomes are toggled from 1 to 0 or 0 to 1. The result is a new mating pool ready for the next crossover. A compact sketch of this GA loop is given below.
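The Python sketch below illustrates the GA loop under stated assumptions: the population size of 10 and mutation probability of 0.05 follow the text, while the processor-index chromosome encoding (instead of a raw bit string), the fitness definition as the inverse of the summed per-job cost from equation (3), the elitist selection from the fitter half, and the fixed generation count are illustrative simplifications rather than the exact implementation.

```python
# GA sketch for assigning N jobs to M processing units. A chromosome is a
# list of length N whose i-th gene is the index of the processor that runs
# job i. cost_matrix[i][p] is the cost (equation 3) of job i on processor p.
import random

POP_SIZE, P_MUT, GENERATIONS = 10, 0.05, 100

def fitness(chromosome, cost_matrix):
    # total cost of the assignment encoded by the chromosome
    total = sum(cost_matrix[i][p] for i, p in enumerate(chromosome))
    return 1.0 / (1.0 + total)      # lower cost -> higher fitness

def ga_balance(cost_matrix, n_processors):
    n_jobs = len(cost_matrix)
    pop = [[random.randrange(n_processors) for _ in range(n_jobs)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=lambda c: fitness(c, cost_matrix), reverse=True)
        next_pop = pop[:2]                           # elitism: keep the two fittest
        while len(next_pop) < POP_SIZE:
            a, b = random.sample(pop[:POP_SIZE // 2], 2)   # select from fitter half
            point = random.randrange(1, n_jobs) if n_jobs > 1 else 1
            child = a[:point] + b[point:]                  # single-point crossover
            child = [random.randrange(n_processors) if random.random() < P_MUT else g
                     for g in child]                       # mutation with p = 0.05
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=lambda c: fitness(c, cost_matrix))

# Example: 2 jobs, 3 processors, costs per (job, processor) pair
print(ga_balance([[1.1, 0.9, 2.0], [0.7, 1.4, 0.6]], n_processors=3))
```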




  6. CONCLUSIONS

In this paper, we proposed a novel scheduling algorithm for dynamic load balancing in a three-level cloud computing architecture. As the basis of our research we adopted the combination of two scheduling techniques, OLB and LBMM; the existing two-phase scheduling algorithm combines OLB and LBMM to support the selection of efficient service nodes. That technique, however, applies to idle, static conditions of the cloud, whereas real-world conditions are not static but dynamic. We therefore simulated, as an enhancement, a Genetic Algorithm that handles the dynamic conditions of cloud computing. The GA is proposed as a load balancing technique for cloud computing that finds globally optimal processors for the jobs in a cloud. Job arrival is assumed to be sequential, and rescheduling of jobs is not considered, since the obtained solution is globally optimal in nature.




  7. FUTURE WORK

The enhanced algorithm was developed to improve cloud performance. It has been assumed that all jobs have the same priority, which may not be the case in practice; differing priorities can be accommodated in the JUV and subsequently handled in the fitness function. Also, a very simple form of GA has been used; variations of the crossover and selection strategies could be applied as future work to obtain more efficient and better-tuned results.




  8. REFERENCES




  1. L. Garces-Erice, et al., "Hierarchical Peer-to-Peer Systems," in Proceedings of the ACM/IFIP International Conference on Parallel and Distributed Computing (Euro-Par), 2003.

  2. F. Halsall, Data Links, Computer Networks and Open Systems, 4th ed., Addison-Wesley, pp. 112-125, 1995.

  3. K. Q. Yan, et al., "A Hybrid Load Balancing Policy Underlying Grid Computing Environment," Computer Standards & Interfaces, Vol. 29, Issue 2, pp. 161-173, 2007.

  4. T. R. Armstrong, D. Hensgen, "The relative performance of various mapping algorithms is independent of sizable variances in runtime predictions," in Proc. of the 7th IEEE Heterogeneous Computing Workshop (HCW '98), pp. 79-87, 1998.

  5. Yang Xu, Lei Wu, Liying Guo, Zheng Chen, Lai Yang, Zhongzhi Shi, "An Intelligent Load Balancing Algorithm Towards Efficient Cloud Computing," in Proc. of AI for Data Center Management and Cloud Computing: Papers from the 2011 AAAI Workshop (WS-11-08), pp. 27-32, 2011.

  6. Ratan Mishra and Anant Jaiswal, "Ant Colony Optimization: A Solution of Load Balancing in Cloud," International Journal of Web & Semantic Technology (IJWesT), Vol. 3, No. 2, pp. 33-50, 2012.

  7. Brototi Mondal, Kousik Dasgupta and Paramartha Dutta, "Load Balancing in Cloud Computing using Stochastic Hill Climbing - A Soft Computing Approach," in Proc. of C3IT-2012, Procedia Technology, Vol. 4, Elsevier, pp. 783-789, 2012.

  8. R. Yamini, "Power Management in Cloud Computing Using Green Algorithm," IEEE International Conference on Advances in Engineering, Science and Management (ICAESM-2012), March 30-31, 2012, pp. 128-133.

  9. Anton Beloglazov and Rajkumar Buyya, "Adaptive Threshold-Based Approach for Energy-Efficient Consolidation of Virtual Machines in Cloud Data Centers," in Proceedings of the 8th International Workshop on Middleware for Grids, Clouds and e-Science (MGC 2010), ACM Press, New York, USA, in conjunction with the ACM/IFIP/USENIX 11th International Middleware Conference, Bangalore, India, November 29 - December 3, 2010.

  10. T. D. Braun, H. J. Siegel, N. Beck, L. L. Boloni, M. Maheswaran, A. I. Reuther, J. P. Robertson, M. D. Theys, B. Yao, D. Hensgen, and R. F. Freund, "A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed computing systems," Journal of Parallel and Distributed Computing, Vol. 61, Issue 6, pp. 810-837, June 2001.

  11. F. Dong, J. Luo, L. Gao, and L. Ge, "A Grid Task Scheduling Algorithm Based on QoS Priority Grouping," in Proceedings of the Fifth International Conference on Grid and Cooperative Computing (GCC '06), IEEE, 2006.

  12. E. Ullah Munir, J. Li, and Sh. Shi, "QoS Sufferage Heuristic for Independent Task Scheduling in Grid," Information Technology Journal, Vol. 6, No. 8, pp. 1166-1170, 2007.

  13. A. Agarwal and P. Kumar, "Economical Duplication Based Task Scheduling for Heterogeneous and Homogeneous Computing Systems," in Proceedings of the IEEE International Advance Computing Conference (IACC '09), pp. 87-93, IEEE Computer Society, 2009.

  14. T. D. Braun, H. J. Siegel, N. Beck, D. A. Hensgen, R. F. Freund, "A comparison of eleven static heuristics for mapping a class of independent tasks on heterogeneous distributed systems," Journal of Parallel and Distributed Computing, pp. 810-837, 2001.

  15. H. Liu, A. Abraham, A. E. Hassanien, "Scheduling jobs on computational grids using a fuzzy particle swarm optimization algorithm," Future Generation Computer Systems, 2009, doi:10.1016/j.future.2009.05.022.

