EMULATION OF THE POX CONTROLLER AS A LOAD BALANCER

By its nature, the server assessed in this study experiences bottlenecking, making load balancing an essential network service, particularly as users increase the load. This paper investigates load balancing in software-defined networks (SDNs). Three load-balancing algorithms are evaluated and analyzed: the random algorithm (the default in Mininet), the round-robin algorithm (whose code was built and implemented), and the weighted round-robin algorithm (whose code was also built and implemented, and which was tested on two types of servers, namely Python and Ruby servers). All evaluations were based on the tools used (i.e., the Ali tool, the Siege tool, and the Apache tool). The considered topology consists of three servers, one client, one POX controller, and one open virtual switch. Because the proposed topology needs to be connected to the POX controller, the OpenFlow protocol was utilized. The results show that the random algorithm performed better than the round-robin algorithm, specifically in processing time and response time. Likewise, the round-robin algorithm had a better processing time and response time than the weighted round-robin algorithm.

Keywords: Software-defined network (SDN), Load balancing, Random algorithm, Mininet, Round-robin algorithm, Weighted round-robin algorithm, OpenFlow.

Local Queue Algorithm. Dynamic algorithms differ from static algorithms in that the workload is distributed among the processors at runtime. The master assigns new processes to the slaves based on newly collected information. Unlike static algorithms, dynamic algorithms allocate processes dynamically when one of the processors becomes under-loaded; otherwise, processes are buffered in the main host's queue and allocated dynamically upon requests from remote hosts [4]. Static load-balancing algorithms are more stable than dynamic algorithms, and it is also easier to predict their behavior. Accordingly, the current paper aims to evaluate static load-balancing algorithms, bearing in mind that the planned topology includes a POX controller, a virtual switch, and three servers.

II. RELATED WORK

Presented in [4] is a performance analysis of various load-balancing algorithms based on different parameters, considering the two typical load-balancing approaches: static and dynamic. The analysis indicates that static and dynamic algorithms each have advantages and weaknesses relative to the other; deciding which type of algorithm to implement depends on the type of parallel application to be solved. Little work has been done on the subject of emulating the POX controller as a load balancer. In [5], a two-level balancing solution for SDN networks is proposed, consisting of the usual balancing between servers and load balancing among network devices. With the usual path load-balancing protocol, a single path is used to transmit data, producing much congestion on a single node. This kind of problem is solved by balancing the path load using SDN and distributing the workload according to various parameters. The authors of [1] correct errors in existing SDN controllers and the networks they control and present a method of load balancing using multiple servers.
With this concept, the combination of a controller acting as a load balancer with widely available architecture is constructed mainly to improve the uptime of the process-marking structure. The authors of [6] proposed an efficient load-balancing technique that considers different parameters to maintain the load efficiently, using OpenFlow switches connected to an ONOS controller. The flow requests are modeled as a queuing system, treating traffic propagation delay and controller capacity as two key factors impacting multi-controller deployment. In [7], load balancing is performed over paths, piecing together various physical links with several virtual links to transfer the required data; the source and destination nodes are interconnected by a path consisting of various links and switches. The current paper proposes a novel simulation process to evaluate static load-balancing algorithms employing a POX controller with three servers, distributing the load between them. In [8], the authors propose a distributed load-balancing algorithm using a weighted round-robin (WRR) mechanism. An analytical model of WRR is proposed to represent the task of the load-balancing algorithm, and throughput and delay parameters are used to test its performance. The results show that the WRR algorithm increases the available bandwidth and data transfer size and decreases latency in SDN; their work is based on an OpenDaylight controller.

III. THE CONTROLLER
First, the control plane defines the rules, determining where the flow rules are applied on network instruments and SDN switches in the data plane [9]. The control plane comprises the device-management software and a conceptually centralized interface in the SDN network. To handle several services, various frameworks must be accessible in the controller. The Northbound Interface (NBI) is the interface between the controller and the applications. The Southbound Interface (SBI) and the networking equipment are regarded as the key parts of the data plane, with the SBI playing the role of adapter between the networking devices and the controller. It works as a secure channel into the switch that enables flow rules to enter the switch's flow tables. Administration of the whole process is completed either proactively or reactively in response to arriving packets.
Transferring topology and flow information is the main function of the controller in this area. The Link Discovery Module prepares organized queries on external ports using probe packets; these probe messages are returned to assist the controller in mapping the network topology.
The topology manager maintains the topology itself, while the decision module defines the best paths among network nodes. This path-design process enables path construction that supports various security policies and Quality of Service (QoS). Along with queue administrators, the various information managers are other key elements of the controller, required to handle and process numerous outgoing and incoming queues. One of the major modules that connects directly with the data plane's flow tables is the Flow Manager (FM), which relies on the southbound interface to make the required arrangements. SDN controllers are available in various settings, forms, and programs. Examples include the following: first, the Network Operating System (NOX), which is C++- and Python-based; second, the Python Network Operating System (POX), which is Python-based; and third, Beacon and Floodlight, both of which are Java-based [10]. As the number of switches increases, only a minimal change in latency is shown in POX, although POX requires additional physical resources to accomplish its tasks effectively [11]. The Beacon controller is regarded as a framework for releasing applications in robust Java. Likewise, the Floodlight controller is a Java-based OpenFlow system that stems from the Beacon framework. This controller is considered among the most innovative OpenFlow controllers, and it is marked by its simple configuration and excellent performance. According to [12], OpenFlow, an open, standards-based communications protocol, is a canonical example of device-based SDN. OpenFlow enables access to the forwarding plane of a network switch or router over the network, which simplifies sophisticated traffic management, particularly for cloud and virtualized environments.
The Open Networking Foundation (ONF) manages and standardizes the OpenFlow protocol and promotes SDN technologies as a whole [13]. A major aim of OpenFlow is separating the control plane from the data plane. In [14] and [15], the authors indicated that there are also several well-known efforts along these lines, such as separating the control plane from the data plane and standardizing the information exchange between them, as done by ForCES, proposed by the IETF [16]. Fig. 2 illustrates the POX controller.

Figure 2: The POX controller [16]

IV. PROPOSED WORK OF LOAD BALANCING

The aim of this paper is to evaluate the load balancing of three algorithms and determine which performs better: the random algorithm, the round-robin algorithm, and the weighted round-robin algorithm. The evaluation depends on response time and processing time. Response time is the total time from when a user sends a request until the user gets the response. The response time is calculated with the following equation:

R_t = L_t + P_t

where R_t represents response time, L_t represents latency time, and P_t represents processing time. Latency time is the part of the response time spent by the message on the wire. Processing time is the part of the response time taken by the execution of the algorithm's code at the controller plus the distribution processing between the servers. When processing time or latency changes, response time changes [17]. Three different load-balancing algorithms were evaluated: a round-robin algorithm, a random algorithm, and a weighted round-robin algorithm. First, Python servers [18] supporting HTTP/1.1 (HyperText Transfer Protocol) were used [19], so tools that support this HTTP version and calculate response time were sought.
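As a minimal illustration of the equation above, response time can be computed as the sum of latency and processing time. The values below are hypothetical examples for illustration, not measurements from the experiments:

```python
def response_time(latency_ms: float, processing_ms: float) -> float:
    """Response time R_t = L_t + P_t (all values in milliseconds)."""
    return latency_ms + processing_ms

# Hypothetical values: 12 ms on the wire plus 8 ms of
# controller and server processing.
print(response_time(12.0, 8.0))  # 20.0
```

Any reduction in either term (e.g., a simpler selection algorithm at the controller) lowers the total response time directly.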
The tool that was found is called the Ali tool [19]. Later, because the Apache tool supports only HTTP/1.0, another type of server was used, called the WEBrick server [20], which uses the Ruby language and supports HTTP/1.0.
The Apache tool was used to calculate the response time and processing time for the three algorithms.

Start
Send Request Stage:
Step-1:
• A request is sent to the Internet Protocol (IP) address of the load balancer by the client.
• The controller sends an Internet Control Message Protocol (ICMP) echo-request packet from the load balancer's IP to all the servers (broadcast) to check whether they are alive.
Check Servers and Resolve Address Stage:
Step-2:
• Alive server: the server sends an ARP reply to the controller.
• Otherwise: the ICMP echo-request packet is re-sent to the servers by the controller.
Step-3:
• In the random algorithm, if the server's address is resolved, the controller forwards the request to one of the replying servers at random (this is the default in Mininet).
• In the round-robin algorithm, if the server's address is resolved, the controller distributes the requests sequentially: it gives the first request to the first server, the next request to the second server, and so on. After the last server receives a request, the next request returns to the first server, and the process continues.
• In the weighted round-robin algorithm, if the server's address is resolved, a weight is assigned to each server based on criteria chosen by the site administrator. The higher the weight, the larger the proportion of client requests the server receives.
• Otherwise, the ICMP packet waits in a queue until the servers' addresses are resolved.
Step-4: Evaluate the Random and Round-Robin Algorithms using the Ali Tool
• First, Python servers that support HTTP/1.1 were used.
• The Ali tool, which supports HTTP/1.1, was used with these servers to calculate the response time for each request received; the results are shown in the form of a chart without any manipulation.
Step-5: Evaluate the Random and Round-Robin Algorithms using the Siege Tool
• First, Python servers that support HTTP/1.1 were used.
• The Siege tool, which supports HTTP/1.1, was used to send requests to the Python servers and evaluate the response time. Because the results looked identical, which should not happen, another tool, Apache, was used.
Step-6: Evaluate the Random and Round-Robin Algorithms using the Apache Tool
• Use the Apache tool, which supports HTTP/1.0, to send requests to the WEBrick servers and evaluate response time and processing time.
Step-7: Evaluate the Weighted Round-Robin Algorithm using the Siege Tool
• First, Python servers that support HTTP/1.1 were used.
• The Siege tool, which supports HTTP/1.1, was used to send requests to the Python servers and evaluate the response time. Because the results looked identical, which should not happen, the Apache tool was used.
Step-8: Evaluate the Weighted Round-Robin Algorithm using the Apache Tool
• Activate WEBrick servers that support HTTP/1.0 so that the Apache tool can be used.
• Use the Apache tool, which supports HTTP/1.0, to send requests to the WEBrick servers and evaluate response time and processing time.

For the random algorithm, the tool sends 500 requests to the Python servers to evaluate the response time for each received request; the chart shows the mean, that is, the average response time, which is 20 ms. Fig. 4 shows the number of requests on the x-axis and the response time on the y-axis. For the round-robin algorithm, the tool again sends 500 requests to the Python servers to evaluate the response time for each request received; in this chart, the mean (the average response time, 22 ms) is higher than in the random algorithm. Fig. 5 shows the number of requests on the x-axis and the response time on the y-axis. 3) Turn on the POX controller with the round-robin load-balancing algorithm, and activate the topology with one client and three WEBrick servers that use the Ruby language and support HTTP/1.0. Use the Apache tool (supporting HTTP/1.0) to send 1000 requests: first one connection without concurrency, then two connections concurrently, then three, then four, and finally five connections concurrently, to show the response time and processing time.
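The three distribution policies of Step-3 can be sketched in Python. This is an illustrative model of the selection logic only, with placeholder server names; it is not the actual POX controller code used in the experiments:

```python
import random
from itertools import cycle

servers = ["h2", "h3", "h4"]  # placeholder server names

# Random policy: pick any alive server; no state is kept.
def pick_random():
    return random.choice(servers)

# Round-robin policy: cycle through the servers in fixed order,
# returning to the first once every server has received a request.
_rr = cycle(servers)
def pick_round_robin():
    return next(_rr)

# Weighted round-robin policy: repeat each server in proportion
# to its weight (weights 1, 2, 3 as in the weighted experiment).
weights = {"h2": 1, "h3": 2, "h4": 3}
_wrr = cycle([s for s, w in weights.items() for _ in range(w)])
def pick_weighted_round_robin():
    return next(_wrr)

# One full weighted cycle serves 1 + 2 + 3 = 6 requests.
print([pick_weighted_round_robin() for _ in range(6)])
```

The random policy needs no per-request decision state, while both round-robin variants must track where the rotation currently stands; this difference is the basis of the processing-time comparison in the results.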

C. Weighted Round-Robin Algorithm
The weight for server one equals 1, the weight for server two equals 2, and the weight for server three equals 3. The total weight is 6, so the percentage for server one is 16.66% (100/6 = 16.66%), the percentage for server two is 33.32% (16.66% × 2), and the percentage for server three is 49.98% (16.66% × 3). Fig. 6 below shows the weight distribution of the load balance for each server.

The response time in the round-robin algorithm was 54 ms, as shown in Fig. 8. We can conclude that the random algorithm is better than the round-robin algorithm because it has fewer lines of code: when the CPU processes the code line by line, the random algorithm takes less processing time, and when the controller's processor multitasks, this algorithm also takes less processing time than the round-robin algorithm. Moreover, the random algorithm's distribution mechanism assigns requests to servers randomly, without needing to decide which server will receive the next request; this yields a lower response time than the round-robin mechanism, which distributes the requests in sequential order, returning to the first server once all servers have been given tasks and therefore requiring more time to decide which server receives the next request. The round-robin algorithm, in turn, outperformed the weighted round-robin algorithm (round-robin: 54 ms, weighted round-robin: 82 ms), as shown in Fig. 10.
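The weight-to-percentage arithmetic above can be checked with a few lines of Python (weights 1, 2, and 3, as in the paper). Note that rounding rather than truncating 100/6 gives 16.67% and 50.0% instead of the quoted 16.66% and 49.98%:

```python
# Weights assigned by the site administrator (Section C).
weights = {"server1": 1, "server2": 2, "server3": 3}
total = sum(weights.values())  # 1 + 2 + 3 = 6

# Each server's share of requests is its weight over the total.
shares = {name: round(100 * w / total, 2) for name, w in weights.items()}
print(shares)  # {'server1': 16.67, 'server2': 33.33, 'server3': 50.0}
```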
From the results, we can conclude that the round-robin algorithm is better than the weighted round-robin algorithm because the round-robin algorithm has fewer lines of code: when the CPU processes the code line by line, the round-robin algorithm takes less processing time, and when the controller's processor multitasks, it also takes less processing time than the weighted round-robin algorithm. Also, the round-robin algorithm distributes requests in sequential order, returning to the first server once all servers have been given tasks. In contrast, in the weighted round-robin algorithm, a weight is assigned to each server based on the site administrator's criteria; the higher the weight, the larger the proportion of client requests. This concentrates load on one server rather than the others, increasing processing time, especially when all servers use the same type of hardware: there is no benefit in letting the load fall on one server while increasing both the response time and the processing time. The weighted round-robin algorithm would be beneficial when used with servers of differing capacity, such as multicore servers.

VIII. CONCLUSION

1) This paper described using a random algorithm, a round-robin algorithm, and a weighted round-robin algorithm as load-balancing techniques for software-defined networks.
2) The code of the round-robin and weighted round-robin algorithms was built and implemented in this paper.
3) The evaluation used a simulation implemented in the Python programming language.
4) The simulation was undertaken using the Ali tool, which charts the response time for every request received.
5) The Siege tool was used with the Python servers. Because the Siege tool showed the same response-time results for all three algorithms, the Apache tool was used for further evaluation.
6) The Apache tool did not work with the Python servers because it supports HTTP/1.0 while the Python servers support HTTP/1.1, so a Ruby server was used to evaluate response times and processing times.
7) The Siege tool is not an accurate tool, because it gives the same response-time results for all three algorithms, which should not happen.
8) According to the results, the random algorithm performs better than the round-robin algorithm, due to its smaller number of lines of code: when the CPU processes the code line by line, the random algorithm takes less processing time, and when the controller's processor multitasks, it also takes less processing time than the round-robin algorithm. Moreover, the random algorithm distributes requests among the servers randomly, without needing to decide which server will receive the next request, yielding a lower response time than the round-robin mechanism, which distributes requests in sequential order, returning to the first server once all servers have been given tasks and therefore requiring more time to decide which server receives the next request.
9) The results also showed that the round-robin algorithm is better than the weighted round-robin algorithm because the round-robin algorithm has fewer lines of code: when the CPU processes the code line by line, the round-robin algorithm takes less processing time, and when the controller's processor multitasks, it also takes less processing time than the weighted round-robin algorithm. Also, the round-robin algorithm distributes requests in sequential order, returning to the first server once all servers have been given tasks. In contrast, in the weighted round-robin algorithm, a weight is assigned to each server based on the site administrator's criteria.
The higher the weight, the larger the proportion of client requests, which overloads one server rather than the others and increases processing time, especially when all servers use the same type of hardware: there is no benefit in letting the load fall on one server while increasing both the response time and the processing time. The weighted round-robin algorithm would be beneficial when used with servers of differing capacity, such as multicore servers.