Load balancer
A load balancer distributes incoming network traffic across cloud servers to improve service availability and reduce the load on individual servers. If one server fails, the load balancer automatically redirects traffic to another healthy server.
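As a rough illustration of this behavior, the sketch below picks backends in round-robin order and skips any that fail a health check. The backend addresses and the `is_healthy` stub are placeholders, not part of the service; the real balancer does this internally.

```python
import itertools

# Placeholder backend pool; in practice these are your virtual servers.
BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
_pool = itertools.cycle(BACKENDS)

def is_healthy(backend: str) -> bool:
    # Stub: a real load balancer probes each backend periodically.
    return True

def pick_backend() -> str:
    """Round-robin selection that skips backends failing the health check."""
    for _ in range(len(BACKENDS)):
        backend = next(_pool)
        if is_healthy(backend):
            return backend
    raise RuntimeError("no healthy backend available")

for _ in range(4):
    print(pick_backend())  # cycles through 10.0.0.11, .12, .13, .11, ...
```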
Billing
The load balancer billing model depends on the selected balancer configuration:
| Type | Basic | Basic with reservation | Advanced with reservation |
|---|---|---|---|
| Configuration | 1 GB RAM, 1 vCPU | 1 GB RAM, 1 vCPU | 2 GB RAM, 2 vCPU |
| Fault tolerance | Single mode only | Fault-tolerant with redundancy | Fault-tolerant with extended redundancy |
| Redundancy | Amph-single-standard | Amph-failover-standard | Amph-failover-advanced |
| Recommended for | Test environments or projects that do not require 24/7 service availability | Small and medium projects where service availability is important | Projects with high load and a requirement for constant service availability |
Choose a plan based on your requirements for load balancer performance and fault tolerance. Keep in mind that this choice is final — to switch plans, you’ll need to create a new load balancer.
Create a load balancer
Prepare your infrastructure in advance:
Active virtual servers with pre-installed applications (e.g., Nginx, Apache, or API services) to handle incoming traffic; a minimal test backend sketch follows this list.
A shared Layer 2 (L2) virtual network — either internal or external.
If a firewall is enabled, allow incoming connections on the load balancer's port in the firewall settings.
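To verify the setup end to end, each backend can run a small test application before you create the balancer. The sketch below is a minimal Python backend that reports its own hostname; port 8080 is an assumption, and any real application (Nginx, Apache, your API) works just as well.

```python
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

PORT = 8080  # assumed backend port; all servers in the target group must use the same one

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reply with the hostname so you can see which backend served the request.
        body = f"served by {socket.gethostname()}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", PORT), EchoHandler).serve_forever()
```

Once the balancer is running, repeated requests to its IP address should return different hostnames, confirming that traffic is actually being distributed.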
To create a balancer:
In the control panel, go to the Cloud section and click the +Create button.
Specify the region and availability zone.
Select the network and the port with the IP address on which the balancer will receive traffic. Choosing an internal or external network determines whether the balancer will be accessible from the Internet or limited to the local network only.
Configure traffic routing rules:
Enter the name of the target group handler and specify the protocol and port.
In the Group Servers tab, select the required virtual servers from the existing ones. All backend servers must listen on the same port.
Go to the Settings tab and select the balancing method: Round Robin or Least Connections.
Enable Sticky sessions if your application needs requests from the same client to be sent to the same server so that session state is preserved. Binding is performed by IP address or by cookie (HTTP or APP); see the sketch after these steps.
It is recommended to enable monitoring of the target group servers to avoid redirecting incoming traffic to unavailable servers.
Enter the name of the balancer and, if necessary, its description.
Complete the creation of the balancer by clicking the Create balancer button.
Return to the Balancers tab and enable the balancer you need by clicking the ⋁ button.
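As an illustration of IP-based sticky sessions (mentioned in the routing rules above), the sketch below hashes the client address so that the same client always maps to the same backend while the pool is unchanged. The addresses are placeholders; the balancer performs this mapping itself.

```python
import hashlib

# Placeholder target group; the real pool is the servers you selected above.
BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

def sticky_backend(client_ip: str) -> str:
    """Map a client IP to a fixed backend by hashing the address."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

print(sticky_backend("203.0.113.7"))  # always the same backend for this client
print(sticky_backend("203.0.113.8"))  # may land on a different backend
```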
Available protocol combinations
The following protocol combinations are available for receiving traffic on the balancer and assigning traffic to the target group:
TCP–TCP — classic balancing.
TCP–PROXY — client address information is preserved and transmitted in a separate connection header (see the sketch after this list).
UDP–UDP — UDP is faster than TCP but less reliable.
HTTPS–HTTPS — balancing with encryption and TLS termination on the backend server side.
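For the TCP–PROXY combination, the client's address is carried in a small text header prepended to the backend connection (PROXY protocol version 1). The sketch below builds such a header; the addresses and ports are placeholders, and the backend (for example, Nginx with PROXY protocol support enabled) must be configured to read it.

```python
def proxy_v1_header(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> bytes:
    """Build a PROXY protocol v1 header, as a balancer would prepend it
    to a TCP-over-IPv4 connection to the backend."""
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode("ascii")

# Placeholder client and balancer addresses:
print(proxy_v1_header("203.0.113.7", "10.0.0.11", 51234, 8080))
# b'PROXY TCP4 203.0.113.7 10.0.0.11 51234 8080\r\n'
```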
Update the load balancer
To update the load balancer routing rule:
Go to the Balancers tab.
In the card of the required balancer, click the More button.
In the form that opens, you can change the name and description of the balancer itself or add a new routing rule.
Delete a load balancer
To delete the load balancer:
Go to the Balancers tab.
In the card of the required balancer, click the ⋁ button.
In the form that opens, click the Delete button.