Using nginx with WildFly – JBoss

In this tutorial we will learn how to configure nginx, the popular load balancing solution, in front of a cluster of WildFly or JBoss servers.

nginx is a powerful open source load balancer which offers many benefits in terms of load balancing, server health checks, HTTP/2 support, active monitoring and more. Let's start with the installation. On a Fedora/RHEL distribution you can install nginx with:

$ yum install nginx 
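
Once installed, you can verify the version with:

$ nginx -v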

Let’s see a sample configuration which can be used to front two WildFly/JBoss servers running on these HTTP ports:

JBoss1: 8080
JBoss2: 8180
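
If you are running both instances on the same machine, the second server can be started with a port offset, which shifts its default HTTP port from 8080 to 8180. A minimal sketch, assuming standalone mode:

$ ./standalone.sh -Djboss.socket.binding.port-offset=100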

Here is the /etc/nginx/nginx.conf configuration:

http {
    include       mime.types;
    default_type  application/octet-stream;

    server {
        listen       80;
        server_name  localhost;

        # Requests under /balancer are proxied to the upstream group
        location /balancer {
           proxy_set_header  Host $host;
           proxy_set_header  X-Real-IP $remote_addr;
           proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_pass        http://jboss;
        }
    }

    upstream jboss {
       # Sticky session: route each client IP to the same backend
       ip_hash;

       server localhost:8080;
       server localhost:8180;
    }
}

As you can see, the JBoss servers are listed in the upstream block named jboss, and the proxy_pass directive links the location to that upstream group over the http protocol. Balancing uses sticky sessions via ip_hash, which computes a hash of the client IP address that originated the connection: subsequent connections from the same client are routed to the same server, unless that node fails. In addition, the proxy_set_header directives forward the original Host and client IP to the backend through the X-Real-IP and X-Forwarded-For headers.

To check your balancing, start nginx:

$ systemctl start nginx.service

If you want nginx to start at boot, also execute:

$ systemctl enable nginx.service
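
Before sending real traffic through it, you can also validate the configuration and test the balancing with a couple of requests; here /balancer/myapp is just a hypothetical context path of an application deployed on both servers:

$ nginx -t
$ curl http://localhost/balancer/myapp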

Other Balancing Options

The default algorithm, if you don't specify one, is Round Robin:

   upstream jboss {
      server localhost:8080;
      server localhost:8180;
   }
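
Round Robin can also be weighted when one node has more capacity than the other. Here is a minimal sketch, assuming the first instance should receive roughly three times as many requests as the second:

   upstream jboss {
      # hypothetical weighting: roughly 3 out of every 4 requests go to the first node
      server localhost:8080 weight=3;
      server localhost:8180;
   }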

Another option is directing traffic to the node with the least number of active connections:

   upstream jboss {
      least_conn;
      server localhost:8080;
      server localhost:8180;
   }

If you are using NGINX Plus, you can also use least_time, which selects the server with the lowest average latency and the least number of active connections:

   upstream jboss {
      least_time header;
      server localhost:8080;
      server localhost:8180;
   }

nginx offers many more features than we can cover here, so I strongly advise having a look at the official Admin Guide: https://www.nginx.com/resources/admin-guide

One particularly interesting feature is health checking, which stops requests from being sent to a server that is no longer considered available. Health checking can be done in two ways:

Passive Health Checking

In this case, nginx marks a server as unavailable when requests proxied to it fail or time out. Here is, for example, how to exclude a server after 5 failed attempts within a 30-second window; the server is then kept out of rotation for the same 30 seconds.

   upstream jboss {
      server localhost:8080 max_fails=5 fail_timeout=30s;
      server localhost:8180 max_fails=5 fail_timeout=30s;
   }
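
A related option is marking a node as backup, so that it receives traffic only when the primary servers are unavailable. Here is a minimal sketch, assuming a third, hypothetical instance listening on port 8280:

   upstream jboss {
      server localhost:8080 max_fails=5 fail_timeout=30s;
      server localhost:8180 max_fails=5 fail_timeout=30s;
      # hypothetical standby node, used only when the servers above are down
      server localhost:8280 backup;
   }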

Active Health Checking

In this case, nginx periodically sends its own probe requests (heartbeats) to the servers to check whether they are available. Note that the health_check directive is part of the commercial NGINX Plus subscription. Let's see an example:

http {
    upstream jboss {
        zone jboss 64k;

        server localhost:8080;
        server localhost:8180;
    }

    server {
        location / {
            proxy_pass http://jboss;
            health_check interval=30 fails=3 passes=3;
        }
    }
}

In this case, a health check is performed every 30 seconds. A JBoss server is considered unhealthy after 3 consecutive failed checks, and healthy again after 3 consecutive successful ones.
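
Still with NGINX Plus, the check can be made stricter with a match block that defines what a healthy response must look like. The following is only a sketch: the /health URI is a hypothetical endpoint your application would need to expose, and the jboss upstream is the same one defined above:

http {
    # a response is considered healthy only if it returns 200 with an HTML body
    match jboss_ok {
        status 200;
        header Content-Type ~ text/html;
    }

    server {
        location / {
            proxy_pass http://jboss;
            health_check uri=/health match=jboss_ok;
        }
    }
}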

References: https://www.nginx.com/resources/admin-guide/load-balancer/
