We are sunsetting On-Premises API. Refer to our On-Premises API Sunset document for details, and to learn how to migrate to our next-generation Cloud API.
The standard WhatsApp Business API Client solution runs on a single Docker container. Running multiple active Docker containers for the same phone number will cause problems and can result in your account being temporarily banned. This guide walks you through setting up high availability, which keeps additional Docker containers on stand-by to take over if the primary Docker container goes down.
This high availability solution requires an existing WhatsApp Business API Client single-instance installation to run on top of it. If you haven't set up your WhatsApp Business API Client phone number yet, review the Installation documentation before proceeding with this solution.
A high availability cluster requires at least two Master nodes and two Coreapp nodes.
All nodes should run on different machines or racks, so that a single machine or rack failure does not affect multiple nodes at the same time.
When a cluster starts up, all Master nodes compete to grab the master lease and become primary. Only one node succeeds; the others become secondary Masters. If there are N Master nodes in the cluster, there will be one primary Master and N-1 secondary Masters. The primary Master is responsible for registration, database schema upgrades, broadcasting configuration changes, reporting database stats, cluster management, and so on. If the primary Master dies and loses the master lease, the secondary Masters compete to take over the primary Master position.
When a Master becomes primary, it first loads the shard map table from the database to learn which node is the current primary Coreapp. If there is no primary Coreapp in the cluster, the primary Master promotes one healthy secondary Coreapp to primary Coreapp and updates the shard map table in the database so that the Webapp can look up which Coreapp node to send API requests to. This way, even if all Masters are down, the Coreapp nodes can still serve API requests, achieving high availability.
When a Coreapp node starts up, it runs as a secondary Coreapp until the primary Master promotes it to primary Coreapp and it connects to the WhatsApp server. After that, it is responsible for handling API requests.
Each Coreapp node updates the database every minute to claim its liveness, and the primary Master checks the database periodically to detect unhealthy Coreapp nodes. If a primary Coreapp node hasn't updated the database for more than 2 minutes, the primary Master considers it unhealthy and promotes another Coreapp node to primary. With this mechanism, downtime is about 2 minutes.
If a cluster has more than one running Master, heartbeat-based monitoring detects node failures faster than database-based monitoring. In heartbeat-based monitoring, all Masters monitor the Coreapp nodes by sending heartbeats to them every 5 seconds (configured by heartbeat_interval). If a primary Coreapp hasn't responded to the primary Master and one secondary Master for 30 seconds (configured by unhealthy_interval), it is considered unhealthy and the primary Master promotes a healthy secondary Coreapp to primary Coreapp. With this mechanism, downtime is about 30 seconds by default. You may decrease the unhealthy_interval value if lower downtime is preferred. Check the Settings documentation for example payloads.
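As a sketch of what such a change could look like, assuming heartbeat_interval and unhealthy_interval are accepted by the application settings endpoint and that YOUR_AUTH_TOKEN is a valid login token (both assumptions; consult the Settings documentation for the authoritative endpoint and payload):

curl -X PATCH "https://your-webapp-hostname:9090/v1/settings/application" \
  -H "Authorization: Bearer YOUR_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"heartbeat_interval": 5, "unhealthy_interval": 15}'

With values like these, failover detection would drop from the default 30 seconds to roughly 15 seconds.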
In a High Availability cluster there are three kinds of nodes: Webapp, Master, and Coreapp. They can be started separately on different machines, but they must be on the same network so that they can talk to each other.
A Webapp node is responsible for handling API traffic, like the original Webapp container. A Coreapp node is responsible for handling messaging traffic to and from WhatsApp. Finally, a Master node is responsible for monitoring the Coreapp nodes in the cluster: if one Coreapp node dies, the Master redirects traffic to another Coreapp node for high availability. There can be multiple Webapp nodes, Coreapp nodes, and Master nodes in a cluster.
Active nodes are no longer referred to as slave nodes. They are called Coreapp nodes.
Note: For production environments, in most cases, the database should be run on a separate physical server from the Coreapp and Webapp containers. For true High Availability, it's recommended to run the Master, Webapp and Coreapp containers on different physical machines.
If you don't care about media messages, skip this step.
To support sending and receiving media messages, you must set up an NFS file system and mount it to a local directory on all Webapp, Master, and Coreapp nodes. Make sure read/write permissions are granted on the shared directory.
mkdir new-local-directory
mount -t nfs nfs_server_IP_addr:/share_directory new-local-directory
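Before starting the cluster, it is worth confirming that each node can actually write to the shared directory. A quick check like the following (the test filename is arbitrary) surfaces permission problems early:

df -h new-local-directory            # confirm the NFS mount is active
touch new-local-directory/ha-write-test && rm new-local-directory/ha-write-test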
This guide requires Docker, a container platform that lets you run the WhatsApp Business API Client, as well as Docker Compose. Docker Compose is bundled with Docker for macOS and Windows, but requires separate installation on Linux.
1. Download the multiconnect-compose.yml and db.env configuration files: WhatsApp_Configuration_Files.zip.
2. Edit the db.env file to reflect your MySQL configuration; a sketch of a possible db.env appears after these steps. If you do not have MySQL installed, the multiconnect-compose.yml and db.env files have a default configuration to bring up an instance in a local container.
3. If you have a single-instance installation running, stop it:
docker-compose -f your-single-connect-yml-filename stop
4. Start the containers:
docker-compose -f multiconnect-compose.yml up
You will get some output while the script downloads the Docker images and sets everything up. To run the containers in the background, use the -d parameter:
docker-compose -f multiconnect-compose.yml up -d
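For reference, a db.env pointing at an existing MySQL server might look like the sketch below. The variable names follow those used by the On-Premises client's database configuration; the host, user, and password values are placeholders you must replace:

WA_DB_ENGINE=MYSQL
WA_DB_HOSTNAME=your-mysql-hostname
WA_DB_PORT=3306
WA_DB_USERNAME=your-mysql-user
WA_DB_PASSWORD=your-mysql-password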
Once you have completed these steps, ensure that the containers are running with the following command:
docker-compose ps
By default, the Webapp container will be running on port 9090.
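At this point you can also check the health of the stack. As a sketch, assuming the standard On-Premises health endpoint, a valid auth token, and -k because the Webapp typically serves a self-signed certificate by default:

curl -k -X GET "https://localhost:9090/v1/health" \
  -H "Authorization: Bearer YOUR_AUTH_TOKEN"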
1. Download the multiconnect-coreapp.yml, multiconnect-master.yml, multiconnect-webapp.yml, and db.env configuration files: WhatsApp_Configuration_Files.zip. Save each one to its respective server.
2. Edit the db.env file to reflect your MySQL configuration.
3. If you have a single-instance installation running, stop it:
docker-compose -f your-single-connect-yml-filename stop
The environment variable EXTERNAL_HOSTNAME should be an IP address or hostname that is accessible from the machines running the other containers. The ports exposed in a service YML file should be open to connections from machines running the other containers. For example, ports defined as COREAPP_EXTERNAL_PORTS in multiconnect-coreapp.yml need to be open for incoming traffic on the host that runs the coreapp containers.
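A simple way to verify this connectivity, assuming netcat is available on your machines, is to probe one of the exposed Coreapp ports from the Master or Webapp host:

nc -zv COREAPP_HOSTNAME 6250    # should report the port as open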
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-coreapp.yml up # on the Coreapp server
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-master.yml up # on the Master server
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-webapp.yml up # on the Webapp server

You will get some output while the script downloads the Docker images and sets everything up. To run the containers in the background, use the -d parameter:

EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-coreapp.yml up -d # on the Coreapp server
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-master.yml up -d # on the Master server
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-webapp.yml up -d # on the Webapp server
Once you have completed these steps, ensure that the containers are running with the following command:
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-coreapp.yml ps # on the Coreapp server
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-master.yml ps # on the Master server
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-webapp.yml ps # on the Webapp server
Running multiple instances of the same service (e.g., running 2 Coreapps on the same host) will not work by default due to host port conflicts. To avoid a port conflict, you need to modify the respective service YML file, in this case multiconnect-coreapp.yml, to expose different host ports for each instance as follows:
ports:
- "HOST_PORT_RANGE:6250-6253"
By default, the Webapp container will be running on port 9090.
The multiconnect-compose.yml file has fields indicating container versions. For example:
services:
  ...
  waweb:
    image: docker.whatsapp.biz/web:v2.19.4
  ...
  master:
    image: docker.whatsapp.biz/coreapp:v2.19.4
  ...
  wacore:
    image: docker.whatsapp.biz/coreapp:v2.19.4
To upgrade an installation, change the version numbers in the multiconnect-compose.yml file:
services:
  ...
  waweb:
    image: docker.whatsapp.biz/web:v2.19.7
  ...
  master:
    image: docker.whatsapp.biz/coreapp:v2.19.7
  ...
  wacore:
    image: docker.whatsapp.biz/coreapp:v2.19.7
Then, restart the Docker containers:
docker-compose -f multiconnect-compose.yml up
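Optionally, you can pull the new images before restarting so that the downtime window is limited to the container restart itself; docker-compose supports this directly:

docker-compose -f multiconnect-compose.yml pull
docker-compose -f multiconnect-compose.yml up -d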
The YAML files have fields indicating container versions. For example:
services:
  ...
  waweb:
    image: docker.whatsapp.biz/web:v2.19.4

services:
  ...
  wacore:
    image: docker.whatsapp.biz/coreapp:v2.19.4

services:
  ...
  master:
    image: docker.whatsapp.biz/coreapp:v2.19.4
To upgrade an installation, change the version numbers in the respective files:
services:
  ...
  waweb:
    image: docker.whatsapp.biz/web:v2.19.7

services:
  ...
  wacore:
    image: docker.whatsapp.biz/coreapp:v2.19.7

services:
  ...
  master:
    image: docker.whatsapp.biz/coreapp:v2.19.7
Then, restart the Docker containers:
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-coreapp.yml up # on the Coreapp server
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-master.yml up # on the Master server
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-webapp.yml up # on the Webapp server
If you have media volumes from a previous installation, replace the following volume definition in the YAML files:
volumes:
  whatsappData:
    driver: local
  whatsappMedia:
    driver: local
with:
volumes:
  whatsappData:
    external: true
  whatsappMedia:
    external: true
This is only recommended if you want to maintain an existing bind mount volume.
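Since external: true makes Docker Compose use pre-existing volumes instead of creating new ones, you can first confirm the volumes are present (note that Compose may have prefixed the names with your project name):

docker volume ls --filter name=whatsapp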
If you wish to directly mount a host path (an existing location on your host) into the container, you can do that by changing the volume line inside the service section to point to the host path.
wacore:
  volumes:
    - /filepath/waent/data:/usr/local/waent/data
    - /filepath/wamedia:/usr/local/wamedia
You'll have to repeat this for all the machines where you have nodes running.
If you need to reset your development environment by removing all containers, run the following command from the directory containing the multiconnect-compose.yml file:
docker-compose -f multiconnect-compose.yml down
In order to get rid of all volumes defined in the multiconnect-compose.yml file in addition to the containers, run down with the -v parameter:
docker-compose -f multiconnect-compose.yml down -v
If you need to reset your development environment by removing all containers, run the following command from the directory containing the YAML file on each server:
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-coreapp.yml down # on the Coreapp server
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-master.yml down # on the Master server
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-webapp.yml down # on the Webapp server
In order to get rid of all volumes defined in the YAML files in addition to the containers, run down with the -v parameter:
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-coreapp.yml down -v # on the Coreapp server
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-master.yml down -v # on the Master server
EXTERNAL_HOSTNAME=MACHINE_HOSTNAME docker-compose -f multiconnect-webapp.yml down -v # on the Webapp server
To obtain logs for troubleshooting, run the following command on your servers:
docker-compose logs > debug_output.txt
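In the multi-machine setup, pass the YML file for the service running on that machine, following the same pattern as the other commands; for example, on the Coreapp server:

docker-compose -f multiconnect-coreapp.yml logs > debug_output.txt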