I'm evaluating Docker for production use. In my opinion, the ability to scale without any further configuration is fundamental, and my primary concern was the load balancer.
If I need to scale my RESTful API, how can I tell the load balancer about it?
I started digging around searching for a solution, and I finally found one: HAProxy with a listener on Docker events. Here is my demo configuration, with the explanation below.
This is a Docker Compose file; if you're not familiar with this tool, I suggest studying docker-compose first and then continuing the reading.
version: '2'
services:
  whoami:
    image: jwilder/whoami
  lb:
    image: dockercloud/haproxy
    depends_on:
      - whoami
    environment:
      - ADDITIONAL_SERVICES=infrastructure:whoami
      - DOCKER_HOST=tcp://172.16.0.5:2375
    ports:
      - 80:80
      - 1936:1936
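To see the auto-configuration in action, you can bring the stack up and scale the back-end; the commands below are a sketch assuming the compose file above and the Compose v2-era CLI (where scaling was done with `docker-compose scale`), and that the stack is reachable on localhost:

```shell
# Start the stack in the background.
docker-compose up -d

# Scale the back-end to three containers; HAProxy picks up the new
# containers from Docker events without any manual reconfiguration.
docker-compose scale whoami=3

# Repeated requests should be balanced across the whoami containers,
# each of which answers with its own hostname.
curl http://localhost/
curl http://localhost/

# HAProxy stats page, exposed on port 1936 in the compose file.
curl http://localhost:1936/
```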
Obviously, the first service is just for testing purposes.
Let's start analyzing the load balancer service:
image: dockercloud/haproxy - This is the image I used; it is a modified HAProxy image made by the Docker guys.
depends_on: whoami - just to be sure that the back-end is running when the LB starts.
ADDITIONAL_SERVICES=infrastructure:whoami - here is where you tell the LB which service is the back-end pool. In this example the docker-compose.yml file is in the infrastructure directory, so the format is <project>:<service>.
DOCKER_HOST=tcp://172.16.0.5:2375 - and here is where you tell the service where your Swarm master node is; it will connect to the master and wait for back-end scaling events.
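To make the mechanism concrete: conceptually, the LB watches Docker for start/stop events on the balanced service and re-renders its back-end pool each time. Here is a minimal sketch of just the rendering step; the function name `render_backend` and the endpoint addresses are hypothetical and are not dockercloud/haproxy's actual internals:

```python
def render_backend(service, endpoints):
    """Render an HAProxy 'backend' section for the given ip:port endpoints.

    In the real image this regeneration happens automatically whenever a
    scaling event arrives from the Docker host named in DOCKER_HOST.
    """
    lines = ["backend %s_pool" % service, "    balance roundrobin"]
    for i, addr in enumerate(endpoints):
        # One 'server' line per running container of the service.
        lines.append("    server %s-%d %s check" % (service, i, addr))
    return "\n".join(lines)

# Example: after scaling to three whoami containers (illustrative IPs).
print(render_backend("whoami", ["172.17.0.2:8000",
                                "172.17.0.3:8000",
                                "172.17.0.4:8000"]))
```

Scaling down would simply produce a new config with fewer `server` lines, which is why no manual reconfiguration is ever needed.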
That's all for now; I'm currently testing multiple back-ends. Stay tuned!