Monday 20 April 2015

Kerberos support in Keycloak

As of version 1.2.0.Beta1, Keycloak supports login with a Kerberos ticket through SPNEGO. SPNEGO (Simple and Protected GSSAPI Negotiation Mechanism) is used to authenticate transparently through the web browser after the user has been authenticated with Kerberos when logging in to their desktop session. For non-web cases, or when a Kerberos ticket is not available during login, Keycloak also supports login with a Kerberos username/password.

Flow

A typical use case for web authentication is the following:

  • The user logs into their desktop (such as a Windows machine in an Active Directory domain, or a Linux machine with Kerberos integration enabled).
  • The user then uses a browser (IE/Firefox/Chrome) to access a web application secured by Keycloak.
  • The application redirects to the Keycloak login page.
  • Keycloak sends the HTML login screen together with status 401 and the HTTP header WWW-Authenticate: Negotiate.
  • If the browser has a Kerberos ticket from the desktop login, it transfers the desktop sign-on information to Keycloak in the header Authorization: Negotiate 'spnego-kerberos-token'. Otherwise it just displays the login screen.
  • Keycloak validates the token from the browser and authenticates the user. It provisions user data from LDAP (in the case of the LDAPFederationProvider with Kerberos authentication support) or lets the user update their profile and prefill data (in the case of the KerberosFederationProvider).
  • Keycloak returns to the application. Communication between Keycloak and the application happens through OpenID Connect or SAML messages. The fact that the user was authenticated through Kerberos is hidden from the application, so Keycloak acts as a broker for Kerberos/SPNEGO login.
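The 401 challenge and the Negotiate response can also be observed from the command line. Here is a minimal sketch, assuming a hypothetical keycloak.example.com host and demo realm, and a curl build with GSS-API/SPNEGO support:

 # Inspect the SPNEGO challenge returned together with the login page:
 curl -vL https://keycloak.example.com/auth/realms/demo/account 2>&1 | grep -i 'WWW-Authenticate'
 # With a Kerberos ticket in the local credential cache, curl can answer the challenge itself:
 kinit user@EXAMPLE.COM
 curl --negotiate -u : -L https://keycloak.example.com/auth/realms/demo/account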

Keycloak also supports credential delegation. In this case, the web application might be able to reuse the Kerberos ticket and forward it to another service secured by Kerberos (for example an LDAP or IMAP server). The tricky part is that SPNEGO authentication happens on the Keycloak server side, but you want to use the ticket on the application side. For this scenario, we serialize the GSS credential with the underlying ticket and send it to the application in the OpenID Connect access token. This adds two more points to the flow:

  • The application deserializes the GSS credential with the Kerberos ticket sent to it in the access token from Keycloak
  • The application uses the Kerberos ticket to send requests to another service secured by Kerberos

The whole flow may look complicated, but from the user's perspective it's the opposite! A user with a Kerberos ticket just visits the URL of the web application and is logged in automatically, without even seeing the Keycloak login screen.

Setup

For the Kerberos setup, you need to install and configure a Kerberos server and Kerberos client, set up your web browser, and export a keytab file from your Kerberos server and make it available to Keycloak. These steps are not specific to Keycloak (you always need them for SPNEGO login support in any kind of web application), and they depend on your OS and Kerberos vendor, so they are not described in detail here.
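As a rough illustration only - the exact commands differ between Kerberos vendors - creating the HTTP service principal and keytab for a hypothetical keycloak.example.com host in an MIT Kerberos EXAMPLE.COM realm, and whitelisting the domain for SPNEGO in Firefox, might look like this:

 # Hypothetical principal, realm and paths; adjust to your environment.
 kadmin -p admin/admin@EXAMPLE.COM -q "addprinc -randkey HTTP/keycloak.example.com@EXAMPLE.COM"
 kadmin -p admin/admin@EXAMPLE.COM -q "ktadd -k /etc/keycloak-http.keytab HTTP/keycloak.example.com@EXAMPLE.COM"
 # In Firefox, add the Keycloak domain to network.negotiate-auth.trusted-uris in about:config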

Keycloak specific steps are just:

  • Set up a federation provider in the Keycloak admin console. You can set up either:
    • Kerberos federation provider - This provider is useful if you want to authenticate with Kerberos NOT backed by an LDAP server.
    • LDAP federation provider - This provider is useful if you want to authenticate with Kerberos backed by an LDAP server, which is very often the case in production environments like FreeIPA or an MSAD Windows domain.
  • Enable the GSS credential protocol mapper for your application (this is mandatory only if you need credential delegation).

For more details, take a look at the Keycloak documentation, which describes everything in detail and also points you to the Kerberos credential delegation example and the FreeIPA Keycloak Docker image.

Tuesday 14 April 2015

Keycloak on Kubernetes with OpenShift 3

This is the second of two articles about clustered Keycloak running with Docker and Kubernetes. In the first article we manually started one PostgreSQL Docker container and a cluster of two Keycloak Docker containers. In this article we’ll use the same Docker images from DockerHub and configure Kubernetes pods to run them on OpenShift 3.


See the first article for more detailed instructions on how to set up Docker with VirtualBox.

Let’s get started with Kubernetes.


Installing OpenShift 3


OpenShift 3 is based on Kubernetes, so we'll use it as our Kubernetes provider. If you have a ready-made Linux system with the Docker daemon running, then the easiest way to install OpenShift 3 is by installing Fabric8.


Open a native shell, and make sure your Docker client can be used without sudo - you either have to have your environment variables set up properly, or you have to be root.


To set up the shell environment, determine your Docker host’s IP and set the following:


 export DOCKER_IP=192.168.56.101
 export DOCKER_HOST=tcp://$DOCKER_IP:2375


Make sure to replace the IP with that of your Docker host.


Now simply run the following one-liner, which downloads and executes the Fabric8 installation script. Among other things, it installs OpenShift 3 as a Docker container and properly sets up your networking using iptables / route ...


 bash <(curl -sSL https://bit.ly/get-fabric8) -f


It will take a few minutes for various Docker images to be downloaded, and started. At the end a browser window may open up - if you are in a desktop environment. You can safely close it as we won’t need it.


The next thing to do is to set up an alias for executing an OpenShift client tool:


 alias osc="docker run --rm -i -e KUBERNETES_MASTER=https://$DOCKER_IP:8443 --entrypoint=osc --net=host openshift/origin:v0.3.4 --insecure-skip-tls-verify"


Note: OpenShift development moves fast. By the time you're reading this the version may not be v0.3.4 any more. You can use docker ps to identify the current version used.

Every time we execute the osc command in the shell, a new Docker container is created for one-time use from the OpenShift image, and its local copy of osc is executed.


Let’s make sure that it works:

 osc get pods


We should get back a list of several pods created by OpenShift and Fabric8.


Kubernetes basics



Kubernetes is a technology for provisioning and managing Docker containers.


While the scope of Docker is one host running one Docker daemon, Kubernetes works at the level of many hosts, each running a Docker daemon and a Kubernetes agent called the Kubelet. There is also a Kubernetes master node running the Kubernetes daemon, providing central management, monitoring, and provisioning of components.


There are three basic types of components in Kubernetes:

Pods

A pod is like a virtual server composed of one or more Docker containers - which are like processes in this virtual server. Each pod gets a newly allocated IP address, hostname, port space, and process space, all shared by the Docker containers of that pod (they can even communicate via System V IPC or POSIX message queues).


Services

A service is a front-end portal to a set of pods providing the same functionality. Each service gets a newly allocated IP, listens on a specific port, and tunnels established connections to backend pods in round-robin fashion.


Replication controllers

These are agents that monitor pods. Each agent ensures that a specified number of instances of its monitored pod is available at any one time. If there are more, it will randomly delete some; if there are fewer, it will create new ones.



Every requested action at the level of Kubernetes occurs at the level of pods. While you can still interact directly with Docker containers using the docker client tool, the idea of Kubernetes is that you shouldn’t. Any operation at the Docker container level is supposed to be performed automatically by Kubernetes as necessary. If a Docker container started and monitored by Kubernetes dies, Kubernetes will create a new one.
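The osc client gives us a quick way to look at each of these component types. A small sketch - the resource names and subcommands here are assumptions that may vary between client versions:

 osc get pods
 osc get services
 osc get replicationControllers
 # Changing a controller’s desired replica count from the client is also possible; the
 # subcommand has been called 'resize' in older clients and 'scale' in newer ones:
 osc resize rc <controller-name> --replicas=3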


When one pod needs to connect to another - as in our case Keycloak needs to connect to PostgreSQL - that should be done through a service. While individual pods come and go, constantly changing their IP addresses, the service is a more permanent component, and its IP address is thus more stable.


Armed with that knowledge we can now define and create a new Keycloak cluster that uses PostgreSQL.


Creating a cluster using Kubernetes


There is an example container definition file available on GitHub that makes use of the same Docker images we used in the previous article.


In this example configuration we define three services:
  • postgres-service … listens on port 5432 and tunnels to port 5432 of the postgres pods
  • keycloak-http-service … listens on port 80 and tunnels to port 8080 of the keycloak pods
  • keycloak-https-service … listens on port 443 and also tunnels to the keycloak pods, but to port 8443


We then define two replication controllers:
  • postgres-controller … monitors postgres pods, and makes sure exactly one pod is available at any one time
  • keycloak-controller … monitors keycloak pods, and makes sure exactly two pods are available at any one time


And we define two pods:
  • postgres-pod … contains one Docker container based on the latest official ‘postgres’ image
  • keycloak-pod … contains one Docker container based on the latest jboss/keycloak-ha-postgres image


With this file we can now create, and start up our whole cluster with one line:


 osc create -f - < keycloak-kube.json


We can monitor the progress of new pods coming up by first listing the pods:


$ osc get pods

POD                         IP            CONTAINER(S)                
keycloak-controller-559a8   172.17.0.12   keycloak-container
keycloak-controller-zorqg   172.17.0.13   keycloak-container
postgres-controller-exkqq   172.17.0.11   postgres-container          


(there are more columns, but I did not include them here)


What we are interested in here are the exact pod ids so we can attach to their output.


We can check how PostgreSQL is doing:


 osc log -f postgres-controller-exkqq


And then make sure each of the Keycloak containers started up properly, and established a cluster:

 osc log -f keycloak-controller-559a8


In my case the first container has started up without error, and I can see the line in the log that tells us a cluster of two Keycloak instances has been established:


2015-04-02T09:26:18.683827888Z 09:26:18,678 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,shared=udp) ISPN000094: Received new cluster view: [keycloak-controller-559a8/keycloak|1] (2) [keycloak-controller-559a8/keycloak, keycloak-controller-zorqg/keycloak]


When things go wrong


Let’s also check the other instance:


 osc log -f keycloak-controller-zorqg



In my case I see a problem with the second instance - there is a nasty error:


2015-04-02T09:26:37.124344660Z 09:26:37,074 ERROR [org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider] (MSC service thread 1-1) Change Set META-INF/jpa-changelog-1.1.0.Final.xml::1.1.0.Final::sthorger@redhat.com failed.  Error: Error executing SQL ALTER TABLE public.EVENT_ENTITY RENAME COLUMN TIME TO EVENT_TIME: ERROR: column "time" does not exist: liquibase.exception.DatabaseException: Error executing SQL ALTER TABLE public.EVENT_ENTITY RENAME COLUMN TIME TO EVENT_TIME: ERROR: column "time" does not exist

...


What’s going on?


It turns out that the Keycloak version used for the Docker image at the time of this writing contains a bug that appears when multiple Keycloak instances connected to the same PostgreSQL database start up at the same time. The bug can be tracked in the project’s JIRA.


In my case all I have to do is kill the problematic instance, and Kubernetes will create a new one. 

The proper handling would be for Kubernetes to detect that one pod has failed to start up properly, and kill it. But then Kubernetes would have to understand how to detect a fatal startup condition in a still-running Keycloak process. As an alternative, we could have Keycloak exit the JVM with an error code when it detects an improper start-up. In that case Kubernetes would create another pod instance automatically.

 osc delete pod keycloak-controller-zorqg


Kubernetes will immediately determine that it should create another keycloak-pod to bring their count back up to two.


$ osc get pods

POD                         IP            CONTAINER(S)                
keycloak-controller-559a8   172.17.0.12   keycloak-container
keycloak-controller-xkq43   172.17.0.14   keycloak-container
postgres-controller-exkqq   172.17.0.11   postgres-container          


We can see another pod instance: keycloak-controller-xkq43 with a new IP address.


Let’s make sure it starts up:


 osc log -f keycloak-controller-xkq43


This time the instance starts up without errors, and we can also see that a new JGroups cluster is established:


2015-04-02T10:09:32.615783260Z 10:09:32,615 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000094: Received new cluster view: [keycloak-controller-559a8/keycloak|3] (2) [keycloak-controller-559a8/keycloak, keycloak-controller-xkq43/keycloak]


Making sure things work


We can now try to access Keycloak through each of the pods - just to make sure they work - even though the proper way to access Keycloak now is through one of the keycloak services.


In my case the following two pod URLs work properly (they can be accessed from the Docker host): http://172.17.0.12:8080 and http://172.17.0.14:8080


The ultimate test is to use the keycloak-http-service IP address.


Let’s list the running services:


$ osc get services


NAME                    SELECTOR           IP              PORT
keycloak-http-service   name=keycloak-pod  172.30.17.192   80
keycloak-https-service  name=keycloak-pod  172.30.17.62    443
postgres-service        name=postgres-pod  172.30.17.246   5432


(there are more columns, but I did not include them here)


We can see all our services listed, and we can see their IP addresses. Here we’re interested in keycloak-http-service so let’s try to access Keycloak through it from Docker host: http://172.30.17.192
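A quick way to check this from the Docker host without a browser is curl. A minimal sketch, assuming the service IP from the listing above and Keycloak’s /auth context path:

 curl -I http://172.30.17.192/auth/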


Note that if you want to access this IP address from another host (not the one running the Docker daemon), you would have to set up routing or port forwarding.


For example, when using boot2docker on OS X and accessing a VirtualBox instance running the Docker daemon, I have to go to a native Terminal on OS X and type:


sudo route -n add 172.30.0.0/16 $DOCKER_IP


When the browser establishes a TCP connection to port 80 of the Keycloak service’s IP address, a tunneling proxy there creates another connection to one of the Keycloak pods (chosen in round-robin fashion) and tunnels all the traffic through to it. Each Keycloak instance will therefore see the client IP as equal to the service IP. Also, during our browser session many connections will be established - half of them will be tunneled to one pod, the other half to the other pod. Since we have set up Keycloak in clustered mode, it doesn’t matter which pod a request hits - both use the same distributed cache and consequently generate the same response, without a need for sticky sessions.
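A crude way to convince ourselves of this - just a sketch, not a rigorous test - is to fire a handful of requests at the service IP and check that the response is the same no matter which pod ends up serving each connection:

 for i in $(seq 1 10); do curl -s -o /dev/null -w "%{http_code}\n" http://172.30.17.192/auth/; done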


Conclusion


We used an example Kubernetes configuration to start up PostgreSQL and a cluster of two Keycloak instances on OpenShift 3 through Kubernetes, using the same Docker images from DockerHub that we used in the previous article, where we created the same kind of cluster using Docker directly.

For a production-quality, scalable cloud we still need to provide a monitoring mechanism that detects when a Keycloak instance isn’t operational.

Also worth noting is that the Keycloak clustered setup used in our image requires multicast, and will only work when the Keycloak pods are deployed on the same Docker host - the same Kubernetes worker node. Multicast is generally not available in production cloud environments, and the fact that it works here is a side effect of the current implementation of OpenShift 3 and may change in the future. For a more proper cloud setup, a Kubernetes-aware direct TCP discovery mechanism should be configured in JGroups. One candidate solution for that is the kubeping project.

For real high availability we should also make sure the database is highly available. In this example we used PostgreSQL, which can be made highly available in multiple ways, with different tradeoffs between data consistency and performance. Maybe a topic for another post.

Friday 10 April 2015

Securing Fuse applications and Hawtio with Keycloak

From version 1.1.0.Final, Keycloak supports securing web applications running inside JBoss Fuse or Apache Karaf. It leverages the Keycloak Jetty adapter, as both JBoss Fuse 6.1 and Apache Karaf 3 are bundled with a Jetty 8.1 server under the covers, and Jetty is used for running various kinds of web applications.

What is supported for Fuse/Karaf is:

Where to start?

The best place to look is the Fuse demo bundled as part of the Keycloak examples. It's in the Keycloak sources, and it's also bundled in the examples/fuse directory of the Keycloak appliance distribution, which can be downloaded from SourceForge. It's recommended to download the latest 1.2.0.Beta1.

For SSH and JMX admin access, you can take a look at this README.

For securing Hawtio, Keycloak integration is available from version 1.4.47 and is described here. There is also an effort to secure Hawtio 2.x with Keycloak, which is described here.

Tuesday 7 April 2015

Running Keycloak cluster with Docker

This is the first of two articles that will describe how to run Keycloak in clustered mode - first with Docker, and then with Kubernetes running on OpenShift 3.


The preferred way to run a Keycloak server - an authentication and authorization server with support for single sign-on - is to run it as an isolated application in its own process. What you specifically don’t want is to run any other applications in the same JVM instance, and it’s also not the best idea to run any other publicly facing applications on the same server.

The reason is of course security, but also stability. You don’t want your Keycloak process to suffer security vulnerabilities or ‘Out of memory’ errors because of another application deployed in the same JVM. It is one thing to lose a single application; it is another, more serious thing to lose login capability for many applications and services, or even have your Keycloak private keys compromised.

Even an isolated instance, though, can occasionally experience a failure. Therefore, the proper way is to have a cluster of instances with a load-balancing router or reverse proxy in front that detects a failed instance and diverts traffic away from it. One way to set that up would be to have one production instance at a time, with another failover instance idling until it’s needed. Another, even better way is to use all the running instances as production instances - this gives horizontal scaling, whereby bringing up more instances increases the number of requests your cluster is capable of handling.

It is this horizontal scaling capability that is the goal of the Kubernetes project - an open source solution for provisioning Docker containers.

In this first article I’ll show how to set up two Keycloak instances in clustered mode, each running in its own Docker container, and using a PostgreSQL database running in another Docker container.

In the next article we’ll enhance that setup by configuring these Docker instances as Kubernetes pods using OpenShift 3. That will give us a scalable runtime environment where we can remove and add server instances virtually unnoticed by clients.

Buckle up, and let’s get started.

Installing Docker


The first thing we’ll need is Docker. Docker is a containerization technology - as opposed to virtualization - and is Linux specific. Multiple processes can each run in their own isolated, chrooted environment with their own filesystem image and their own IP address within the same bridged network, while all sharing the host’s Linux kernel.

Since we can natively use Docker only on Linux, what we do when we’re on Windows or OS X is use a solution that runs a simple, small, headless (no desktop) Linux distribution on VirtualBox - it’s called boot2docker.
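If you go the boot2docker route, getting the VM up and pointing your shell at it typically looks something like this (boot2docker CLI commands of that era; details may vary by version):

 boot2docker init         # create the VirtualBox VM
 boot2docker up           # boot the small headless Linux VM
 $(boot2docker shellinit) # export DOCKER_HOST and related variables into the current shell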

If you already use VirtualBox and have an existing virtual Linux instance, you can also use that one. If it includes a desktop it can even simplify things, as you can reach the Docker containers’ IP addresses directly from a browser. You may want to add another network adapter to your virtual instance - by default there is 'Adapter 1' using NAT, and you should configure 'Adapter 2' to be of type Host-only Adapter. That will simplify connecting from your Windows / OS X host to your Linux guest.

You can find Docker installation instructions for your platform on docker.io site.


Starting Docker daemon


Once you have Docker installed make sure that your Docker daemon process is running. In your Linux shell you can execute:

ps aux | grep docker

You should see a line similar to:

root     31237  2.8  1.2 1203628 25188 ?       Ssl  Mar29  30:57 /usr/bin/docker -d --selinux-enabled -H unix://var/run/docker.sock -H tcp://0.0.0.0:2375 --insecure-registry 172.0.0.0/8

If you don’t see that, then your Docker daemon is not yet running, and it’s time to start it - you may have to prepend ‘sudo ’ if you’re not root:

service docker start


Using Docker client


We can now use the Docker client to issue commands to the daemon. We first have to make sure that our shell environment has the environment variables set that allow the Docker client to communicate with the daemon.

One way to provide the proper environment is to execute the Docker client through sudo or as the root user (su -) on the Docker host system.

Another is to specify an environment variable:

export DOCKER_HOST=tcp://192.168.56.101:2375

Where the IP address is that of one of the public interfaces on the Docker host system - one that can also be reached from your client terminal (which can be on another host). You can use ifconfig or ip addr to list the available interfaces and their IPs. Note that Docker configures virtual networks that are not directly reachable from another host; here we are not interested in those.

We can now list currently running Docker containers:

 docker ps

If this is the first time you’re using Docker, or if you have just started up the daemon, then no Docker container is running yet.


Starting Postgres as Docker container


We’re now going to set up a PostgreSQL database.

Docker uses a central repository of Docker images - each image represents a filesystem with startup configuration for an application, and is thus a mechanism for packaging an application.

We’ll use the latest official PostgreSQL image to start PostgreSQL as a new container instance. You can learn more about it on DockerHub.

docker run --name postgres -e POSTGRES_DATABASE=keycloak -e POSTGRES_USER=keycloak -e POSTGRES_PASSWORD=password -e POSTGRES_ROOT_PASSWORD=password -d postgres

The basic form of this command is: docker run -d postgres

That command instructs the Docker daemon to download the latest official postgres image from DockerHub and start it up as a new Docker container. The -d switch instructs the docker client to return immediately, while any processes executed in the container keep running in the background.

Additionally, we specified several environment variables to be passed to the container, which are used to configure a new database and a new user for accessing it. Note that we used 'password' - you should really change it to something else!

By using --name postgres we assigned a name to the new container. We’ll use this name whenever we need to refer to this running container in subsequent invocations of the docker client.

Note: if this is not the first time you’re working through these steps you may already have a container named 'postgres'. In that case, you won’t be able to create another one with the same name. You have two options - choose a different name for this one, or destroy the existing one using: docker rm postgres  


We can attach to the container output using:

docker logs -f postgres

We use -f to keep following the output - analogous to how tail -f works.

You should see the output finish with something like:

PostgreSQL stand-alone backend 9.4.1
backend> statement: CREATE DATABASE "keycloak" ;

backend>

PostgreSQL stand-alone backend 9.4.1
backend> statement: CREATE USER "keycloak" WITH SUPERUSER PASSWORD 'password' ;

Use CTRL-C to exit the client.

We can now check that the database accepts connections, since it will be accessed via TCP from other Docker containers.

We can start a new shell process within the same Docker container:

 docker exec -ti postgres bash

With this command we don’t start a new container - that would create a whole new copy of the chrooted file system environment with a new IP address assigned. Rather, we execute another process within the existing container.

By using -ti we tell docker that we want to allocate a new pseudo-TTY and that we want this terminal’s input to be attached to the container. That will allow us to use the container’s bash interactively.

We can find out what the container’s IP address is:

ip addr

We should see two interfaces:
  • lo with address 127.0.0.1
  • eth0 with address 172.17.0.x


eth0 will have an IP address within the 172.17.x.x network.

This IP address is visible from all other Docker containers running on the same host.

Let’s make sure that we are in fact attached to the same container running the PostgreSQL server, by using the psql client to connect as user keycloak to the local DB:

# psql -U keycloak
psql (9.4.1)
Type "help" for help.


keycloak=# \l
                                List of databases
  Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
-----------+----------+----------+------------+------------+-----------------------
keycloak  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
          |          |          |            |            | postgres=CTc/postgres
template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
          |          |          |            |            | postgres=CTc/postgres
(4 rows)




We’re in fact inside the correct container, and have confirmed that the database is correctly configured with user keycloak.

Exit the psql client with \q, and then exit the shell with exit.



Another way to find out the container’s address is using docker’s inspect command:

 docker inspect -f '{{ .NetworkSettings.IPAddress }}' postgres

That should return the same IP address as we saw assigned to eth0 inside ‘postgres’ container.


Testing remote connectivity


We can test that remote connectivity works by starting a new docker container based on the same postgres image so that we have access to psql tool:

  docker run --rm -ti --link postgres:postgres postgres bash

We use run, therefore the last postgres argument is not a reference to an existing running container but the ID of a Docker image to use for a new container. The extra bash argument instructs docker to skip executing the default startup script (the one that starts up a local postgres server) and to execute the command that we specified - bash.

The --rm argument instructs docker to completely clean up the container instance once the command exits - i.e. once we type exit in the bash.

We have also specified --link postgres:postgres, which instructs Docker to add the IP address of the existing ‘postgres’ container to the ‘/etc/hosts’ file, mapped to the hostname postgres. We can thus use postgres as a hostname instead of having to look up its IP address.
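Before running psql, we can see the effect of --link from inside the new container’s bash:

 grep postgres /etc/hosts    # the 'postgres' hostname is mapped to the linked container's IP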


Run the following:

# psql -U keycloak -h postgres
Password for user keycloak:
psql (9.4.1)
Type "help" for help.


keycloak=#

We have successfully connected to the PostgreSQL server remotely - from another container, over TCP.

It is now time to set up Keycloak.


Starting new Keycloak cluster as Docker container


We’ll use a prepared Docker image from DockerHub to run two Keycloak containers, each connecting to the PostgreSQL container we just started. In addition, the two Keycloak containers will establish a cluster for a distributed cache, so that any state shared between requests is instantly available to both instances. That way any one instance can be stopped, and users redirected to the other, without any loss of runtime data.

Issue the following command to start the first Keycloak container - make sure that environment variables are the same as those passed to postgres container previously:

 docker run -p 8080:8080 --name keycloak --link postgres:postgres -e POSTGRES_DATABASE=keycloak -e POSTGRES_USER=keycloak -e POSTGRES_PASSWORD=password -d jboss/keycloak-ha-postgres

Docker will download the jboss/keycloak-ha-postgres image from DockerHub and then create a new container instance from it, allocating a new IP address in the process. We used -p to map port 8080 of the Docker host to port 8080 of the new container, so that we don’t need to know the container’s IP in order to connect to it. We can simply connect to the host’s port.
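The mapping can be double-checked with the docker port command:

 docker port keycloak 8080    # prints the Docker host side of the mapping, e.g. 0.0.0.0:8080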

Monitor Keycloak as it’s coming up:

 docker logs -f keycloak


Let’s now start another container, and let’s name it keycloak2 - this one will get another IP address:

 docker run -p 8081:8080 --name keycloak2 --link postgres:postgres -e POSTGRES_DATABASE=keycloak -e POSTGRES_USER=keycloak -e POSTGRES_PASSWORD=password -d jboss/keycloak-ha-postgres

Wait for it to start completely:

 docker logs -f keycloak2


Pay attention to the following section towards the end of the log:

20:07:16,843 INFO  [stdout] (MSC service thread 1-1) -------------------------------------------------------------------
20:07:16,844 INFO  [stdout] (MSC service thread 1-1) GMS: address=f25f922ce14d/keycloak, cluster=keycloak, physical address=172.17.0.10:55200
20:07:16,846 INFO  [stdout] (MSC service thread 1-1) -------------------------------------------------------------------
20:07:17,044 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000094: Received new cluster view: [b5356f1050cc/keycloak|1] (2) [b5356f1050cc/keycloak, f25f922ce14d/keycloak]
20:07:17,049 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000079: Cache local address is f25f922ce14d/keycloak, physical addresses are [172.17.0.10:55200]
20:07:17,083 INFO  [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-1) ISPN000128: Infinispan version: Infinispan 'Infinium' 6.0.2.Final

We can see from the ISPN000094 line that a JGroups cluster was formed over two nodes. We can also find this container’s IP address in the log - it’s 172.17.0.10 in this case.


Each Keycloak instance can now be accessed from the Docker host (where the Docker daemon is running) via port 8080 of its container’s IP address. And since we mapped ports 8080 and 8081 of the Docker host to the Keycloak containers, we can also connect directly to these ports on the Docker host.

As an alternative, we could forgo mapping container ports to the Docker host’s ports, and instead set up routing / forwarding using the Docker host’s iptables - to let the traffic through the firewall - and set up routes on client hosts connecting to those instances, to direct any traffic bound for 172.17.0.0/16 through the Docker host.
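On a Linux client host, that second approach might look roughly like this - a sketch only, where the 192.168.56.101 Docker host IP is reused from the earlier examples and the iptables rule is an assumption about your firewall setup:

 # On the client host: route the Docker bridge network via the Docker host
 sudo ip route add 172.17.0.0/16 via 192.168.56.101
 # On the Docker host: make sure forwarded traffic to the containers is accepted
 sudo iptables -A FORWARD -d 172.17.0.0/16 -j ACCEPT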


Customizing the Keycloak image


The jboss/keycloak-ha-postgres image we have used was built from the official JBoss Docker project on GitHub.

In the keycloak-ha-postgres subdirectory there is a Dockerfile used to build the image.

From this directory you can perform your own build using:

 docker build --tag myrepo/keycloak-ha-postgres .

Where you can replace myrepo/keycloak-ha-postgres with some other image name.

See the README.md file for more information.
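Once built, the custom image can be used in place of the DockerHub one in the run commands above, for example:

 docker run -p 8080:8080 --name keycloak --link postgres:postgres -e POSTGRES_DATABASE=keycloak -e POSTGRES_USER=keycloak -e POSTGRES_PASSWORD=password -d myrepo/keycloak-ha-postgres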


Conclusion


We have shown how to start multiple Docker containers running a cluster of Keycloak servers that connect to another Docker container running a PostgreSQL database.

In the process we have demonstrated Docker client usage, and techniques for checking if the different servers running inside these containers have started up properly, and can connect to one another.

In the next article we’ll show how to install OpenShift 3, and run these Docker images as Kubernetes services, and virtual servers (pods).