Harbor, Keycloak and Istio — a good dance troupe

Jehoszafat Zimnowoda
8 min read · Apr 28, 2021


Dance troupe

A dance troupe has to know the choreography so that together they make a great show. Similarly, micro-services, like dancers on a stage (Kubernetes), are instructed by their choreographers (DevOps engineers) on how to interact with each other.

My story is about building a multi-tenant Kubernetes environment that facilitates various DevOps teams (tenants) with their own Kubernetes namespace and private container registry (Harbor v2.1.0) with Single Sign On (Keycloak v10.0.0) and service mesh (Istio 1.6.14) included.

Harbor: a fat but versatile container registry

Harbor provides a container image registry, vulnerability scanning, container image signing and validation, and OIDC-based authentication and authorization. The fully featured version is composed of ten micro-services: chartmuseum, clair, core, database, jobservice, portal, redis, registry, trivy and notary.

In Harbor, a project (Figure 1) represents a container image registry exposed under a unique URL, for example harbor.otomi.io/team-demo/, where team-demo is the project name.

By creating many projects, you can build a multi-tenant container image repository for the workloads in your Kubernetes cluster.

Figure 1. Projects in Harbor

In Harbor, you can also define project membership, first by defining OIDC groups (Figure 2) and then assigning them to a given project (Figure 3). For example, the OIDC group team-demo is a member of the team-demo project.

Figure 2. OIDC groups in Harbor

In Figure 3, the team-demo group has the Developer role, which limits users' permissions in this project.

Figure 3. Project membership in Harbor

Next, we want Pods from a given Kubernetes namespace to pull container images from a private registry. Harbor robot accounts are made for that purpose. I recommend creating two robot accounts in each Harbor project (see Figure 4): the first one to use within Kubernetes as a pullSecret in a given namespace, and the second one for CI/CD pipelines.


Figure 4. Robot account for team-demo project
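As a minimal sketch of the first use case (the secret name, robot account name and token are assumptions, not values taken from the Harbor UI), the pull robot account's credentials can be stored as a Kubernetes pull secret in the team namespace:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-pullsecret        # assumed name
  namespace: team-demo
type: kubernetes.io/dockerconfigjson
stringData:
  # Docker config with the robot account credentials created in Harbor;
  # the robot account name and token below are placeholders.
  .dockerconfigjson: |
    {
      "auths": {
        "harbor.otomi.io": {
          "username": "robot$team-demo",
          "password": "<robot-account-token>"
        }
      }
    }
```

Pods in the team-demo namespace can then reference this secret via spec.imagePullSecrets, or you can attach it to the namespace's default ServiceAccount so that every Pod uses it automatically.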

Multi-tenancy with Keycloak and Harbor

The Keycloak application can be used as an Identity Provider. In Harbor, a user can log in by performing the OIDC authorization code flow with Keycloak. Upon successful authentication, the user is redirected back to Harbor with a JSON Web Token (JWT) signed by the Identity Provider (Keycloak). The JWT contains the ID token, whose claims are described below.

Harbor can verify the JWT signature and automatically assign a user and role to projects based on the groups claim from the ID token.

The following code snippet presents an ID token with a groups claim:

"iss": "https://keycloak.otomi.io/realms/master",
"sub": "xyz",
"name": "Joe Doe",
"groups": [
"given_name": "Joe",
"family_name": "Doe",
"email": "joe.doe@otomi.io"

The user Joe Doe belongs to the team-dev and team-demo groups, which in Harbor can be matched to predefined OIDC groups. The ID token is issued by Keycloak (the iss property), which is running in the same Kubernetes cluster.

Figure 5 presents the OIDC configuration in Harbor.

Figure 5. Harbor OIDC configuration

There is an OIDC endpoint URL, which is matched against the iss property from the ID token. Next, the OIDC Client Id and OIDC Client Secret are used by Harbor to authenticate as a client with Keycloak. The Group Claim Name is crucial for enabling Harbor OIDC group matching.

If you want to perform an automatic user onboarding process, you should request the following OIDC scopes: openid (iss and sub properties) and email (email and email_verified properties).

Do not forget about Keycloak, which requires additional configuration of the OIDC client.

Figure 6. Keycloak OIDC client settings

In the above figure, the Client Id is otomi and the Client secret is defined in the Credentials tab. There are also valid redirect URIs and Web origins that have to be set, so a user can be redirected from and to the Harbor dashboard upon successful login.

Secure connectivity with Istio service mesh

Istio ensures service interconnectivity, encrypted traffic (mTLS) and routing (VirtualService + Gateway).

Integrating Harbor with Istio is mostly about setting up URI path routing to the right micro-service. The Harbor Helm chart also provides nginx as a reverse proxy service. You don't need it; instead, you can deploy the following Istio VirtualService:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: harbor
spec:
  hosts:
    - harbor.otomi.io
  http:
    - match:
        - uri:
            prefix: '/api/'
        - uri:
            prefix: '/c/'
        - uri:
            prefix: '/chartrepo/'
        - uri:
            prefix: '/service/'
        - uri:
            prefix: '/v1/'
        - uri:
            prefix: '/v2/'
      route:
        - destination:
            host: harbor-harbor-core.harbor.svc.cluster.local
            port:
              number: 80
    - match:
        - uri:
            prefix: '/'
      route:
        - destination:
            host: harbor-harbor-portal.harbor.svc.cluster.local
            port:
              number: 80

The VirtualService routes the /api/, /c/, /chartrepo/, /service/, /v1/ and /v2/ URI paths to the Harbor core service. All other URI paths are routed to the Harbor portal service (dashboard).

The destination hosts in the harbor VirtualService are Fully Qualified Domain Names (FQDNs) that indicate the Kubernetes namespace of the Harbor services. This makes it possible for the Istio ingress gateway to route the incoming traffic.

You might have noticed that traffic is routed to port 80 (HTTP) instead of 443 (HTTPS). This is because I disabled Harbor's internal TLS in favor of the Istio proxy sidecar, which enforces mTLS for each Harbor service.
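That sidecar-enforced mTLS can be expressed declaratively. A minimal sketch, assuming Harbor runs in the harbor namespace:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: harbor
spec:
  # Require mTLS for all workloads in the harbor namespace;
  # plain-text traffic between sidecars is rejected.
  mtls:
    mode: STRICT
```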

Automation with Otomi to support multi-tenancy

With Otomi Container Platform (or simply Otomi), we strive to integrate best-of-breed open source projects and to provide multi-tenancy awareness out of the box.

Multi-tenancy is challenging and requires configuration automation to ensure scalability. Part of Otomi's automation is to configure applications so that they are aware of each other and their multi-tenant support is enabled. We do it either by using a declarative approach when that is possible, or else by interacting with their (REST) APIs directly.

We have generated REST API clients based on the OpenAPI specifications for Harbor and Keycloak. You are welcome to use our factory for building and publishing OpenAPI clients: https://github.com/redkubes/otomi-clients

Next, we implemented idempotent tasks that leverage these REST API clients and automate service configuration. These run as Kubernetes Jobs whenever the configuration changes.

For Harbor, we have automated the creation of projects, OIDC groups, project membership and OIDC settings.

For Keycloak, we have automated the configuration of an external Identity Provider, group name normalization, deriving the Client Id and Client Secret, and more. Be inspired and take a look at our open source code: https://github.com/redkubes/otomi-tasks


Each OSS project has its own goals and milestones, so it may be challenging to integrate various projects to work together. Here, I share just a few issues that I stumbled upon.


The container image registry provided by Harbor and the docker CLI do not support the OIDC protocol. Instead, they use username/password-based authentication. This means that whenever you perform docker login/push/pull commands, the HTTPS traffic from the docker client to the container registry does not carry a JWT. Make sure to exclude the /v1/, /v2/ and /service/ Harbor URI paths from JWT verification. Otherwise, you won't be able to use the registry.
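As a hedged sketch of such an exclusion (the resource names are assumptions, and the exact shape depends on how you enforce JWT verification in your mesh), an Istio AuthorizationPolicy on the ingress gateway could let the registry paths through while requiring a request principal everywhere else:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: harbor-jwt            # assumed name
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: ALLOW
  rules:
    # Let docker clients reach the registry endpoints without a JWT.
    - to:
        - operation:
            paths: ["/v1/*", "/v2/*", "/service/*"]
    # All other requests must carry a valid JWT (request principal).
    - from:
        - source:
            requestPrincipals: ["*"]
```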

Next, OIDC users may experience issues with their docker credentials (CLI secrets) being suddenly invalidated. The CLI secret depends on the validity of the ID token, which has nothing in common with the container registry. This hybrid security solution is something that a regular docker user does not expect, and it can be a source of many misunderstandings. Follow this thread to get more insight: https://github.com/goharbor/harbor/issues/14172

The good news is that if you are an automation freak like me, you don't actually need CLI secrets. Instead, you can use Harbor robot accounts, which do not depend on OIDC authentication.

Keycloak or other identity provider

If your organization decides to migrate users to another identity provider, you may experience a duplicated user error: “Conflict, the user with same username or email has been onboarded.”

It is because the sub and/or iss claims from the ID token may change, so the same user trying to log in to the Harbor dashboard will be treated as a new one. The onboarding process starts but fails, because Harbor requires each user to have a unique email address. I ended up removing the existing OIDC users from Harbor and allowing them to onboard once again. Interestingly, the community of Harbor users is having a broad debate about the use of the OIDC protocol and has not agreed on a final solution so far. I encourage you to take a look at the very insightful conversations about it.


While making Harbor services part of the Istio service mesh, it is very important that the Kubernetes services use port names that follow the Istio convention <protocol>[-<suffix>]. For example, the Harbor registry service should have the port name http-registry instead of registry. See the example:

apiVersion: v1
kind: Service
metadata:
  name: harbor-harbor-registry
spec:
  ports:
    - name: http-registry
      port: 5000
      protocol: TCP
      targetPort: 5000
    - name: http-controller
      port: 8080
      protocol: TCP
      targetPort: 8080

If a service port name does not follow the Istio convention, the Harbor core service is not able to communicate with the Harbor registry service in the Istio service mesh. Attempting to log in to the docker registry will then end with an “authentication required” error.


Harbor is a suitable solution for deploying a self-hosted container image repository in a multi-tenant Kubernetes cluster. Nevertheless, the lack of configuration automation makes it hard to maintain in a constantly changing environment.

The OIDC groups claim can be used to grant users a default role and access to Harbor project(s).

While working with OIDC, always check if ID tokens contain required scopes.

While working with Istio, do not mess up the named ports.

Stay in touch with open source communities for projects that you are using, as they are a true treasure trove of information.

I hope that this article gives you good insight into more advanced Harbor integration in a Kubernetes cluster.



Jehoszafat Zimnowoda

Passionate about computer networks and distributed systems. OSS contributor and occasional technical writer.