OpenSearch Node Management and Certificates in Docker Rootless

Kadriye Taylan
Feb 16, 2024

Hi Everyone!

It’s been a long time! :)

I will explain OpenSearch installation and configuration with Docker in rootless mode. I have some experience on this topic and thought it might be helpful for someone.

I will not explain what OpenSearch is because if you’re here, you probably already know some basic things about it. :)

Let’s dive in! 🌊

Actually, installing OpenSearch on Docker is quite straightforward. You can find detailed documentation on the OpenSearch website. However, depending on the Docker mode you’re using, some things might become a bit confusing.

In my case, I'm using Docker in rootless mode on a Rocky Linux 8 server. Additionally, I'm working within a specific user scope, so I encountered various restrictions during installation and configuration.

The Docker version I use is 24.0.4. In addition to running Docker in rootless mode, I also define a different bridge IP range in the daemon.json file during installation.

The reason for defining a bridge IP is that in rootless mode I can't use the Docker host network, so I use the bridge network instead. By specifying the IP block I want, my container can reach other applications over the desired network, which gives me the connectivity I need even though Docker is running rootless.
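For reference, this is roughly how a custom bridge range can be set for rootless Docker. It is only a minimal sketch: the config path is the rootless default, and the IP block is an example value, so adjust both to your environment.

# Rootless Docker reads daemon.json from the user's config directory
mkdir -p ~/.config/docker
cat > ~/.config/docker/daemon.json <<'EOF'
{
  "bip": "192.168.200.1/24"
}
EOF
# Restart the rootless daemon so the new bridge range takes effect
systemctl --user restart docker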

My goal is to create two nodes on two different servers, so I have two servers for that purpose. I configured them exactly the same because I use automation for the configuration. :)

My docker-compose file for installing OpenSearch:

version: '3.8'
services:
  opensearch:
    container_name: ${containerName}
    image: ${opensearch_application_image_repo}:${opensearch_application_image_tag}
    environment:
      - RUN_USER=root
      - RUN_GROUP=root
      - RUN_UID=0
      - RUN_GID=0
      - DISABLE_INSTALL_DEMO_CONFIG=true
    volumes:
      - ${opensearch_application_directory}/data:/usr/share/opensearch/data:Z
      - ${opensearch_application_directory}/certs:/usr/share/opensearch/config/certificates
      - ${opensearch_application_directory}/config/opensearch.yml:/usr/share/opensearch/config/opensearch.yml
      - ${opensearch_application_directory}/config/security/internal_users.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/internal_users.yml
      - ${opensearch_application_directory}/config/security/roles_mapping.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/roles_mapping.yml
      - ${opensearch_application_directory}/config/security/config.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/config.yml
    ports:
      - "${opensearch_ports}:${opensearch_ports}"
    deploy:
      resources:
        limits:
          memory: ${container_memory}
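The ${...} values are placeholders that my automation tool fills in at deploy time. If you wanted to run the same file by hand, a rough sketch would be to export example values and bring the stack up; everything below is only illustrative:

# Example values only; in my setup the automation tool injects these
export containerName=opensearch-node1
export opensearch_application_image_repo=opensearchproject/opensearch
export opensearch_application_image_tag=2.11.1
export opensearch_application_directory=$HOME/opensearch
export opensearch_ports=9200
export container_memory=4g
docker compose up -d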

I execute the same docker-compose file on both servers (with an automation tool).

Now, let's look at the OpenSearch configs:

---
cluster.name: opensearch-cluster-#{env}
network.host: 0.0.0.0
network.publish_host: #{Octopus.Machine.Hostname}
bootstrap.memory_lock: false
node.name: #{Octopus.Machine.Hostname}
discovery.seed_hosts: #{each node in opensearch_nodes}
- #{node} #{/each}
cluster.initial_master_nodes: #{each node in opensearch_nodes}
- #{node} #{/each}

action.auto_create_index: ".watches,.triggered_watches,.watcher-history-*"
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 93%
cluster.routing.allocation.disk.watermark.high: 95%
# Disable ssl http. Cannot use securityadmin when disabled
# Bitbucket results in http error once enabled
plugins.security.ssl.transport.pemcert_filepath: /usr/share/opensearch/config/certificates/#{Octopus.Machine.Hostname}.pem
plugins.security.ssl.transport.pemkey_filepath: /usr/share/opensearch/config/certificates/#{Octopus.Machine.Hostname}-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: /usr/share/opensearch/config/certificates/root-ca.pem
plugins.security.ssl.http.pemcert_filepath: /usr/share/opensearch/config/certificates/#{Octopus.Machine.Hostname}.pem
plugins.security.ssl.http.pemkey_filepath: /usr/share/opensearch/config/certificates/#{Octopus.Machine.Hostname}-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: /usr/share/opensearch/config/certificates/root-ca.pem
plugins.security.ssl.http.enabled: false
plugins.security.ssl.transport.enabled: true
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.allow_default_init_securityindex: true
plugins.security.allow_unsafe_democertificates: false
plugins.security.restapi.roles_enabled: ["all_access"]
plugins.security.ssl.transport.enabled_protocols:
- "TLSv1.2"
plugins.security.ssl.http.enabled_protocols:
- "TLSv1.2"

plugins.security.authcz.admin_dn:
- 'CN=ADMIN,O=YOURORG,L=YOURCITY,ST=YOURSTATE,C=YOURCOUNTRY'
plugins.security.nodes_dn: #{each node in opensearch_nodes}
- 'CN=#{node},O=YOURORG,L=YOURCITY,ST=YOURSTATE,C=YOURCOUNTRY' #{/each}

The configuration is parameterized: the variables starting with 'Octopus' indicate that the YAML file above is deployed with Octopus Deploy.

The most important parameters to pay attention to here are as follows:

network.publish_host: #{Octopus.Machine.Hostname} #ContainerHostname
node.name: #{Octopus.Machine.Hostname} #ContainerHostname
discovery.seed_hosts: #{each node in opensearch_nodes}
- #{node} #ContainerHostname #{/each}
cluster.initial_master_nodes: #{each node in opensearch_nodes}
- #{node} #ContainerHostname #{/each}

The reason for defining network.publish_host is this: with the Docker bridge network and the same IP range on both servers, launching the same container with the same configuration on each server gives both containers the same private IP address, so they can't reach each other through it. To let the nodes reach each other, I set network.publish_host to the hostname of the server the container runs on, so each node advertises an address the other node can actually reach.

The primary function of node configurations is to manage the creation of an OpenSearch cluster and communication between nodes. These configurations are used to specify the initial master nodes of the cluster and ensure the discovery of nodes when the cluster is being formed.

First, let’s take a look at the technical details of these cluster settings.

node.name:

Description: Specifies the unique name of the OpenSearch node within the cluster.

Purpose: Identifies the node within the cluster and is used for communication and coordination among cluster nodes.

discovery.seed_hosts:

Description: Specifies the initial list of host addresses that a new OpenSearch node should connect to in order to discover other nodes in the cluster.

Purpose: Facilitates the discovery of existing cluster nodes by new nodes during cluster formation.

cluster.initial_master_nodes:

Description: Specifies the initial list of nodes that should serve as master nodes when forming a new OpenSearch cluster.

Purpose: Defines the initial master nodes of the cluster during its initial setup.

I set node.name to the hostname for both clarity and discovery. The discovery.seed_hosts and cluster.initial_master_nodes lists must contain the same values used for node.name; otherwise, the nodes won't be able to discover each other.

Simply put, if you're running OpenSearch nodes on different servers with Docker in rootless mode and you run into connectivity issues between the nodes, you can try a configuration like the example below.

Node1

network.host: 0.0.0.0
network.publish_host: containerhostname123
node.name: containerhostname123
discovery.seed_hosts:
- containerhostname123
- containerhostname456
cluster.initial_master_nodes:
- containerhostname123
- containerhostname456

Node2

network.host: 0.0.0.0
network.publish_host: containerhostname456
node.name: containerhostname456
discovery.seed_hosts:
- containerhostname123
- containerhostname456
cluster.initial_master_nodes:
- containerhostname123
- containerhostname456
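Once both containers are running, you can check from either server whether the two nodes really formed one cluster. A quick sanity check, assuming HTTP listens on port 9200, HTTP TLS is disabled as in the config above, and you have valid admin credentials (all of these are assumptions about your setup):

# Both node names should show up in the nodes list
curl -u admin:yourpassword "http://containerhostname123:9200/_cat/nodes?v"
# number_of_nodes should be 2 and the status should not be red
curl -u admin:yourpassword "http://containerhostname123:9200/_cluster/health?pretty"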

Another essential point: if you want your nodes to connect securely, OpenSearch requires you to create self-signed certificates and use them on the nodes. The detailed TLS configuration is explained in the official OpenSearch documentation, which you can follow. However, there is a critical point here that is easily overlooked, and I want to emphasize it.

Maybe it’s not that important, but I’ve spent a lot of time on it! 😅

When creating certificates, you generate a separate certificate for each node and for the admin user, and each certificate is created with subject (DN) information.

Sample code:

openssl req -new -key node2-key.pem -subj "/C=YOURCOUNTRY/ST=YOURSTATE/L=YOURCITY/O=YOURORG/CN=YOURCOMMONNAMEFORNODE" -out node2.csr
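For context, the CSR above is only one step of the chain. A rough outline of the whole flow for one node certificate, following the self-signed approach from the OpenSearch documentation (subject values and validity periods below are placeholders):

# 1. Root CA that will sign every node and admin certificate
openssl genrsa -out root-ca-key.pem 2048
openssl req -new -x509 -sha256 -key root-ca-key.pem -subj "/C=YOURCOUNTRY/ST=YOURSTATE/L=YOURCITY/O=YOURORG/CN=ROOT-CA" -out root-ca.pem -days 730

# 2. Node key, converted to PKCS#8 as the security plugin expects
openssl genrsa -out node2-key-temp.pem 2048
openssl pkcs8 -inform PEM -outform PEM -in node2-key-temp.pem -topk8 -nocrypt -v1 PBE-SHA1-3DES -out node2-key.pem

# 3. CSR with the subject information, then sign it with the root CA
openssl req -new -key node2-key.pem -subj "/C=YOURCOUNTRY/ST=YOURSTATE/L=YOURCITY/O=YOURORG/CN=YOURCOMMONNAMEFORNODE" -out node2.csr
openssl x509 -req -in node2.csr -CA root-ca.pem -CAkey root-ca-key.pem -CAcreateserial -sha256 -out node2.pem -days 730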

The exact same subject information you use here must also be defined in the opensearch.yml file where your OpenSearch configuration lives.

If you used the following commands for a configuration with two nodes:

openssl req -new -key admin-key.pem -subj "/C=CA/ST=ONTARIO/L=TORONTO/O=ORG/CN=ADMIN" -out admin.csr
openssl req -new -key node1-key.pem -subj "/C=CA/ST=ONTARIO/L=TORONTO/O=ORG/CN=NODE1" -out node1.csr
openssl req -new -key node2-key.pem -subj "/C=CA/ST=ONTARIO/L=TORONTO/O=ORG/CN=NODE2" -out node2.csr

then your OpenSearch YAML configuration should look like this:

plugins.security.authcz.admin_dn:
- 'CN=ADMIN,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
plugins.security.nodes_dn:
- 'CN=NODE1,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
- 'CN=NODE2,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'

This may not seem like a big deal, but even a small typo here can cause you hours of frustration because you may not realize the error is coming from here. 😄
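One way to avoid that trap is not to type the DN by hand at all: read the subject back from the generated certificate in the comma-separated format that opensearch.yml expects, and paste it from there.

# Prints something like: subject=CN=NODE2,O=ORG,L=TORONTO,ST=ONTARIO,C=CA
openssl x509 -subject -nameopt RFC2253 -noout -in node2.pem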

In this article, I've explained managing OpenSearch nodes with Docker rootless networking, covering the configuration, certificate creation, and tips for secure connectivity between nodes.

I hope it’s been helpful. Feel free to ask any questions.

Have a great day 😊🌸
