IraCluster Kubernetes Components Installation

1. Install NATS 

Download the nats-server binary from the NATS releases page.
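For example, the following fetches a Linux amd64 build (the version shown is illustrative; pick the release that matches your environment):

curl -LO https://github.com/nats-io/nats-server/releases/download/v2.10.14/nats-server-v2.10.14-linux-amd64.zip
unzip nats-server-v2.10.14-linux-amd64.zip
cp nats-server-v2.10.14-linux-amd64/nats-server .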


Move the nats-server binary to /usr/local/bin.


sudo mv nats-server /usr/local/bin/

Make the binary executable.


sudo chmod +x /usr/local/bin/nats-server
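You can confirm the binary is installed and on your PATH by printing its version:

nats-server -v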

2. Generating NKeys (NATS Keys)

Create NKeys (required for authentication in NATS)

You need to generate two pairs of nkeys: one for the normal user and one for the system user. The system user is used by iracluster, whereas the rest of the applications use the normal user for authentication.


To generate nkeys, go to the link below and download the package relevant to your operating system:

https://github.com/nats-io/nkeys/releases/tag/v0.4.7

For example, download nkeys-v0.4.7-linux-amd64.zip from the above link, unpack the zip, and copy the nk binary it contains to /usr/local/bin.


Then, execute the below command to generate a pair of nkeys:

nk -gen user -pubout


Execute the above command twice: once to generate the key pair for the system user and once for the normal user.

The string starting with S is the seed key, and the string starting with U is the public key.
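A sample run looks like this (the keys below are truncated placeholders, not real keys; the seed key must be kept secret):

nk -gen user -pubout
SUACSSL3UAHUDXKF...   (seed key)
UDXU4RCSJNZOIQHZ...   (public key)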

Use one pair for nats_public_key and nats_seed_key, and the other pair for sys_nats_public_key and sys_nats_seed_key in the later steps.

For more information on nkeys, see https://docs.nats.io/running-a-nats-service/configuration/securing_nats/auth_intro/nkey_auth

3. Download the required JSON files and Irawatch folder

For the Kubernetes components installation, the JSON files need to be ready.

They can be downloaded from this URL.

4. Editing the JSON files

Make sure to edit common_config.json with the correct nkeys, the nats_urls of the machines, and the correct cluster-id. The format for a NATS URL is nats://<public-ip>:4222
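For example, if the two machines have public IPs 10.0.0.11 and 10.0.0.12 (illustrative addresses), the HA value would be:

"nats_url": "nats://10.0.0.11:4222,nats://10.0.0.12:4222"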


Add the nats-server.conf and the service file on each machine.

Replace the server1ip, server2ip, clustername, server1name, and nats-key placeholders in the nats-server.conf below with the respective values for your machines.

Example: nats-server.conf for HA

host: 0.0.0.0
port: 4222

jetstream: enabled

server_name: "server1name"

cluster {
    name: "clustername"
    listen: "server1ip:4248"
    routes: ["nats://server2ip:4248"]
}

accounts: {
    sysAcc: {
        users: [
            {nkey: "Sys-key1"},
            {nkey: "Sys-key2"}
        ],
        exports: [
            {stream: ira.sys.disconnect.>}
        ]
    },
    normal: {
        users: [
            {nkey: "normal-key1"},
            {nkey: "normal-key2"}
        ],
        imports: [
            {stream: {account: sysAcc, subject: ira.sys.disconnect.>}}
        ],
        jetstream: enabled
    }
}

system_account: sysAcc

OR, for non-HA mode, the template for nats-server.conf is:

host: 0.0.0.0
port: 4222

jetstream: enabled

accounts: {
    sysAcc: {
        users: [
            {nkey: "Sys-key1"}
        ],
        exports: [
            {stream: ira.sys.disconnect.>}
        ]
    },
    normal: {
        users: [
            {nkey: "normal-key1"}
        ],
        imports: [
            {stream: {account: sysAcc, subject: ira.sys.disconnect.>}}
        ],
        jetstream: enabled
    }
}

system_account: sysAcc


Place the above nats-server.conf in the /etc/ directory.
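You can sanity-check the configuration before wiring it into the service; the -t flag tests the configuration and exits:

nats-server -c /etc/nats-server.conf -t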

nats-server.service file content

[Unit]
Description=NATS Server
After=network-online.target ntp.service

[Service]
PrivateTmp=true
Type=simple
ExecStart=/usr/local/bin/nats-server -c /etc/nats-server.conf
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s SIGINT $MAINPID

User=root
Group=root

Restart=always
RestartSec=5

KillSignal=SIGUSR2

LimitNOFILE=800000

[Install]
WantedBy=multi-user.target

Place the above nats-server.service file in /etc/systemd/system.
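Reload systemd so it registers the newly added unit file (a standard systemd step):

sudo systemctl daemon-reload

Then start the service: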


sudo systemctl start nats-server.service
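Confirm that the service came up cleanly:

sudo systemctl status nats-server.service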


Note: common_config.json, irapass.json, iracpa.json, and the irawatch folder are to be placed in the /usr/local/epi/conf directory.

The cluster_id is equivalent to unique_hive_name in the older deployments. This can be created at license.epicode.in.

Make sure the sitekey.pem file exists in the /usr/local/epi/conf folder and contains the encrypted data. This can be downloaded from license.epicode.in.

The file /usr/local/epi/conf/irawatch/analyser/yamls/iradialer.yaml expects the cluster-id for irawatch to run as expected. Run the following command to replace the placeholder with your cluster-id:

sudo sed -i 's/cluster_name: {{cluster_name}}/cluster_name: <your-cluster-id>/' /usr/local/epi/conf/irawatch/analyser/yamls/iradialer.yaml

For example, if the cluster-id is "acqueonCluster", the command is as follows:

sudo sed -i 's/cluster_name: {{cluster_name}}/cluster_name: acqueonCluster/' /usr/local/epi/conf/irawatch/analyser/yamls/iradialer.yaml
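You can verify the replacement took effect:

grep cluster_name /usr/local/epi/conf/irawatch/analyser/yamls/iradialer.yaml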


The common_config.json should be as follows:

{
    "cluster_id": "<cluster-id>",
    "nats_conf": {
      "nats_url": "nats://<Server1ip>:4222,nats://<Server2ip>:4222",
      "nats_public_key": "<normal-key1>",
      "nats_seed_key": "<normal-seedkey1>",
      "sys_nats_public_key": "<sys-keys1>",
      "sys_nats_seed_key": "<sys-seedkey1>"
    },
    "enable_log": true,
    "app_log_level": "info",
    "app_detailed_log": false
}
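After filling in the values, you can confirm the file is still valid JSON (assuming jq is available on the machine):

jq . /usr/local/epi/conf/common_config.json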


Run the below command to enable the NATS service on both servers so that it starts automatically on boot.

sudo systemctl enable nats-server.service

5. Installation of Kubernetes

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s -

sudo systemctl start k3s
sudo systemctl status k3s
sudo systemctl enable k3s

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
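To verify that the k3s node is up, and to make the KUBECONFIG export persist across sessions (assuming a bash shell):

kubectl get nodes
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc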

6. Download the YAML files

Download the YAML files zip and extract it to the home directory of the machine.

Make sure to add the correct NATS system key and NATS normal key in the nats-values.yaml file.

7. Apply the YAML files

Note: The ingress routes and the persistent volumes may have to be changed based on the deployment Kubernetes environment.


kubectl apply -f volumes/iracluster-conf-volume.yaml
kubectl apply -f volumes/iracluster-logs-path-volume.yaml
kubectl apply -f volumes/iracpa-recordings-path-volume.yaml
kubectl apply -f volume-claims/iracluster-conf-volume-claim.yaml
kubectl apply -f volume-claims/iracluster-logs-path-volume-claim.yaml
kubectl apply -f volume-claims/iracpa-recordings-path-volume-claim.yaml

kubectl apply -f deployments/watcher.yaml
kubectl apply -f deployments/tracker.yaml
kubectl apply -f deployments/irapass.yaml
kubectl apply -f deployments/iramonitor.yaml
kubectl apply -f deployments/iradialerweb.yaml
kubectl apply -f deployments/iradialer-exporter.yaml
kubectl apply -f deployments/iracpa.yaml
kubectl apply -f deployments/event-dispatcher.yaml
kubectl apply -f deployments/cpatracker.yaml
kubectl apply -f deployments/collector.yaml
kubectl apply -f deployments/analyser-depl.yaml


kubectl apply -f services/analyser-svc.yaml
kubectl apply -f services/iracpa-lb.yaml
kubectl apply -f services/iradialerweb-svc.yaml
kubectl apply -f services/iramonitor-svc.yaml

kubectl apply -f ingress/iradialer-ingress.yaml
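If your extracted folder layout matches the paths above, the same resources can also be applied directory by directory in a single command; kubectl processes -f arguments in the order given, so the volumes and claims are created before the deployments that use them (this assumes those directories contain only the manifests listed above):

kubectl apply -f volumes/ -f volume-claims/ -f deployments/ -f services/ -f ingress/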

8. Verifying the installation

After applying the YAML files, please follow these steps to verify that all pods are running correctly:

Verify Pod Status: To ensure that all your Kubernetes components have been deployed successfully and your pods are running as expected, run the following command:

kubectl get pods
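Each pod should eventually reach the Running status. You can watch them come up with the -w (watch) flag:

kubectl get pods -w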


Verify Service (SVC) Status

In addition to pods, you should verify that the services (SVC) are correctly deployed and running. Run the following command to list all services:

kubectl get svc


This command displays all services in the current namespace, whether of type ClusterIP, NodePort, or LoadBalancer, depending on how they are configured. Ensure each service has been assigned a cluster IP or external IP as appropriate.

Check for Additional Resources

You can also check the status of other Kubernetes resources such as deployments, replicasets, or daemonsets to ensure everything is running smoothly:

kubectl get deployments
kubectl get replicasets
kubectl get daemonsets

After every machine restart, the main pods should come up in the following order:

  1. IraPodWatcher

  2. IraPodTracker

  3. CPATracker (if present)

  4. IraPass

  5. IraCallRouter

  6. IraCPA

  7. Collector

  8. Exporter

  9. Analyser

Check the state of the pods, and also check the logs in the /var/log/epi/k8s and /var/log/epi folders.
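If a pod is stuck or crashing, you can inspect it directly (replace <pod-name> with the name of the failing pod from kubectl get pods):

kubectl describe pod <pod-name>
kubectl logs <pod-name>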