02 May 2019

Installing a 3-node OpenShift environment

Introduction

Although the installation looks trivial at first glance, given the excellent OpenShift Ansible playbooks available, it easily becomes tedious and error-prone if a few key "traps" are not watched for. In this post I will walk through the complete installation of an OpenShift Origin (i.e. OKD) environment on 3 nodes: one master and two compute. Logging and monitoring are not included in this post because my machine does not have enough resources for those components.

The installation requires 4 VMs (3 will do if Ansible runs on one of the OpenShift nodes) with the network infrastructure prepared. Per the OKD documentation, every node must be able to communicate with every other node. A DNS server is also required so that the necessary addresses resolve (e.g. apps.lab.local, console.lab.local, etc.). For PVs this example uses NFS, which is no longer recommended, so the NFS PV setup itself is not shown here.

Common pitfalls to watch out for

DNS – DNS is mandatory; without it the installation will fail. Details about the required DNS records can be found here. In short, create DNS records for console.lab.local and console-int.lab.local pointing to the master's IP address. Also create DNS records for all nodes, and it is advisable to create a wildcard record such as "*.apps.lab.local" so that a new DNS record is not needed for every route.
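As an illustration, in a BIND-style zone file for lab.local the records might look like this (all IP addresses below are placeholders for this sketch; the wildcard should resolve to the node running the OpenShift router):

```
; sketch of a lab.local zone - IPs are placeholders, adjust to your network
console          IN  A   192.168.1.210    ; master
console-int      IN  A   192.168.1.210    ; master, internal name
lab-os-master01  IN  A   192.168.1.210
lab-os-node01    IN  A   192.168.1.211
lab-os-node02    IN  A   192.168.1.213
*.apps           IN  A   192.168.1.213    ; wildcard for application routes
```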

Hardware resources – OpenShift is hardware-hungry. Details about the hardware requirements can be found here. After much trial and error, the conclusion is that the absolute minimum for an OpenShift installation is 4GB RAM for the master, 1GB RAM per compute node, and 2vCPU for both master and compute nodes. Below these minimums the installation will not complete successfully.

Node configuration – Pay special attention to the network configuration of the ETH adapters on the nodes (a gateway is mandatory even if you have all repositories locally, and NM_CONTROLLED=yes is mandatory as well). Configuring the hostname is also mandatory.

OpenShift inventory – Errors and typos in the inventory file lead to incomplete installations, installation errors, and so on. Check the inventory file carefully before running the Ansible playbooks.
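One typo worth checking for specifically: "smart quotes" that sneak in when inventory snippets are copy-pasted from a web page, a frequent cause of parse errors. A small illustrative helper (not part of any official tooling) that takes the inventory path as its first argument:

```shell
# Flag curly quotes in an inventory file before ansible-playbook chokes on them.
check_quotes() {
  if grep -n '[“”]' "$1"; then
    echo "smart quotes found in $1 - replace them with plain ASCII quotes" >&2
    return 1
  fi
  return 0
}
```

Running `check_quotes os_inventory` before a deploy catches the curly-quote problem early.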

Ansible and OpenShift compatibility – The Ansible and OpenShift versions must be matched. Based on testing, the documentation, and GitHub issues, Ansible 2.7.9 proved to work best for installing OpenShift 3.11.

1. Environment

I used Hyper-V to run the VMs, and this post will not focus on Hyper-V configuration. For the environment I prepared 4 VMs:

  • lab-centos01.lab.local (2vCPU, 1GB RAM, 1xHDD 50GB) for Ansible
  • lab-os-master01.lab.local (2vCPU, 4GB RAM, 1xHDD 50GB, 1xHDD 40GB and 1xHDD 30GB) for the OpenShift master node
  • lab-os-node01.lab.local (2vCPU, 1GB RAM, 1xHDD 50GB, 1xHDD 40GB and 1xHDD 30GB) for an OpenShift compute node
  • lab-os-node02.lab.local (2vCPU, 1GB RAM, 1xHDD 50GB, 1xHDD 40GB and 1xHDD 30GB) for an OpenShift compute node

The additional disks on the OpenShift VMs are for Docker and OpenShift. There are no restrictions on the network between the VMs, and all VMs can reach the Internet.

2. Operating system installation

After installing the operating system (I used CentOS 7), update the OS and application packages:

[root@lab-centos01 ~]# yum -y update

After installation and update, clone the VM to the other 3 VMs (or install the OS from scratch). Some steps must be done on the servers themselves, while for most of them an Ansible playbook is prepared.

Configure the ETH adapter on each server:

[root@lab-centos01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

An example ifcfg-eth0 is below (note BOOTPROTO=static and ONBOOT=yes; replace IPADDR according to your network design; GATEWAY is mandatory, DNS is mandatory, and so is NM_CONTROLLED=yes):

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.1.212
PREFIX=24
GATEWAY=192.168.1.254
DNS1=192.168.1.190
DNS2=192.168.1.254
DOMAIN=lab.local
NM_CONTROLLED=yes
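Since the mandatory keys above are easy to lose when cloning VMs, a small check script can verify an ifcfg file before moving on (an illustrative helper, not an official tool; pass the ifcfg path as the first argument):

```shell
# Check an ifcfg file for the settings this install depends on; a missing
# GATEWAY or NM_CONTROLLED=yes is easy to overlook and hard to debug later.
check_ifcfg() {
  local f=$1 key rc=0
  for key in 'BOOTPROTO=static' 'ONBOOT=yes' 'GATEWAY=' 'DNS1=' 'NM_CONTROLLED=yes'; do
    if ! grep -q "^${key}" "$f"; then
      echo "missing: ${key}" >&2
      rc=1
    fi
  done
  return $rc
}
```

For example, `check_ifcfg /etc/sysconfig/network-scripts/ifcfg-eth0` prints each missing key and returns non-zero if anything is absent.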

Also configure the hostname on each server (replace according to your server naming scheme):

[root@lab-centos01 ~]# hostnamectl set-hostname lab-centos01.lab.local

Additionally, some guides and troubleshooting threads suggest the above is not enough (for reasons unknown to me), so it is advisable to also edit the network file:

[root@lab-centos01 ~]# vi /etc/sysconfig/network

and add the line (replace with your server name):

HOSTNAME=lab-centos01.lab.local

then edit hosts:

[root@lab-centos01 ~]# vi /etc/hosts

and add an entry (replace the IP address and names to match the server being modified):

192.168.1.212 lab-centos01.lab.local lab-centos01

Once done, reboot the servers:

[root@lab-centos01 ~]# reboot

3. Ansible preparation

The following steps are performed on the Ansible server (lab-centos01.lab.local).

Create a directory for the Ansible playbooks that will prepare the OpenShift nodes:

[root@lab-centos01 home]# mkdir /home/ansible_playbooks/

Add the EPEL release:

[root@lab-centos01 ~]# yum -y install epel-release

Install the packages needed for Ansible and for running the playbooks:

[root@lab-centos01 ~]# yum -y install nano git python2-pip httpd-tools java-1.8.0-openjdk-headless wget net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct openssl-devel python-cryptography python-devel python-passlib "@Development Tools"

Upgrade pip:

[root@lab-centos01 ~]# pip install --upgrade pip

Install Ansible (version 2.7.9):

[root@lab-centos01 ~]# pip install ansible==2.7.9

Create the Ansible hosts file:

[root@lab-centos01 ~]# mkdir /etc/ansible/
[root@lab-centos01 ~]# nano /etc/ansible/hosts

An example hosts file:

[os-master]
lab-os-master01.lab.local

[os-non-master]
lab-os-node01.lab.local
lab-os-node02.lab.local

Create an SSH key:

[root@lab-centos01 ~]# ssh-keygen -t rsa

The output should be:

Generating public/private RSA key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is: xxx root@lab-centos01.lab.local

Deploy the SSH key to the OpenShift VMs:

[root@lab-centos01 ~]# ssh-copy-id root@lab-os-master01.lab.local
[root@lab-centos01 ~]# ssh-copy-id root@lab-os-node01.lab.local
[root@lab-centos01 ~]# ssh-copy-id root@lab-os-node02.lab.local

And the result should be:

Number of key(s) added: 1

Verify that the OpenShift VMs are reachable by Ansible (everything should come back green :)):

[root@lab-centos01 ~]# ansible all -m setup

4. Preparing the OpenShift nodes for deployment

The following steps are performed on the Ansible server (lab-centos01.lab.local).

Although openshift-ansible has a "prerequisites" playbook, it does not cover everything that is needed, so I prepared additional Ansible playbooks, all of which are also available on GitHub.

Create the playbook that adds EPEL:

[root@lab-centos01 ~]# cd /home/ansible_playbooks/
[root@lab-centos01 ansible_playbooks]# nano 1_add_epel.yml

1_add_epel.yml

---
- hosts: all
  become: yes
  tasks:
  - name: Add EPEL.
    yum:
      name: https://dl.fedoraproject.org/pub/epel/epel-release-latest-{{ ansible_distribution_major_version }}.noarch.rpm
      state: present
  - name: Import EPEL GPG.
    rpm_key:
      key: /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-{{ ansible_distribution_major_version }}
      state: present

Run the 1_add_epel.yml playbook:

[root@lab-centos01 ansible_playbooks]# ansible-playbook 1_add_epel.yml

Create the playbook that installs the required software:

[root@lab-centos01 ansible_playbooks]# nano 2_install_apps.yml

2_install_apps.yml

---
- hosts: all
  become: yes
  tasks:
  - name: Install nano (step 1/20)
    yum:
      name: nano
      state: latest
  - name: Install wget (step 2/20)
    yum:
      name: wget
      state: latest
  - name: Install git (step 3/20)
    yum:
      name: git
      state: latest
  - name: Install lvm2 (step 4/20)
    yum:
      name: lvm2
      state: latest
  - name: Install net-tools (step 5/20)
    yum: 
      name: net-tools
      state: latest
  - name: Install Docker (step 6/20)
    yum:
      name: docker-1.13.1
      state: latest
  - name: Install bind-utils (step 7/20)
    yum:
      name: bind-utils
      state: latest
  - name: Install iptables-services (step 8/20)
    yum:
      name: iptables-services
      state: latest
  - name: Install bridge-utils (step 9/20)
    yum:
      name: bridge-utils
      state: latest
  - name: Install openssl-devel (step 10/20)
    yum:
      name: openssl-devel
      state: latest
  - name: Install bash-completion (step 11/20)
    yum:
      name: bash-completion
      state: latest
  - name: Install kexec-tools (step 12/20)
    yum:
      name: kexec-tools
      state: latest
  - name: Install sos (step 13/20)
    yum:
      name: sos
      state: latest
  - name: Install psacct (step 14/20)
    yum:
      name: psacct
      state: latest
  - name: Install python-cryptography (step 15/20)
    yum:
      name: python-cryptography
      state: latest
  - name: Install python2-pip (step 16/20)
    yum:
      name: python2-pip
      state: latest
  - name: Install python-devel (step 17/20)
    yum:
      name: python-devel
      state: latest
  - name: Install python-passlib (step 18/20)
    yum:
      name: python-passlib
      state: latest
  - name: Install java-1.8.0-openjdk-headless (step 19/20)
    yum:
      name: java-1.8.0-openjdk-headless
      state: latest
  - name: Install Development Tools (step 20/20)
    yum:
      name: "@Development Tools"
      state: latest
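A side note on the playbook above: since the `yum` module accepts a list of package names, the twenty near-identical tasks can be collapsed into one. A sketch of an equivalent one-task variant:

```yaml
---
# equivalent single-task variant of 2_install_apps.yml
- hosts: all
  become: yes
  tasks:
    - name: Install required packages
      yum:
        name:
          - nano
          - wget
          - git
          - lvm2
          - net-tools
          - docker-1.13.1
          - bind-utils
          - iptables-services
          - bridge-utils
          - openssl-devel
          - bash-completion
          - kexec-tools
          - sos
          - psacct
          - python-cryptography
          - python2-pip
          - python-devel
          - python-passlib
          - java-1.8.0-openjdk-headless
          - "@Development Tools"
        state: latest
```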

Run the software installation playbook:

[root@lab-centos01 ansible_playbooks]# ansible-playbook 2_install_apps.yml

Create the playbook that configures the disks (volumes for Docker and OpenShift):

[root@lab-centos01 ansible_playbooks]# nano 3_setup_volumes.yml

3_setup_volumes.yml

---
- hosts: all
  tasks: 
    - name: Setup volumes (step 1/16) 
      lineinfile: 
        path: "/etc/sysconfig/docker-storage-setup"
        line: "DEVS=/dev/sdb"
    - name: Setup volumes (step 2/16)
      lineinfile: 
        path: "/etc/sysconfig/docker-storage-setup"
        line: "VG=docker-vg"
    - name: Setup volumes (step 3/16)
      shell: docker-storage-setup
- hosts: os-master
  tasks: 
    - name: Setup volumes (step 5/16)
      shell: "vgcreate etcd-vg /dev/sdc"
    - name: Setup volumes (step 6/16)
      shell: "lvcreate -n etcd-lv -l 100%VG etcd-vg"
    - name: Setup volumes (step 7/16)
      shell: "mkfs.xfs /dev/mapper/etcd--vg-etcd--lv"
    - name: Setup volumes (step 8/16)
      file:
        path: "/var/lib/etcd"
        state: directory
    - name: Setup volumes (step 9/16)
      lineinfile: 
        path: "/etc/fstab"
        line: "/dev/mapper/etcd--vg-etcd--lv /var/lib/etcd xfs defaults 0 0"
    - name: Setup volumes (step 10/16)
      shell: "mount -a"
- hosts: os-non-master
  tasks: 
    - name: Setup volumes (step 11/16)
      shell: "vgcreate origin-vg /dev/sdc"
    - name: Setup volumes (step 12/16)
      shell: "lvcreate -n origin-lv -l 100%VG origin-vg"
    - name: Setup volumes (step 13/16)
      shell: "mkfs.xfs /dev/mapper/origin--vg-origin--lv"
    - name: Setup volumes (step 14/16)
      file:
        path: "/var/lib/origin"
        state: directory
    - name: Setup volumes (step 15/16)
      lineinfile: 
        path: "/etc/fstab"
        line: "/dev/mapper/origin--vg-origin--lv /var/lib/origin xfs defaults 0 0"
    - name: Setup volumes (step 16/16)
      shell: "mount -a"
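A note on the odd-looking `/dev/mapper/etcd--vg-etcd--lv` paths in the playbook above: device-mapper escapes every hyphen inside a VG or LV name by doubling it, then joins the two names with a single hyphen. A small sketch of the rule (`dm_path` is a hypothetical helper for illustration, not a real tool):

```shell
# device-mapper doubles '-' inside VG/LV names, then joins them with '-'
dm_path() {
  local vg=${1//-/--} lv=${2//-/--}
  echo "/dev/mapper/${vg}-${lv}"
}

dm_path etcd-vg etcd-lv      # -> /dev/mapper/etcd--vg-etcd--lv
dm_path origin-vg origin-lv  # -> /dev/mapper/origin--vg-origin--lv
```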

Run the volume setup playbook:

[root@lab-centos01 ansible_playbooks]# ansible-playbook 3_setup_volumes.yml

Create the post-configuration playbook (insecure registry plus a workaround for one bug):

[root@lab-centos01 ansible_playbooks]# nano 4_post_config.yml

4_post_config.yml

---
- hosts: all
  tasks: 
    - name: Setup insecure registry (step 1/2) 
      lineinfile: 
        path: "/etc/sysconfig/docker"
        line: "OPTIONS='--insecure-registry=172.30.0.0/16 --selinux-enabled --log-opt max-size=1M --log-opt max-file=3'"
    - name: IPv4 Forward (step 2/2)
      shell: "sysctl -w net.ipv4.ip_forward=1"

Run the post-config playbook:

[root@lab-centos01 ansible_playbooks]# ansible-playbook 4_post_config.yml
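One caveat about step 2/2 of that playbook: `sysctl -w` only changes the running kernel, so the `net.ipv4.ip_forward` setting does not survive the reboot that follows (the openshift-ansible prerequisites playbook sets it again, but persisting it yourself does no harm). A sketch using a sysctl drop-in file (`persist_ip_forward` is an illustrative helper; the directory argument defaults to the standard /etc/sysctl.d):

```shell
# Persist IPv4 forwarding across reboots with a sysctl drop-in file.
persist_ip_forward() {
  local dir=${1:-/etc/sysctl.d}
  echo 'net.ipv4.ip_forward = 1' > "${dir}/99-ip-forward.conf"
  # apply immediately (needs root): sysctl -p "${dir}/99-ip-forward.conf"
}
```

Running `persist_ip_forward` as root on each node writes /etc/sysctl.d/99-ip-forward.conf, which is read on every boot.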

Finally, reboot all the OpenShift VMs:

[root@lab-centos01 ansible_playbooks]# ansible all -a 'reboot'
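As an aside, the ad-hoc `reboot` above typically ends with Ansible reporting a dropped SSH connection even though the hosts do reboot. Ansible 2.7 added a `reboot` module that reboots and waits for the hosts to come back; it can be used ad hoc as `ansible all -m reboot`, or in a small playbook:

```yaml
---
# sketch: reboot and wait for hosts using the reboot module (Ansible >= 2.7)
- hosts: all
  tasks:
    - name: Reboot and wait
      reboot:
        reboot_timeout: 300
```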

5. Installing the OpenShift environment

All steps are performed on the Ansible server (lab-centos01.lab.local).

Clone openshift-ansible from GitHub:

[root@lab-centos01 ~]# cd /opt
[root@lab-centos01 opt]# git clone https://github.com/openshift/openshift-ansible.git
[root@lab-centos01 opt]# cd openshift-ansible/
[root@lab-centos01 openshift-ansible]# git checkout release-3.11

Create the OpenShift inventory:

[root@lab-centos01 openshift-ansible]# nano os_inventory

os_inventory (pay attention to the user/pass, i.e. the htpasswd credentials you will later use to log in to the OpenShift web console; they can also be generated with free online tools):

[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
# Ansible user who can login to all nodes through SSH 
ansible_user=root

# Deployment type: "openshift-enterprise" or "origin"
openshift_deployment_type=origin
deployment_type=origin

# Version
openshift_release=v3.11
openshift_pkg_version=-3.11.0
openshift_image_tag=v3.11.0
openshift_service_catalog_image_version=v3.11.0
template_service_broker_image_version=v3.11.0
openshift_metrics_image_version="v3.11"
openshift_logging_image_version="v3.11"
openshift_logging_elasticsearch_proxy_image_version="v1.0.0"
osm_use_cockpit=true
openshift_metrics_install_metrics=false
openshift_logging_install_logging=false

# Service address space
openshift_portal_net=172.30.0.0/16

# Pod address space
osm_cluster_network_cidr=10.128.0.0/14

##Subnet Length of each node
osm_host_subnet_length=9

# Master API port
openshift_master_api_port=443

# Master console port
openshift_master_console_port=443

# Clustering method
openshift_master_cluster_method=native

# Hostname used by nodes and other cluster internals
openshift_master_cluster_hostname=console-int.lab.local

# Hostname used by platform users
openshift_master_cluster_public_hostname=console.lab.local

# Application wildcard subdomain
openshift_master_default_subdomain=apps.lab.local

# identity provider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# Users being created in the cluster
openshift_master_htpasswd_users={'admin': '$apr1$alusv16k$038bczg.ozez5yJIR4IzS1', 'osuser': '$apr1$alusv16k$038bczg.ozez5yJIR4IzS1'}

# Persistent storage, NFS
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_host=lab-centos01.lab.local
openshift_hosted_registry_storage_nfs_directory=/openshift
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=50Gi

# Misc
containerized=True
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability

# NFS check bug
openshift_enable_unsupported_configurations=True

# Another Bug 1569476 
skip_sanity_checks=true

openshift_node_kubelet_args="{'eviction-hard': ['memory.available<100Mi'], 'minimum-container-ttl-duration': ['10s'], 'maximum-dead-containers-per-container': ['2'], 'maximum-dead-containers': ['5'], 'pods-per-core': ['10'], 'max-pods': ['25'], 'image-gc-high-threshold': ['80'], 'image-gc-low-threshold': ['60']}"

[masters]
lab-os-master01.lab.local openshift_node_group_name="node-config-master"

[etcd]
lab-os-master01.lab.local openshift_node_group_name="node-config-master"

[nodes]
lab-os-master01.lab.local openshift_node_group_name="node-config-master"
lab-os-node01.lab.local openshift_node_group_name="node-config-compute"
lab-os-node02.lab.local openshift_node_group_name="node-config-infra"
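The hashes in `openshift_master_htpasswd_users` are ordinary APR1 (htpasswd-style) hashes, so instead of an online tool they can be generated locally, either with `htpasswd -nb <user> <password>` from the `httpd-tools` package installed earlier, or with `openssl` (the `changeme` password below is a placeholder):

```shell
# Generate an APR1 (htpasswd-style) hash for openshift_master_htpasswd_users;
# 'changeme' is a placeholder password - substitute your own.
hash=$(openssl passwd -apr1 changeme)
echo "admin: ${hash}"
```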

Verify that the nodes are reachable by Ansible (everything should come back green :)):

[root@lab-centos01 openshift-ansible]# ansible -i os_inventory OSEv3 -m ping

And the result should be:

lab-os-master01.lab.local | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
lab-os-node02.lab.local | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
lab-os-node01.lab.local | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

If everything is OK, run the first part of the installation (prerequisites):

[root@lab-centos01 openshift-ansible]# ansible-playbook -i os_inventory playbooks/prerequisites.yml

If that finishes OK, run the cluster deployment:

[root@lab-centos01 openshift-ansible]# ansible-playbook -i os_inventory playbooks/deploy_cluster.yml

And hopefully everything finished OK. If not, Google the error; very likely there is already an open issue on GitHub :)

Post-installation

The following commands are executed on lab-os-master01.lab.local.

Verify that you can log in to the OpenShift cluster:

[root@lab-os-master01 ~]# oc login -u system:admin -n default

And the result should be:

Logged into "https://console-int.lab.local:443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-public
    kube-system
    logging
    management-infra
    openshift
    openshift-infra
    openshift-node
    openshift-web-console

Using project "default".

Check the status of the OpenShift nodes:

[root@lab-os-master01 ~]# oc get no

And the result should be:

NAME                        STATUS    ROLES     AGE       VERSION
lab-os-master01.lab.local   Ready     master    4d        v1.9.1+a0ce1bc657
lab-os-node01.lab.local     Ready     compute   4d        v1.9.1+a0ce1bc657
lab-os-node02.lab.local     Ready     compute   4d        v1.9.1+a0ce1bc657

Check the services:

[root@lab-os-master01 ~]# oc get svc

And the result should be:

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry    ClusterIP   172.30.45.136    <none>        5000/TCP                  4d
kubernetes         ClusterIP   172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     5d
registry-console   ClusterIP   172.30.17.1      <none>        9000/TCP                  4d
router             ClusterIP   172.30.249.169   <none>        80/TCP,443/TCP,1936/TCP   4d

Add the "osuser" user to the cluster-admin role so that it can log in to OpenShift as an admin:

[root@lab-os-master01 ~]# oc adm policy add-cluster-role-to-user cluster-admin osuser

You can now log in to the OpenShift web console with the "osuser" credentials.

In closing

All the playbooks for preparing the nodes and an example inventory are available on GitHub. I am aware there is plenty of room for improvement (e.g. a single playbook for all the work), but something has to be left for v1.1 :)

All comments, criticism, and suggestions are welcome, either here or on GitHub, whichever feels more comfortable :)
