OpenShift

At the moment I'm learning OpenShift: this is where I write down stuff that's currently interesting and that I keep forgetting.
The list has (almost) no order and is quite random.

OpenShift odds and ends

create and switch projects

oc new-project thespark
oc projects
oc project thespark
oc delete project thespark

rollback to latest successful deployment

oc rollout undo dc/hallospark
re-enable triggers:  
oc set triggers --auto dc/hallospark
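
To roll back to an older revision instead of the latest, a sketch (same dc assumed):

oc rollout history dc/hallospark
oc rollout undo dc/hallospark --to-revision=2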

expose applications

oc expose dc/hallogo --port=8080 (generates a service from the deployment configuration)
oc expose svc/hallogo (generates a route from the service)
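
To verify, the route host can be curled; a sketch assuming the route got the service name:

oc get route hallogo
curl http://$(oc get route hallogo -o jsonpath='{.spec.host}')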

describe resource types, print them as YAML

oc describe bc
oc describe dc
oc get secret gitlab-hallospark -o yaml
oc get user
oc get nodes

templates

oc get templates -n openshift 
oc describe template postgresql-persistent -n openshift
oc get template jenkins-pipeline-example -n openshift -o yaml
oc export all --as-template=javapg
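
A template is instantiated with oc new-app; a sketch for the PostgreSQL template (parameters can be listed first, the values here are made up):

oc process --parameters postgresql-persistent -n openshift
oc new-app --template=postgresql-persistent -p POSTGRESQL_USER=spark -p POSTGRESQL_PASSWORD=geheim -p POSTGRESQL_DATABASE=sparkdb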

collect all logs of a namespace:

#!/bin/bash

namespace='openshift-monitoring'

oc project "$namespace"

# "oc get pods -o name" yields names like pod/foo, hence the pod/ subdirectory
mkdir -p "$namespace-logs/pod"

for pod in $(oc get pods -o name -n "$namespace"); do
    echo "$pod"
    echo "===============$pod (tailed to 2000 lines)===================" >> "$namespace-logs/$pod-logs.txt"
    oc logs "$pod" --all-containers --tail 2000 >> "$namespace-logs/$pod-logs.txt"
done

tar -cvf "$namespace-logs.tar" "$namespace-logs/"

Garbage collection

configure garbage collection.

get rid of evicted pods:

oc get pod --all-namespaces  | grep Evicted
oc get pod --all-namespaces  | awk '{if ($4=="Evicted") print "oc delete pod " $2 " -n " $1;}' | sh 

get rid of garbage by pruning:
(as a cluster admin on a master node, so that the registry is accessible)

oc login -u rschumm

oc adm prune builds --confirm 
oc adm prune images --confirm 

list, describe and delete all resources with a label:

oc get all --selector app=hallospark -o yaml
oc describe all --selector app=hallospark
oc delete all --selector app=hallospark 
oc delete pvc --all (deletes all PVCs in the current project, regardless of label)

… e.g. delete (and restart) all node-exporter and Prometheus pods:

oc project openshift-monitoring
oc get -o name pods --selector app=node-exporter
oc delete pods --selector app=node-exporter
oc delete pods --selector app=prometheus

look for nodes with DiskPressure

oc describe node  | grep -i NodeHasDiskPressure
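
The same condition can be queried per node with a jsonpath sketch:

oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="DiskPressure")].status}{"\n"}{end}'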

drain and reboot node:

oc adm manage-node ocp-app-1 --schedulable=false
oc adm drain ocp-app-1

systemctl restart atomic-openshift-node.service
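
After the reboot, make the node schedulable again:

oc adm manage-node ocp-app-1 --schedulable=true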

Admin, Access etc.

minishift: 
minishift addons apply admin-user
oc login -u admin (password: admin)

normal: (on first master)
oc login -u system:admin
oc adm policy add-cluster-role-to-user cluster-admin rschumm

allow root user etc: 
oc adm policy add-scc-to-user anyuid -z default -n myproject --as system:admin
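
To check what is available, the SCCs can be listed and inspected:

oc get scc
oc describe scc anyuid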

Edit Node Config Map:

oc edit cm node-config-compute -n openshift-node

minishift, Eclipse Che addon etc.

minishift start --cpus 3 --memory 6GB
minishift addons apply che
minishift addons remove che
minishift addons list
minishift update 

External access to the database etc. (routes in OpenShift are for HTTP only):

oc port-forward postgresql-1-6pkns 15432:5432

or even more directly:

oc exec postgresql-1-nhvs5 -- psql -d explic -c "select experiment from video"
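
With the port-forward from above running, a local psql can connect; a sketch (the user name is an assumption):

psql -h localhost -p 15432 -U postgres -d explic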

kubernetes dashboard UI

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy 
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Builds

also cf. my Polyglot Example Blog

s2i maven “binary workflow”

mvn package fabric8:resource fabric8:build fabric8:deploy

s2i “source workflow” for different languages:

oc new-app fabric8/s2i-java~https://github.com/rschumm/hallospark.git
oc new-app fabric8/s2i-java~https://gitlab.com/rschumm/hallospark.git --source-secret='gitlab-hallospark'
oc new-app openshift/php~https://github.com/rschumm/hallophp.git
oc new-app openshift/dotnet~https://github.com/rschumm/hallodotnet.git

Docs for the different languages.
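
To follow what a source build is doing (BuildConfig name from the hallospark example above):

oc logs -f bc/hallospark
oc status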

apply a resource

oc apply -f src/main/fabric8/pipeline_bc.yml 
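
The pipeline build can then be started by hand; a sketch, assuming the BuildConfig in pipeline_bc.yml is named hallospark-pipeline:

oc start-build hallospark-pipeline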

OpenShift Multi-Stage Deployment

Simplest blueprint for a multi-stage deployment with a Jenkins build pipeline

tag manually:

oc tag sparkpipe/hallospark:latest sparkpipe/hallospark:prod

Deploying images from another namespace:

Service account default will need image pull authority to deploy images from sparkpipe. You can grant authority with the command:

oc policy add-role-to-user system:image-puller system:serviceaccount:sparkpipe-prod:default -n sparkpipe
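
After that, the prod project can run the promoted image; a sketch with the project and tag names from above:

oc new-app --image-stream=sparkpipe/hallospark:prod -n sparkpipe-prod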

Jenkinsfile

try {
    //node('maven') {
    node {
        stage('deploy to dev') {
            // build via the OpenShift Pipeline plugin; the dev deployment follows from the new image
            openshiftBuild(buildConfig: 'hallospark', showBuildLogs: 'true')
        }
        //stage ('deploy'){
        //    openshiftDeploy(deploymentConfig: 'hallospark')
        //}
        stage("approve the deployment") {
            input message: "Test deployment: Isch guät?", id: "approval"
        }
        stage("deploy prod") {
            // promote to prod by re-tagging the image (same effect as the manual oc tag above)
            openshift.withCluster() {
                openshift.tag("sparkpipe/hallospark:latest", "sparkpipe/hallospark:prod")
            }
        }
    }
} catch (err) {
    echo "in catch block"
    echo "Caught: ${err}"
    currentBuild.result = 'FAILURE'
    throw err
}

Cluster install and update

install

on bastion host:

[cloud-user@bastion openshift-ansible]$ cd /usr/share/ansible/openshift-ansible/
[cloud-user@bastion openshift-ansible]$ ansible-playbook playbooks/prerequisites.yml 
[cloud-user@bastion openshift-ansible]$ ansible-playbook playbooks/deploy_cluster.yml

The host inventory file is in /etc/ansible/hosts.
See the documentation.

update

Sample update release. See the automated update docs.

on bastion host as root:

subscription-manager refresh

update repos:

yum update -y openshift-ansible

or update everything with yum update

check:

/etc/ansible/hosts
openshift_master_manage_htpasswd=false 

perform the update:

[cloud-user@bastion ~]$ cd /usr/share/ansible/openshift-ansible
[cloud-user@bastion ~]$ ansible-playbook playbooks/byo/openshift-cluster/upgrades/v3_11/upgrade.yml --check
[cloud-user@bastion ~]$ ansible-playbook playbooks/byo/openshift-cluster/upgrades/v3_11/upgrade.yml

reboot all nodes.

then, as a cluster admin:

oc adm diagnostics

PostgreSQL broken

Thanks to the Government of British Columbia, Canada, here is the solution for accessing and fixing a corrupted, crash-looping PostgreSQL database:

Log Signature:

pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
..... done
server started
=> sourcing /usr/share/container-scripts/postgresql/start/set_passwords.sh ...
ERROR:  tuple concurrently updated

in principle:

oc debug (a crashing pod)

scale down the postgresql deployment to 0 replicas

in the debug session:

run-postgresql (should provoke the same error)
pg_ctl stop -D /var/lib/pgsql/data/userdata 
pg_ctl start -D /var/lib/pgsql/data/userdata
pg_ctl stop -D /var/lib/pgsql/data/userdata 

end the debug session and re-init the postgresql deployment.
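
The scaling around the debug session as commands (dc name assumed to be postgresql):

oc scale dc/postgresql --replicas=0
(debug and fix as above)
oc scale dc/postgresql --replicas=1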

credits

Hacks

To change the branding of the web console, you can change the /etc/origin/master/master-config.yaml file or, much easier:
in the namespace openshift-web-console, simply manipulate the ConfigMap webconsole-config and add the following snippet to the config yaml:

extensions:
  properties: {}
  scriptURLs: []
  stylesheetURLs: 
    - https://schumm.ch/exp/okd.css

The configs are watched, and after a few minutes all WebConsole Pods will be reloaded with the new config.

