A spin around OKD 4 (part 1)

Coming back from vacation hit hard. Having forgotten everything about what a DevOps engineer actually does, I started with a very light activity: playing with OKD 4, the community distribution that OpenShift is built on.

OKD 4 was announced a while back (https://www.openshift.com/blog/okd4-is-now-generally-available) and an installer was made available to the community to try the product locally. I was used to Minishift, or to building a two-node cluster myself (one master and one worker: two Vagrant VMs and off with the installation playbooks), but I have to say that crc (CodeReady Containers) is very convenient.

Inside the project's GitHub repo (https://github.com/code-ready/crc) there is a link to https://cloud.redhat.com/openshift/install/crc/installer-provisioned to download and install the tool.

Once it is installed (make sure it is on your PATH; I went for a classic symbolic link /usr/local/bin/crc -> /Users/foobar/WORK/Openshift4/crc/crc), all it takes is crc setup followed by crc start. There is not much point in SSHing into the CoreOS instance that crc installs for OKD 4, but for the curious, it can be done like this:

~$ ssh -i /Users/foobar/.crc/machines/crc/id_rsa core@"$(crc ip)"
Red Hat Enterprise Linux CoreOS 45.82.202007240629-0
  Part of OpenShift 4.5, RHCOS is a Kubernetes native operating system
  managed by the Machine Config Operator (`clusteroperator/machine-config`).

WARNING: Direct SSH access to machines is not recommended; instead,
make configuration changes via `machineconfig` objects:
  https://docs.openshift.com/container-platform/4.5/architecture/architecture-rhcos.html

---
Last login: Sat Aug 22 17:50:00 2020 from 192.168.64.1
[core@crc-fd5nx-master-0 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux CoreOS release 4.5

Once the VM is up, the output shows how to log in from the command line with oc. In any case, crc console --credentials lets you retrieve the passwords at any time:

~$ crc console --credentials
To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p DhjTx-8gIJC-2h2tK-eksGY https://api.crc.testing:6443'
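
Once logged in as kubeadmin, a couple of standard oc commands give a quick idea of whether the cluster is healthy. A minimal sketch (nothing OKD-specific; the password is the one printed above):

# log in as admin, then check cluster operators and nodes
oc login -u kubeadmin -p <password> https://api.crc.testing:6443
oc get clusteroperators
oc get nodes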

Running a bare crc console, on the other hand, magically opens the OpenShift console in the browser. At first glance there are plenty of things I like compared to version 3.11.

Interesting features from a first look at the web console

  • The view split between Administrator and Developer
  • The search bar for resources and events. Very useful, even if writing big bash one-liners with oc was more fun
  • The OperatorHub section
  • The Workloads section for administrators is cross-namespace. There is no longer any need to enter a namespace to see its resources (this is very handy, troubleshooting gets faster)
  • The Storage section lists the PVs. Awesome. Storage classes are there too. It is a pity that OpenEBS (https://openebs.io/) is not among the provisioners
  • OK, the Compute part is really useful (I still have to try it, though). The most interesting feature of this section is the MachineAutoscaler, so goodbye manual scale-up (a pity, since scale-up via Ansible was the easiest thing on OKD; the upgrade, on OKD as opposed to OpenShift, was a proper bloodbath). See the sketch after this list
  • The OAuths section in Cluster Settings. Various identity providers can be configured directly from the web console
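
As an example of the Compute section, a MachineAutoscaler is just a small custom resource pointing at a MachineSet. A minimal sketch (the MachineSet name and replica bounds are illustrative, and a single-node crc cluster is not really the place to use it):

oc apply -f - <<'EOF'
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-autoscaler          # illustrative name
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 4
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: my-cluster-worker-a      # illustrative MachineSet name
EOF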

Adding an HTPasswd identity provider

# Create the password file with htpasswd
htpasswd -b -c fooobar foo bar

Upload the file through the web console.
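
For reference, the same thing can be done from the CLI: load the file into a secret in the openshift-config namespace and point the cluster OAuth resource at it. A sketch reusing the fooobar file created above (the secret and provider names are illustrative):

# store the htpasswd file in a secret
oc create secret generic htpass-secret \
  --from-file=htpasswd=fooobar -n openshift-config

# reference it from the cluster OAuth configuration
oc apply -f - <<'EOF'
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
EOF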

Login test:

$ oc login -u foo -p bar https://api.crc.testing:6443
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

From a first look, I would say we like OKD 4 quite a lot, but there are still many aspects to verify and features to try.

crc is definitely very useful for developers, but also for DevOps folks who want to take a quick look.

Cheers!

Master/slave DNS server with Ansible

I need to set up a master/slave DNS server, and I am facing the usual question about implementation options. How do I do it?

  1. by hand
  2. Chef
  3. Ansible

Option number one (every now and then, and especially for my lab environments) is starting to look like the quickest, but to stay in shape with IaC it is better to go with 2 or 3.

Since I need the DNS for an OpenShift cluster, I will go with option 3 and do everything with Ansible.


I rented a physical machine on Kimsufi, running Proxmox as the hypervisor.

[Screenshot: Proxmox]

The network is configured like this:

ocmaster39 (eth0: 10.10.10.10/24, eth1: 192.168.56.10/16)

ocslave39 (eth0: 10.10.10.11/24, eth1: 192.168.56.11/16)

Our DNS server will listen on the 192.168.0.0 network, while the OpenShift services will sit on 10.10.10.0.

Once the inventory file is done (it is really bare-bones in this case), I will run the playbooks.

[dns_master]
192.168.56.10 ansible_connection=local

[dns_slave]
192.168.56.11



Let's run a couple of commands to check that the hosts can talk to each other…

[root@ocmaster39 ansible-role-bind]# ansible all -a 'whoami' -m shell
192.168.56.10 | SUCCESS | rc=0 >>
root

192.168.56.11 | SUCCESS | rc=0 >>
root

OK, fine… everyone uses ping, so I will too:

[root@ocmaster39 ansible-role-bind]# ansible all -m ping
192.168.56.10 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
192.168.56.11 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}

Converge the master (the playbooks I used are in the fork linked at the end of the post):

ansible-playbook master.yml

Converge the slave:

ansible-playbook slave.yml
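
The playbooks themselves are not reproduced here; roughly, a master playbook built around an ansible-role-bind style role boils down to something like this (a sketch only; the real variables depend on the role used):

# master.yml -- minimal, illustrative sketch of the playbook content
cat <<'EOF' > master.yml
---
- hosts: dns_master
  become: true
  roles:
    - role: ansible-role-bind
EOF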

At this point BIND is installed and configured, so let's query the master…

[root@ocmaster39 ~]# dig @192.168.56.10 google.it | grep -n1 "ANSWER SECTION"
13-
14:;; ANSWER SECTION:
15-google.it. 188 IN A 172.217.18.195
[root@ocmaster39 ~]# dig @192.168.56.10 ocslave39.openshift.local | grep -n1 "ANSWER SECTION"
13-
14:;; ANSWER SECTION:
15-ocslave39.openshift.local. 1209600 IN A 192.168.56.11

… and now the slave …

[root@ocmaster39 ~]# dig @192.168.56.11 ocslave39.openshift.local | grep -n1 "ANSWER SECTION"
13-
14:;; ANSWER SECTION:
15-ocslave39.openshift.local. 1209600 IN A 192.168.56.11

I forked the original playbook to open a PR where I added the playbooks I used and a bit of documentation.

You can find the example I used here.

Ciao!

DevOps trends from MongoDB, Grafana and a spammed mailbox. Update #1

Hi all!

This is the first update from my new big data experimental project…

How can I keep up with DevOps technology trends without reading tons of blogs?

I started collecting raw data from my Twitter timeline and from a Gmail mailbox that gets spammed by job-seeker websites and other sources such as Stack Exchange…

This is the result after the first day of data collection… I am curious to see what happens with tons of data and a few adjustments…

Link to Grafana dashboard

http://www.congruit.io/

[Screenshot: Grafana dashboard]

Install Chef Server on Suse Linux Enterprise 11

Hi Folks!

Today I dealt with a problem… and I found a solution because Chef is a great tool!

At the moment there is no RPM for SUSE Linux available from the official website, but that does not matter 🙂

Problem: install Chef Server, ChefDK and Chef Manage on a SUSE Linux Enterprise 11 virtual machine, even though the available RPM packages are built for RHEL systems.

[Screenshot]

This is what you can do (a consolidated sketch follows the list):

  1. Download the following packages:
    • chef-server-core-12.8.0-1.el6.x86_64.rpm
    • chefdk-0.16.28-1.el6.x86_64.rpm
    • chef-manage-2.4.1-1.el6.x86_64.rpm
  2. Extract the contents of each RPM, for example:
    • rpm2cpio chef-manage-2.4.1-1.el6.x86_64.rpm | cpio -idmv
  3. Move the extracted content to the correct folders: /opt/{chef,chef-manage,opscode}
  4. Set PATH="/opt/opscode/bin:/opt/chefdk/bin:/data/opt/chef-manage/bin:$PATH" in your login profile script
  5. chef-server-ctl reconfigure
  6. chef-manage-ctl reconfigure
  7. chef-server-ctl reconfigure again
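
Putting the steps together, a rough sketch of the whole procedure (the working directory and profile file are illustrative; adjust the paths to where you actually extract things):

# extract the three packages without installing them via rpm
cd /data
for pkg in chef-server-core-12.8.0-1.el6.x86_64.rpm \
           chefdk-0.16.28-1.el6.x86_64.rpm \
           chef-manage-2.4.1-1.el6.x86_64.rpm; do
  rpm2cpio "$pkg" | cpio -idmv
done

# the payload ends up under ./opt; move it into place
cp -a opt/* /opt/

# make the bundled tools available in the login shell
echo 'export PATH="/opt/opscode/bin:/opt/chefdk/bin:/opt/chef-manage/bin:$PATH"' >> ~/.profile

# reconfigure: server, manage, then server again
chef-server-ctl reconfigure
chef-manage-ctl reconfigure
chef-server-ctl reconfigure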

At the end all services are up and running

[Screenshot: services up and running]

and my workstation too 🙂

Autoscaling with EC2 and Chef

Dear all,

It has been a long time since my last post and here I am with a new one, just to keep track of my current study case…

I would like to put in place an auto-scaling mechanism for my lab platform.

Currently I have one HAProxy load balancer with two backends. I will run a stress test against my front end with JMeter and automatically create a virtual machine joined to my Chef infrastructure in order to add resources.

In this post I will describe just how to set up an initial configuration of an autoscaling group + Chef (today is Friday… on Monday I will do the rest 😉).

Let's start with the needed components:

  1. a Chef server
  2. one HAProxy load balancer
  3. two Tomcat backends

Now let's look at the script for the unattended bootstrap. This script registers a new node with the Chef server. I tried it locally on a simple virtual machine, a CentOS 7 running in VirtualBox.

# make sure the Chef configuration directory exists
[ ! -e /etc/chef ] && mkdir /etc/chef

# validation key used to register the node with the Chef server
cat <<EOF > /etc/chef/validation.pem
-----BEGIN RSA PRIVATE KEY-----
your super secret private key :)
-----END RSA PRIVATE KEY-----
EOF

# minimal client configuration pointing at the Chef server
cat <<EOF > /etc/chef/client.rb
log_location STDOUT
chef_server_url "https://mychefserver.goofy.goober/organizations/myorg"
ssl_verify_mode :verify_none
validation_client_name "myorg-validator"
EOF

# first-boot run list: join the pool of Tomcat backends
cat <<EOF > /etc/chef/first-boot.json
{
 "run_list": ["role[tomcat_backend]"]
}
EOF

# install the Chef client, then run it against the "amazon_demo" environment
curl -L https://www.opscode.com/chef/install.sh | \
bash -s -- -v 12.9.41 &> /tmp/get_chef.log
chef-client -E amazon_demo -j /etc/chef/first-boot.json \
&> /tmp/chef.log


If everything has gone correctly, you will see the new node in your Chef server dashboard. Check the logs on the new node in case of problems:

/tmp/chef.log
/tmp/get_chef.log
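
From a workstation with knife already configured against the same Chef server, you can also confirm that the node registered (the node name is illustrative):

# list registered nodes and inspect the freshly bootstrapped one
knife node list
knife node show autoscaling_node01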

Now let’s create the autoscaling-group in Amazon EC2

[Screenshot]

Then select your preferred image… I am using RHEL 7.2.

[Screenshot]

Insert the bootstrap script we just created as the "User data" file:

[Screenshot]

I have no instances running in my cloud, so the following configuration will launch a virtual machine, since the minimum required is 1.

[Screenshot: Auto Scaling group configuration]
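
The screenshots show the console flow; roughly the same setup can be scripted with the AWS CLI. A sketch only; the AMI id, key pair, subnet and names are placeholders:

# launch configuration that injects the bootstrap script as user data
aws autoscaling create-launch-configuration \
  --launch-configuration-name chef-tomcat-lc \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --key-name my-key \
  --user-data file://bootstrap.sh

# auto scaling group with a minimum of 1 instance, as above
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name chef-tomcat-asg \
  --launch-configuration-name chef-tomcat-lc \
  --min-size 1 --max-size 2 --desired-capacity 1 \
  --vpc-zone-identifier subnet-xxxxxxxx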

After a minute I got an email saying:

Description: Launching a new EC2 instance: $my_id_istance
Cause: At 2016-05-06T15:10:17Z an instance was started in response to a difference between desired and actual

Finally I have a newly configured node in my Chef server… which is autoscaling_node01.

[Screenshot: Chef server dashboard]

That’s all folks!

Bye for now…

Eugenio Marzo
DevOps Engineer at SourceSense

 

Build chef LWRP and manage OpenSSH server banner with Chef

Hi guys,
in this article we will see how to build a small LWRP Chef cookbook. The final result will be:

ssh_banner_banner "banner" do
  banner_file _banner_file
  sshd_config_file node['ssh_banner']['sshd_config_file']
  paranoic_mode true
  action :create
  notifies :restart, "service[sshd]"
end


If paranoic_mode is true, Chef will change the configuration file and restart sshd, but after 20 seconds (by default) it will roll the configuration back.

You can try it using Vagrant and VirtualBox:

1. clone git repo from github:

  git clone https://github.com/EugenioMarzo/cookbook-ssh-banner.git

2. show the new banner that will be copied:

 cat files/default/chef_ssh_banner

3. start vagrant virtual machine:

  vagrant up

4. once provisioning is complete:

[Screenshot]
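
Once the run has finished you can check the result from the host; a quick sketch (assuming the banner was written to the default /etc/ssh/sshd_config):

# the Banner directive should now point at the deployed banner file
vagrant ssh -c "grep '^Banner' /etc/ssh/sshd_config"

# an interactive login shows the banner itself
vagrant ssh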

Let’s see how to create a simple LWRP:

1. Declare the actions and attributes in resources/banner.rb:

actions :create, :delete

default_action :create

attribute :sshd_config_file, :kind_of => String

attribute :banner_file, :kind_of => String

attribute :paranoic_mode

2. Create an action in providers/banner.rb. Let's look at the :delete action:

action :delete do
  # check if the ssh banner file is present
  check_banner_file new_resource.banner_file
  # check if paranoic mode is enabled
  paranoic_mode

  if ::File.open(new_resource.sshd_config_file).grep(/Banner\ .*/).size >= 1
    Chef::Log.info("Deleting SSH Banner..")
    execute " sed -i s/Banner\\\ .*//g #{new_resource.sshd_config_file}"

    # tell Chef the resource changed state, so that notifications
    # (e.g. the sshd restart) are triggered afterwards
    new_resource.updated_by_last_action(true)
  else
    Chef::Log.info("SSH Banner not found ... doing nothing..")
    new_resource.updated_by_last_action(false)
  end
end

3. Use it in a recipe. To delete a banner:

ssh_banner_banner "banner" do
  banner_file _banner_file
  sshd_config_file node['ssh_banner']['sshd_config_file']
  paranoic_mode false
  action :delete
  notifies :restart, "service[sshd]"
end

4. To add a banner use:

ssh_banner_banner "banner" do
  banner_file _banner_file
  sshd_config_file node['ssh_banner']['sshd_config_file']
  paranoic_mode false
  action :create
  notifies :restart, "service[sshd]"
end

Setting up Wildfly8 Cluster in 5 minutes with Chef and Vagrant

Hi! This is my first post on this blog and I would like to start with my latest cookbook, "wildfly-clu". We will create a simple WildFly cluster (domain mode) made up of 3 servers. For this test I will use CentOS release 6.3 (Final).

Final result:

  • Reach the HelloWorld application at http://myserver1/helloworld (passing through the reverse proxy)

You can reach the app directly from the nodes:
http://myserver1:8080/helloworld/
http://myserver2:8080/helloworld/
http://myserver3:8080/helloworld/

  • http://myserver1:9990/console  (user: admin,  password: admin)

[Screenshot: WildFly admin console]

  • http://myserver1:22002/ HAPROXY admin console.

[Screenshot: HAProxy admin console]

 

Quick HowTo:
check that the vagrant-berkshelf plugin is installed; if not, run "vagrant plugin install vagrant-berkshelf"
git clone https://github.com/EugenioMarzo/cookbook-wildfly-clu
cd wildfly-clu
vagrant up

Detailed Description:

Node1 (myserver1) => Domain controller –  Application Server – Reverse proxy

Node2 (myserver2) => Slave – Application Server

Node3 (myserver3) => Slave – Application Server

Prerequisites: VirtualBox 4.3.10, Vagrant 1.4.3, Ruby 1.9.3, Git

Let’s start..

1. Configure your /etc/hosts so that the names of all the VMs resolve locally:
33.33.33.11 myserver1
33.33.33.13 myserver3
33.33.33.12 myserver2

2.  Clone the cookbook
`git clone https://github.com/EugenioMarzo/cookbook-wildfly-clu`

3. Quick overview of the Vagrantfile
Vagrant can configure multiple virtual machines. An example from ./mycookbook/Vagrantfile:
# This is the configuration for myserver1; being a cluster, myserver2 and myserver3 get an equivalent block

config.vm.define "myserver1" do |myserver1|
  myserver1.vm.hostname = "myserver1"
  myserver1.vm.network :private_network, ip: "33.33.33.11"
  myserver1.vm.network :public_network
  myserver1.vm.provision :chef_solo do |chef|
    chef.json = {
      :java => { :jdk_version => "7" }
    }

    chef.run_list = [
      "recipe[java]",                 # install Java
      "recipe[wildfly-clu::default]", # install WildFly
      "recipe[wildfly-clu::logs]",    # log rotation
      "recipe[wildfly-clu::domain]"   # when in the run_list, configure domain mode
    ]
  end
end

4. Show the VMs configured in the Vagrantfile:

root@myclient1:~/vagrantlab/wildfly-clu# vagrant status
Current machine states:

myserver1 not created (virtualbox)
myserver2 not created (virtualbox)
myserver3 not created (virtualbox)

 

5. A quick overview of the most important attributes of the cookbook

Version and URL of Wildfly8
default['wildfly-clu']['wildfly']['version'] = "8.0.0"
default['wildfly-clu']['wildfly']['url']="http://download.jboss.org/wildfly/8.0.0.Final/wildfly-8.0.0.Final.tar.gz"

#######################################################################
## Set the following variable to true if you want to use domain mode.
default['wildfly-clu']['mode']['domain'] = true
##
#######################################################################

# if you create this file, the recipe will not change domain.xml, host.xml and mgmt-******.properties after the first installation
default['wildfly-clu']['wildfly']['lock'] = "/usr/local/#{node['wildfly-clu']['name']}/conf.lock"

The cluster schema:

default['wildfly-clu']['cluster_schema'] = {
  "myserver1" => { :role => "domain-controller",
                   :ip => "33.33.33.11",
                   :port_offset => "0" },
  "myserver2" => { :role => "slave",
                   :ip => "33.33.33.12",
                   :master => "myserver1",
                   :port_offset => "0" },
  "myserver3" => { :role => "slave",
                   :ip => "33.33.33.13",
                   :master => "myserver1",
                   :port_offset => "0" }
}

# set this to true in order to deploy a helloworld application
default['wildfly-clu']['wildfly']['deploy_hello_world'] = true

# set this to true in order to configure HAProxy with the slaves declared in the cluster_schema
default['wildfly-clu']['wildfly']['haproxy'] = true

# default Java options used by the master and all the slaves to run the application
default['wildfly-clu']['java_opts'] = {
  "heap-size" => "64m",
  "max-heap-size" => "64m",
  "permgen-size" => "64m",
  "max-permgen-size" => "64m" }

6. Setting up the test environment with Vagrant

root@myclient1:~/vagrantlab/wildfly-clu# vagrant up

Bringing machine 'myserver1' up with 'virtualbox' provider...
Bringing machine 'myserver2' up with 'virtualbox' provider...
Bringing machine 'myserver3' up with 'virtualbox' provider...

#downloading the virtualbox machine used as template
[myserver1] Importing base box 'Berkshelf-CentOS-6.3-x86_64-minimal'...
Progress: 90%

[myserver1] Available bridged network interfaces:
1) eth0
2) virbr0
3) lxcbr0
4) virbr1
#Choose 1 if you want bridge the network cards of the VM to eth0

[myserver1] Booting VM…
[myserver1] Waiting for machine to boot. This may take a few minutes…
[myserver1] Machine booted and ready!
[myserver1] Configuring and enabling network interfaces.

Chef Solo will now configure the VMs just created. Below are the most important steps:
Running chef-solo…
[2014-04-03T13:25:55+00:00] INFO: *** Chef 10.14.2 ***
[2014-04-03T13:26:01+00:00] INFO: Run List is [recipe[java], recipe[wildfly-clu::default], recipe[wildfly-clu::logs], recipe[wildfly-clu::domain]]
[2014-04-03T13:26:08+00:00] INFO: package[java-1.7.0-openjdk] installing java-1.7.0-openjdk-1.7.0.51-2.4.4.1.el6_5 from upda
[2014-04-03T13:26:35+00:00] INFO: package[java-1.7.0-openjdk-devel] installing java-1.7.0-openjdk-devel-1.7.0.51-2.4.4.1.el6_5 from updates repository

#Downloading Wildfly..
[2014-04-03T13:26:46+00:00] INFO: user[wildfly] created
[2014-04-03T13:27:00+00:00] INFO: remote_file[wildfly] updated
[2014-04-03T13:27:00+00:00] INFO: remote_file[wildfly] owner changed to 502
[2014-04-03T13:27:00+00:00] INFO: remote_file[wildfly] group changed to 503
[2014-04-03T13:27:00+00:00] INFO: remote_file[wildfly] mode changed to 775
[2014-04-03T13:27:00+00:00] INFO: remote_file[wildfly] sending run action to bash[wildfly_extract] (immediate)
[2014-04-03T13:27:01+00:00] INFO: bash[wildfly_extract] ran successfully
[2014-04-03T13:27:01+00:00] INFO: bash[wildfly_extract] sending create action to link[/usr/local/wildfly] (immediate)
[2014-04-03T13:27:01+00:00] INFO: link[/usr/local/wildfly] created
[2014-04-03T13:27:01+00:00] INFO: link[/usr/local/wildfly] sending create action to template[/etc/default/wildfly.conf] (immediate)

# Copy configuration read by init script
[2014-04-03T13:27:01+00:00] INFO: template[/etc/default/wildfly.conf] updated content

#Copy configuration for domain mode
[2014-04-03T13:27:06+00:00] INFO: template[/usr/local/wildfly/domain/configuration/domain.xml] mode changed to 775

#installing Haproxy
[2014-04-03T13:27:06+00:00] INFO: package[haproxy] installing haproxy-1.4.24-2.el6 from base repository

#deploy Hello world!
[2014-04-03T13:27:15+00:00] INFO: cookbook_file[helloworld.war] sending run action to bash[deploy_helloworld] (delayed)
[2014-04-03T13:27:19+00:00] INFO: bash[deploy_helloworld] ran successfully
[2014-04-03T13:27:19+00:00] INFO: Chef Run complete in 77.838215277 seconds

The same operations will be performed for each virtual machine described in the Vagrantfile.

 

7. A quick overview inside the virtual machines:

#use it to connect via SSH
root@myclient1:~/vagrantlab/wildfly-clu# vagrant ssh myserver1

[vagrant@myserver1 ~]$ sudo su
[root@myserver1 vagrant]# cat /etc/redhat-release
CentOS release 6.3 (Final)

[root@myserver1 vagrant]# /etc/init.d/wildfly status
wildfly is running (pid 5032)

# cron jobs for log rotation
[root@myserver1 vagrant]# cat /var/spool/cron/root
# Chef Name: Wildfly log rotation 0
0 0 * * * find /usr/local/wildfly/domain/log -name '*' -a ! -name '*.gz' -mtime +1 -a ! -name 'console.log' -a ! -name 'boot.log' -exec gzip '{}' \;
# Chef Name: Wildfly log rotation 1
0 0 * * * find /usr/local/wildfly/domain/log -name '*.txt.gz' -mtime +30 -exec rm -f '{}' \;

[root@myserver1 vagrant]# cat /etc/default/wildfly.conf
export JBOSS_USER=wildfly
export JBOSS_HOME=/usr/local/wildfly
export JBOSS_CONSOLE_LOG=/usr/local/wildfly/domain/log/console.log
export JBOSS_MODE="domain"
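
As a final smoke test from the host (assuming the /etc/hosts entries from step 1 are in place):

# through the HAProxy reverse proxy on myserver1
curl -I http://myserver1/helloworld/

# directly against one of the backend nodes
curl -I http://myserver2:8080/helloworld/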

Posted 3rd April by Eugenio Marzo