
Commit 6b71464

Author: Aldo Lacuku (committed)
Adding ansible playbooks for deploying the pcn-k8s component.
The main difference between this Vagrant solution and the old one is that the old one provisions the VMs with a bash script embedded in the Vagrantfile, while the new one is based entirely on Ansible:
- the number of nodes in the k8s cluster to be deployed is set through a configuration/variable file in the Ansible playbook;
- choosing the version of the packages to be installed (such as kubeadm, kubelet, ...) is as simple as declaring it in the configuration file;
- the configuration of the Vagrant environment is also done through the Ansible playbook: when the playbook is run, a new Vagrantfile is generated and then used to bring up the VMs, so the user does not need to know how a Vagrantfile works to modify the deployment of the cluster;
- the network configuration of the VMs is set to bridge mode, possibly on a different network interface than the default one of the host machine, and the NATed interface brought up by the Vagrant engine is disabled. This decision was driven by debugging needs when sniffing network traffic on the k8s cluster;
- access to the machines is done directly through a pre-generated SSH key, without the need for the vagrant command. A to-do feature is to automatically generate the key and configure it in the home folder of the user.
1 parent 6a72cb4 commit 6b71464

17 files changed

Lines changed: 590 additions & 0 deletions


tests/ansible_vagrant/.gitignore

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
.vagrant/
kubernetes-playbooks/host_vars/
*.log
kubernetes-playbooks/join-command
Vagrantfile
kubernetes-playbooks/inventory.ini

tests/ansible_vagrant/README.md

Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
# Ansible playbook
An Ansible playbook to install a simple k8s cluster with a master node and two worker nodes. The CNI plugin is pcn-k8s. Vagrant is used to provision the VMs.

## Requirements
* Vagrant 2.2.5 or higher
* Ansible 2.8.5 or higher

## Usage
* `ansible-playbook --ask-become-pass master-playbook.yml`
* Connecting to a node: `ssh vagrant@k8s-master-1`.

## Configuration
All the configuration variables can be found in *kubernetes-playbooks/group_vars/all/vars.yml*. Before running the playbook, have a look at the vars.yml file and change the variables according to your needs. Please pay attention to the **Vagrant Configuration Section** and modify the variables so that they match the host's network interfaces and resources such as RAM and CPUs.
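
For example, on a host whose bridge NIC and resources differ from the defaults, the relevant entries of *vars.yml* might be adjusted as follows (the values below are illustrative, not the shipped defaults):

```yaml
# Vagrant configuration section (adjust to your host)
bridge_nic: eno1            # NIC on the host to which the VM interfaces are bridged
virtual_memory_size: 2048   # RAM (in MB) for a single VM
virtual_cpus: 2             # number of virtual CPUs for a single VM
```
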
By default the playbook checks whether an SSH key named *vagrant_machines* exists in *~/.ssh*. If it exists, it is copied to the VMs and used both to log in during the playbook execution and by the user to access the k8s nodes via SSH. If it does not exist, a new SSH key is generated and used to configure the VMs.
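If you prefer to supply the key yourself, you can pre-generate a pair with, for example, `ssh-keygen -f ~/.ssh/vagrant_machines` (any key type accepted by OpenSSH should do; the playbook only expects a key pair with that name to exist).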
If you want to change a parameter on the fly, without modifying it in the file *kubernetes-playbooks/group_vars/all/vars.yml*, you can pass it to the **ansible-playbook** command like this: `ansible-playbook --ask-become-pass --extra-vars '{"k8s_worker_nodes_ips": ["192.168.0.24"]}' master-playbook.yml`

## How does it work?
The first playbook to be executed is *bootstrap.yml*, which does the following:
* Reads the variables in *kubernetes-playbooks/group_vars/all/vars.yml*.
* Generates the **Vagrantfile** from *kubernetes-playbooks/roles/bootstrap/templates/VagrantFile.j2*.
* Generates **kubernetes-playbooks/inventory.ini** from *kubernetes-playbooks/roles/bootstrap/templates/inventory.ini.j2*.
* For each IP in **k8s_worker_nodes_ips**, generates a new file in *kubernetes-playbooks/host_vars* named using **k8s_worker_node_prefix** and populates it with an IP taken from the list (see the example after this list).
* For each node that will be part of the k8s cluster, adds the hostname and the IP address associated with that host to */etc/hosts* on the localhost filesystem.
* At the end it runs `vagrant up` to provision the VMs.
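
As an illustration (assuming the default prefixes and addresses), the bootstrap role would end up generating a host_vars file such as *kubernetes-playbooks/host_vars/k8s-node-1.yml* containing only the node's IP:

```yaml
# kubernetes-playbooks/host_vars/k8s-node-1.yml (generated; illustrative)
node_ip: 192.168.0.24
```

together with the matching line `192.168.0.24 k8s-node-1` in the local */etc/hosts*.
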
Then the *vagrant-netconfig*, *k8s-master-node* and *k8s-worker-node* playbooks are played. Their names are self-explanatory as to what they do.

## Adding additional worker-nodes
If more worker nodes are needed:
* add the IP address of the node to **k8s_worker_nodes_ips** in *group_vars/all/vars.yml*
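
For example, adding a fifth worker is just one more entry in the list (the new address below is illustrative):

```yaml
k8s_worker_nodes_ips:
  - 192.168.0.24
  - 192.168.0.25
  - 192.168.0.26
  - 192.168.0.27
  - 192.168.0.28   # new worker node (example address)
```
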
## Remote user
The remote user which runs the Ansible commands is defined in two places:
* ansible.cfg
* group_vars/all/vars.yml
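
Both are set to `vagrant` by default; if you change one, keep the other in sync.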
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
[defaults]
inventory = inventory.ini
remote_user = vagrant
host_key_checking = False
become = False

[ssh_connection]
retries=2
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
---
# Playbook used to configure the files before launching the vagrant environment
# Check the role for more information
- hosts: localhost
  become: no
  roles:
    - bootstrap
Lines changed: 82 additions & 0 deletions
@@ -0,0 +1,82 @@
# These variables are shared among all roles

docker_packages:
  - docker-ce=5:18.09.9~3-0~ubuntu-xenial
  - docker-ce-cli=5:18.09.9~3-0~ubuntu-xenial
  - containerd.io

# Latest versions
k8s_packages:
  - kubelet
  - kubeadm
  - kubectl

# Needed for apt over HTTPS
apt_packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common

## Path to the public key used to ssh to the machines
#pub_key_path: /home/aldo/.ssh/vagrant_machines.pub

# pod-network-cidr used in k8s
pod_network_cidr: 10.244.0.0/16

# User to be added to the k8s group
user: vagrant

# Zone to which the time is set
time_zone: Europe/Rome

#####################
#Nodes Configuration#
#####################

# Node configuration. The cluster can have only one master node and as many worker nodes as needed.
k8s_master_nodes_ips:
  - 192.168.0.23

k8s_worker_nodes_ips:
  - 192.168.0.24
  - 192.168.0.25
  - 192.168.0.26
  - 192.168.0.27

# These prefixes are used when generating the names of the nodes. Used in the host_vars files, inventory.ini and in /etc/hosts on the local system.
k8s_master_node_prefix: k8s-master-

k8s_worker_node_prefix: k8s-node-


############################
#Vagrant file configuration#
############################

# Flavor of the operating system to be used in the VMs
image_name: "ubuntu/bionic64"
# The NIC on your host to which the VM interfaces will be bridged
bridge_nic: enp1s0f0

# Directory containing the generated SSH private key file
ssh_key_path: ~/.ssh
ssh_key_name: vagrant_machines

# Path to the public key used to ssh to the machines; if this key does not exist, a new one is generated with the same name
pub_key_path: "{{ssh_key_path}}/{{ssh_key_name}}.pub"

# Amount of RAM memory for a single VM
virtual_memory_size: 4096

# Amount of virtual CPUs for a single VM
virtual_cpus: 2

###################
#CNI configuration#
###################

cni_manifests:
  - https://raw.githubusercontent.com/polycube-network/polycube/master/src/components/k8s/standalone_etcd.yaml
  - https://raw.githubusercontent.com/polycube-network/polycube/master/src/components/k8s/pcn-k8s.yaml
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
---
# Playbook used to configure a master node.
# Check the role for more information
- hosts: k8s_master_nodes
  become: yes
  roles:
    - k8s-master
Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
---
# Playbook used to configure a worker node.
# Check the role for more information
- hosts: k8s_worker_nodes
  become: yes
  roles:
    - k8s-worker
Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
---
# Master playbook to run the whole configuration at once.
# Check the specific playbooks and roles for more information.
- import_playbook: bootstrap.yml
- import_playbook: vagrant-netconfig.yml
- import_playbook: k8s-master-node.yml
- import_playbook: k8s-worker-node.yml
Lines changed: 99 additions & 0 deletions
@@ -0,0 +1,99 @@
---
# tasks file for bootstrap
# This task generates the Vagrantfile for the VMs to be created.

- name: Template the VagrantFile.j2 configuration file to ../Vagrantfile
  template:
    src: VagrantFile.j2
    dest: ../Vagrantfile
  delegate_to: localhost

- name: Template the inventory.ini.j2 configuration file to inventory.ini
  template:
    src: inventory.ini.j2
    dest: inventory.ini
  delegate_to: localhost

- name: Refresh inventory to ensure that the newly generated one is used
  meta: refresh_inventory

- name: Find and save in a local variable all host_vars files
  find:
    paths: ./host_vars
    patterns: "*.yml"
  register: files_to_delete

- name: Delete all the host_vars files
  file:
    path: "{{ item.path }}"
    state: absent
  with_items: "{{ files_to_delete.files }}"

- name: Creating the "host_vars" file for each k8s master node
  file:
    path: ./host_vars/{{ item }}.yml
    state: touch
    mode: u=rw,g=r,o=r
  loop: "{{ groups.k8s_master_nodes }}"

- name: Populating the k8s master node host_vars files with the node IP
  lineinfile:
    path: "./host_vars/{{ item.0 }}.yml"
    line: "node_ip: {{ item.1 }}"
  loop: "{{ groups.k8s_master_nodes|zip(k8s_master_nodes_ips)|list }}"

- name: Creating the "host_vars" file for each k8s worker node
  file:
    path: ./host_vars/{{ item }}.yml
    state: touch
    mode: u=rw,g=r,o=r
  loop: "{{ groups.k8s_worker_nodes }}"

- name: Populating the k8s worker node host_vars files with the node IPs
  lineinfile:
    path: "./host_vars/{{ item.0 }}.yml"
    line: "node_ip: {{ item.1 }}"
  loop: "{{ groups.k8s_worker_nodes|zip(k8s_worker_nodes_ips)|list }}"

- name: Removing k8s master nodes from /etc/hosts on the localhost if they already exist
  become: yes
  lineinfile:
    path: /etc/hosts
    regexp: ".*{{ item }}.*"
    state: absent
  loop: "{{ groups.k8s_master_nodes }}"

- name: Adding k8s master nodes to /etc/hosts on the localhost
  become: yes
  lineinfile:
    path: /etc/hosts
    line: "{{ item.1 }} {{ item.0 }}"
  loop: "{{ groups.k8s_master_nodes|zip(k8s_master_nodes_ips)|list }}"

- name: Removing k8s worker nodes from /etc/hosts on the localhost if they already exist
  become: yes
  lineinfile:
    path: /etc/hosts
    regexp: ".*{{ item }}.*"
    state: absent
  loop: "{{ groups.k8s_worker_nodes }}"

- name: Adding k8s worker nodes to /etc/hosts on the localhost
  become: yes
  lineinfile:
    path: /etc/hosts
    line: "{{ item.1 }} {{ item.0 }}"
  loop: "{{ groups.k8s_worker_nodes|zip(k8s_worker_nodes_ips)|list }}"

- name: Check if the ~/.ssh directory exists, and create it if not
  file:
    path: "{{ ssh_key_path }}"
    state: directory
    mode: '0755'

- name: Check if the SSH key exists, and generate a new one if not
  openssh_keypair:
    path: "{{ ssh_key_path }}/{{ ssh_key_name }}"

- name: Run "vagrant up" with the Vagrantfile as input
  command: vagrant up
Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
IMAGE_NAME = "{{ image_name }}"
BRIDGE_NIC = "{{ bridge_nic }}"

Vagrant.configure("2") do |config|
  config.ssh.insert_key = true

  config.vm.provider "virtualbox" do |v|
    v.memory = {{ virtual_memory_size }}
    v.cpus = {{ virtual_cpus }}
  end
{% for node_ip in k8s_master_nodes_ips %}
  config.vm.define "k8s-master" do |master|
    master.vm.box = IMAGE_NAME
    # change the bridge interface to match the one on your host machine
    master.vm.network "public_network", bridge: BRIDGE_NIC, ip: "{{ node_ip }}"
    master.vm.hostname = "k8s-master"
    master.vm.provision "ansible" do |ansible|
      # Configures the ssh-key
      ansible.playbook = "kubernetes-playbooks/vagrant-ssh-key.yml"
      ansible.extra_vars = {
        pub_key_path: "{{ pub_key_path }}"
      }
    end
  end
{% endfor %}

{% for node_ip in k8s_worker_nodes_ips %}
  config.vm.define "node-{{ loop.index }}" do |node|
    node.vm.box = IMAGE_NAME
    # change the bridge interface to match the one on your host machine
    node.vm.network "public_network", bridge: BRIDGE_NIC, ip: "{{ node_ip }}"
    node.vm.hostname = "node-{{ loop.index }}"
    node.vm.provision "ansible" do |ansible|
      # Configures the ssh-key
      ansible.playbook = "kubernetes-playbooks/vagrant-ssh-key.yml"
      ansible.extra_vars = {
        pub_key_path: "{{ pub_key_path }}"
      }
    end
  end
{% endfor %}
end
