Vagrant :: SSH Inter-Connectivity of Multiple Virtual Machines

Vagrant is one of the best examples of a VM-based Infrastructure as Code (IaC) tool. It works from a declarative configuration file that describes requirements such as the OS, applications, users, and files.

By using Vagrant, we can avoid the mundane tasks of downloading OS images and manually installing the OS, applications, user configuration, and security settings. It saves a lot of time and effort for developers, admins, and architects alike. Vagrant is a cross-platform product, and its Community edition is free to use. Vagrant also has its own cloud, where thousands of OS and application images are uploaded by active contributors. For more info and to download this great product, please visit https://www.vagrantup.com/. Please also install Oracle VirtualBox, which is one of the basic requirements to run Vagrant VMs.

Multi-Machine: a type of Vagrant configuration where multiple machines are built from a single configuration file. It is best suited for development environments that require multiple VMs, whether in a homogeneous or heterogeneous configuration. For example, a typical web-app development setup needs separate web, DB, middleware, and proxy servers along with client VMs to match a production-class environment.

The Vagrant configuration file below is a use case for setting up an ‘Ansible Practice Lab’. Six nodes running CentOS 7 are built by Vagrant. This lab environment is built for a hands-on Ansible workshop, to try out all the features Ansible offers for configuration management and infrastructure automation. The Ansible package is installed on node1, and the rest of the nodes are managed from the Ansible workstation, node1.

A global shell script then configures YUM by downloading the EPEL repository on all six nodes. After the YUM configuration, basic packages like wget, curl, and sshpass are installed.

The most important requirement for Ansible to work is SSH key-based authentication between all six nodes. For this, a shell script named ssh is written and added to the configuration file, and Vagrant executes it during the build process. An interface with a private IP is configured on every node and used for inter-node connectivity via SSH.

Here is the Vagrant multi-machine configuration file, along with the custom scripts that install packages and set up SSH key-based authentication between the nodes:

# Vagrant configuration file for multiple machines with inter-connectivity via SSH key-based authentication
numnodes=6
baseip="192.168.10"

# global script
$global = <<SCRIPT

# Enable SSH password authentication so that sshpass/ssh-copy-id can work later
sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
# Restart sshd so the change takes effect before the SSH provisioning script runs
sudo systemctl restart sshd

# Add Google DNS to access internet. 
echo "nameserver 8.8.8.8" | sudo tee -a  /etc/resolv.conf 

# Install the CentOS 7 EPEL release package to configure the YUM repository
# (alternatively: sudo yum install -y epel-release from the CentOS extras repo)
sudo rpm -ivh https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm
# Update yum
sudo yum update -y
# Install wget curl and sshpass
sudo yum install wget curl sshpass -y

# Disable strict host key checking for node* so first connections do not prompt
cat > ~/.ssh/config <<EOF
Host node*
   StrictHostKeyChecking no
   UserKnownHostsFile=/dev/null
EOF

# Populate /etc/hosts with the IP and node names 
for x in {11..#{10+numnodes}}; do
  grep #{baseip}.${x} /etc/hosts &>/dev/null || {
      echo #{baseip}.${x} node${x##?} | sudo tee -a /etc/hosts &>/dev/null
  }

done
# Generate an SSH key pair for the vagrant user (no passphrase)
yes y | ssh-keygen -f /home/vagrant/.ssh/id_rsa -t rsa -N ''
echo " **** SSH key pair created for $HOSTNAME ****"

SCRIPT

# SSH configuration script
$ssh = <<SCRIPT1
# Must match numnodes at the top of the Vagrantfile
numnodes=6

# Install Ansible once on the control node
if [ "$HOSTNAME" = "node1" ]; then
    echo "**** Install ansible on node1 ****"
    sudo yum install ansible -y
fi

# Copy this host's public key to every other host.
# sshpass supplies the vagrant password non-interactively.
for (( c=1; c<$numnodes+1; c++ ))
do
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi

    sshpass -p vagrant ssh-copy-id "node$c"
    echo "**** Copied public key to node$c ****"
done

# Gather every other host's public key into one file.
for (( c=1; c<$numnodes+1; c++ ))
do
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi

    sshpass -p vagrant ssh "node$c" 'cat .ssh/id_rsa.pub' >> /home/vagrant/host-ids.pub
    echo "**** Copied id_rsa.pub contents to host-ids.pub from node$c ****"
done

for (( c=1; c<$numnodes+1; c++ ))
do
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi

    # Distribute the combined public keys to the node
    sshpass -p vagrant ssh-copy-id -f -i /home/vagrant/host-ids.pub "node$c"
    echo "**** Copied combined public keys to node$c ****"

done
# Tighten permissions on the SSH client config
sudo chmod 0600 /home/vagrant/.ssh/config
# Finally restart the SSHD daemon
sudo systemctl restart sshd
echo "**** End of the Multi Machine SSH Key based Auth configuration ****"

SCRIPT1

# Vagrant configuration
Vagrant.configure("2") do |config|
  # Execute global script
  config.vm.provision "shell", privileged: false, inline: $global
  prefix="node"
  # For each node, define the VM and apply its settings
  (1..numnodes).each do |i|
    vm_name = "#{prefix}#{i}"
    config.vm.define vm_name do |node|
      node.vm.box = "centos/7"
      node.vm.hostname = vm_name
      ip = "#{baseip}.#{10+i}"
      node.vm.network "private_network", ip: ip
    end
  end
  # Run the SSH configuration script once, after all nodes are defined
  config.vm.provision "ssh", type: "shell", privileged: false, inline: $ssh
end

To execute the above configuration file, run the commands below. The ssh provisioner is run again in a second pass so that every node is already up and reachable when the keys are exchanged:

$vagrant up
$vagrant provision --provision-with ssh
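
Once provisioning finishes, a quick sanity check (my own suggestion, not part of the original setup) is to attempt a non-interactive hop between two nodes; BatchMode makes ssh fail instead of prompting for a password:

$vagrant ssh node1 -c "ssh -o BatchMode=yes node2 hostname"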

Please note that the above example exposes the vagrant user's password on the command line via the sshpass -p option. For a more secure approach, use the -f option to read the password from a file, and see the sshpass documentation for more info. Many constants, like the EPEL repo URL, the number of nodes, and the SSH key path, need to be customized according to your actual requirements.
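
As a minimal sketch of the -f form (the ~/.vagrant_pass path here is just an illustrative choice):

# Store the password in a file readable only by the current user
echo 'vagrant' > ~/.vagrant_pass
chmod 600 ~/.vagrant_pass
# sshpass reads the password from the file instead of the command line
sshpass -f ~/.vagrant_pass ssh-copy-id "node$c"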

To check the status of the nodes built by Vagrant, use the command below.

$vagrant status
Current machine states:

node1                     running (virtualbox)
node2                     running (virtualbox)
node3                     running (virtualbox)
node4                     running (virtualbox)
node5                     running (virtualbox)
node6                     running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

To log in to the first node and SSH across to the other nodes, use the commands below. Notice that there are no password prompts when SSHing between the nodes. Ansible is installed on node1 and ready to use, and eth1 is the private network interface used for SSH inter-connectivity.

$vagrant ssh node1
Last login: Tue Jun 11 12:01:11 2019 from 192.168.10.12
[vagrant@node1 ~]$ssh node2
Warning: Permanently added 'node2,192.168.10.12' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:01:04 2019 from 192.168.10.11
[vagrant@node2 ~]$ssh node1
Warning: Permanently added 'node1,192.168.10.11' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:16:41 2019 from 10.0.2.2
[vagrant@node1 ~]$ssh node5
Warning: Permanently added 'node5,192.168.10.15' (ECDSA) to the list of known hosts.
[vagrant@node5 ~]$ssh node1
Warning: Permanently added 'node1,192.168.10.11' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:17:23 2019 from 192.168.10.12
[vagrant@node1 ~]$yum list ansible 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.dhakacom.com
 * epel: sg.fedora.ipserverone.com
 * extras: mirror.dhakacom.com
 * updates: mirrors.nhanhoa.com
Installed Packages
ansible.noarch                                             2.8.0-2.el7                                             @epel
[vagrant@node1 ~]$ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:26:10:60 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 80872sec preferred_lft 80872sec
    inet6 fe80::5054:ff:fe26:1060/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:7b:8a:ef brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.11/24 brd 192.168.10.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe7b:8aef/64 scope link
       valid_lft forever preferred_lft forever
[vagrant@node1 ~]$
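
With key-based SSH in place, the lab is ready for Ansible. As a quick smoke test from node1 (a sketch; the ~/hosts inventory file is my own illustrative addition, not part of the build):

$echo -e "node1\nnode2\nnode3\nnode4\nnode5\nnode6" > ~/hosts
$ansible all -i ~/hosts -m ping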

I hope this use case helps you understand how to install and configure multiple VMs at once with SSH inter-connectivity. Please leave feedback if you found this blog useful, and share suggestions in the comments section below.


References:

https://www.vagrantup.com/docs/multi-machine/

https://www.vagrantup.com/docs/vagrantfile/

https://www.vagrantup.com/docs/provisioning/basic_usage.html

https://github.com/kikitux/vagrant-multimachine/edit/master/intrassh/Vagrantfile


SSH Keys – Passwordless Authentication

I was trying to set up SSH keys between two different flavors of Linux hosts by following this how-to.

I ran the commands exactly as mentioned in the how-to, but it didn't work. The error message was as follows:

vin@CLIENT:~$ ssh vsa@192.0.0.10
vin@192.0.0.10's password:
Last login: Mon May 22 10:45:03 2017 from 192.0.0.10
-bash: id: command not found
-bash: id: command not found
-bash: id: command not found
-bash: tty: command not found
-bash: uname: command not found

After googling and some trial and error, I finally found a fix.

Instead of Step 3 in the above how-to, use the commands below.

  1. On the server, ensure proper permissions are set on the .ssh folder in the user's home directory (e.g. /home/vin). Note that sshd typically rejects key authentication when .ssh is group-writable, so 700 is the safe choice:

$chmod 700 ~/.ssh

  2. Run this command to copy the public key from the client to the server:

$cat ~/.ssh/id_rsa.pub | ssh vin@192.0.0.10 'umask 0077; /bin/mkdir -p .ssh; /bin/cat >> .ssh/authorized_keys && echo "Done!"'

After running the above command, server 192.0.0.10 no longer asks for a password when user vin tries to log in.

vin@CLIENT:~$ ssh vin@192.0.0.10
Last login: Mon May 22 11:54:29 2017 from 192.0.0.10
[vin@SERVER ~]$
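
Where it is available, ssh-copy-id does the same key installation in one step (a sketch, assuming the default key path ~/.ssh/id_rsa.pub):

# Appends the local public key to the server's authorized_keys,
# creating ~/.ssh with safe permissions if needed
$ssh-copy-id vin@192.0.0.10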

Wire-free Connection to an Android Phone via SSH

I left my USB Type-C cable cum charger in the office yesterday and couldn't transfer some files. I am aware of the SHAREit app, but it doesn't work on Linux. Both my laptop and phone are connected to the same wireless router.

After googling for a while, I found a native and simple way to connect to my Android phone via SSH.

I installed SSHDroid on my phone and started the SSH service. From my Debian GNU/Linux console, I started an SSH session to the IP of my phone and, using SCP, copied all the required files from the phone to the PC. BTW, I copied new songs from the PC to the phone as well 🙂
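
For reference, the transfers look something like this (a sketch assuming SSHDroid's usual defaults of user root on port 2222; the phone IP and file paths are illustrative):

# Phone to PC
$scp -P 2222 root@192.168.1.20:/sdcard/Download/report.pdf ~/Downloads/
# PC to phone
$scp -P 2222 ~/Music/song.mp3 root@192.168.1.20:/sdcard/Music/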