Vagrant :: SSH Inter-Connectivity of Multiple Virtual Machines

Vagrant is one of the best examples of a VM-based Infrastructure as Code (IaC) tool. It works from a declarative configuration file that describes requirements such as the OS, applications, users and files.

By using Vagrant, we can avoid the mundane tasks of downloading OS images, manually installing the OS and applications, configuring users, hardening security and so on. It saves a lot of time and effort for developers, admins and architects alike. Vagrant is a cross-platform product and its community edition is free to use. Vagrant also has its own cloud where thousands of OS and application images are uploaded by active contributors. For more info and to download the product, please visit vagrantup.com. Please also install Oracle VirtualBox, which is one of the basic requirements to run the Vagrant VMs.
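A quick way to confirm that both tools are installed and available on your PATH:

$vagrant --version
$VBoxManage --version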

Multi Machines: A type of Vagrant configuration where multiple machines are built from a single configuration file. This is best suited for development environments where multiple VMs are required, whether in a homogeneous or heterogeneous configuration. For example, a typical webapp development setup needs separate web, DB, middleware and proxy servers along with client VMs to match a production-class environment.

The Vagrant configuration file below is a use case for setting up an ‘Ansible Practice Lab’. Six nodes running CentOS 7 are built by Vagrant. This lab environment is meant for a hands-on Ansible workshop, to try out all the features Ansible offers for configuration management and infrastructure automation. The Ansible package is installed on node1, and the rest of the nodes are managed from that workstation.

The global shell script then configures YUM by downloading the EPEL repository on all 6 nodes. Once the repository is configured, basic packages like wget, curl and sshpass are installed.
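Once provisioning completes, the repository and packages can be verified on any node, for example:

$yum repolist enabled | grep epel
$rpm -q wget curl sshpass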

The most important requirement for Ansible to work is SSH key based authentication between all 6 nodes. For this, a shell script named ssh is written and added to the configuration file, to be executed by Vagrant during the build process. An interface with a private IP is configured on every node and is used for the nodes' inter-connectivity via SSH.
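In a nutshell, key based authentication boils down to two commands per host pair; the script below simply loops this over all the nodes:

# Generate a key pair with no passphrase
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# Append the public key to the peer's authorized_keys (asks for the password once)
ssh-copy-id vagrant@node2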

Here is the Vagrant multi-machine configuration file, along with the custom scripts to install packages and set up SSH key based authentication between the nodes.

# Vagrant configuration file for multi machines with inter-connectivity via SSH key based authentication

# Constants: customize these to match your actual requirements
numnodes = 6                # number of nodes to build
baseip   = "192.168.56"     # base of the private network IPs (example value)
prefix   = "node"           # hostname prefix

# global script
$global = <<SCRIPT

# Allow SSH to accept password authentication (flip the setting if it is set to no)
sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config

# Add Google DNS to access the internet
echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf

# Download and install the CentOS 7 EPEL package to configure the YUM repository
sudo rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# Update yum
sudo yum update -y
# Install wget, curl and sshpass
sudo yum install wget curl sshpass -y

# Exclude the node* hosts from strict host key checking
cat > ~/.ssh/config <<EOF
Host node*
   StrictHostKeyChecking no
EOF

# Populate /etc/hosts with the IPs and node names
# (${x##?} strips the leading digit, so e.g. baseip.11 maps to node1)
for x in {11..#{10+numnodes}}; do
  grep "#{baseip}.${x}" /etc/hosts &>/dev/null || {
      echo "#{baseip}.${x} node${x##?}" | sudo tee -a /etc/hosts &>/dev/null
  }
done

# Generate an SSH key pair for the vagrant user
yes y | ssh-keygen -f /home/vagrant/.ssh/id_rsa -t rsa -N ''
echo "**** SSH Key Pair created for ${HOSTNAME} ****"
SCRIPT

# SSH configuration script
$ssh = <<SCRIPT1

# Install ansible on the control node only
if [ "$HOSTNAME" = "node1" ]; then
    echo "**** Install ansible on node1 ****"
    sudo yum install ansible -y
fi

# Copy the current host's public key to each other host.
# sshpass supplies the vagrant password so ssh-copy-id does not prompt.
for (( c=1; c<#{numnodes}+1; c++ )); do
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi
    sshpass -p vagrant ssh-copy-id "node$c"
    echo "**** Copied public key to node$c ****"
done

# Gather the public key from each host into a single file
# (pubkeys.pub is a placeholder file name).
for (( c=1; c<#{numnodes}+1; c++ )); do
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi
    sshpass -p vagrant ssh "node$c" 'cat .ssh/id_rsa.pub' >> /home/vagrant/pubkeys.pub
    echo "**** Copied public key contents from node$c ****"
done

# Distribute the collected public keys to the other nodes
for (( c=1; c<#{numnodes}+1; c++ )); do
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi
    sshpass -p vagrant ssh-copy-id -f -i /home/vagrant/pubkeys.pub "node$c"
    echo "**** Copied public keys to node$c ****"
done

# Set the permissions on the SSH config file
sudo chmod 0600 /home/vagrant/.ssh/config
# Finally restart the SSHD daemon
sudo systemctl restart sshd
echo "**** End of the Multi Machine SSH Key based Auth configuration ****"
SCRIPT1


# Vagrant configuration
Vagrant.configure("2") do |config|
  # Execute the global script on every node
  config.vm.provision "shell", privileged: false, inline: $global
  # For each node run the config and apply settings
  (1..numnodes).each do |i|
    vm_name = "#{prefix}#{i}"
    config.vm.define vm_name do |node|
      node.vm.box = "centos/7"
      node.vm.hostname = vm_name
      ip = "#{baseip}.#{10+i}"
      node.vm.network "private_network", ip: ip
      # Run the SSH configuration script (triggered via --provision-with ssh)
      node.vm.provision "ssh", type: "shell", privileged: false, inline: $ssh
    end
  end
end

To execute the above configuration file, run the commands below. The first command builds the nodes and runs the global script; the second runs only the provisioner named ssh, once all the nodes are up.

$vagrant up
$vagrant provision --provision-with ssh

Please note that the above example exposes the vagrant user's password on the command line via the sshpass -p option. To keep it out of the process list, use the -f option instead, and read the sshpass documentation for more info. Constants like the EPEL repo URL, the number of nodes, the SSH key path, etc. need to be customized according to your actual requirements.
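As a minimal sketch of the -f variant (the ~/.vagrant_pass file name is just an illustration):

# Store the password in a file readable only by the owner
echo 'vagrant' > ~/.vagrant_pass
chmod 600 ~/.vagrant_pass
# -f reads the password from the first line of the file
sshpass -f ~/.vagrant_pass ssh-copy-id node2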

To check the status of the nodes built by Vagrant, use the command below.

$vagrant status
Current machine states:

node1                     running (virtualbox)
node2                     running (virtualbox)
node3                     running (virtualbox)
node4                     running (virtualbox)
node5                     running (virtualbox)
node6                     running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

To log in to the first node and SSH to the other nodes, use the commands below. Notice that there are no password prompts when SSHing between the nodes. By the way, Ansible is installed on node1 and ready to use. eth1 is the private network interface used for the SSH inter-connectivity.

$vagrant ssh node1
Last login: Tue Jun 11 12:01:11 2019 from
[vagrant@node1 ~]$ssh node2
Warning: Permanently added 'node2,' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:01:04 2019 from
[vagrant@node2 ~]$ssh node1
Warning: Permanently added 'node1,' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:16:41 2019 from
[vagrant@node1 ~]$ssh node5
Warning: Permanently added 'node5,' (ECDSA) to the list of known hosts.
[vagrant@node5 ~]$ssh node1
Warning: Permanently added 'node1,' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:17:23 2019 from
[vagrant@node1 ~]$yum list ansible 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base:
 * epel:
 * extras:
 * updates:
Installed Packages
ansible.noarch                                             2.8.0-2.el7                                             @epel
[vagrant@node1 ~]$ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:26:10:60 brd ff:ff:ff:ff:ff:ff
    inet brd scope global noprefixroute dynamic eth0
       valid_lft 80872sec preferred_lft 80872sec
    inet6 fe80::5054:ff:fe26:1060/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:7b:8a:ef brd ff:ff:ff:ff:ff:ff
    inet brd scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe7b:8aef/64 scope link
       valid_lft forever preferred_lft forever
[vagrant@node1 ~]$
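Since Ansible is already installed on node1, a quick smoke test of the lab could look like the following (the inventory file name and contents are illustrative):

# Create a simple inventory listing the managed nodes
cat > ~/inventory <<EOF
node2
node3
node4
node5
node6
EOF

# Ping every managed node over the key based SSH set up above
ansible all -i ~/inventory -m ping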

I hope this use case helps you understand how to install and configure multiple VMs at once with SSH inter-connectivity. Please leave your feedback if you found this blog useful and share suggestions in the comments section below.


SSH Keys – Passwordless Authentication

I was trying to set up SSH keys between two different flavors of Linux hosts by following a HowTo guide.

I ran the commands exactly as mentioned in the HowTo, but it didn't work. The error messages were as follows:

vin@CLIENT:~$ ssh vin@
vin@'s password:
Last login: Mon May 22 10:45:03 2017 from
-bash: id: command not found
-bash: id: command not found
-bash: id: command not found
-bash: tty: command not found
-bash: uname: command not found

After googling and some trial and error, I finally found a fix.

Instead of Step 3 of the HowTo, use the commands below.

  1. On the server, make sure proper permissions are set on the .ssh folder (it is located in the user's home directory, e.g. /home/vin); if not, set them:

$chmod 700 ~/.ssh

  2. Run this command to copy the public key from the client to the server:

$cat ~/.ssh/id_rsa.pub | ssh vin@<server-ip> 'umask 0077; /bin/mkdir -p .ssh; /bin/cat >> .ssh/authorized_keys && echo "Done!"'

After running the above command, the server no longer asks for a password when user vin logs in.

vin@CLIENT:~$ ssh vin@
Last login: Mon May 22 11:54:29 2017 from
[vin@SERVER ~]$
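For what it's worth, on systems where ssh-copy-id is available (and the remote shell has a sane PATH), the same result can usually be achieved with a single command:

$ssh-copy-id vin@<server-ip>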

Wire-Free Connection to an Android Phone via SSH

I left my USB Type-C charging cable at the office yesterday and couldn't transfer some files. I am aware of the SHAREit app, but it doesn't work on Linux. Both my laptop and phone are connected to the same wireless router.

After googling for a while, I found a native and very simple way to connect to my Android phone via SSH.

I installed SSHDroid on my phone and started the SSH service. From my Debian GNU/Linux console, I started an SSH session to the phone's IP and used SCP to copy all the required files from the phone to the PC. BTW, I copied some new songs from the PC to the phone as well 🙂
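For reference, the transfer commands were along these lines (SSHDroid listens on port 2222 by default; the IP address and paths here are illustrative):

# Copy files from the phone to the PC
scp -P 2222 root@192.168.1.5:/sdcard/Documents/* ~/from-phone/
# Copy new songs from the PC to the phone
scp -P 2222 ~/Music/*.mp3 root@192.168.1.5:/sdcard/Music/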