Vagrant :: SSH Inter-Connectivity of Multiple Virtual Machines

Vagrant is one of the best examples of a VM-based Infrastructure as Code (IaC) tool. It works from a declarative configuration file that describes requirements such as the OS, applications, users, and files.

By using Vagrant, we can cut out mundane tasks such as downloading OS images and manually installing operating systems, applications, user configuration, and security settings. It saves a lot of time and effort for developers, admins, and architects alike. Vagrant is a cross-platform product, and its community edition is free to use. Vagrant also has its own cloud, where active contributors upload thousands of OS and application images. For more information and to download this great product, please visit the Vagrant website. Please also install Oracle VirtualBox, which is one of the basic requirements for running the Vagrant VMs.

Multi-machine: a type of Vagrant configuration in which multiple machines are built from a single configuration file. This is best suited for development scenarios that require multiple VMs, whether in a homogeneous or heterogeneous configuration. For example, typical web application development needs separate web, database, middleware, and proxy servers, along with client VMs, to match a production-class environment.

The Vagrant configuration file below is a use case for setting up an ‘Ansible Practice Lab’. Vagrant builds six nodes running CentOS 7. This lab environment is built for hands-on Ansible learning, to try out the features Ansible offers for configuration management and infrastructure automation. The Ansible package is installed on node1, and the rest of the nodes are managed from the Ansible workstation, node1.

A global shell script then configures YUM on all six nodes by downloading and installing the EPEL repository. After the repository is configured, basic packages such as wget, curl, and sshpass are installed.

The most important requirement for Ansible to work is SSH key-based authentication between all six nodes. To achieve this, a shell script named ssh is written and added to the configuration file, and Vagrant executes it during the build process. An interface with a private IP is configured on every node and is used for inter-node connectivity via SSH.

Here is the Vagrant multi-machine configuration file, along with the custom scripts that install packages and set up SSH key-based authentication between the nodes.

# Vagrant configuration file for multi machines with inter connectivity via SSH key based authentication
numnodes=6
baseip="192.168.10"

# global script
$global = <<SCRIPT

# Allow sshd to accept password authentication (needed for the initial ssh-copy-id)
sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config

# Add Google DNS to access internet. 
echo "nameserver 8.8.8.8" | sudo tee -a  /etc/resolv.conf 

# Download and install  Centos 7 EPEL package to configure the YUM repository
sudo rpm -ivh https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm
# Update yum
sudo yum update -y
# Install wget curl and sshpass
sudo yum install wget curl sshpass -y

# Exclude node* from host checking
cat > ~/.ssh/config <<EOF
Host node*
   StrictHostKeyChecking no
   UserKnownHostsFile=/dev/null
EOF

# Populate /etc/hosts with the IP and node names 
for x in {11..#{10+numnodes}}; do
  grep #{baseip}.${x} /etc/hosts &>/dev/null || {
      echo #{baseip}.${x} node${x##?} | sudo tee -a /etc/hosts &>/dev/null
  }

done
# Generate an SSH key pair for the vagrant user (no passphrase)
yes y | ssh-keygen -f /home/vagrant/.ssh/id_rsa -t rsa -N ''
echo " **** SSH key pair created for $HOSTNAME ****"

SCRIPT

# SSH configuration script
$ssh = <<SCRIPT1
numnodes=6

# Install Ansible on the control node (node1) only
if [ "$HOSTNAME" = "node1" ]; then
    echo "**** Install ansible on node1 ****"
    sudo yum install ansible -y
fi

# Copy this host's public key to every other host.
# sshpass supplies the vagrant password for ssh-copy-id.
for (( c=1; c<$numnodes+1; c++ ))
do
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi

    sshpass -p vagrant ssh-copy-id "node$c"
    echo "**** Copied public key to node$c ****"
done

# Get the id's from each host.
for (( c=1; c<$numnodes+1; c++ ))
do
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi

    sshpass -p vagrant ssh "node$c" 'cat .ssh/id_rsa.pub' >> /home/vagrant/host-ids.pub
    echo "**** Copy id_rsa.pub contentes to host-ids.pub for host node$c ****"
done

for (( c=1; c<$numnodes+1; c++ ))
do
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi

    # Copy public keys to the nodes
    sshpass -p vagrant ssh-copy-id -f -i /home/vagrant/host-ids.pub "node$c"
    echo "**** Copy public keys to node$c ****"

done
# Set the permissions to config
sudo chmod 0600 /home/vagrant/.ssh/config
# Finally restart the SSHD daemon
sudo systemctl restart sshd
echo "**** End of the Multi Machine SSH Key based Auth configuration ****"

SCRIPT1

# Vagrant configuration
Vagrant.configure("2") do |config|
  # Execute global script
  config.vm.provision "shell", privileged: false, inline: $global
  prefix="node"
  #For each node run the config and apply settings
  (1..numnodes).each do |i|
    vm_name = "#{prefix}#{i}"
    config.vm.define vm_name do |node|
      node.vm.box = "centos/7"
      node.vm.hostname = vm_name
      ip="#{baseip}.#{10+i}"
      node.vm.network "private_network", ip: ip    
    end
  end
  # Run the SSH configuration script on all nodes
  config.vm.provision "ssh", type: "shell", privileged: false, inline: $ssh
end

To execute the above configuration file, run the commands below:

$vagrant up
$vagrant provision --provision-with ssh

Please note that the above example passes the vagrant user's password on the command line via the sshpass -p option. For a more secure approach, consider the -f option, which reads the password from a file, and read the sshpass documentation for more information. Constants such as the EPEL repo URL, the number of nodes, the SSH key path, and so on need to be customized for your actual requirements.

To check the status of the nodes built by Vagrant, use the command below.

$vagrant status
Current machine states:

node1                     running (virtualbox)
node2                     running (virtualbox)
node3                     running (virtualbox)
node4                     running (virtualbox)
node5                     running (virtualbox)
node6                     running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

To log in to the first node and SSH to the other nodes, use the commands below. Notice that there are no password prompts when SSHing between the nodes. Ansible is installed on node1 and ready to use, and eth1 is the private network interface used for SSH inter-connectivity.

$vagrant ssh node1
Last login: Tue Jun 11 12:01:11 2019 from 192.168.10.12
[vagrant@node1 ~]$ssh node2
Warning: Permanently added 'node2,192.168.10.12' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:01:04 2019 from 192.168.10.11
[vagrant@node2 ~]$ssh node1
Warning: Permanently added 'node1,192.168.10.11' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:16:41 2019 from 10.0.2.2
[vagrant@node1 ~]$ssh node5
Warning: Permanently added 'node5,192.168.10.15' (ECDSA) to the list of known hosts.
[vagrant@node5 ~]$ssh node1
Warning: Permanently added 'node1,192.168.10.11' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:17:23 2019 from 192.168.10.12
[vagrant@node1 ~]$yum list ansible 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.dhakacom.com
 * epel: sg.fedora.ipserverone.com
 * extras: mirror.dhakacom.com
 * updates: mirrors.nhanhoa.com
Installed Packages
ansible.noarch                                             2.8.0-2.el7                                             @epel
[vagrant@node1 ~]$ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:26:10:60 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 80872sec preferred_lft 80872sec
    inet6 fe80::5054:ff:fe26:1060/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:7b:8a:ef brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.11/24 brd 192.168.10.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe7b:8aef/64 scope link
       valid_lft forever preferred_lft forever
[vagrant@node1 ~]$

I hope this use case helps you understand how to install and configure multiple VMs at once with SSH inter-connectivity. Please leave your feedback if you found this blog useful and share your suggestions in the comments section below.

Image Courtesy: sumglobal.com

References:

https://www.vagrantup.com/docs/multi-machine/

https://www.vagrantup.com/docs/vagrantfile/

https://www.vagrantup.com/docs/provisioning/basic_usage.html

https://github.com/kikitux/vagrant-multimachine/edit/master/intrassh/Vagrantfile


Parsing JSON in Go is an Adventure

One of the tough things to do in Golang is parsing JSON! Yes, it is indeed a challenge for a novice like me. In Python and Ruby it is an easy task, thanks to JSON libraries that are pretty easy to use, especially in Python.

I tried to create the struct for the complex, nested JSON data below by hand, but did not succeed even after several attempts.

{
    "clients": [
        {
            "clientId": "dde46983-00000004-5cdac62e-5cdc1fc1-00025000-a4aa9156",
            "hostname": "A999US032WIN001",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/159.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "159.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 24
            }
        },
        {
            "clientId": "70f32834-00000004-5cdac630-5cdd6bf5-00195000-a4aa9156",
            "hostname": "a999us034cen001",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/170.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "170.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 6
            }
        },
        {
            "clientId": "e34f2de3-00000004-5cdac62d-5cdac62c-00015000-a4aa9156",
            "hostname": "a999us034nve001.usp01.xstream360.cloud",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/158.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "158.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 2
            }
        },
        {
            "clientId": "3084d369-00000004-5cdac62f-5cdd4e05-000e5000-a4aa9156",
            "hostname": "a999us034rhl001",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/167.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "167.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 6
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/172.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "172.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/173.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "173.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/174.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "174.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/176.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "176.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/175.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "175.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/177.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "177.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/180.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "180.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/178.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "178.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/187.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "187.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        },
        {
            "clientId": "82fa0d80-00000004-5cdac631-5ceb7989-01165000-a4aa9156",
            "hostname": "icehousetest",
            "links": [
                {
                    "href": "https://10.10.10.6:9090/nwrestapi/v2/global/clients/184.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                    "rel": "item"
                }
            ],
            "resourceId": {
                "id": "184.0.129.30.0.0.0.0.20.198.218.92.10.10.10.6",
                "sequence": 1
            }
        }
    ],
    "count": 14
}

After deep (re)searching using DuckDuckGo, I finally found an easier way to convert JSON to structs with the help of the JSON-to-Go tool. Many thanks to Matt Holt for making things simple for me.

Converting the JSON to structs was the major step, and this tool made it very easy (the generated struct appears in the code below). By now we know that we need access to the JSON data, which should be static and available locally for the conversion.

Now that the required structs and types are auto-created, it is time to unmarshal the JSON data into the structs and access their values. Below is the full implementation of the code.

/*
Parse and convert JSON to structs using JSON-to-Go Tool
*/
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"os"
)

// structs generated using https://mholt.github.io/json-to-go/
type AutoGenerated struct {
	Clients []struct {
		ClientID string `json:"clientId"`
		Hostname string `json:"hostname"`
		Links    []struct {
			Href string `json:"href"`
			Rel  string `json:"rel"`
		} `json:"links"`
		ResourceID struct {
			ID       string `json:"id"`
			Sequence int    `json:"sequence"`
		} `json:"resourceId"`
	} `json:"clients"`
	Count int `json:"count"`
}

func main() {
	var info AutoGenerated
	// Reading data from JSON File
	file, e := ioutil.ReadFile("clients.json")
	if e != nil {
		fmt.Printf("File error: %v\n", e)
		os.Exit(1)
	}
	//Unmarshal json data into struct info
	if err := json.Unmarshal(file, &info); err != nil {
		log.Fatal(err)
	}

	//fmt.Printf("%+v\n", info)
	fmt.Println("CLIENT-ID,HOSTNAME,RESOURCE-ID")

	//Iterate through each value and print required types
	for _, value := range info.Clients {
		fmt.Printf("%s,%s,%s\n", value.ClientID, value.Hostname, value.ResourceID.ID)
	}

}

I hope the above example code helps you understand the steps to convert JSON to structs, iterate through each value, and access the required types. Please leave your feedback if you found this useful, and share suggestions in the comments section below.

Install and Configure Go/Golang on Raspberry Pi

Go (Golang) is one of the hottest programming languages as I type this today.

Go is a programming language created at Google in 2009 by Robert Griesemer, Rob Pike, and Ken Thompson. Go is a statically typed, compiled, procedural language similar to C, with memory safety, garbage collection, structural typing, concurrency, and other great features bundled in to make it stand out from other languages in the marketplace.

Docker, Kubernetes, Grafana, and Hugo are some of the best-known applications written in Go. It has a robust set of libraries, and application performance compares favorably with other languages.

Today I’m starting my journey to learn Go/Golang, with Google as my mentor, by installing and setting up Golang on Raspbian OS on my Raspberry Pi 3. To get the latest version, use the steps below instead of a native package management tool such as apt.

Installation Steps:

  • Download the current stable version of Go from Google’s official website. At the time of writing this tutorial, 1.12.4 is the stable version; check for the latest version here.
cd ~ && curl -O https://dl.google.com/go/go1.12.4.linux-armv6l.tar.gz

The above command changes to your home directory and downloads the compressed Go tarball using curl.

  • Extract the compressed tar file and place it inside the /usr/local directory. Please note that root-level or sudo access is required to perform this step.
sudo tar -C /usr/local -xzvf go1.12.4.linux-armv6l.tar.gz
  • Set the PATH variables so that the Go binaries and libraries can be accessed by Raspbian OS without typing the complete path. Open ~/.profile, a hidden file located in your home directory, using nano, vi, or subl (subl ~/.profile), and add the lines below at the end of the file.
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
  • For the above changes to ~/.profile to take effect, run the command below. The source command loads the contents of a file into the current shell.
source ~/.profile
  • Create a directory called ‘go’ in the home directory; all my code is placed in this folder. Change the directory name as you prefer, but adjust GOPATH accordingly, as mentioned above.
 mkdir $HOME/go 
  • Validate that Go is working as expected by running the command below.
vb@pi:~ $  go version
go version go1.12.4 linux/arm
vb@pi:~ $

My first code in Go

Create a directory first_code under ‘go’, write the following content into a file, and save it as first_code.go.

mkdir -p $HOME/go/src/first_code 
(-p creates the directory along with any missing parent directories at once)
package main

import "fmt"

func main() {
    fmt.Printf("My first code in Go Language!!!\n")
}

Now build and run the code. Change directory to first_code with cd ~/go/src/first_code and run the commands below.

vb@pi:~/go/src/first_code $ go build
vb@pi:~/go/src/first_code $

vb@pi:~/go/src/first_code $ ./first_code
My first code in Go Language!!!
vb@pi:~/go/src/first_code $

Note:

The above steps are applicable to any Linux distribution; just adjust the first step, downloading the compressed tar file, by changing the architecture from armv6l to amd64 or whatever applies to your hardware.

I was able to successfully set up Go on my Pi. I hope you do the same, and happy learning Go! If you have any issues or questions, please mention them in the comment section below.

Python Pandas Pivot Table

This blog will be most helpful to folks who work with Excel and databases. If you work with Excel and find yourself needing to automate pivot tables, you might also appreciate this technique.

When working with a MySQL database, I could not find an easy way to fetch the data and reshape it into a pivot table. I searched DuckDuckGo and found plenty of examples, but most of them were hard to understand or did not yield the required output.

Python’s pandas pivot table was the savior! I followed the steps below to fetch data from the MySQL DB and generate a simple pivot table. The pivot table is then exported as an Excel file and shared with stakeholders using Slack.

  • Create connection string to MySQL DB using PyMySQL
  • Create a SQLAlchemy Engine (ORM)
  • Write a Query statement to fetch data from DB
  • Create a Pandas Dataframe directly from the query using SQLAlchemy
  • Create a Pandas Pivot Table with required Index, Columns and Values
  • Export data to excel or csv file using to_excel and to_csv respectively
  • Share it to stakeholders via slack

A sketch of the Python code for these steps is shown below.
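This sketch assumes a hypothetical sales table with region, product, and amount columns and placeholder connection credentials; adjust the connection string, query, index, columns, and values to match your own schema.

# Fetch data from MySQL, pivot it with pandas, and export to Excel
import pandas as pd
from sqlalchemy import create_engine

# Connection string using the PyMySQL driver, wrapped in a SQLAlchemy engine
# (hypothetical host, user, password and database names)
engine = create_engine("mysql+pymysql://user:password@localhost:3306/reports")

# Query statement to fetch data from the DB (hypothetical 'sales' table)
query = "SELECT region, product, amount FROM sales"

# Pandas DataFrame created directly from the query
df = pd.read_sql(query, engine)

# Pivot table with the required index, columns and values
pivot = pd.pivot_table(df, index="region", columns="product",
                       values="amount", aggfunc="sum", fill_value=0)

# Export to Excel (use to_csv for a CSV file instead)
pivot.to_excel("sales_pivot.xlsx")

The exported file can then be shared with stakeholders over Slack, as in the final step.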

I hope this blog sparked the idea of automating the mundane task of generating pivot tables from spreadsheets or databases. Please reach out to me if you want to check out the full code.

Thank you for stopping by this blog, and please share your suggestions in the comments section below.

Up Next – Manage Alerts of VMAX3 array using REST API

Image Courtesy:

Matt Whitt
https://ttorial.com/Images/python-automation-automate-mandane-task-python.jpg

Tweepy – Twitter for Python

I had the opportunity to attend the NASSCOM Technology and Leadership Forum in Mumbai from the 20th to the 22nd of February 2019. It was an amazing experience to listen to and gain knowledge and insights from the industry’s best of the best. The CxOs’ keynote speeches focused on technologies like AI, ML, and blockchain.

My personal favorite of all the keynotes was from Vala Afshar, Chief Digital Evangelist at Salesforce. It was a privilege to be there and watch him explain AI, ML, and the importance of data. Truly inspiring; I was mesmerized by his depth of knowledge of the IT industry.

As Vala Afshar put it, “Data is the oil of 21st century but oil is just useless thick goop until you refine it into fuel. AI is your refinery“. Over those three days, a lot of valuable information was shared and scattered across social media. The reason for writing this blog is to share my idea for saving those gems of information, which I can keep revisiting time and again to get inspired and motivated.

Twitter, one of the most popular social media platforms, is a favorite of mine, and I wanted to save all the tweets with the hashtag #NASSCOM_TLF (the official hashtag of the event). After quick research using DuckDuckGo, I decided to use Tweepy, the ‘Twitter for Python’ module, which uses the Twitter API to connect, read, write, retweet, and send direct messages right from Python.

Tweepy requires a Twitter app to be created in order to use Twitter’s API to exchange information between Twitter and Python. I followed this link to set up and run my Twitter app.

Here is the code I wrote to download all the tweets with the hashtag #NASSCOM_TLF and save them to an Excel file!

# Download tweepy using pip install tweepy
import tweepy
# Pandas dataframe used to get tweets in tabular format and export them to Excel
import pandas as pd

# Replace consumer key, consumer secret
consumer_key = 'REPLACE'
consumer_secret = 'REPLACE'

# Replace access token key and secret
access_token = 'REPLACE-REPLACE'
access_token_secret = 'REPLACE'

# Authenticate to Twitter API
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# Create dataframe with columns names 
df = pd.DataFrame(columns=['text','timeline', 'username', 'user_id'])
# Initialize lists to store messages
msgs = []
msg =[]

# search twitter with hashtag #NASSCOM_TLF and exclude retweets
for tweet in tweepy.Cursor(api.search, q='#NASSCOM_TLF -filter:retweets',tweet_mode='extended', rpp=100).items():
    msg = [tweet.full_text, tweet.created_at, tweet.user.name, tweet.user.screen_name] 
    msg = tuple(msg)                    
    msgs.append(msg)

# Append tweets stored in lists to dataframe
df = pd.DataFrame(msgs)
# Column header columns having full tweet messages, time, user name and ID
df.columns = ['Tweet Text', 'Tweet Date Time (GMT)', 'Username', 'User ID']
# Check the first 5 tweets to see any errors
print(df.head())
# Create a file 
output = "tweets_ntlf.xlsx"
# Export tweets from dataframe to excel
try:
    df.to_excel(output, index=False)
except Exception as Error:
    print("Unable to get NASSCOM Tweets", Error)

Screenshot of the Excel file

NOTE: I’ve excluded retweets to avoid duplication of information.

Please continue to stop by this blog and share your comments below.

Up Next – Manage Alerts of VMAX array using REST API

Docker’ize’ Python

What is Docker?

Docker is a computer program that performs operating-system-level virtualization, also known as “containerization”. It was first released in 2013 and is developed by Docker, Inc. source: Wikipedia

How does Docker work?

Docker containers wrap up software and its dependencies into a standardized unit for software development that includes everything it needs to run: code, runtime, system tools, and libraries. This guarantees that your application will always run the same and makes collaboration as simple as sharing a container image.
source: www.docker.com

Why Docker?

Docker unlocks the potential of your organization by giving developers and IT the freedom to build, manage and secure business-critical applications without the fear of technology or infrastructure lock-in. source: www.docker.com
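
To tie this back to the title of this post, below is a minimal sketch of driving Docker from Python using the Docker SDK for Python (the docker package on PyPI, installed with pip install docker). The image tag and the command are illustrative assumptions, and the sketch expects a local Docker daemon to be running.

# Run a short-lived Python container using the Docker SDK for Python
import docker

# Connect to the local Docker daemon using environment defaults
client = docker.from_env()

# Pull the base image (a no-op if it is already present locally)
client.images.pull("python", tag="3.7-slim")

# Run a throwaway container that executes a small Python command,
# capture its stdout, and remove the container when it exits
output = client.containers.run(
    "python:3.7-slim",
    ["python", "-c", "print('Hello from inside a container')"],
    remove=True,
)
print(output.decode().strip())

The container bundles its own Python runtime and libraries, which is exactly the “standardized unit” described above; the same idea applies when you package your own application.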