Cisco Nexus Switches Automation using Ansible

This is the success story of automating configuration changes on 700+ Cisco Nexus and IOS switches in under 4 hours. I never imagined this could be accomplished so quickly, but by using Rundeck, Python, and Ansible we made it happen.

A PCI vulnerability audit findings report shared with the Network team manager showed that close to 700 switches needed configuration changes to meet PCI compliance standards. The network engineers identified the affected devices and rolled up their sleeves to apply the required changes manually, estimating three days to complete the work.

In the meantime, the network manager reached out to our team to ask whether the process could be automated.

Ansible and Python's Netmiko came to our rescue: we automated all the required configuration changes in a matter of hours. We also used sshuttle to reach the switches spread across the globe from a single control node, and Rundeck to push the Python scripts to worker nodes, which executed the configuration commands via the netmiko module.

Here is the simple yet powerful Ansible playbook (YAML) that was used to implement the configuration changes on the production switches.

---
  - name: Playbook to remediate PCI Audit Findings on Cisco NXOS
    hosts: all
    gather_facts: no
    tasks:

      - name: Configure switch to disable services and console logging
        nxos_config:
          lines:
            - line console 
            - exec-timeout 10
            - line vty
            - "{{ tm }}"
            - no logging console
            - no logging monitor
            - logging logfile messages 6 size 16384
            - logging timestamp milliseconds
            - wr
            - end
          match: none
          save_when: always
        register: config
        
      - name: Check output
        debug:
          var: config   

nexus.yml – gather_facts fetches device information; here it is disabled to speed up playbook execution. There are two tasks in the above playbook: 1. nxos_config and 2. debug.

nxos_config is a module developed by the core Ansible team that applies configuration changes to Nexus switches. lines – each line is a command executed in configuration terminal mode. match – when set to none, the module does not attempt to compare the source configuration with the running configuration on the remote device. save_when is set to always so the running config is copied to the startup config. register saves the output of nxos_config to a variable named 'config'. debug is a module used to display output or messages; here, the 'config' variable holds the nxos_config output, which is shown after playbook execution.

ansible.cfg – the Ansible configuration file.

[defaults]
inventory=inventory
log_path = ansible.log
ansible_debug=true
[persistent_connection]
log_messages = True
command_timeout=60
connect_retry_timeout = 60
[paramiko_connection]
host_key_auto_add = True
#auth_timeout = 300
#timeout = 300

inventory is the file containing the list of switch IPs (or FQDNs, if the switches are resolvable via DNS). log_path is the path of the log file that stores all logs of the tasks executed by the above playbook. ansible_debug is set to true; enabling it is a best practice for any network-related automation. log_messages fetches verbose logging information from the switches. command_timeout and connect_retry_timeout are essential to allow more time to reach remotely located devices. host_key_auto_add is set to true to automatically add RSA host keys and avoid SSH connection prompts or failures. I've commented out auth_timeout and timeout, but if you encounter delays or login failures due to network lag, please uncomment them.
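
For reference, a minimal inventory file might look like the sketch below; the IPs and hostname are hypothetical placeholders. The group name nxos is what ties these hosts to the variables in group_vars/nxos.yml.

[nxos]
192.0.2.10
192.0.2.11
switch01.example.com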

group_vars/nxos.yml – the group variables file contains credentials and other sensitive information. Please use Ansible Vault to encrypt this file.

ansible_connection: local
ansible_network_os: nxos
ansible_user: <username>
ansible_password: <password>
tm: exec-timeout 10
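
To protect these credentials, the file can be encrypted with Ansible Vault (you will be prompted to set a vault password), and the playbook then run with the vault password supplied:

ansible-vault encrypt group_vars/nxos.yml

ansible-playbook nexus.yml --ask-vault-pass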

It took about 3 hours to test the playbook. After successful test results, we ran the playbook on the production switches, which took about an hour to complete!! Later we randomly logged on to a few switches to confirm that the configuration changes had been applied. A few switches failed to execute the commands in the playbook due to connectivity or credential errors.

SYNTAX:

ansible-playbook nexus.yml --syntax-check {Checks the YAML file syntax}

ansible-playbook nexus.yml -C {Dry run}

ansible-playbook nexus.yml {execute playbook}
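
While testing, it is safer to limit a run to a single switch first; --limit is a standard ansible-playbook option, and the host below is a hypothetical placeholder:

ansible-playbook nexus.yml --limit 192.0.2.10 {Run against a single test switch}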

Here are the screenshots – output of the ansible-playbook execution:

Notice that changed is set to 1 while unreachable and failed are 0, indicating successful execution.
Switches shown in red failed to apply the config changes due to credential or connectivity issues.

SSHUTTLE – connects from the control node (where Rundeck, the Python scripts, and Ansible are installed) through a jumphost to the various subnets hosting the switches.

SYNTAX:

sshuttle -r <username>@<hostname or IP> <Subnet 1 IP> <Subnet 2 IP> <Subnet 3 IP> <Subnet n IP> -x <hostname or IP>
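
For example (all values hypothetical), tunnelling three remote subnets through a jumphost while excluding the jumphost itself from the tunnel with -x:

sshuttle -r admin@jumphost.example.com 10.10.0.0/16 10.20.0.0/16 10.30.0.0/16 -x jumphost.example.com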

Notice that iptables rules are added automatically to enable SSH connectivity to the switches.

Here is the Python script used for Cisco IOS switches configuration automation along with Rundeck.

'''
This Python script amends changes to IOS XR switches as per the PCI audit remediation.

To run this script, please make sure the switches are reachable via the SSH port.

'''
__author__ = "Vinay Umesh"
__copyright__ = "Copyright 2019, Virtustream, Dell Technologies."
__version__ = "1.0.0"
__maintainer__ = "Core Services Engineering"
__email__ = "vinay.umesh@virtustream.com"
__status__ = "Development"

from netmiko import ConnectHandler  # connect to cisco switches and execute cmds
from datetime import datetime  # Date and time module
import os  # Native OS operations and management
import logging  # Default Python logging module
import argparse  # Pass arguments
import getpass  # get password

# create a log file with system date and time stamp
logfile_ = datetime.now().strftime('nexus_switches_remediation_%H_%M_%d_%m_%Y.log')
date_ = datetime.now().strftime('%H_%M_%d_%m_%Y')


def check_arg(args=None):
    parser = argparse.ArgumentParser(description='Script to amend changes to  \
                                     Nexus switches as per PCI audit remediation')
    parser.add_argument('-s', '--source',
                        help='Source filename in CSV format required',
                        required=True)
    parser.add_argument('-u', '--user',
                        help='Username required', required=True)
    results = parser.parse_args(args)
    return (results.source, results.user)


src, user = check_arg()
# Get the password (prompt does not echo)
try:
    pwd = getpass.getpass()
except Exception as error:
    print('ERROR', error)
    raise SystemExit(1)  # pwd would be undefined if the prompt fails, so exit

logger = logging.getLogger('IOSXR_PCI_Audit')
# set logging level
# logging.basicConfig(level=logging.INFO) # Python 2.x syntax
# toggle between DEBUG and INFO to see the difference
logger.setLevel(logging.DEBUG)
# logger.setLevel(logging.INFO)

# create file handler which logs even debug messages

# Create the 'logs' folder if it does not exist. Change logdir to suit your environment

logdir = "logs/"
if not os.path.exists(logdir):
    os.makedirs(logdir)

fh = logging.FileHandler(logdir + logfile_)
fh.setLevel(logging.DEBUG)

# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s | %(name)s | %(levelname)s | %(message)s')
fh.setFormatter(formatter)

# add the handlers to the logger
logger.addHandler(fh)

logger.info('IOS XR switch PCI Audit remediation script started @ {}'.format(date_))

# Open the file having list of IPs of Cisco IOS switches
with open(src, 'r') as lines:
    logger.info('Read source file with device details')
    lines = list(lines)  # convert file object to list object
    del(lines[0])  # skip the header row
    for line in lines:
        value = line.split(',')
        dc = value[0]  # dc
        sw = value[1]  # switch name
        ip = value[2]  # ip
        print('Switch DC: {}'.format(dc))
        print('Switch Name: {}'.format(sw))
        print('Switch IP: {}'.format(ip))
        logger.info(' DC - {}  Switch Name - {} IP - {}'.format(dc, sw, ip))
        un = user
        pw = pwd

        #  Connecting to the switch
        try:
            # Slow netmiko down for laggy devices; increase global_delay_factor to 4 if needed
            net_connect = ConnectHandler(device_type='cisco_ios', ip=ip, username=un, password=pw,
                                         global_delay_factor=2)
            #  show version of the switch
            ver = net_connect.send_command("show version")
            logger.info('Switch version details :\n {}'.format(ver))
            print('Show version command executed:\n{}'.format(ver))
            #  Change to config term mode
            net_connect.config_mode()
            #  Configuration config_commands
            config_commands = ['no service ipv4 tcp-small-servers',
                               'no service ipv4 udp-small-servers',
                               'no service ipv6 tcp-small-servers',
                               'no service ipv6 udp-small-servers',
                               'no http server',
                               'no tftp ipv4 server',
                               'no tftp ipv6 server',
                               'no dhcp ipv4',
                               'no dhcp ipv6',
                               'line console exec-timeout 10 0',
                               'logging console disable',
                               'logging monitor disable',
                               'no ipv4 source-route',
                               'end', 'copy running-config startup-config']
            #  Run config commands
            config = net_connect.send_config_set(config_commands)
            config += net_connect.send_command('\n', expect_string=r'#', delay_factor=2)
            logger.info('Switch Configuration Output :\n {}'.format(config))
            print('Config commands executed successfully:\n{}'.format(config))

            #  Show the command history
            history2 = net_connect.send_command("show history")
            print('Show history command executed successfully: \n {}'.format(history2))
            logger.info('Switch history output :\n {}'.format(history2))

            #  Exit from the switch
            net_connect.disconnect()

        except Exception as e:
            print('Error occurred while connecting to switch - {}  IP - {}: \n {}'.format(sw, ip, e))
            logger.error('Unable to connect to the switch - {} IP - {} :\n {}'.format(sw, ip, e))

SYNTAX:

python3 ios.py -s <inventory> -u <switch username>

Script prompts for password

Either Ansible or Python's Netmiko can be used to automate Cisco switch configuration changes.

One of the SMEs on the Network Engineering team reacted very positively to this automation. I'm glad to see that folks are embracing and embarking on AUTOMATION.

Hope this use case helps you understand how to automate Cisco switch configuration and operational tasks. Please leave your feedback if you found this blog useful, and share suggestions in the Comments section below.

Image Courtesy and References:

https://blogs.cisco.com/datacenter/ansible-support-for-ucs-and-nexus

https://docs.ansible.com/ansible/latest/modules/nxos_config_module.html

https://docs.ansible.com/ansible/2.3/ios_config_module.html


Vagrant :: SSH Inter-Connectivity of Multi Virtual Machines

Vagrant is one of the best examples of an Infrastructure as Code (IaC) tool for VMs. It works from a declarative configuration file that describes requirements such as the OS, applications, users, and files.

By using Vagrant, we can eliminate the mundane tasks of downloading OS images and manually installing the OS, applications, user configuration, security settings, and so on. It saves a lot of time and effort for developers, admins, and architects alike. Vagrant is a cross-platform product, and its community edition is free to use. Vagrant also has its own cloud, where thousands of OS and application images are uploaded by active contributors. For more info and to download this great product, please visit here. Please install Oracle VirtualBox, which is one of the basic requirements for running Vagrant VMs.

Multi Machines: a type of Vagrant configuration in which multiple machines are built from a single configuration file. This is best suited for development environments that require multiple VMs, whether in a homogeneous or heterogeneous configuration. For example, typical web app development needs separate web, DB, middleware, and proxy servers along with client VMs to match a production-class environment.

The Vagrant configuration file below is a use case for setting up an 'Ansible Practice Lab'. Six nodes running CentOS 7 are built by Vagrant. This lab environment is built for a hands-on Ansible workshop, to try out all the features Ansible offers for configuration management and infrastructure automation. The Ansible package is installed on node1, and the rest of the nodes are managed from that Ansible workstation, node1.

YUM is then configured across all 6 nodes by installing the EPEL repository via the global shell script. After the YUM configuration, basic packages such as wget, curl, and sshpass are installed.

The most important requirement for Ansible to work is SSH key-based authentication between all 6 nodes. To achieve this, a shell script (ssh) is written and added to the configuration file, which Vagrant executes during the build process. An interface with a private IP is configured on every node and used for inter-node connectivity via SSH.

Here is the Vagrant multi-machine configuration file, along with the custom scripts that install packages and set up SSH key-based authentication between the nodes.

# Vagrant configuration file for multi machines with inter connectivity via SSH key based authentication
numnodes=6
baseip="192.168.10"

# global script
$global = <<SCRIPT

# Allow SSH to accept SSH password authentication. Find and replace if the line is commented out
sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config

# Add Google DNS to access internet. 
echo "nameserver 8.8.8.8" | sudo tee -a  /etc/resolv.conf 

# Download and install  Centos 7 EPEL package to configure the YUM repository
sudo rpm -ivh https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm
# Update yum
sudo yum update -y
# Install wget curl and sshpass
sudo yum install wget curl sshpass -y

# Exclude node* from host checking
cat > ~/.ssh/config <<EOF
Host node*
   StrictHostKeyChecking no
   UserKnownHostsFile=/dev/null
EOF

# Populate /etc/hosts with the IP and node names 
for x in {11..#{10+numnodes}}; do
  grep #{baseip}.${x} /etc/hosts &>/dev/null || {
      echo #{baseip}.${x} node${x##?} | sudo tee -a /etc/hosts &>/dev/null
  }

done
yes y | ssh-keygen -f /home/vagrant/.ssh/id_rsa -t rsa -N ''
echo " **** SSH key pair created for $HOSTNAME ****"

SCRIPT

# SSH configuration script
$ssh = <<SCRIPT1
numnodes=6

# Install Ansible once on the control node (node1)
if [ "$HOSTNAME" = "node1" ]; then
    echo "**** Install ansible on node1 ****"
    sudo yum install ansible -y
fi

for (( c=1; c<$numnodes+1; c++ ))
do
    echo "node$c"
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi

    # Copy the current host's id to each other host.
    # Asks for password.
    # create ssh key
    
    sshpass -p vagrant ssh-copy-id "node$c"
    echo "**** Copied public key to node$c ****"    
done

# Get the id's from each host.
for (( c=1; c<$numnodes+1; c++ ))
do
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi

    sshpass -p vagrant ssh "node$c" 'cat .ssh/id_rsa.pub' >> /home/vagrant/host-ids.pub
    echo "**** Copy id_rsa.pub contentes to host-ids.pub for host node$c ****"
done

for (( c=1; c<$numnodes+1; c++ ))
do
    # Skip the current host.
    if [ "$HOSTNAME" = "node$c" ]; then
        continue
    fi

    # Copy public keys to the nodes
    sshpass -p vagrant ssh-copy-id -f -i /home/vagrant/host-ids.pub "node$c"
    echo "**** Copy public keys to node$c ****"

done
# Set the permissions to config
sudo chmod 0600 /home/vagrant/.ssh/config
# Finally restart the SSHD daemon
sudo systemctl restart sshd
echo "**** End of the Multi Machine SSH Key based Auth configuration ****"

SCRIPT1

# Vagrant configuration
Vagrant.configure("2") do |config|
  # Execute global script
  config.vm.provision "shell", privileged: false, inline: $global
  prefix="node"
  # For each node, define the VM and apply its settings
  (1..numnodes).each do |i|
    vm_name = "#{prefix}#{i}"
    config.vm.define vm_name do |node|
      node.vm.box = "centos/7"
      node.vm.hostname = vm_name
      ip="#{baseip}.#{10+i}"
      node.vm.network "private_network", ip: ip
    end
  end
  # Register the SSH configuration script once; run it with 'vagrant provision --provision-with ssh'
  config.vm.provision "ssh", type: "shell", privileged: false, inline: $ssh
end

To execute the above configuration file, run the below commands

$vagrant up
$vagrant provision --provision-with ssh

Please note that the above example exposes the vagrant user's password on the command line via the sshpass -p option. For better security, use sshpass -f (read the password from a file), and see the sshpass documentation for more info. Constants such as the EPEL repo URL, number of nodes, and SSH key path need to be customized to your actual requirements.

To check the status of the nodes build by vagrant use the below command.

$vagrant status
Current machine states:

node1                     running (virtualbox)
node2                     running (virtualbox)
node3                     running (virtualbox)
node4                     running (virtualbox)
node5                     running (virtualbox)
node6                     running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

To log in to the first node and SSH to the other nodes, use the commands below. Notice that there are no password prompts when SSHing between the nodes. By the way, Ansible is installed on node1 and ready to use; eth1 is the private network used for SSH inter-connectivity.

$vagrant ssh node1
Last login: Tue Jun 11 12:01:11 2019 from 192.168.10.12
[vagrant@node1 ~]$ssh node2
Warning: Permanently added 'node2,192.168.10.12' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:01:04 2019 from 192.168.10.11
[vagrant@node2 ~]$ssh node1
Warning: Permanently added 'node1,192.168.10.11' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:16:41 2019 from 10.0.2.2
[vagrant@node1 ~]$ssh node5
Warning: Permanently added 'node5,192.168.10.15' (ECDSA) to the list of known hosts.
[vagrant@node5 ~]$ssh node1
Warning: Permanently added 'node1,192.168.10.11' (ECDSA) to the list of known hosts.
Last login: Tue Jun 11 12:17:23 2019 from 192.168.10.12
[vagrant@node1 ~]$yum list ansible 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.dhakacom.com
 * epel: sg.fedora.ipserverone.com
 * extras: mirror.dhakacom.com
 * updates: mirrors.nhanhoa.com
Installed Packages
ansible.noarch                                             2.8.0-2.el7                                             @epel
[vagrant@node1 ~]$ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:26:10:60 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 80872sec preferred_lft 80872sec
    inet6 fe80::5054:ff:fe26:1060/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:7b:8a:ef brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.11/24 brd 192.168.10.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe7b:8aef/64 scope link
       valid_lft forever preferred_lft forever
[vagrant@node1 ~]$

Hope this use case helps you understand how to install and configure multiple VMs at once with SSH inter-connectivity. Please leave your feedback if you found this blog useful, and share suggestions in the Comments section below.

Image Courtesy: sumglobal.com

References:

https://www.vagrantup.com/docs/multi-machine/

https://www.vagrantup.com/docs/vagrantfile/

https://www.vagrantup.com/docs/provisioning/basic_usage.html

https://github.com/kikitux/vagrant-multimachine/blob/master/intrassh/Vagrantfile

Python Pandas Pivot Table

This blog will be most helpful to folks who work with Excel and databases; if you find yourself needing to automate pivot-table generation, you might also appreciate the technique.

While working with a MySQL database, I found no easy way to fetch data and reshape it into a pivot table. I searched on DuckDuckGo and found ample examples, but most were hard to understand or did not yield the required output.

Python's Pandas pivot table was the savior!! I followed the steps below to fetch data from the MySQL DB and generate a simple pivot table. The pivot table was then exported as an Excel file and shared with stakeholders via Slack.

  • Create connection string to MySQL DB using PyMySQL
  • Create a SQLAlchemy Engine (ORM)
  • Write a Query statement to fetch data from DB
  • Create a Pandas Dataframe directly from the query using SQLAlchemy
  • Create a Pandas Pivot Table with required Index, Columns and Values
  • Export data to excel or csv file using to_excel and to_csv respectively
  • Share it to stakeholders via slack

The original post showed the Python code as screenshots; a minimal sketch of the same flow follows.
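
This is a hedged sketch of the steps listed above, not the original code: the connection string and the table/column names (pci_db, audit_results, dc, status, switch_name) are hypothetical placeholders.

import pandas as pd
from sqlalchemy import create_engine  # SQLAlchemy engine over the PyMySQL driver

# Hypothetical connection string -- replace user, password, host, and DB name
engine = create_engine('mysql+pymysql://user:password@localhost/pci_db')

# Query statement to fetch data from the DB straight into a DataFrame
query = 'SELECT dc, switch_name, status FROM audit_results'
df = pd.read_sql(query, engine)

# Pivot table: one row per DC, one column per status, counting switches
pivot = pd.pivot_table(df, index='dc', columns='status',
                       values='switch_name', aggfunc='count', fill_value=0)

# Export to Excel and CSV for the stakeholders
pivot.to_excel('audit_pivot.xlsx')
pivot.to_csv('audit_pivot.csv')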

I hope this blog triggered a thought to automate the mundane task of generating pivot tables from spreadsheets or databases. Please reach out to me if you want to check out the full code.

Thank you for stopping by this blog and please share your suggestions below under the Comment section

Up Next – Manage Alerts of VMAX3 array using REST API

Image Courtesy:

Matt Whitt
https://ttorial.com/Images/python-automation-automate-mandane-task-python.jpg

Python Script to Validate VMAX3 Hot Spare Drives Compliance

In VMAX3, as in any enterprise-class storage array, hot spares are used to replace failing or failed disks. Hot spare drives need to be of the same configuration and size (or larger) as the failing/failed disk. VMAX3 with HYPERMAX OS uses Direct Sparing to automatically replace a failing disk. Please click here to learn more…

This script fetches the hot spare drive count per VMAX3 array and generates a dashboard showing the compliance report across all DCs.

VMAX SRP Utilization Report & Upload to MySQL DB using Python

I had an opportunity to attend a Python basics training imparted by Mr. Ashish Gulati. Ashish, a technology coach, was really good at explaining the basics of the language, its data types, and its pros and cons. The way he imparted knowledge made for a unique experience. He was flexible enough to explain some real-time use cases in data analytics, JSON, SSH connectivity, etc., and went into detail about the functionality of various modules.

Thanks a lot, Ashish, for the session: it was informative, with simple topics but an effective learning experience, and the hands-on coding was a big plus. I learned much that will assist me in my workplace. As an outcome, I have already started migrating from Perl to Python. This blog is about my first attempt at writing Python scripts at work.

Scripts written in Python would run from several VMAX3 Enterprise Storage Management Servers located at various data centers.


Brocade SAN switch automated troubleshooting script

What does this Brocade SAN switch automated troubleshooting script do? It performs the following steps; a minimal sketch of the first few checks follows the list.

  1. Ping the SAN switch IPs and check connectivity
  2. Perform a basic health check by running the switchstatusshow command
  3. Check for CRC and EncOut errors by running the porterrshow command
  4. If there are CRC and EncOut errors, clear the port statistics and wait 20 minutes to monitor incremental errors on the switch ports, using the portstatsclear and portshow commands
  5. Check and compare the Lr_in & Ols_out and Lr_out & Ols_in values to detect faulty cable and SFP issues
  6. Generate the supportsave files required by the switch vendor for further analysis or troubleshooting, and FTP the files
  7. Capture all of the above output to a file and generate an email alert to engage an engineer for further troubleshooting or to log a case with the switch vendor
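
Here is a minimal, hedged sketch of the first few checks using Paramiko. The switch hostname, credentials, and port number are hypothetical placeholders, and the real script (linked below) does much more, including the error parsing, supportsave collection, and alerting.

import paramiko  # SSH client library used to reach the Brocade CLI


def run_cmd(client, cmd):
    """Run a command on the switch and return its output as text."""
    stdin, stdout, stderr = client.exec_command(cmd)
    return stdout.read().decode()


# Hypothetical switch details -- replace with real values
host, user, password = 'san-switch-01', 'admin', 'secret'

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(host, username=user, password=password)

# Step 2: basic health check
print(run_cmd(client, 'switchstatusshow'))

# Step 3: capture port error counters; the real script parses the CRC and
# EncOut columns from this output to decide whether to clear and re-check
porterr = run_cmd(client, 'porterrshow')
print(porterr)

# Step 4 (illustrative): clear statistics, then re-check after 20 minutes
run_cmd(client, 'portstatsclear 0')  # clears stats for port 0; loop over all ports in practice
print('Port statistics cleared; re-check with portshow after 20 minutes')

client.close()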

I hope this helps us proactively monitor SAN switches and iron out false positives without human intervention.

To download the script please click here and select ‘Brocade Troubleshooting’ from the right pane under Topics.

***********************************************************************

I believe this could be the last blog post of the year 2016…

Belated Merry Christmas, and wishing all my followers, readers, and visitors a Happy and Prosperous New Year 2017 in advance!!

***********************************************************************

How to hide your important files from people without making Hidden folders
1. Go to Desktop and create a new folder
2. Name the folder Internet Explorer
3. Change the folder icon to Internet Explorer
4. Keep it in a corner of the desktop

Now, no one will open internet explorer 😀

Source: http://www.coolcoder.in/2014/02/10-awesome-programming-jokes-of-all-time.html

 

EMC VMAX Storage Automated Performance Report

This blog is being written as a companion to my previous blog on Automated EMC VMAX Capacity Reporting


Recently, we were asked to develop scripts to capture performance metrics from EMC VMAX storage. There is a 'symstat' command with many attributes for capturing performance metrics from the array, but it did not fulfill all our requirements. After exploring various options and consulting EMC support and the community, we decided to try Unisphere's REST API.

So far I had been using Perl as THE LANGUAGE to talk to my storage arrays, but I was compelled to switch over to Python, which works best with REST APIs and JSON. Additionally, there is a lot of REST API code out there written in Python, so it is easy to 'get inspired' by that code and write customized code for our requirements. This makes me yet another 'Pythonista' 🙂

This is my first ever Python script (version 2.7 on GNU/Debian Linux) to capture EMC VMAX performance metrics retrieved from Unisphere for VMAX (version 8.2) via the REST API. I referred to this Python script while developing a custom script to suit our requirements. Many thanks to Matt Cowger (mcowger) for sharing the script on GitHub.

There are plenty of metrics that can be captured using this script, but for demo purposes I've written simple code that prints a few metrics in CSV format, which can either be imported into Excel for further reporting/charting or injected into a MySQL DB for further processing…

Here is a sample (cropped) output for reference. In the output table, the timestamp (column B) is in epoch format, which is converted to the MySQL datetime format in the INSERT query.
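
For reference, here is a hedged sketch of that conversion using PyMySQL; the table and column names (perf_metrics, serial, ts, ios_per_sec) and the sample row are hypothetical. MySQL's FROM_UNIXTIME() performs the epoch-to-DATETIME conversion inside the INSERT:

import pymysql  # pure-Python MySQL client

# Hypothetical connection details -- adjust to your environment
conn = pymysql.connect(host='localhost', user='user',
                       password='password', db='vmax_perf')

# Serial number, epoch timestamp, and a sample metric value
row = ('000111222333', 1478298958, 12.5)

with conn.cursor() as cur:
    # FROM_UNIXTIME() converts the epoch timestamp to a MySQL DATETIME
    cur.execute(
        "INSERT INTO perf_metrics (serial, ts, ios_per_sec) "
        "VALUES (%s, FROM_UNIXTIME(%s), %s)",
        row,
    )
conn.commit()
conn.close()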


P.S.: I've changed the VMAX serial number in the output, for obvious reasons 🙂

If interested, please reach out to me to get these Python scripts.

Image Courtesy: https://www.emc.com

References: https://github.com/mcowger/randompython/blob/master/symmREST.py

Thanks for stopping by… Please leave your comments / suggestions.