Wednesday, August 29, 2012

A little HOWTO for Software Testing Automation Framework (STAF)

Introduction

The Software Testing Automation Framework (STAF) is an open source, multi-platform, multi-language framework designed around the idea of reusable components, called services (such as process invocation, resource management, logging, and monitoring). STAF removes the tedium of building an automation infrastructure, thus enabling you to focus on building your automation solution. The STAF framework provides the foundation upon which to build higher level solutions, and provides a pluggable approach supported across a large variety of platforms and languages.

Download the latest version of STAF.

About STAF

STAF can be leveraged to help solve common industry problems, such as more frequent product cycles, less preparation time, reduced testing time, more platform choices, more programming language choices, and increased National Language requirements. STAF can help in these areas since it is a proven and mature technology, promotes automation and reuse, has broad platform and language support, and provides a common infrastructure across teams. STAF services are also provided to help you to create an end-to-end automation solution. By using these services in your test cases and automated solutions, you can develop more robust, dynamic test cases and test environments.

Sample STAF commands

Display STAF version
  Command: staf local misc version
  Details: Self-explanatory.

Display detailed local system information
  Command: staf local var list
  Details: Outputs details about the system: RAM, OS version, architecture, variables, et cetera.

Display STAF trust levels
  Command: staf local trust list
  Details: Outputs a list of the other hosts, networks and/or protocols that the local STAF server trusts to execute STAF commands.

Grant STAF trust level
  Command: staf local trust set machine tcp://10.31.*.* level 5
  Details: The local machine grants the highest permission level (5) to all machines with STAF installed on the 10.31.x.x subnet.

Revoke STAF trust level
  Command: staf local trust delete machine tcp://10.31.*.*
  Details: The local machine revokes all permissions from hosts with STAF installed on the 10.31.x.x subnet.

Copy remote files/directories
  Command: staf neptune fs copy file "C:\file.txt" todirectory /opt
  Details: Copies the file "C:\file.txt" from the host "neptune" to the local directory "/opt".
  Command: staf netra10ga fs copy directory /opt/scripts todirectory /opt/scripts
  Details: Copies the directory "/opt/scripts" with its contents from the host "netra10ga" to the local machine's "/opt/scripts". Creates the directory if it does not exist.

Execute commands remotely
  Command: staf mercury process start command "\\\share\inc\Scripts\AutoIT\doit.exe"
  Details: Launches the program "doit.exe" on the host "mercury" from a UNC share located on the host "share".

STAF startup
  Command: nohup /usr/local/staf/bin/STAFProc &
  Details: Universal command across all OSes to start the STAF server daemon. Or use the following OS-specific commands:
    • HP-UX: /etc/rc.config.d/rc.staf start
    • ESX, RedHat: service staf start
    • AIX: /etc/rc.staf start
    • SUSE: /etc/init.d/staf start
    • Mac: /Library/StartupItems/STAF/STAF start
    • Solaris: /etc/rc2.d/S99staf start

STAF stop
  Command: staf local shutdown shutdown
  Details: Universal command across all OSes to stop the STAF server daemon.
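
As a quick smoke test of a fresh installation, the short sequence below (using the same hypothetical host "neptune" as above) exercises the PING and PROCESS services. Treat it as a sketch and check the STAF service reference for the exact option names supported by your version:

# Is the local STAF daemon up? Expected response: PONG
staf local ping ping

# Can we reach a remote STAF host (trust level permitting)?
staf neptune ping ping

# Run a shell command remotely, wait for completion, and return its stdout
staf neptune process start shell command "ls /opt" wait returnstdout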

Saturday, August 18, 2012

ClusterSSH on steroids: tips and tricks

Today I would like to share a combination of tips, tricks and gems that are possible with ClusterSSH. Duncan Ferguson, the creator of cssh, came up with this invaluable tool, which helps many system administrators and QA testers manage many systems in parallel.

Introduction

ClusterSSH controls a number of xterm windows via a single graphical console window, allowing commands to be run interactively on multiple servers over an SSH connection. ClusterSSH (cssh) is written in Perl, which makes it easy to modify the source code.

Demo

[Demo video]

About ClusterSSH

The ClusterSSH (cssh) command opens an administration console and an xterm to each specified host. Any text typed into the administration console is replicated to all windows; each window may also be typed into directly. Because commands are performed on all hosts at once, testing is faster and output is easy to compare between tested nodes. Connections are opened via SSH, therefore a correctly installed and configured SSH installation is required. Take extra care when editing system files such as /etc/inet/hosts, as lines are not necessarily in the same order on every host: assuming that, say, line 5 is the same across all servers and modifying it blindly is dangerous. It is better to search for the specific line to be changed and double-check before the changes are committed. Take extra caution when executing destructive commands as root on a large number of physical servers.
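
For example, instead of editing "line 5" in every window, confirm the exact content first and replace by pattern. A sketch (the alias names here are hypothetical, and the in-place -i option assumes GNU sed):

# Confirm what will change, and on which line, on every host first
grep -n 'oldalias' /etc/inet/hosts
# Replace by content, keeping a backup, rather than by line number
sed -i.bak 's/oldalias/newalias/' /etc/inet/hosts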

About DevilsPie

DevilsPie (devilspie) is a program that detects windows as they are created and performs actions on them if they match a set of criteria. ClusterSSH is used in conjunction with DevilsPie to control the position of the xterm windows and the administration console, and to hide window decorations such as the title bar and the minimize/maximize/close buttons. This in turn provides a clean look and very efficient use of monitor space.

[Devil's Pie in action]

Planning

To get the most out of ClusterSSH, please consider the recommendations below.
  1. Because ClusterSSH sorts xterm windows in alphabetical order, it is recommended to give hosts appropriate hostnames so that sorting groups hosts with similar operating systems together. For instance, if the hostnames of all Solaris machines start with "s" and those of all RedHat machines start with "r", then once ClusterSSH is launched, all Solaris and all RedHat machines will be grouped together. This simplifies comparing output across similar operating systems.
  2. It is preferable for test hosts to have permanent IP addresses, entered either in DNS or in the local /etc/hosts file of the operating system that will host ClusterSSH (see the example after this list). Working with a large number of DHCP hosts can be inconvenient and time-consuming because their IP addresses change.
  3. The ClusterSSH setup described in this document is installed on Ubuntu 9.04 Desktop OS running on VMware Workstation 6.5.2. ClusterSSH can also be used with other operating systems.
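
To illustrate recommendations 1 and 2, a hypothetical /etc/hosts fragment on the ClusterSSH host might look like this (addresses and hostnames are made up):

# /etc/hosts on the ClusterSSH host
10.31.1.11   r-rhel5-x86     # RedHat hostnames start with "r"...
10.31.1.12   r-rhel6-x64
10.31.1.21   s-sol10-sparc   # ...and Solaris with "s", so cssh tiles them together
10.31.1.22   s-sol10-x86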

Installation process

1.  Install VirtualBox
2.  Install Ubuntu 10 Desktop OS
    a.  Note: use a bridged network connection.
3.  Apply all the latest updates and patches to Ubuntu 10 Desktop OS
    a.  To check for updates from the desktop menu, go to:
        System → Administration → Update Manager
4.  Install VirtualBox Guest Additions
5.  Install the SSH server
    a.  Execute: "sudo apt-get install ssh"
    b.  Create public keys; execute "ssh-keygen -d" and respond with <ENTER> to the questions.
6.  Install ClusterSSH
    a.  Execute: "sudo apt-get install clusterssh"
7.  Install DevilsPie
    a.  Execute: "sudo apt-get install devilspie"
8.  Install the Perl module[1] XML::Simple
    a.  To install:
        i.   # perl -MCPAN -e shell
        ii.  cpan[1]> install XML::Simple
9.  Create .csshrc and .csshrc_send_menu in your user's home directory


[1] You may need additional Perl modules; just follow the provided commands to install them.

Important files

$HOME/Desktop/Launch
  Custom ClusterSSH v4 launch shell script: contains the cssh command with a list of target hosts.
$HOME/.csshrc
  Custom ClusterSSH configuration file: contains ClusterSSH configuration parameters.
/etc/cssh/cssh.ds
  Custom DevilsPie configuration file: contains DevilsPie configuration parameters.
$HOME/.csshrc_send_menu
  Menu XML configuration file: contains menus and folders populated with commands.


ClusterSSH launch script “Launch”

This script contains the list of servers to connect to via SSH. Order is irrelevant, since cssh sorts all windows alphabetically by name at launch.

#!/bin/bash
# ClusterSSH launcher: cleans up leftovers from any previous session,
# starts devilspie for window placement, then opens cssh to the target hosts.

CSSHPID="/tmp/cssh.pid"

# If a previous session left a PID file behind, kill its leftover processes
if [ -f $CSSHPID ]; then
 for pid in `cat $CSSHPID`
 do
     pkill -P $pid      # children of the previous launcher
     pkill xterm        # stray xterm windows
     pkill devilspie    # stray devilspie instance
 done
 rm -f $CSSHPID
fi

echo $$ >> $CSSHPID             # record this launcher's PID for the next run
devilspie /etc/cssh/cssh.ds &   # window placement daemon (see cssh.ds below)

cssh -l root yourhost1 yourhost2 yourhost3 yourhost4

# cssh has exited: clean up
pkill devilspie
rm -f $CSSHPID

ClusterSSH XML configuration file “.csshrc_send_menu“

$HOME/.csshrc_send_menu
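
The XML itself is not reproduced here. As a rough sketch, the file defines menus of canned commands that cssh can send to all windows; the menu titles and commands below are hypothetical, and the exact schema is documented in the cssh man page under send_menu_xml_file:

<send_menu>
  <menu title="Diagnostics">
    <menu title="Show hostname">
      <command>hostname</command>
    </menu>
    <menu title="Show uptime">
      <command>uptime</command>
    </menu>
  </menu>
</send_menu>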
 


ClusterSSH configuration file “.csshrc“

.csshrc
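
The original file contents are not reproduced here. As a sketch, a .csshrc for this setup might carry key = value parameters like the following; the values are illustrative assumptions, and the csshrc man page lists the full set of options your version supports:

# Tile the xterm windows automatically, filling rows to the right
window_tiling = yes
window_tiling_direction = right
# Size and font of each xterm
terminal_size = 80x24
terminal_font = 6x13
# Where to place the administration console
console_position = +200+0
# Quit cssh when the last terminal window closes
auto_quit = yes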

Devilspie configuration file “cssh.ds“

cssh.ds
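
Again, the original file contents are not reproduced here. As a sketch, a devilspie rule file for this setup might look like the following; devilspie uses Lisp-style s-expressions, and the window-class match and console geometry below are assumptions to adapt:

; Strip decorations from every xterm that cssh opens
(if (is (window_class) "XTerm")
    (begin
      (undecorate)
      (skip_tasklist)))

; Pin the cssh administration console to a fixed position
(if (contains (window_name) "CSSH")
    (geometry "+300+900"))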

 

Secure Shell (ssh) key-based authentication

ClusterSSH uses the SSH protocol as its transport to connect to remote test systems. To simplify the connection process and avoid being prompted for a password every time the ClusterSSH shell scripts are launched, users may set up SSH public key authentication. If public keys have already been created on the Ubuntu desktop, simply change to the ".ssh" directory in your home directory and issue the following command: "ssh-copy-id root@<system>", where "<system>" is the target system you plan to connect to using ClusterSSH. Type in root's password, and this command will copy the public key from your Ubuntu desktop login to the remote system. The next time ClusterSSH is launched, it will connect directly to your test system(s) without prompting for a password - saves time!
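
For several targets, a small loop pushes the key to all of them in one pass (the hostnames are the same placeholders used in the Launch script above):

cd ~/.ssh
for h in yourhost1 yourhost2 yourhost3 yourhost4; do
    ssh-copy-id root@$h
done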

Saturday, July 28, 2012

How to implement physical server snapshots on Apple Mac OS X 10.x

Concept

Here is how it works:
  • Mac OS X physical snapshots can be configured with a single disk
  • Partition your disk to have at least two partitions, "MASTER" and "CLONE" (a quick way to verify the layout is shown after this list)
  • Install a clean Mac OS X on MASTER. This will become your "MASTER" disk.
  • Include the required packages (see Prerequisites, below)
  • Perfect the OS: install any software that you would like to have as part of the standard OS
  • Set up the snapshot scripts and create the snapshot
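
Before wiring up the scripts, confirm the partition layout and the current boot device. The device nodes in the comments match the examples used throughout this post:

# List all disks and partitions; note the device nodes for MASTER and CLONE
diskutil list
# Which partition is the system currently blessed to boot from?
/usr/sbin/bless --getBoot        # e.g. /dev/disk0s2 (MASTER in this post)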

Banner text

The banner below will be displayed during the reimaging process to anyone trying to connect to the system via SSH, console or telnet before the snapshot recovery process finishes. Create /etc/banner_default (touch /etc/banner_default) with the following text:
WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
 
       THE SYSTEM IS BOOTED TO AN ORIGINAL SNAPSHOT DISK
               SNAPSHOT RECOVERY IS IN PROGRESS
 
       TO PRESERVE INTEGRITY OF THE SYSTEM DO NOT LOGIN!
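
One way to create the file with exactly this text is a quoted here-document:

cat > /etc/banner_default <<'EOF'
WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING

       THE SYSTEM IS BOOTED TO AN ORIGINAL SNAPSHOT DISK
               SNAPSHOT RECOVERY IS IN PROGRESS

       TO PRESERVE INTEGRITY OF THE SYSTEM DO NOT LOGIN!
EOF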

Enable banner for SSH

To enable the banner for SSH, simply uncomment "Banner" in /etc/ssh/sshd_config and point it at /etc/banner:
# no default banner path
Banner /etc/banner
NOTE: During the reimaging process, /etc/banner_default is copied to /etc/banner.

Scripts and entries

You will need to add scripts and append entries to standard configuration files as part of this process; feel free to customize as you wish. Append the following to /etc/profile. It is a simple notification mechanism for a user trying to log in to the system during a snapshot restore.
#…
PATH=$PATH:/usr/local/bin
export PATH
#…

#master (YOUR ORIGINAL SNAPSHOT DISK)
MASTER="/dev/disk0s2"

if [ `/usr/sbin/bless --getBoot` == ${MASTER} ]; then
 clear
 cat /etc/banner_default
 while true
  do
   echo "Are you sure you still wish to login? (y or n) :\c"
   read CONFIRM
   case $CONFIRM in
     y|Y|YES|yes|Yes) break ;;
     n|N|no|NO|No)
     echo Aborting - you entered $CONFIRM
     exit
     ;;
    *) echo Please enter only y or n
   esac
  done
else
/bin/rm -f /etc/banner
fi

Create /usr/local/bin/restore_snapshot

Create the file /usr/local/bin/restore_snapshot with execute permissions (500). This is the script that you will execute to request a snapshot restore.
#!/bin/sh
#master (YOUR ORIGINAL SNAPSHOT DISK)
MASTER="/dev/disk0s2"
#clone  (YOUR TEST DISK)
CLONE="/dev/disk0s3"

# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
   echo "This script must be run as root" 1>&2
   exit 1
fi

if [ `/usr/sbin/bless --getBoot` == ${CLONE} ]
then
        echo "System is booted to the secondary disk"
        echo "Changing boot device priority to boot to the master disk for snapshot restore..."
        echo "INFO: Cancel by CRTL-C in 15 seconds..."
        sleep 15
        set -x
        /usr/sbin/bless --device ${MASTER} --setBoot
        set +x
        echo "Rebooting..."
        /sbin/reboot
else
        /usr/local/bin/resnapshot
fi

Mac OS X /usr/local/bin/resnapshot

Create /usr/local/bin/resnapshot (chmod 500). This script is called during OS boot (via the launchd job below) and by restore_snapshot; it checks whether a snapshot restore must start after the reboot.
#!/bin/sh

#master (YOUR ORIGINAL SNAPSHOT DISK)
MASTER="/dev/disk0s2"
#clone  (YOUR TEST DISK)
CLONE="/dev/disk0s3"

# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
   echo "This script must be run as root" 1>&2
   exit 1
fi

echo "INFO: You can still cancel by pressing CRTL-C within 15 seconds..."
sleep 15

if [ `/usr/sbin/bless --getBoot` == ${MASTER} ]
then
        echo "INFO: System is booted to the master image disk."
        echo "INFO: Placing banner."
        set -x
        cp /etc/banner_default /etc/banner
        set +x

        echo "INFO: Restoring snapshot. The system will reboot automatically, if successful."
        echo "INFO: Please wait..."
        echo "INFO: Placing FirstReboot into startup"
        /bin/mkdir -p /Library/StartupItems/FirstBoot
        echo "INFO: Creating FirstReboot script"
        /usr/bin/touch /Library/StartupItems/FirstBoot/FirstBoot
        echo "INFO: Changing Permissions on FirstReboot script"
        /bin/chmod +x /Library/StartupItems/FirstBoot/FirstBoot
        echo "INFO: Populating on FirstReboot script"
        echo "/bin/rm -rf /Library/StartupItems/FirstBoot" > /Library/StartupItems/FirstBoot/FirstBoot
        echo "/sbin/reboot" >> /Library/StartupItems/FirstBoot/FirstBoot
        # Newer asr versions take long options (--erase); older ones take single-dash options
        /usr/sbin/asr -h 2>&1 | grep '\-\-erase'
        if [ $? -eq 0 ]; then
                ASRCMD="/usr/sbin/asr -source ${MASTER} -target ${CLONE} --erase --updatebless --noprompt"
        else
                ASRCMD="/usr/sbin/asr -source ${MASTER} -target ${CLONE} -erase -updatebless -noprompt"
        fi

        echo $ASRCMD
        set -x
        ${ASRCMD}

        if [ $? -eq 0 ]; then
                echo "INFO: Restore succeeded! Setting boot to clone..."
                /usr/sbin/bless --device ${CLONE} --setBoot
                /usr/sbin/bless --getBoot
                echo "INFO: Removing Local FirstReboot directory"
                /bin/rm -rf /Library/StartupItems/FirstBoot
                echo "Rebooting Now..."
                /sbin/reboot
        else
                echo "INFO: Snapshot restore failed. Aborting..."
                exit 1
        fi
else
        echo "INFO: System is booted to the clone. Exiting..."
        set -x
        /bin/rm -f /etc/banner
        /usr/sbin/diskutil unmount ${MASTER}
        set +x
fi

Mac OS X job scheduling

defaults write /Library/LaunchDaemons/com.globalitadmins.resnapshot Label com.globalitadmins.resnapshot
defaults write /Library/LaunchDaemons/com.globalitadmins.resnapshot ProgramArguments -array "/usr/local/bin/resnapshot" 
defaults write /Library/LaunchDaemons/com.globalitadmins.resnapshot RunAtLoad -bool true
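
These defaults write commands create /Library/LaunchDaemons/com.globalitadmins.resnapshot.plist, so launchd runs the resnapshot check at every boot. Assuming a standard launchd setup, you will likely also need to fix the plist's ownership and permissions, and you can load it once without rebooting:

chown root:wheel /Library/LaunchDaemons/com.globalitadmins.resnapshot.plist
chmod 644 /Library/LaunchDaemons/com.globalitadmins.resnapshot.plist
launchctl load /Library/LaunchDaemons/com.globalitadmins.resnapshot.plist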

Create first snapshot

To initiate the first cloning process, simply execute:

/usr/local/bin/restore_snapshot
Mac OS X Restore Snapshot Sample Output
mini106-4:bin root# restore_snapshot 
INFO: You can still cancel by pressing CTRL-C within 15 seconds...
INFO: System is booted to the master image disk.
INFO: Placing banner.
+ cp /etc/banner_default /etc/banner
+ set +x
INFO: Restoring snapshot. The system will reboot automatically, if successful.
INFO: Please wait...
INFO: Placing FirstBoot into startup
INFO: Creating FirstBoot script
INFO: Changing permissions on FirstBoot script
INFO: Populating FirstBoot script
+ /usr/sbin/asr -source /dev/disk0s2 -target /dev/disk0s3 --erase --updatebless --noprompt
        Validating target...done
        Validating source...done
        Erasing target device /dev/disk0s3...done
        Validating sizes...done
        Copying    ....10....20....30....40....50....60....70....80....90....100
+ '[' 0 -eq 0 ']'
+ echo 'INFO: Restore succeeded! Setting boot to clone...'
INFO: Restore succeeded! Setting boot to clone...
+ /usr/sbin/bless --device /dev/disk0s3 --setBoot
+ /usr/sbin/bless --getBoot
/dev/disk0s3
+ echo 'INFO: Removing local FirstBoot directory'
INFO: Removing local FirstBoot directory
+ /bin/rm -rf /Library/StartupItems/FirstBoot
+ echo 'Rebooting Now...'
Rebooting Now...
+ /sbin/reboot

And you are all set!

Important commands

#CURRENT BOOT DISK:       bless --getBoot
#ALL DISKS/PARTITIONS:    diskutil list

#PARTITION WITH GUI: use Disk Utility
#MOUNT MASTER WHEN BOOTED ON TEST:
#  diskutil mount /dev/disk0s2
#UNMOUNT MASTER WHEN BOOTED ON TEST:
#  diskutil unmount /dev/disk0s2


Friday, July 27, 2012

How to implement physical server snapshots on AIX 5.2, 5.3, 6.1, 7.1

Concept

Here is a short description of how the process works:
  • Make sure your system has at least 2 physical hard drives (hdisk0 and hdisk1).
  • Install a clean AIX OS on hdisk0. This will become your "MASTER" disk.
  • Include the required packages (see Prerequisites, below)
  • Perfect the OS: install any software that you would like to have as part of the standard OS
  • Set up the snapshot scripts and create the snapshot

Prerequisites

  • All AIX versions require two disks. One disk, which we'll call "MASTER", holds the baseline OS. The other, "CLONE", is the disk that MASTER restores the snapshot to, effectively overwriting it every time you restore your snapshot.
  • For LPARs, allocate two virtual disks in VIOS.
  • All commands must be executed as root.

AIX 5.2 software

Install the bos.alt_disk_install fileset
(verify with: lslpp -L bos.alt_disk_install.rte)

AIX 5.3 software

Install the bos.alt_disk_copy filesets (verify with: lslpp -L bos.alt_disk_copy.rte):
 bos.alt_disk_install.boot_images
 bos.alt_disk_install.rte
 bos.msg.en_US.alt_disk_install.rte
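
If lslpp shows a fileset is missing, it can usually be installed from the AIX installation media. A sketch (the media device path is an assumption, adjust to your environment):

# Verify whether the fileset is already installed
lslpp -L bos.alt_disk_copy.rte
# Install from the first CD/DVD drive, expanding filesystems and accepting licenses
installp -aXY -d /dev/cd0 bos.alt_disk_copy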

AIX 6.1, 7.1 software

  • All required packages are present in the default installation

Banner text

The banner below will be displayed during the reimaging process to anyone trying to connect to the system via SSH, console or telnet before the snapshot recovery process finishes. Create /etc/banner_default (touch /etc/banner_default) with the following text:
WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
 
       THE SYSTEM IS BOOTED TO AN ORIGINAL SNAPSHOT DISK
               SNAPSHOT RECOVERY IS IN PROGRESS
 
       TO PRESERVE INTEGRITY OF THE SYSTEM DO NOT LOGIN!

Enable banner for SSH

To enable the banner for SSH, simply uncomment "Banner" in /etc/ssh/sshd_config and point it at /etc/banner:
# no default banner path
Banner /etc/banner
NOTE: During the reimaging process, /etc/banner_default is copied to /etc/banner.

Scripts and entries

You will need to add scripts and append entries to standard configuration files as part of this process. Feel free to customize as you wish.

Append /etc/inittab

The line below is used to check, at boot, whether the user requested a snapshot restore before the reboot.
resnap:2:once:/etc/rc.resnapshot >/dev/console 2>&1

Append /etc/profile

A simple notification mechanism for a user trying to log in to the system during a snapshot restore.
#Note: MASTER DISK hdisk0 is intentionally hard coded here
PATH=$PATH:/usr/local/bin
export PATH
if [ `bootinfo -b` == "hdisk0" ]; then
 clear
 cat /etc/banner_default
 while true
  do
   echo "Are you sure you still wish to login? (y or n) :\c"
   read CONFIRM
   case $CONFIRM in
     y|Y|YES|yes|Yes) break ;;
     n|N|no|NO|No)
     echo Aborting - you entered $CONFIRM
     exit
     ;;
    *) echo Please enter only y or n
   esac
  done
fi

Create /usr/local/bin/restore_snapshot

Create the file /usr/local/bin/restore_snapshot with execute permissions (500). This is the script that you will execute to request a snapshot restore.
#!/bin/sh

#Note: MASTER DISK hdisk0 is intentionally hard coded
# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
   echo "This script must be run as root" 1>&2
   exit 1
fi

if [ `bootinfo -b` != "hdisk0" ]
then
        echo "System is booted to the secondary hdisk"
        echo "Changing boot device priority to boot to the master hdisk for snapshot restore..."
        set -x
        bootlist -m normal hdisk0     
        set +x
        echo "Rebooting..."
        /usr/sbin/shutdown now -r
else
        /etc/rc.resnapshot
fi
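
To double-check which disk the system booted from, and which disk it will boot from next, use the native commands:

bootinfo -b              # disk the system booted from (e.g. hdisk0)
bootlist -m normal -o    # current normal-mode boot order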

AIX 5.2 /etc/rc.resnapshot

Create /etc/rc.resnapshot (chmod 555). This script will be called during OS boot. It will check whether the snapshot restore must start after the reboot.
#!/bin/sh

#Note: MASTER DISK hdisk0 is intentionally hard coded
#Note: CLONE DISK: hdisk1 is intentionally hard coded

if [ `/usr/sbin/bootinfo -b` == "hdisk0" ];then
        echo "INFO: System is booted to the master image hdisk."
        echo "INFO: Placing banner."
        set -x
        cp /etc/banner_default /etc/banner
        set +x
        echo "INFO: Removing altinst_rootvg"
        set -x
        /usr/sbin/alt_disk_install -X altinst_rootvg
        set +x
        echo "INFO: Restoring snapshot. If successful, the system will reboot automatically."
        echo "INFO: Please wait..."
        set -x
        /usr/sbin/alt_disk_install -C hdisk1
        if [ $? -eq 0 ]; then
                echo "rootvg clone operation suceeded!"
                echo "Rebooting..."
                /usr/sbin/reboot
        else
                echo "rootvg clone operation failed!"
                echo "Aborting..."
        fi
        set +x
else
        set -x
        /usr/bin/rm -f /etc/banner
        set +x
fi

AIX 5.3, 6.1, 7.1 /etc/rc.resnapshot

Create /etc/rc.resnapshot (chmod 555). Note that alt_disk_copy -r requests a reboot from the new disk automatically once the copy completes, so this version needs no explicit reboot call.
#!/bin/sh

#Note: MASTER DISK hdisk0 is intentionally hard coded
#Note: CLONE DISK: hdisk1 is intentionally hard coded

if [ `/usr/sbin/bootinfo -b` == "hdisk0" ];then
        echo "INFO: System is booted to the master image hdisk."
        echo "INFO: Placing banner."
        set -x
        cp /etc/banner_default /etc/banner
        set +x

        echo "INFO: Removing altinst_rootvg"
        set -x
        /usr/sbin/alt_rootvg_op -X altinst_rootvg
        set +x
        echo "INFO: Restoring snapshot. If successful, the system will reboot automatically."
        echo "INFO: Please wait..."
        set -x
        /usr/sbin/alt_disk_copy -d hdisk1 -r
        set +x
else
        set -x
        /usr/bin/rm -f /etc/banner
        set +x
fi

Create first snapshot

Please note that you do not "create" the initial snapshot per se. Your initial snapshot is your current OS on hdisk0. What you are doing is copying the good disk, hdisk0, onto the second disk, hdisk1, which, after the whole process is done, becomes the disk where you test. Once testing is finished, you simply execute the restore_snapshot script to request a restore of the snapshot. The restore process reboots the OS and automatically starts overwriting the test disk with the clean OS image. Once that finishes, the system reboots once again and you are back on the clone disk.

To initiate the first cloning process, simply execute:

/usr/local/bin/restore_snapshot
And you are all set!

Important commands

#Display AIX disk information
#CURRENT BOOT DISK:      bootinfo -b
#ALL DISKS:              lspv
#DISK DETAILS:           lscfg -vl hdisk0

#MOUNT MASTER WHEN BOOTED ON TEST
#alt_rootvg_op -W -d hdisk0

#UNMOUNT MASTER WHEN BOOTED ON TEST
#alt_rootvg_op -S
#(alternatively specify -t to rebuild the boot image;
# not recommended for minor changes)


How to implement physical server OS snapshot

In one of my assignments, I needed to come up with a way for a quality assurance department to restore the operating systems of an entire QA environment to their original state. This was needed to ensure that the QA process used a clean OS baseline during each test set iteration. There were about 40 systems of various flavors, versions, CPU architectures (32-bit and 64-bit), file system types, and various combinations of UNIX (Solaris SPARC/i386, AIX including LPARs, HP-UX), Linux (RedHat, SuSE), Mac OS X, and Windows server systems. All these systems needed to return to their original state after QA engineers finished testing each iteration of a software release. Virtualization and VMware ESX snapshots would help a little; however, the QA process required testing on physical servers as well as virtual machines.

Physical Snapshots - task requirements

  • Commonality
    • works the same way across all platforms
    • uses the same interface
  • Supportability
    • Uses supported, native OS methods
  • Complexity
    • easy to setup
    • easy to use
    • easy to maintain (patch, add / remove features)
    • requires simple skill set
  • Reliability
    • not susceptible to network outages
    • no single point of failure that affects restores for all systems
    • preserves snapshot integrity
  • Speed
    • Close to what it takes to recover a virtual machine
  • Cost
    • Low maintenance
    • Should not tie up scarce physical QA machines

At first I had a few options and ideas in mind; however, none of them used a universal approach to reimaging. Bare-metal restore software was very costly and required another server to perform the restore (one server per OS). Imagine also the kind of network traffic and load that would be generated if 40 systems needed to be reimaged at the same time; multiply that by 10 QA engineers with 40 servers each, and reimaging would consume enormous network resources. No single open source tool was capable of handling everything. The reliability and speed of the restore process were other important concerns.

In the next series of posts, I will go over the "how to" steps for implementing snapshot recovery on virtually any physical server OS. Even if a particular OS is not covered here, you will understand the approach well enough to implement such a mechanism on any other OS. Also, feel free to send me anything that happens to be missing to complete the collection.