Tuesday, May 31, 2016

Announcing "saltcheck"!  A new way to test the logic of salt states and highstates dynamically

Check yourself before (and after) you Salt yourself

salt_check on github 

Background
After wrapping up my project 'Auto Scaling and On-Demand Deployment of Custom Applications using SaltStack' with a presentation at SaltConf16 this year, testing salt states was the natural next step to improve and fortify the hospitalityPulse infrastructure.

Several presentations at SaltConf16 covered testing salt states, and Salt has built-in functionality to aid in testing states and logic. However, these approaches only cover what will happen before a state runs.

The best approach I found required using ServerSpec or TestInfra.  Unfortunately, this just didn't feel like the right answer.

At some point it dawned on me that this is an open source project and I could contribute in this area, graciously supported by hospitalityPulse.


Goal
The main objective of salt_check is to provide an easy and fast (runs in parallel) way to test the logic of salt states and highstates - a loose cousin of unit tests, dedicated to salt states. The solution should make writing tests as easy as running salt execution modules, and no programming knowledge should be required to use this tool.

Here's how it works:
  1. Create a state as a directory (e.g.  /srv/salt/apache/init.sls)
  2. Create a sub-directory of the state directory and name it 'salt-check-tests'
  3. Put one or more test files in the 'salt-check-tests' directory, each with a file name ending in .tst

YAML Syntax for one test (replace text in caps):
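(The original template did not survive in this copy of the post; based on the test format in the salt_check repo, a test skeleton looks roughly like this - field names may differ slightly between versions:)

```yaml
TEST_NAME:
  module_and_function: MODULE.FUNCTION
  args:
    - "FUNCTION_ARGUMENT"
  kwargs:
    KEYWORD: VALUE
  assertion: ASSERTION_TYPE
  expected-return: EXPECTED_VALUE
```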

Quick example of a saltcheck test
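(The example screenshot is missing here; a hypothetical test for an apache state might look like the following - the assertion names follow the unittest-style names saltcheck uses:)

```yaml
apache-service-running:
  module_and_function: service.status
  args:
    - "apache2"
  assertion: assertTrue
```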



How do I run a saltcheck test?  Easily.
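(The command screenshot is gone; assuming the function names from the salt_check repo, running the tests for a single state, or for everything in the highstate, looks something like this:)

```shell
# run the tests for one state on all minions
salt '*' saltcheck.run_state_tests apache

# run the tests for every state in the highstate
salt '*' saltcheck.run_highstate_tests
```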



What tests are available:
All salt execution module functionality is available.  If there is functionality you need that is not in the 407+ salt execution modules, just create a new one.
Salt Modules

Pros of saltcheck:
  • supports salt renderers (e.g. yaml + jinja = dynamic tests)
  • runs in parallel across servers
  • makes use of salt event bus - very fast
  • supports all salt cli targeting
  • supports testing over salt-ssh
  • more dynamic than serverspec/testinfra
  • no required additional infrastructure to use

Cons of saltcheck:
  • salt states implemented as single *.sls files (e.g. apache.sls) must be converted to directories (apache/init.sls) so a 'salt-check-tests' sub-directory can live beneath them


How to get started:
  1. Clone the github repository locally
  2. Copy the saltcheck.py file to your custom modules directory on your salt master in the "file_roots" location (typically in /srv/salt/_modules)
  3. Sync the modules to your minions (salt '*' saltutil.sync_modules)
  4. Check that saltcheck is available  (salt '*' saltcheck -d)
  5. Write your first saltcheck test suite
** There is an example of how to test apache on ubuntu in the "examples" directory of the github repo.






Monday, November 18, 2013

Ranked Choice Voting

On November 5 Minneapolis, Minnesota held elections.  The process of voting could not have been easier.  In less than five minutes I participated in our democracy.

Finding out who won the various offices up for election was another story.  Our mayoral race had a large field: 35 candidates in total (based on the vote data).

It took over two days for the votes to be tabulated by hand using ranked choice voting.  I thought it could be done more quickly. 

And, having a little free time yesterday I wrote up a sample implementation of ranked choice voting.


For the curious, the sample implementation program to calculate the election ran the actual mayoral election data in roughly 18 seconds!  


Programs:

election.py 
provides:   ranked-choice voting
input:  csv file containing all votes for an election (1 vote per line, a line consists of 1+ choices for a candidate)
output:  run it and see
e.g.  election.py  election_csv_file.csv
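(election.py itself is in the repo; the heart of it is ordinary instant-runoff counting, which can be sketched like this - a simplified version that ignores the repo's exact tie-breaking and round-by-round reporting:)

```python
from collections import Counter

def ranked_choice_winner(ballots):
    """Instant-runoff count.  Each ballot is a list of candidates in
    preference order.  Repeatedly eliminate the last-place candidate
    until someone holds a majority of the still-active ballots."""
    ballots = [list(b) for b in ballots if b]
    eliminated = set()
    while True:
        # Count each ballot toward its highest-ranked surviving choice
        tally = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice not in eliminated:
                    tally[choice] += 1
                    break
        if not tally:
            return None  # every ballot exhausted, no winner
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader  # majority of active ballots
        # Otherwise drop the current last-place candidate and recount
        eliminated.add(min(tally, key=tally.get))
```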

election_votes_generator.py
provides:   a simple way to create a csv of ranked-choice votes
input:  arguments  (OUTPUT_FILE VOTERS CANDIDATES ALLOWED_CHOICES)
e.g.  election_votes_generator.py out.csv  200000  15  3
This would create a simulated set of votes containing 200000 votes, with 15 candidates, where a vote is for 3 candidates in ranked order.
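(The generator is equally simple; a minimal sketch of the same idea - the candidate names and CSV layout here are my guesses, not necessarily what the repo uses:)

```python
import csv
import random

def generate_votes(path, voters, candidates, allowed_choices):
    """Write one ranked vote per line: each line lists
    `allowed_choices` distinct candidates in preference order."""
    names = ['candidate_%d' % i for i in range(1, candidates + 1)]
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        for _ in range(voters):
            # a vote is a random ranking of `allowed_choices` candidates
            writer.writerow(random.sample(names, allowed_choices))
```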

Note:  I did not realize the actual votes had been posted recently.  Therefore, I wrote up this election simulator.  I have added the actual voting information of the mayoral race to the git repo for reference.


Reference links:




Friday, September 27, 2013

Building custom saltstack modules


Building custom saltstack modules 

This blog post was inspired by Joseph Hall's (Senior Engineer at SaltStack, Inc.) presentation about writing custom SaltStack modules.
Thank you Joseph!  http://www.youtube.com/watch?feature=player_detailpage&v=YP73LM8mzL0

SaltStack comes with a large number of extremely useful modules.  
http://docs.saltstack.com/ref/modules/all/

One very handy module is the cmdmod module.
http://docs.saltstack.com/ref/modules/all/salt.modules.cmdmod.html#module-salt.modules.cmdmod

The cmdmod module gives the ability to easily run shell commands against a set of minions.
This provides an ssh like ability across one or more servers in parallel, and is great for ad-hoc commands.

So, why would we need to create a custom SaltStack module?  Eventually, a need arises for new functionality which is not already included in the SaltStack project and does not fit neatly into an ad-hoc shell command.

My example:  Gather metadata about an AWS EC2 instance using a custom SaltStack module.

Background:
AWS metadata is very useful.  A running server configured for a purpose might need to be accessed directly, or identified for use by another service.

Example 1:  To ssh directly to an EC2 instance we need a DNS address or IP address
Example 2:  To add an EC2 instance to a load balancer, we need its AWS instance id.

Amazon recognized this need and created a way for an EC2 instance to retrieve data about itself.  By issuing a specific HTTP GET request from an EC2 instance, the HTTP response returned contains metadata specific to that instance.
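For example, from a shell on the instance itself (169.254.169.254 is the link-local metadata address, reachable only from inside the instance):

```shell
# list the available top-level metadata keys
curl http://169.254.169.254/latest/meta-data/

# fetch one specific item
curl http://169.254.169.254/latest/meta-data/public-hostname
```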

AWS reference on metadata:

SaltStack to the rescue!  We can run a command on an EC2 instance without knowing its DNS name or IP address, and gather metadata!  This means we do not need to log into the AWS console to get information about an EC2 instance.  And furthermore, we can automate the consumption of such information.

Some metadata in which we might be interested:
  • public-hostname (public dns address)
  • public-ipv4 (accessible outside aws)
  • instance-id (the unique id of an instance)
  • placement (availability zone)
  • security-groups (names of security groups applied to the instance)

Building the custom module


First:  Choose a module name that does not overlap with the modules supplied by SaltStack.  A name collision would result in our custom module replacing a SaltStack module.  My recommendation is to consistently prepend any module name with a company name, abbreviation, or developer initials, e.g. wc_ec2_metadata.py

SaltStack reference on custom modules:


Pre-requisites:

  • An AWS account is set up and enabled for EC2
  • A salt-minion is installed and running on the EC2 instance

Building our custom module on the EC2 instance:
  • Create a directory for building custom saltstack modules (e.g.  # mkdir salt_modules)
  • Start writing a module, testing it as you go with salt-call (e.g. salt-call -m /path/to/my/modules module.fn)


Create a skeleton module and test it works
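(The skeleton screenshot is missing from this copy of the post; the minimal module consistent with the salt-call output shown below is a single function, saved as ec2_metadata.py in the custom modules directory:)

```python
# ec2_metadata.py -- skeleton custom SaltStack execution module
def test():
    '''Confirm the custom module is loadable; returns True.'''
    return True
```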




Test that our module and function will work as a salt module:

  • salt-call -m ~/salt_modules  ec2_metadata.test


Expected Output:
local:
    True

Now let's add a useful function with some logging and error handling:
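(The code image is gone here; a sketch of such a function using only the standard library - the timeout value and logger setup are my choices, and urllib.request stands in for the urllib2 of the python 2 salt of that era:)

```python
import logging
from urllib.request import urlopen  # urllib2 on python 2

log = logging.getLogger(__name__)

METADATA_URL = 'http://169.254.169.254/latest/meta-data/'

def _query(key):
    '''Fetch one metadata key; returns None on any failure.'''
    try:
        return urlopen(METADATA_URL + key, timeout=3).read()
    except Exception as exc:
        log.error('metadata lookup for %s failed: %s', key, exc)
        return None

def get_public_dns():
    '''Return the public DNS name of this EC2 instance.'''
    return _query('public-hostname')
```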
Test that our module and new function will work as a salt module:
salt-call -m ~/salt_modules  ec2_metadata.get_public_dns

Expected Output - a valid DNS name, not exactly what is below:
local:
    ec2-54-221-126-106.compute-1.amazonaws.com

All that is left to do is implement the remaining functions; (public-ipv4, instance-id, placement, security-groups).


Completed module:
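(The completed-module embed is also missing; generalizing the same pattern over the metadata keys listed earlier gives something like the following - the function names are my guesses at what the finished module exposed:)

```python
# ec2_metadata.py -- sketch of the finished custom module
import logging
from urllib.request import urlopen  # urllib2 on python 2

log = logging.getLogger(__name__)

METADATA_URL = 'http://169.254.169.254/latest/meta-data/'

def _query(key):
    '''Fetch one metadata key from AWS; returns None on any failure.'''
    try:
        return urlopen(METADATA_URL + key, timeout=3).read()
    except Exception as exc:
        log.error('metadata lookup for %s failed: %s', key, exc)
        return None

def test():
    '''Confirm the custom module is loadable.'''
    return True

def get_public_dns():
    return _query('public-hostname')

def get_public_ipv4():
    return _query('public-ipv4')

def get_instance_id():
    return _query('instance-id')

def get_placement():
    return _query('placement/availability-zone')

def get_security_groups():
    return _query('security-groups')
```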



There are a number of ways to distribute your new custom SaltStack module to the minions.

On a salt-master you first place your module in the appropriate _modules subdirectory under the "file_roots" location (typically /srv/salt/_modules) and then run one of the following:

  • salt \* state.highstate 
  • salt \* saltutil.sync_all
  • salt \* saltutil.sync_modules  (only updates modules)

Viewing your module and function docstrings is easy using the included sys.doc module.

  • salt-call sys.doc my-module  (returns all docstrings in a module)
  • salt-call sys.doc my-module.some-func (returns a docstring for just one function in a module)
  • Note: these work using 'salt' as well  e.g.  salt myserver sys.doc my-module

My public github repository:
https://github.com/wcannon/saltstack-related

Community contributed efforts:
Additionally, community contributed efforts (modules, states, grains and more) can be found here.

Monday, September 9, 2013

Deploying a typical LAMP application using SaltStack - my experience


This article describes my approach to a recent software deployment project.  The goal of the project was to create a secure, fast deployment method for multiple LAMP stack applications.

Previously I had used the python fabric library to deploy software applications.  Having recently set up "instant" software infrastructures using SaltStack, I was eager to take advantage of its remote execution capabilities and node introspection via grains.

SaltStack Introduction for Deployment

SaltStack is designed to allow parallel remote execution over encrypted channels, and also provides configuration management.  Essentially, SaltStack provides the basis to make things happen quickly, repeatedly, and exactly as desired.    It is also a python based open source project with an active developer community.


Overall Deployment Design
I chose to conceptualize deployment as consisting of two distinct pieces.

Piece # 1:  The assembly or build of the application.  For the three applications I deployed this consisted of combining php files with configuration files.

Details related to Piece # 1:
Each application has its own github repository, and each branch of the repository is deployable if an environment name matches the branch name (convention over configuration).  At build time, the least amount of work needed to update the code locally is done (e.g. if a project has already been cloned and its branch checked out, we simply perform a "git pull").

Piece # 2:  The actions necessary to put the new application into place and be served by the webserver.  This consists of executing a custom python module.

Details related to Piece # 2:
A SaltStack custom module is simply a standard python module that has been synchronized onto the minion.  (e.g.  salt \* saltutil.sync_modules ).  A function in a module can call a function in another module.  This is very handy and is called "cross-calling."  

Note:  If a function name in a module begins with an underscore character it will not be callable outside the module (in other words, a 'hidden' function).
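(Cross-calling happens through the __salt__ dictionary that salt injects into every loaded module.  A toy illustration - __salt__ is stubbed here so the snippet runs outside salt; on a real minion salt provides it:)

```python
# On a real minion salt injects __salt__; stubbed here so the
# snippet is runnable standalone.
__salt__ = {'cmd.run': lambda cmd: 'ran: %s' % cmd}

def _unpack(tarball):
    '''Leading underscore: hidden, not callable from the salt CLI.'''
    return __salt__['cmd.run']('tar xzf %s -C /var/www' % tarball)

def deploy(tarball):
    '''Public function: cross-calls cmd.run via the hidden helper.'''
    return _unpack(tarball)
```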

** A third piece exists:  A wrapper that takes user input, and causes piece 1 and piece 2 to happen.  This piece uses docopt for command line argument processing, the salt filesystem, and a salt runner to signal the targeted servers to run their custom python module.

Details related to Piece # 3:
A targeted server is really given a message via the queuing system on the salt-master.  The targeted server identifies that it should do something, does it, and reports its result back to another queue on the salt-master.  I have used the "grains" system in order to have a server know what roles it has.  (e.g.  is_webapp1_server: true)


Code Snippets:

DocOpt Sample:
Salt Runner Sample:


SaltStack Custom Module Sample:



Results:
Very fast deployments for a 5 MB application -- (averaged 5.5 seconds)

An application deployment consists of these steps:
  1. log onto salt master
  2. run the deployment script:  # ./deploy.py APPNAME ENVIRONMENT  (e.g. ./deploy.py webapp1 production)
  3. observe output of deployment script

Using the same name as above we could call the custom module directly:
# salt -C  'G@environment:production and G@hosting_webapp1:true'  deployment_webapp1.deploy  my-build-file.tar.gz


Positive elements:
  • no list of servers to maintain
  • no ssh key management 
  • scalable - works against hundreds of servers in parallel
  • can use the grains on the servers for other remote ad-hoc execution
  • very simple command line usage (dynamically building the module name to publish)
  • trivial to add more deployment modules

Negative elements:
  • none specific to SaltStack

Items for improvement / enhancement:
  • capture deployment history
  • rollback capability / specific revision deployment 
  • use of a custom returner (rather than stdout on salt-master) -- would prove helpful for queries, or dashboards
  • remove older previous deployments (e.g. only retain the last 3)

Choices that worked out well:
  • docopt = simply a great library (sorry argparse - my previous favorite)
  • keeping a copy of repos / branches on the deployment server = major deployment time reducer
  • use of grains on servers (roles, etc.) = great way to encourage better management practices, very handy for general usage
  • use of a salt runner rather than calling specific deployment modules via the salt command = allowed me to simplify the usage of the deploy script
  • cross-calling salt modules = great way to reuse code, and helps shrink the size of a deployment module



Helpful Links: