What TechOps Can Learn From Manufacturing

When I first embarked on my journey to complete my MBA in Business IT Management, I had mostly worked technical support jobs and had just recently landed my first Systems Administrator job in what was largely a Windows shop. Shortly into my time there I found myself taking over an AWS infrastructure running on Linux with Puppet for configuration management. During my MBA schooling I struggled to see the purpose of learning so much about manufacturing operations when my degree was on a technical path. If I could go back in time, I think I’d smack myself and relate what I know now to my past self. In many ways, schools offering technically focused MBAs are missing the boat when it comes to conveying the importance of studying manufacturing to future IT managers, CTOs, and CIOs.

I recently finished reading the book “The Phoenix Project” and it opened my eyes to a few things. In the book, the main character takes over managing an all too typical IT Ops department: constant fires, outages, tribal knowledge, lack of communication between development teams, and a lack of consistency in environments from dev to prod. Without giving away too much of the read, the book underscores the importance of embracing DevOps methodologies.

So what can technical operations learn by studying manufacturing and embracing DevOps methodologies? The answer is quite simply all the things. When we think back to the invention of the assembly line, Henry Ford was able to churn out more cars at a quicker pace by breaking work into jobs done with repeatable processes and a limited number of options. The modern computer is made up of countless parts made by countless different manufacturers and tech stacks, but the establishment of bodies like the IEEE and W3C provides a common unifying theme: standards and interoperability requirements. The first step in embracing DevOps methodologies is to look at your workflows and learn what can be standardized into repeatable and reusable processes. One of the best places to start is to standardize an imaging platform, whether that be a customized AMI in AWS, virtual machine templates in your hypervisor platform, or a PXE service like WDS for Windows or Cobbler for Linux. By setting up a customized image that gets regularly updated, you can install baseline utilities, keep current updates applied to minimize initial patching time, and bootstrap your configuration management agent on the image.
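As a rough sketch of what that baseline can look like, here is the kind of first-boot bootstrap you might bake into an AMI, VM template, or kickstart post-install. It assumes a CentOS 7 image and Puppet as the config management agent; the repo RPM and package names are assumptions, so swap in whatever tool and distribution you actually run.

#!/bin/bash
# Hypothetical first-boot bootstrap for a baseline CentOS 7 image:
# apply the latest patches, then install and enable the Puppet agent
# so configuration management can take over from here.
yum update -y
rpm -Uvh https://yum.puppet.com/puppet7-release-el-7.noarch.rpm   # assumed repo release RPM
yum install -y puppet-agent
systemctl enable puppet
systemctl start puppet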

One of the most amazing manufacturing capabilities to come about in the past half century is the move beyond mass production of a single item to what is called mass customization, where customized components can be mass produced as interchangeable parts, or production lines can switch between colors and styles without tremendous effort being expended. The tech equivalent of that is configuration management. Time and time again we’ve heard the mantra of infrastructure as code, and this is where configuration management truly shines. By creating Puppet modules, Chef recipes, Salt states, and Ansible playbooks we are essentially codifying what our configuration should look like on a system. This allows you to create modular desired states for various components and applications that can be added and removed with great flexibility. Always start with what you already know how to do manually before building automation for it; otherwise you could be creating an automated wrecking ball, and nobody wants that. Over time your configuration management can mature to the point of using variables and managing every aspect of your applications right up to deploying code. By bootstrapping your images with configuration management and building out a mature config management implementation, the process of standing up a new server can become completely, or nearly completely, automated.
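To make that concrete, here is a minimal masterless sketch using Puppet, since that is what I inherited. The manifest is purely illustrative, assumes the agent package from the bootstrap above, and in real life would live in a version-controlled module rather than in /tmp.

#!/bin/bash
# Illustrative only: codify the desired state of NTP in a tiny manifest,
# then apply it locally with puppet apply.
cat > /tmp/ntp.pp <<'EOF'
package { 'ntp':
  ensure => installed,
}
service { 'ntpd':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}
EOF
/opt/puppetlabs/bin/puppet apply /tmp/ntp.pp   # binary path assumes the AIO puppet-agent package

The same idea carries over to Chef recipes, Salt states, and Ansible playbooks; the point is that the desired state lives in version-controlled text instead of in someone’s head.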

In the world of manufacturing a great deal of time and money is spent coordinating the most efficient workflows, identifying bottlenecks, and learning which processes can run concurrently and which act as blockers for processes further down the line. This concept becomes important when we move to the world of code deployment. Yes, we’re talking about Continuous Integration and Continuous Delivery (CI/CD). Once you understand what it takes to get code from a repository into a live running system, you can begin to build your deployment pipeline. Several great tools exist for this, including Jenkins, Atlassian Bamboo, Travis CI, and others. In the dark ages of tech, developers would build code in local environments that almost never matched production, then fling it over to Ops, who in some cases used sketchy methods such as FTP or RDP file copies to get code onto machines; the results were often chaotic and plenty of blame got thrown around. In a well-developed deployment strategy, developers work on code in feature branches, those branches are merged via pull requests into environment branches, and the environment branches are automatically run through unit testing and deployed by the CI/CD tool if they pass all required tests. Of course, another key element is to make sure that dev, staging, and production environments are built on the same tech stacks and kept consistent so that code behaves the same way across the board. Configuration management is also a great way to ensure that consistency between environments.
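As a sketch of that deploy gate, here is the sort of build step a Jenkins or Travis CI job might run. The branch variable, test runner, and deploy script are placeholders made up for illustration, not part of any particular tool.

#!/bin/bash
# Hypothetical CI build step: fail fast, run the unit tests, and only
# deploy when the tests pass on an environment branch.
set -e
./run_unit_tests.sh                    # placeholder test runner
case "$BRANCH_NAME" in                 # branch name assumed to be exported by the CI tool
  staging|production)
    ./deploy.sh "$BRANCH_NAME"         # placeholder deploy script
    ;;
esac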

One thing that is fundamental to lean manufacturing is the need for constant improvement: monitoring the system for efficiency, looking for ways to further automate and improve, and creating tight feedback loops. Applied to tech, this looks like dedicating time for R&D and constantly evaluating new tools and ideas to better streamline what you have built, so that you don’t grow complacent in the face of ever-changing tech stacks and tools. It also looks like setting up deployment and configuration management logging, automating patch management, and creating automated reports and alerts around that logging so that you know the moment there is a problem, when it began, and where it lives, and can mobilize to resolve it. Additionally, tighter integration between dev teams and operations teams creates an atmosphere of mutual trust and collaboration and allows for tight feedback loops at earlier stages of the development process.
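One small, concrete example of that kind of feedback loop is a nightly patch report. The mail alias below is a placeholder, and in practice you would feed this into whatever alerting system you already run.

#!/bin/bash
# Hypothetical nightly cron job: yum -q check-update prints any pending
# package updates, so patch drift lands in the ops inbox instead of
# going unnoticed until something breaks.
updates=$(yum -q check-update)
if [ -n "$updates" ]; then
  echo "$updates" | mail -s "Pending updates on $(hostname)" ops@example.com
fi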

Wrapping up this post, I can only hope that as IT becomes a fundamental part of every company, and as technology becomes a core competency modern companies need in order to outmaneuver the competition, slash costs, and improve quality and agility, more is done to teach these important concepts to MBA students, not only in the context of manufacturing but in the context of technology. We have entered an era where a company’s technical aptitude can make or break its success. Every company is a tech company now, whether you are a retailer with complex ERP, inventory, and logistics systems or a biopharmaceutical company crunching massive amounts of data into human-usable scientific insight; the game has shifted dramatically in recent years. Happy automating all the things!

Guacamole Server For Home Labs

So you’re an IT pro with a home lab, and that’s awesome! Except when you aren’t at home and can’t get to the machines you need. Exposing RDP or SSH without multi-factor auth is certainly not something I’d recommend. This is where Guacamole comes in incredibly handy. Guacamole is a web-based client that lets you establish RDP, SSH, and Telnet sessions to machines on your local network. Port forwarding 8080 to Guacamole gives you outside access, and with it SSH, RDP, and Telnet to your local network, without exposing every machine in your home lab to the outside internet.

Here’s the install process:

- Set up a CentOS 6 or 7 VM with at least 2 cores, a good 4 GB of RAM, and a small 10 GB drive
- Run the commands below (they leverage a script that completes the install of Guacamole and all of its dependencies)

Install Wget
sudo yum install wget -y

Wget the install script
wget http://sourceforge.net/projects/guacamoleinstallscript/files/CentOS/guacamole-install-script.sh

chmod the script to make it executable
sudo chmod 755 guacamole-install-script.sh

Run the script
sudo ./guacamole-install-script.sh

Open a browser and visit http://<server IP or hostname>:8080/guacamole

[Screenshot: Guacamole login page]

Once logged in you can see any node groups you created in a tree along with their connections:
[Screenshot: connection groups and their connections shown in a tree]

To add additional connections, click your username in the top right, choose Settings, click the Connections tab, choose the option to create a new connection, and fill out the necessary info:

[Screenshot: new connection settings form]

I’ve been using Guacamole for about 8 months now and it’s great to be able to make changes to my managed switches and access all of my lab machines. I hope this has been a helpful post and that you enjoy Guacamole server!

Fedora 24 Wireless Working on Dell Latitude E-Series

As many who know me personally can attest, my favorite laptop is my trusty old Dell Latitude E-6320. It’s a few years old now, but it still rocks an i5, an SSD, and 8 GB of RAM and gets the job done nicely. I use it at home with a docking station, which works great for wired network connections; however, wireless has been an ongoing battle with this laptop on Fedora 24. So I turned my fix into a script and posted it on my GitHub!

https://github.com/rstaats/Fedora24BroadComWireless

Hope this helps other poor saps who love to run Fedora on Dell enterprise-grade laptops!