Resume

My name is Jason Gates, and my career has taken me from engineering physicist to computational mathematician to software engineer and finally to DevOps evangelist. During my undergraduate research into magnetically confined high-temperature plasmas, I discovered I was more interested in the numerical methods working behind the scenes, and decided to pursue that interest in graduate school. In the midst of that, I realized the majority of the problems you run into aren’t so much in the algorithms themselves as in their software implementation, so I made the switch to software engineering. My experience in that arena showed me it’s less about the implementation details and more about the team culture you have and how it contributes to the overall success of the project, and thus the switch to what I’m calling a “DevOps evangelist” role.

I excel at plugging into a team, identifying the pain points that keep it from being as successful as it can be, prioritizing which to address first, and then iterating with the team on incremental improvements that move it in the right direction. Though many view DevOps as a collection of tools or tasks that enable the “real work” to get done, it’s actually a paradigm shift in how we view both the work we do and how we go about doing it. Helping teams realize this paradigm shift, and the resulting productivity boost, is one of my passions.

I function best as a generalist, leading or interfacing with a team of specialists. I excel at keeping the high-level and long-term plan in mind, breaking it down into manageable pieces, and finding the right people to work those tasks. When needed, I have the experience and attention to detail to dig down into the weeds with the specialists to ensure the solutions we’re designing and building will meet the overall project needs. I do all this best through asynchronous communication via project management and team collaboration tools, with synchronous meetings scheduled only as needed.

Clearance

DOE Q: July, 2018 – Present

DOD TS/SCI: May, 2014 – June, 2016
Deactivated after leaving Northrop Grumman Corporation

Education

Colorado School of Mines, Golden, Colorado
Ph.D. in Mathematical and Computer Sciences
GPA: 4.0; Qualifying Exams: Passed
Left incomplete due to family responsibilities
University of Tulsa, Tulsa, Oklahoma
M.S. in Applied Mathematics
Graduation: May, 2011; GPA: 3.917
University of Tulsa, Tulsa, Oklahoma
B.S. in Engineering Physics, concentration in Robotics
B.S. in Applied Mathematics
B.A. in German
Graduation: May, 2009; GPA: 3.916

Experience

Sandia National Laboratories, Albuquerque, New Mexico
May, 2021 – Present
Member of the Technical Staff: Systems Mission Engineering (Systems Engineering & Integrated Solutions)
Leading an initiative to automate manual integration and release testing; encouraging fuller adoption of GitLab as an all-in-one DevOps and project management platform to improve developer productivity and overall team success.
Sandia National Laboratories, Albuquerque, New Mexico
October, 2018 – May, 2021
Member of the Technical Staff: Center for Computing Research (Software Engineering and Research)
Developed continuous integration/deployment workflows and infrastructure; advised teams on achieving a DevOps transformation; led teams in designing and developing tools to improve sustainability and reproducibility.
Sandia National Laboratories, Albuquerque, New Mexico
September, 2017 – October, 2018
Member of the Technical Staff: Center for Computing Research (Computational Mathematics)
Introduced software engineering best practices into team workflows; developed code integration workflows and infrastructure.
Sandia National Laboratories, Albuquerque, New Mexico
June, 2016 – September, 2017
Limited Term Employee: Center for Computing Research (Computational Mathematics)
Software engineering, development, maintenance, testing; version control instruction.
Northrop Grumman Corporation, Aurora, Colorado
June, 2014 – June, 2016
Engineer Systems II
Extending software capabilities; developing and testing algorithms; addressing data quality.
Colorado School of Mines, Golden, Colorado
August, 2012 – May, 2014
Graduate Teaching Fellow
Advanced Engineering Mathematics and Calculus 3; “Problem Solving with Matlab” tutorial series.
Front Range Community College, Westminster, Colorado
Summer, 2013
Math Instructor
College Algebra; online education certified.
Sandia National Laboratories, Albuquerque, New Mexico
Summer, 2012
SIP Graduate Professional Technical Summer Intern
Code validation via manufactured solutions to partial differential equations.
Colorado School of Mines, Golden, Colorado
August, 2011 – May, 2012
Graduate Teaching Assistant
Recitation sections of Calculus 3.
University of Tulsa, Tulsa, Oklahoma
August, 2009 – May, 2011
Graduate Teaching Assistant
Quiz sections of Calculus 1 & 2.
University of Tulsa, Tulsa, Oklahoma
May, 2007 – May, 2009
Plasma Physics Research Assistant
Computationally solved nonlinear magnetohydrodynamic (MHD) equations.

Projects

Automated Testing Improvement Initiative
Sandia National Laboratories, Summer, 2021 – Present
The Geophysical Monitoring System is a Kubernetes-based application suite developed in Java and TypeScript that historically carried a substantial set of manual acceptance tests taking days to complete for each quarterly release. I led an initiative to establish the infrastructure, team policies, and best practices needed to automate component-, integration-, and system-level tests in GitLab CI pipelines. With the initial phase complete, work is ongoing to mature the infrastructure, optimize the pipelines, improve the robustness of the applications, and continue automating the remaining manual tests, moving steadily toward continuous delivery.
Unifying the DevOps Infrastructure Within Trilinos
Sandia National Laboratories, Summer, 2020 – Spring, 2021
Over the past few years, two distinct DevOps infrastructures had grown up within the Trilinos project. Each had its pros and cons, both were less flexible than desired, and maintaining two separate solutions long term would have been error-prone, so a year-long effort was launched to replace them with a single solution incorporating the lessons learned. I conducted an initial two-month investigation of the existing solutions, followed by a period of gathering stakeholder requirements. I then drafted a plan covering two general-purpose components, one for consistently loading environments across machines and one for consistently configuring a CMake-based code, and led the all-remote team, spread across four states in two time zones, through design and execution. Modularity, flexibility, unit testing, code coverage, and documentation were hallmarks of how we tackled the problem. The intent was not only to provide Trilinos with what it needed, but to give the greater scientific software community general tools to improve the sustainability and replicability of the codes they develop.
DevOps Infrastructure Consultant
Sandia National Laboratories, Summer, 2020
The Dakota project provides a software suite for optimization and uncertainty quantification. Its build and test infrastructure had grown organically over more than two decades to the point of being both fragile and brittle. The team sought my services to determine what they would need to do to get from where they were to where they wanted to be. I conducted a number of interviews with team members, and interfaced closely with their newly hired DevOps engineer, to determine both their needs and what they could realistically accomplish. I then developed a 15-month plan to rebuild their infrastructure from the ground up such that it would be easy to maintain and extend for years into the future. I presented the plan to a wide audience largely via an extended metaphor, so the various pieces would be easy for non-experts to grasp; a secretary’s reaction was, “I hardly ever know what you all are talking about, but this presentation I understood!”
Developing JOG-CI: Connecting Jenkins, OpenStack, and GitLab CI/CD
Sandia National Laboratories, Spring, 2020 – Summer, 2020
OpenStack is a collection of components that allows you to maintain your own private cloud infrastructure. The ability to rapidly stand up cloud tenants, running on corporate hardware behind the scenes, was desirable for lowering the barrier to entry for teams to get up and running with continuous integration. A lightweight tool for standing up such tenants and connecting them to either Jenkins or GitLab (or both) was developed under my direction by our department’s year-round intern, and that tool has been used by a handful of teams to stand up and tear down instances as needed, depending on changing testing needs.
Faster Turnaround Improves Developer Productivity
Sandia National Laboratories, Winter, 2019 – Summer, 2020
A complete run of EMPIRE’s pipelines used to take about 20 hours. Running only once per day, it was hard to determine where new bugs were introduced in a codebase that saw dozens of merge requests daily, so a merge from develop to master happened only every few weeks, if we were lucky. I led a major refactor of our pipelines, restructuring them with modularity in mind so they could fail fast and get actionable feedback to the team as soon as possible. We also parallelized the testing across a collection of machines, further decreasing the time to notification of success or failure. The dozens of Jenkins jobs used by each top-level pipeline are governed by a single Groovy Pipeline script, making maintenance and extension a breeze. The end result was a reduction to about five hours, such that the pipeline suite now runs multiple times a day. With more frequent feedback, the codebase stays green more often, and developers spend less time debugging and more time doing science.
One Script to Rule Them All: Unifying Build Processes Across Platforms
Sandia National Laboratories, Summer, 2019 – Spring, 2020
The BuildScripts repository for the EMPIRE codebase had grown organically over time into a collection of bash scripts for different platforms, configurations, etc. Developers also had their own scripts for setting up their environments and configuring the code. I led an effort to unify our build process across platforms and create “one build script to rule them all,” so to speak, to be used by users, developers, and automation services. Python was chosen for the sake of documentation (Sphinx), testing (pytest), and unified style guides. Replicability was enhanced by building in both a comprehensive logging utility and the ability to replay prior runs of the script. The tool was designed with modularity and flexibility in mind, such that it’s easy to extend existing pieces or plug in new ones as future needs arise. Investing the time, money, and energy in developing such an infrastructure paid dividends in productivity, both for the scientific developers and the DevOps engineers.
Developing the SPiFI Library and Associated Jenkins Pipelines
Sandia National Laboratories, Spring, 2018 – Spring, 2019
In order to adequately test the git workflow mentioned directly below, a flexible pipeline was needed, and the Jenkins pipeline plugin suite, with the Apache Groovy language under the hood, provided the necessary power. The plugin suite has a high barrier to entry, so a colleague and I worked closely together to develop the SEMS Pipeline Framework Infrastructure (SPiFI) library: I developed the pipeline itself and drove the requirements for the library, while he developed the library to ease and automate routine pipeline tasks. The library has since been rolled out to roughly half a dozen teams and is used to drive hundreds of jobs daily.
Stability with Respect to the Tip of Develop
Sandia National Laboratories, Fall, 2017 – Fall, 2018
Trilinos is a collection of math libraries for large-scale, complex multi-physics problems on next-generation high-performance computing architectures. Its development is largely driven by a handful of physics application codes that are tightly coupled with it. Because the applications drive the algorithm development, they would like to use the latest commit on the develop branch, but at the same time they would like to ensure commits to Trilinos never break them and stall application development. I developed a git workflow involving a fork of Trilinos and a secondary, approved version of the develop branch, which is updated automatically via nightly testing. In the event testing fails, the branch isn’t updated, and the application team can continue development unhindered; they can file an issue against Trilinos to be resolved through Trilinos’ usual process. Flexibility is also afforded for the rare instances where simultaneous changes must be made to both the application and Trilinos codebases. This approach has been used successfully by two separate application teams for the last few years.
Defining Policies to Turn a Team and Project Around
Sandia National Laboratories, Summer, 2017 – Fall, 2018
EMPIRE is a collection of next-generation electromagnetic/electrostatic/fluid dynamic codes. Prior to the summer of 2017, there was confusion as to who was on the team, what people were working on, what needed to be done, how one could get started, etc. Pushes happened directly to the master branch, and there was minimal testing, code review, and documentation. I played a large part in driving the adoption of the following:
  • GitLab issues, description templates, and Kanban boards to track work and capture design discussions.
  • GitLab merge requests, complete with code review and approval, required to get changes into the develop branch.
  • Style guides for both the code and the documentation, to move toward a common look and feel.
  • A git workflow ensuring no direct pushes to master or develop, with master updated via nightly testing.
  • Automated testing across multiple machines and configurations to improve stability.
  • A monthly retrospective to regularly check how well our policies were working for us and tweak them as needed.
Git Instruction
Sandia National Laboratories, Spring, 2017 – Fall, 2019
I led the Center for Computing Research University (CCR-U) group in teaching courses introducing participants to version control via git, using the Software Carpentry instruction style. I developed both introductory and intermediate courses, which were very popular, received excellent feedback, and served hundreds of Sandians.
Panzer Memory Usage Refactor
Sandia National Laboratories, September, 2016 – July, 2017
Local to global communication in parallel finite element simulations occurs through the use of owned vectors, containing all the information owned by a given process, and ghosted vectors, containing the information from neighboring processes. The original implementation duplicated all the data in the owned vector in the midst of the ghosting process, meaning more data was being stored in memory than was necessary. I refactored classes such that ghosted vectors contain only the ghosted information, and any time a user wants to grab an element of a vector given a local ID, the logic of whether it lives in the owned or ghosted vector is hidden from the user. Avoiding the data duplication significantly reduces the run-time memory usage.
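To illustrate the indexing scheme described above, here is a minimal Python sketch (the class and all names are hypothetical, not Panzer’s actual API): local IDs index the owned entries first and the ghosted entries after, so callers never see which underlying vector holds an entry.

```python
class GhostedVector:
    """Sketch of owned/ghosted storage: the owned vector holds only
    locally owned entries, the ghosted vector holds only copies from
    neighboring processes, and lookup by local ID hides which one an
    entry lives in."""

    def __init__(self, owned_values, ghosted_values):
        self._owned = list(owned_values)      # entries this process owns
        self._ghosted = list(ghosted_values)  # copies of neighbors' entries

    def __getitem__(self, lid):
        # Local IDs cover the owned entries first, then the ghosted ones,
        # so callers never need to know where an entry is stored.
        if lid < len(self._owned):
            return self._owned[lid]
        return self._ghosted[lid - len(self._owned)]

    def __len__(self):
        return len(self._owned) + len(self._ghosted)


v = GhostedVector(owned_values=[1.0, 2.0, 3.0], ghosted_values=[9.0])
print(v[1], v[3])  # 2.0 9.0
```

Because no owned data is duplicated into the ghosted storage, the total memory held is just the owned entries plus the true ghost copies.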
Generalized Current Constraint Boundary Conditions in Charon
Sandia National Laboratories, October, 2016 – June, 2017
The Charon semiconductor device physics simulation code previously had the ability to attach a constant current constraint to a terminal of a device (diode, transistor, etc.). I generalized this capability such that any number of constraints can be added to a device (at most one per terminal). A resistor contact constraint type was added, corresponding to hooking up a resistor with a voltage source on its far side. A block LDU preconditioner was generalized to work for any of these constraint scenarios. This capability helps users more readily simulate real-world configurations.
LOCA and Charon Integration
Sandia National Laboratories, July – September, 2017
Previously if a Charon user wanted to sweep a voltage contact boundary condition on a device, they would use a rather brute-force Python script to get the job done. I integrated the Library of Continuation Algorithms (LOCA) with Charon to provide this capability natively, and with more flexibility. LOCA is able to intelligently ramp up the parameter step size, and, in the case of a solver failure, backtrack, cut the step size, and proceed with the continuation run. This also provides the capability to track bifurcations in the future, should we need to.
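The step-control behavior described above can be sketched in a few lines; this toy Python version (the function names and the scalar test problem are my own illustration, not Charon/LOCA code) ramps a parameter and, on a solver failure, backtracks, cuts the step size, and proceeds.

```python
def newton(f, df, u0, tol=1e-10, max_iter=20):
    """Newton's method on a scalar equation; returns (u, converged)."""
    u = u0
    for _ in range(max_iter):
        fu = f(u)
        if abs(fu) < tol:
            return u, True
        dfu = df(u)
        if dfu == 0:
            return u, False
        u -= fu / dfu
    return u, False


def continuation(f, df, u0, lam0, lam_target, dlam0):
    """Natural parameter continuation with backtracking step control:
    grow the step after each success; on failure, halve it and retry."""
    u, lam, dlam = u0, lam0, dlam0
    while lam < lam_target:
        dlam = min(dlam, lam_target - lam)  # don't overshoot the target
        trial = lam + dlam
        u_new, ok = newton(lambda v: f(v, trial), lambda v: df(v, trial), u)
        if ok:
            u, lam = u_new, trial
            dlam *= 1.5      # ramp the step size back up
        else:
            dlam *= 0.5      # backtrack: cut the step and retry
            if dlam < 1e-12:
                raise RuntimeError("step size underflow")
    return u, lam


# Toy sweep: follow the solution of u^3 + u - lambda = 0 up to lambda = 10,
# whose exact root is u = 2.
u, lam = continuation(
    f=lambda u, lam: u**3 + u - lam,
    df=lambda u, lam: 3 * u**2 + 1,
    u0=0.0, lam0=0.0, lam_target=10.0, dlam0=2.0)
print(round(u, 6))  # 2.0
```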
Algorithm Development
Northrop Grumman Corporation, September, 2014 – June, 2016
Given real-time input data from multiple sources, how do we clean and manipulate the data to yield the answer we seek? Details of the algorithm and its application are classified. A colleague and I reviewed the relevant literature, determined most of it no longer applied to our new geometric configuration, and developed an elegant iterative algorithm that walks intelligently through the solution space to the correct answer. I also developed a Matlab tool to read in intermediate data from the algorithm and generate a multi-page PDF detailing just how the algorithm works its way to the solution, which aids tremendously in discovering scenarios where the algorithm needs improvement.
Automating Large-Scale Distributed Software Installation
Northrop Grumman Corporation, Summer, 2014
An installation and configuration of HP’s Network Node Manager software suite across multiple virtual machines (VMs) took an operator four days using a series of manuals to guide them through the process. I developed a series of scripts to be deployed and run on the VMs to update various packages in Red Hat Enterprise Linux (RHEL) to the appropriate versions, patch some of HP’s Perl scripts used in the installation, and install and configure the software suite. Automating the process reduced the time needed to about two hours with minimal human interaction.
Adaptive Local-Global Multiscale Finite Element Methods
Colorado School of Mines, August, 2012 – May, 2014
When solving the classical uniformly elliptic boundary value problem in a medium that is either highly oscillatory or has high contrast, the standard Galerkin finite element method (FEM) is insufficient, and \(h\)-, \(p\)-, and \(r\)-refinement become prohibitively expensive for large problems. Multiscale FEMs solve local homogeneous problems on the coarse mesh elements to create multiscale basis functions that already have some knowledge of the medium. Determining the appropriate boundary conditions for these local solves is an area of active research. The adaptive local-global multiscale FEM projects an initial global solve onto extended coarse mesh elements, makes that projection nodal on the coarse mesh elements, and then averages across the edges of the coarse mesh. The resulting local solves yield nodal basis functions with expanded support that satisfy the partition of unity. In theory there exist ideal basis functions that can reconstruct the exact solution, and iterating this method allows us to work toward those ideal basis functions. This computational effort can be done ahead of time, such that the near-ideal basis functions can be used for any source terms and time-evolution scenarios. Effective parallelism was achieved through the use of Open MPI and PETSc.
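For concreteness, the model problem and local solves described above can be written as follows (standard multiscale FEM notation, not reproduced from the thesis work itself):

```latex
% Model problem: uniformly elliptic BVP with a rough coefficient a(x)
-\nabla\cdot\bigl(a(x)\,\nabla u\bigr) = f \quad \text{in } \Omega,
\qquad u = 0 \quad \text{on } \partial\Omega.

% Multiscale basis functions: local homogeneous solves on each coarse
% element K, with boundary data b_i supplied by the local-global iteration
-\nabla\cdot\bigl(a(x)\,\nabla\phi_i\bigr) = 0 \quad \text{in } K,
\qquad \phi_i = b_i \quad \text{on } \partial K.
```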
Automated Generation of Homework Assignments and Solution Procedures
Colorado School of Mines, August, 2013 – May, 2014
Problems in Advanced Engineering Mathematics are highly formulaic—given a problem of a certain type, there are certain steps to follow to the solution. As such the generation of such problems, and their full solution procedures, is simply a matter of programming. Mathematica was utilized to randomly generate problem sets and solutions for the class.
Manufacturing Solutions to Fluid Flow Problems
Sandia National Laboratories, Summer, 2012
Assuming solutions of a certain form and working them through systems of nonlinear coupled partial differential equations (PDEs) allows one to determine the source terms necessary for the equations to be satisfied. I developed a Mathematica suite for manufacturing such solutions to the incompressible Navier–Stokes equations, some of their turbulent extensions, and MHD. Solutions and source terms were exportable to C for interfacing with the code being validated, and all details were exportable to LaTeX for paper generation.
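The method of manufactured solutions is easy to sketch with a symbolic toolkit. The following Python/SymPy example (a deliberately simple Poisson analogue, not the actual Navier–Stokes suite) assumes a solution form and derives the matching source term:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Assume (manufacture) a smooth solution of a chosen form.
u = sp.sin(sp.pi * x) * sp.sin(sp.pi * y)

# Work it through the PDE (-Laplacian(u) = f) to find the source term
# that makes the assumed solution exact.
f = sp.simplify(-(sp.diff(u, x, 2) + sp.diff(u, y, 2)))

# The residual of the PDE with this source is identically zero, so the
# pair (u, f) can be used to validate a numerical solver.
residual = sp.simplify(-(sp.diff(u, x, 2) + sp.diff(u, y, 2)) - f)
print(residual)     # 0
print(sp.ccode(f))  # C expression for the source term
```

The same idea extends to coupled nonlinear systems: each equation’s residual, evaluated at the manufactured fields, yields that equation’s source term.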
Boundary Integral Equation Methods for Solutions to Laplace’s Equation
University of Tulsa, Fall, 2010
This general solution method consists of transferring all the computation from the domain to its boundary. Both inner and outer Dirichlet, Neumann, and Robin problems were considered. Solvability was proven, and uniqueness was shown for all but the inner Neumann problem, whose solutions differ only by a constant. Solutions were determined in terms of harmonic potentials from Green’s representation formulas.
Boundary Element Method and Visualization Tool
University of Tulsa, Fall, 2010
As the numerical counterpart to the project above: when attempting to solve a PDE on a given domain, one can instead subdivide the boundary into a number of boundary elements and do all the necessary integration there. Determining the solution anywhere in the domain is then just a matter of evaluating a function at that point. I developed an interactive Mathematica suite for solving various PDEs, in which users can specify the boundary, various PDE and boundary condition terms, where to evaluate the solution, etc.
Nonlinear Evolution of Unstable MHD Equilibria
University of Tulsa, May, 2007 – May, 2009
I created a user interface between an eigenvalue code, an equilibrium code (SCOTS), and a nonlinear MHD evolution code (NIMROD) allowing for an exploration of parameter space to determine where modes were stable or resistive- or ideal-unstable. We then ran nonlinearly from a starting point near the stability boundary and observed how the plasma evolved.

Presentations & Publications

  1. Jason M. Gates and William McLendon. “Enhancing Python’s ConfigParser.” Lightning talk. US-RSE Community Call. April 2022.

  2. David Collins, Josh Braun, and Jason M. Gates. “Logger: A Tool for Keeping Track of Python’s Interactions with the Shell.” Presentation. US-RSE 2021. May 2021.

  3. Jason M. Gates, William McLendon, Josh Braun, and Evan Harvey. “LoadEnv: Consistently Loading Supported Environments Across Machines.” Presentation. US-RSE 2021. May 2021.

  4. Jason M. Gates, David Collins, and Josh Braun. “CI Tools as Lego Blocks: Build Your Ideal Custom Solution.” Presentation. SIAM CSE 2021. March 2021.

  5. Jason M. Gates, Josh Braun, and David Collins. “One Script to Rule Them All: Unifying Build Processes Across Platforms.” Whitepaper. 2020 Collegeville Workshop on Scientific Software. July 2020.

  6. Jason M. Gates, Joe Frye, Brent Perschbacher, and Dena Vigil. “Git Productive!” Whitepaper. 2020 Collegeville Workshop on Scientific Software. July 2020.

  7. Jason M. Gates. “Faster Turnaround Improves Developer Productivity.” Poster. 2020 Collegeville Workshop on Scientific Software. July 2020.

  8. Vivek Sarkar, Jason Gates, Charles Ferenbaugh, Vadim Dyadechko, Anshu Dubey, Hartwig Anzt, and Pat Quillen. “Technical Approaches to Improve Developer Productivity for Scientific Software.” Panel discussion. 2020 Collegeville Workshop on Scientific Software. July 2020.

  9. Jim Willenbring, Ross Bartlett, and Jason Gates. “Git Solutions.” Interview. 2020 Collegeville Workshop on Scientific Software. July 2020.

  10. Jason M. Gates. “Training Best Practices.” Tea time discussion. 2020 Collegeville Workshop on Scientific Software. July 2020.

  11. Jason M. Gates. “Introduction to GitDist.” Presentation. Trilinos User-Developer Group Meeting 2019. October 2019.

  12. Jason M. Gates. “Intro to SPiFI.” Presentation. Trilinos User-Developer Group Meeting 2019. October 2019.

  13. Jason M. Gates. “Stability w.r.t. the Tip of develop: An Experience Report from Two Years In.” Presentation. Trilinos User-Developer Group Meeting 2019. October 2019.

  14. Patrick McCann, Rachael Ainsworth, Jason M. Gates, Jakob S. Jørgensen, Diego Alonso-Álvarez, and Cerys Lewis. “How do you motivate researchers to adopt better software practices?” Speed blog. Collaborations Workshop 2019. July 2019.

  15. Jason M. Gates. “Training in Version Control and Project Management.” Lightning talk. Collaborations Workshop 2019. March 2019.

  16. Jason M. Gates. “Defining Policies to Turn a Team and Project Around.” Poster. Third Conference of Research Software Engineers. September 2018.

  17. Jason M. Gates. “Stability w.r.t. the Tip of Develop.” Presentation. Trilinos User-Developer Group Meeting 2017. October 2017.

  18. Jason Matthew Gates, Roger P. Pawlowski, and Eric Christopher Cyr. “Panzer: A Finite Element Assembly Engine within the Trilinos Framework.” Presentation. SIAM CSE 2017. March 2017.

  19. D. P. Brennan, P. K. Browning, J. Gates, and R. A. M. Van der Linden. “Helicity-injected current drive and open flux instabilities in spherical tokamaks.” Plasma Physics and Controlled Fusion 51.4 (2009): 045004.

Honors & Awards

  • Team Employee Recognition Award for EMPIRE

  • Team Employee Recognition Award Nomination for Advanced Simulation and Computing DevOps Visionaries

  • Spot Award for Git Training

  • Department of Applied Mathematics and Statistics Graduate Student Teaching Award

  • Graduate Teaching Fellowship & Assistantships

  • Outstanding Senior in German

  • Academic Excellence Award

  • Member of \(\Phi\mathrm{BK}\) (Phi Beta Kappa), \(\Phi\Sigma\mathrm{I}\) (Phi Sigma Iota), \(\mathrm{TB}\Pi\) (Tau Beta Pi), and \(\Sigma\Pi\Sigma\) (Sigma Pi Sigma)

  • University of Tulsa Presidential Scholarship

  • Byrd Scholarship

  • Oklahoma Academic All-State Scholarship

  • ACT Perfect Score

Skills

Software Engineering:

  • DevOps: Well-versed in the Three Ways and the Five Ideals. Extensive experience serving as DevOps lead on computational science teams.

  • git: Extensive experience developing, using, and teaching complex workflows, along with managing GitLab/GitHub projects. GitLab is my project management tool of choice.

  • Jenkins Pipelines: Extensive experience maintaining hundreds of jobs via Pipeline scripts, along with crafting complex pipelines. Modest experience administering Jenkins instances.

  • GitLab CI/CD: Extensive experience establishing GitLab CI/CD pipelines, along with coupling them to Jenkins for more complex workflows when needed.

  • Cloud: Experience with OpenStack and Kubernetes managing and deploying applications to an internal corporate cloud.

  • Project Management: Experience with requirements elicitation, design, execution, monitoring, and stakeholder interaction. Flexible within the plan, but will work hard to protect scope and team from external interference.

Programming:

  • Python: Extensive experience writing tools to unify build processes. Substantial experience with Sphinx and pytest. Some experience with the SciPy stack. Current language of choice.

  • Groovy: Extensive experience using advanced features to build complex Jenkins Pipeline suites. A close second in language of choice.

  • C++: Extensive experience developing and maintaining large object-oriented codes. Experience creating and utilizing templated classes, including template metaprogramming. Proficient with the Standard Template Library and RogueWave containers. Some experience with Boost libraries.

  • bash/tcsh: Extensive scripting experience.

  • Fortran 77/95/2003: Experience developing and utilizing large, parallel, object-oriented codes.

  • MPI: Experience parallelizing Fortran FEM codes using Open MPI.

  • Perl: Some experience patching installation scripts.

  • Julia: Basic experience.

  • Java: Basic experience

  • JavaScript/TypeScript: Basic experience.

  • OpenMP: Some experience parallelizing Fortran FEM codes. Prefer MPI.

Mathematical Tools:

  • LaTeX: Extensive experience typesetting a variety of works. Prefer to use TikZ, pgfplots, and pgfplotstable to automate the generation of papers from code-generated data using only LaTeX.

  • Mathematica: Certified by Wolfram Research. Extensive experience with symbolic manipulations, visualizations, creating dynamic user interfaces to codes, etc.

  • Matlab: Extensive experience implementing numerical methods and visualizing results. Developed “Problem Solving with Matlab” tutorial series. Some experience with computer vision packages.

  • Trilinos: Panzer, Teuchos, Thyra, Phalanx, E/Tpetra, NOX, LOCA, Piro, Teko.

  • PETSc/LAPACK: Experience implementing parallel FEM codes.

Other:

  • German: Once fluent conversationally, with some technical vocabulary.

  • SketchUp: Extensive experience utilizing for woodworking and carpentry design.

  • Blackboard/Desire2Learn/MyMathLab: Experience managing courses; online education certified.