Preparing our warfighters to combat our adversaries in ever-changing physical and digital landscapes takes innovative approaches to training. To equip our warfighters with the tools they need to complete their missions, General Dynamics Mission Systems also needs to look for new ways of devising training solutions, including incorporating emerging technologies such as machine learning (ML) into its work.
Staying true to our commitment to innovation, we present Pioneer of Progress, Jeremy Trammell, a data scientist in our Deep Learning Analytics Center of Excellence (DLA CoE). The DLA CoE applies state-of-the-art ML technologies and best practices to help our teams deliver solutions to their customers.
Tell us about your role.
Our mission within DLA is to help other teams get the best value possible out of ML. We help them determine when ML is an appropriate tool for their problem, guide them to ensure they will have all the ingredients necessary to successfully apply it, and then deliver functional ML solutions to them. To do this, we need to stay up to date with the latest advances so that when the time comes, we can select the best models and datasets to jump-start our work.
Our Deep Learning Analytics team has proven experience in creating specialized machine learning algorithms and deploying them to mobile platforms like smartphones, unmanned underwater vehicles, and aircraft at the edge of the battlefield.
What sparked your interest in pursuing an innovative idea in your role?
We were working on a project that had a complex set of problems to solve. A typical ML problem poses a single question with a single piece of data X, which you provide to the computer, and a single piece of data Y, which is the correct response to the question. This particular problem required us to develop a number of independent ML models to answer each part of the complex question being asked. That led to the inevitable challenge of optimizing the models not just individually, but also as a whole, to ensure the entire process was efficient enough to satisfy the customer's tight size, weight, and power requirements.
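To make that X/Y framing concrete, here is a minimal sketch in Python, not the project's actual code: a supervised model learns to map an input X to the expected response Y, and a complex question is handled by training several independent models and combining their answers. The model names and the synthetic data are purely illustrative assumptions.

```python
# Minimal illustrative sketch: one supervised model per sub-question,
# with the answers combined into a single response to the complex question.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                    # the data given to the computer
y_part_a = (X[:, 0] > 0).astype(int)             # correct response to sub-question A
y_part_b = (X[:, 1] + X[:, 2] > 0).astype(int)   # correct response to sub-question B

# One independent model per part of the complex question.
model_a = LogisticRegression().fit(X, y_part_a)
model_b = LogisticRegression().fit(X, y_part_b)

def answer_complex_question(x):
    """Combine the independent models' answers into one overall response."""
    return {"part_a": int(model_a.predict(x)[0]),
            "part_b": int(model_b.predict(x)[0])}

print(answer_complex_question(X[:1]))
```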
Tell us about your innovation and the benefits you've seen implementing your solutions.
Our innovation is really anti-innovation. We work closely with the other engineering teams to help them apply tried-and-true processes for reliably getting the best value out of ML on their projects. We refer to this process as establishing ML readiness, and that critical work is often overlooked when people decide they want to apply ML to their problem. Much like building a home, you can't just start construction. You need a design to work from, the tools and materials you intend to use, and so on.
Before you can even begin the process of applying ML to a problem, you need to have five critical elements:
1. A clear definition of what you want to give to the computer and what you want to get back from it in response.
2. A metric for measuring the quality of the responses you get back as compared to the responses you expected.
3. A baseline measurement of the current method of solving the problem so you know what you're up against.
4. A set of data you can use to evaluate your performance and a separate set to use for training.
5. A crack team of data scientists who know how to take 1-4 and generate results.
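As a hypothetical sketch of how elements 2 through 4 fit together in Python: a fixed metric, a baseline score for the current method, and separate training and evaluation sets. The data, the "current method," and the candidate model below are illustrative assumptions, not details from the actual program.

```python
# Readiness sketch: agreed metric, baseline measurement, and a train/eval split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Element 4: one set for training, another held out for evaluation.
X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.25, random_state=0)

# Element 3: baseline = the current (non-ML) method, here a simple hand-written rule.
baseline_preds = (X_eval[:, 0] > 0).astype(int)
baseline_score = accuracy_score(y_eval, baseline_preds)   # Element 2: the agreed metric

# The ML candidate only "wins" if it beats the baseline on the same metric.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
model_score = accuracy_score(y_eval, model.predict(X_eval))

print(f"baseline={baseline_score:.2f}  model={model_score:.2f}")
```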
What has been the best part about working on this innovation?
Ever since the DLA team was acquired by General Dynamics Mission Systems, we've focused on growing our ranks to better support the increased workload. This has brought so many new ideas and perspectives to our work, and I am constantly inspired by the ideas my ML peers and engineering teammates have been bringing to our efforts.
Could you share any "aha" moments or major breakthroughs you experienced in your work?
As we developed the software necessary to deploy an ML-based solution to the warfighter training program, it occurred to me that with a simple tweak of the code, we could cut the execution time of the model down by an order of magnitude by performing only the calculations that were actually needed. This may seem like an obvious bit of streamlining, but it is highly unusual for that kind of optimization to be an option when working with highly parallelized ML solutions.
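The specific tweak isn't spelled out here, so the following is only a rough, hypothetical illustration of the general principle in Python: compute just the outputs a caller actually needs rather than every possible output. The toy multi-head model is an assumption for illustration, not the deployed system.

```python
# Illustrative only: a toy model with several output "heads", where the
# optimization is to evaluate only the heads whose answers are requested.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(1, 256))
heads = {f"head_{i}": rng.normal(size=(256, 16)) for i in range(10)}

def predict_all(x):
    """Naive version: evaluate every head even if only one answer is used."""
    return {name: x @ w for name, w in heads.items()}

def predict_needed(x, needed):
    """Tweaked version: evaluate only the heads whose outputs are required."""
    return {name: x @ heads[name] for name in needed}

full = predict_all(features)                     # roughly 10x the work
minimal = predict_needed(features, ["head_3"])   # only what the caller asked for
```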
What advice would you give to others looking for innovative solutions?
The most reliable innovations aren't really innovative at all. Steady, methodical progress wins almost every time. It may not sound exciting, but take things one step at a time. Establish a baseline and incrementally improve upon it, documenting your work as you go.