Parallel Engineering Efforts

Parallel Tracks

One of the most important things an organization can do as it grows from startup-hood to maturity is learn to run parallel engineering efforts. A parallel set of projects might implement multiple takes on an overall idea, testing out different approaches. Or it might implement the same idea at multiple points on a time/quality tradeoff continuum: a low-effort, easily delivered prototype alongside a more fleshed-out version that takes longer to deliver. This essential institutional skill confers advantages by reducing risk and improving throughput.

It’s well established that when creating a new product, rapid feedback is more important than polish. You need to validate or disprove your assumptions and learn where to focus, which keeps you from wasting money, time, and effort. This philosophy is enshrined in the concept of the MVP (minimum viable product).

Startups are essentially all MVP. A startup can only do one thing at a time. If their experiment works, they survive. If not, they die. (Pivots, runways, blah blah blah…) Because of this existential threat, the technique of releasing an MVP and iterating has become the dominant method of launching a startup – to the extent that those flouting this common wisdom are considered crackpots. And rightly so, in almost every case.

But early, constant feedback is the key to every project’s success. New projects at mature companies aren’t less likely to fail, they’re just less likely to kill the company if they do. You still have to expect your assumptions to be incorrect, for your vision to need tweaking or overhauling, for your users to care about things you didn’t expect them to, and for your initial efforts at implementation to be sub-optimal. Furthermore, an established company generally has a higher bar to pass when releasing a new product than a startup does, as clients expect a greater level of quality, polish, and integration with existing products.

Luckily, a company that is no longer living on the edge can do something that startups can only dream of – it can run multiple experiments at once. By diversifying the field of experimentation, a company can improve its outcome. Different benefits can be realized, depending on what aspect of a project this diversification varies over.

Throwaway and investment projects

Sometimes you have a project where you’re pretty sure you know what you need to build, but building it right will take a long time. This calls for a throwaway project that you can build quickly, in parallel with the more rigorous solution. The throwaway project will help some users, validate your assumptions, and teach you about the space. You will duplicate some effort and write code destined for the bit bucket, but it’s totally worth it.

MongoDB used this approach to develop our BI connector. We wanted to let our customers take advantage of the many BI tools out there that visualize data stored in databases. All of the mature solutions were built to work with SQL databases, so the best thing for our customers was to build a translator. There were many possible options for implementing one of these; the top two were:

- Use a PostgreSQL foreign data wrapper -> easy but very limited
- Write a full SQL translation layer for MongoDB -> hard but highly useful
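As an aside for readers who haven’t built one: a SQL translation layer has to parse relational queries and rewrite them as MongoDB operations. Here’s a deliberately tiny sketch in Python (illustrative only, nothing like the real connector) that maps a single query shape onto an aggregation pipeline:

```python
import re

def translate(sql: str):
    """Translate `SELECT cols FROM coll [WHERE field = value]`
    into a (collection, aggregation pipeline) pair."""
    m = re.fullmatch(
        r"SELECT\s+(?P<cols>[\w*,\s]+?)\s+FROM\s+(?P<coll>\w+)"
        r"(?:\s+WHERE\s+(?P<field>\w+)\s*=\s*(?P<value>\w+))?",
        sql.strip(),
        re.IGNORECASE,
    )
    if m is None:
        raise ValueError("unsupported SQL -- this toy handles one query shape")
    pipeline = []
    if m.group("field"):
        raw = m.group("value")
        value = int(raw) if raw.isdigit() else raw  # crude type coercion
        pipeline.append({"$match": {m.group("field"): value}})
    cols = [c.strip() for c in m.group("cols").split(",")]
    if cols != ["*"]:
        pipeline.append({"$project": {c: 1 for c in cols}})
    return m.group("coll"), pipeline

print(translate("SELECT name, qty FROM orders WHERE status = shipped"))
# ('orders', [{'$match': {'status': 'shipped'}},
#             {'$project': {'name': 1, 'qty': 1}}])
```

Everything beyond this shape – joins, expressions, functions, real type coercion – is where the actual work lies, which is exactly why we couldn’t predict how long the full layer would take.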

We wanted to be able to solve some real problems as soon as we could, but we had no idea how long implementing a full SQL layer would take. So in June of 2015, we started building both. After about a month, we had satisfied ourselves that both solutions would work. We had a good POC of the Postgres solution, and an extremely rough POC of the full one.

We shipped the Postgres-based project as v1 in December of 2015 and just shipped the full SQL proxy as v2 in November of 2016. v2 already performs far better than v1 and is much easier to manage, though it still has plenty of room to improve on both counts. v1 was limited, but we were able to ship it an entire year earlier than v2, and it addressed a very real need that a subset of our customers had. It’s now retired, but with it we were able to validate our approach, initiate partner relationships, and iron out integration wrinkles.

Multiple competing MVPs

Sometimes you need to build something but you don’t know what the right approach is. You may have a few alternatives in mind but no idea which is better, for whichever definition of “better” you value most for that project. You address this by building multiple competing solutions, with the understanding that all but the winning solution will be abandoned. (In the absolute worst case, all of your efforts fail, but even then you’re left with less mystery about which factors affected the outcome, because you tested more things.)

MongoDB has used the approach of multiple competing solutions, most notably when we were working on document-level locking for MongoDB 3.0. We built a prototype into the original storage engine, mmapv1; we were looking at WiredTiger; and we were evaluating other storage engines to embed as well. The WiredTiger solution was, of course, the winner: we ended up acquiring WiredTiger, and it is now our default storage engine.
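If “document-level locking” is unfamiliar: the coarser the lock, the more unrelated writes serialize behind one another. A minimal sketch (illustrative only, far removed from an actual storage engine) of the two granularities:

```python
import threading

# In-memory stand-in for a collection of documents.
docs = {"a": {"n": 0}, "b": {"n": 0}}

# Coarse-grained: one lock guards every write, so concurrent writers
# serialize even when they touch unrelated documents.
db_lock = threading.Lock()

def update_coarse(doc_id: str, field: str, value):
    with db_lock:
        docs[doc_id][field] = value

# Document-level: one lock per document id, so writers only contend
# when they target the same document.
doc_locks: dict[str, threading.Lock] = {}

def update_fine(doc_id: str, field: str, value):
    # dict.setdefault is atomic in CPython, so two threads racing to
    # create the lock for a new doc_id still end up sharing one lock.
    lock = doc_locks.setdefault(doc_id, threading.Lock())
    with lock:
        docs[doc_id][field] = value
```

With the fine-grained variant, writes to documents `"a"` and `"b"` can proceed in parallel; with the coarse variant they queue up, which is the bottleneck the 3.0 work set out to remove.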

Multiple, coexisting (for now) solutions

Sometimes different audiences want the same type of solution, but in different ways. In that case you can build parallel projects to serve them, and you might wind up running all of them for a while. Maybe over time things will converge, maybe they won’t, but you’ll be getting feedback from these parallel projects, and you’ll be able to incorporate the learning from each of them to improve all of them.

Examples of this in action can be found in the different management systems and services we provide for the variety of environments into which MongoDB can be deployed. Regardless of whether that is fully in the cloud, fully on-prem, or some degree of hybrid, we have the same key goals: make MongoDB easy to spin up, put into production, grow, and manage.

In December of 2015, as part of MongoDB 3.2, we released MongoDB Compass, a tool focused on real-time interaction with your database. In June of 2016, we released MongoDB Atlas, our database-as-a-service for MongoDB. When released, these products had no overlapping functionality. In the last few months, however, Atlas has added some features from Compass. The first was [real time server stats][linkRealTimeServerStats], and soon we’ll be adding the first piece of a CRUD roadmap for Atlas. In addition, Atlas features tend to flow into our Cloud Manager and Ops Manager products.

This sets up a bit of a race between Atlas and Compass. That’s ok though! It creates some competition between these teams in terms of adoption, but they are actually working together, sharing resources like CSS, design, and user research. We’re not sure where this will shake out, but our focus isn’t on the success of a particular artifact of software; it’s on getting features to our users and acting on what we learn. Over time we’re likely to see more convergence, but in the meantime we can explore the space with multiple teams, and none of that effort is wasted.

Avoiding the pitfalls and harnessing the benefits

When you run parallel efforts, it’s critical to make sure the teams have a collaborative relationship, not an antagonistic one. Some competition can be good, but it can quickly turn toxic. Diversification isn’t worth losing a team to hard feelings. Furthermore, it’s better for the teams to learn from each other than it is for them to eke out a marginal productivity boost.

To start with, you need absolute and full transparency. Any level of secrecy about one of the projects is a really bad idea. Not only can it lead to teams undermining each other, it squanders one of the core benefits of having multiple efforts going on in parallel. Since you’re learning from a broader surface area, you should be maximizing that learning’s impact across all the efforts, not siloing it within each. Parallel efforts should focus on the experiment, confining the competition to the areas where the different approaches are truly distinct, and leveling the playing field everywhere else.

The document-level locking project I mentioned before is a good example of how teams working on competing solutions can collaborate. While the mmapv1 team worked on document-level locking in the existing storage engine, and the WiredTiger team worked on integrating WiredTiger with MongoDB, both collaborated on enabling document-level locking in the layers above the storage engine.

Don’t set things up so there will be a “winner”, and definitely don’t put money on the line. Bear in mind, the only way an experiment can fail is by not generating results; a failed effort is actually a successful experiment. The team that built the “losing” solution did just as much to contribute to the company’s overall success by exploring – and eliminating – some of the search space.

Relish doing it twice

The specifics of these three types of parallel engineering efforts differ, but the unifying principle is the same: sometimes you aren’t sure which solution will work, and you shouldn’t be scared of doing it twice. Writing code isn’t the hardest part of building software; the hardest part is building the right code, code you can live with for years.

[linkRealTimeServerStats]: https://docs.mongodb.com/compass/master/performance/