Chasing Clouds: The No-Bullshit Microservices Dev Guide (Part 3)

Liran Haimovitch | Co-Founder & CTO

As deployments become more complex and require more moving parts, simulating the full microservice environment on a developer’s laptop becomes really tough. At a certain point, it just can’t be done right and provisioning a cloud-based environment becomes your dev weapon of choice.

This may be a less familiar option for devs who are new to microservices. However, it offers many important advantages and (you guessed it!) some significant drawbacks.

Developing in the cloud: the pros

You get a dev environment that is almost identical to production. That means that as you develop, your code is running almost as it does in production. Tests are more likely to reflect the truth. You can reproduce bugs reported by users way more easily. Plus, you’re much less likely to experience unpleasant surprises along the way than if you were developing in a less similar environment.

All technology building blocks are available. Going the cloud route means that all the building blocks you need for dev are already in place, since they’re the same elements you line up for production. Your cloud provider, DB, queue, and every other piece of technology you need are right there, ready to use.

And the cons

As you undoubtedly know, developing microservices in the cloud presents some real challenges as well.

Low visibility into the dev environment. Cloud environments offer much less observability than your local machine. You can’t use your sniffer, you can’t set breakpoints in your application, and you can’t change code easily; you’re generally left with logging and low-fidelity production monitoring tools. We’ll have much more to say about how to tackle this issue (dare we say, resolve it!) in the final post in this series. Be sure to check back to learn more.

Limited control of the dev environment. Devs have a much harder time controlling cloud work environments than local ones. They have to learn entirely new tools and are often restricted by limited permissions. All this makes it much harder to change the code and re-run applications again and again.

Computing costs. Setting up environments is expensive. Compute, storage, DBs, and other services all come at a cost. It is easy to run up large bills, especially if you do not adequately control the number of environments being spun up, or if you neglect to release them in an orderly fashion.

When is cloud the way to go?

  • When you need an environment that accurately reflects the production environment. This is always a good idea but is more crucial in some cases than others. Which cases exactly? Unfortunately, you usually find out only after a frustrating trial-and-error process: a microservice (or even a single element of one) hits issues in production, you debug, and the issues keep coming back. In those cases, poor replication of the environment is most likely the culprit, and moving to the cloud is the right way to go.
  • When you need significant computing resources for developing your microservices: high CPU/memory requirements, low latency to the cloud, access to many other microservices, and so on.
  • When your application relies on cloud-based infrastructure, such as DBaaS or serverless, to run.
  • When end-to-end application flow is hard to simulate on your laptop.

Provisioning the cloud environment for development

Once you have opted to develop in the cloud, the first thing to do is decide how best to provision working environments for your devs. Of course, “best” is a judgment call, based on a number of factors: budgets, your devs’ comfort level in the cloud, security considerations, and more.

Let’s look at some of the provisioning options and the issues that will drive your decisions.

Static or dynamic?

Developers who cross over from the monolith world are accustomed to having dev environments that are ready and waiting for them at all times. It is certainly possible to allocate a static environment to each dev, along with the many cloud resources they will require, such as databases, load balancers, DNS records, and TLS certificates. Third-party integrations, such as certificate and configuration providers, need to be set up as well.

Static environments, however, are increasingly rare in the microservices world, and for good cause. In dynamic microservices setups, especially for large apps, provisioning static environments as new devs join, and keeping all of those environments aligned and up to date, is a huge task.

Perhaps more significantly, the costs of static environments are high: an environment must be dedicated to each dev around the clock, whether they’re using it or not. In the cloud, the cost of those services adds up to significant – and unnecessary – spend.

For these reasons, for microservices development in the cloud, especially with Kubernetes, dynamic environments, which can be spun up as needed at the touch of a button and de-provisioned when no longer needed, simply make more sense.
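To make this concrete, here is a minimal sketch of what such an on-demand flow might look like, assuming each developer gets their own namespace and the app ships as a Helm chart; the chart path and values file below are placeholders for your own:

```python
#!/usr/bin/env python3
"""Spin up or tear down a per-developer environment in its own namespace."""
import argparse
import subprocess

def sh(*args: str) -> None:
    """Run a CLI command and fail loudly if it errors."""
    subprocess.run(args, check=True)

def up(dev: str, chart: str, values: str) -> None:
    # One namespace per developer keeps environments isolated and easy to delete.
    ns = f"dev-{dev}"
    sh("kubectl", "create", "namespace", ns)
    sh("helm", "upgrade", "--install", f"app-{dev}", chart,
       "--namespace", ns, "-f", values)

def down(dev: str) -> None:
    # Deleting the namespace removes everything the environment created in it.
    sh("kubectl", "delete", "namespace", f"dev-{dev}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("action", choices=["up", "down"])
    parser.add_argument("developer")
    parser.add_argument("--chart", default="./charts/my-app")    # placeholder chart path
    parser.add_argument("--values", default="values-dev.yaml")   # placeholder values file
    args = parser.parse_args()
    if args.action == "up":
        up(args.developer, args.chart, args.values)
    else:
        down(args.developer)
```

A dev would run the script with `up alice` to get an environment and `down alice` to release it the moment it is no longer needed.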

Provisioning the dynamic environment

Once you opt for dynamic provisioning, a new world of decisions awaits, as well as some minefields. Considering the issues in advance, before starting the provisioning process, will help make it easier and smoother. Let’s run through some of the issues as well as the considerations that drive them.

What should the relationship be between dev and production accounts?

A number of issues play a role in choosing how and where to provision the cloud dev environment. These include logistics, security, production reliability, cost and business considerations.

  • Security dictates that dev and production be separated to the greatest possible extent to protect the production environment.
  • Logistically, however, using the same environment for production and development often makes life much easier for the DevOps team.
  • Equally importantly, developing in an environment that most closely replicates the production environment is efficient and yields the best results.

Using the same cloud provider for dev and production makes logistical sense and enables economies of scale. But beyond that, what should the balance be between the competing needs for closeness and separation? Or, to put it more practically, should the same account be used for development and production?

To a large extent, the answer hinges on financial, logistical and cost-accounting considerations. In small organizations with smaller DevOps teams, managing large numbers of accounts might be more trouble than it is worth. Conversely, larger organizations might require the use of different accounts to allow for precise allocation of costs between development – an R&D expense – and production, which is a cost of sales.

Of course, the choices are not black and white. Within a single account, security separation may be achieved by utilizing different Kubernetes clusters for production and dev. Yet some businesses may opt to use the same clusters for both. If your finance department is strict about cost allocations, you will need to use separate Kubernetes clusters for production/staging and for dev so that you can accurately book costs for each.
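Whichever split you choose, it helps to build a simple guardrail into your provisioning tooling so that dev environments are never created against a production cluster by accident. Here is a minimal sketch, assuming (purely as an illustrative naming convention) that protected kube contexts are prefixed with “prod-” or “staging-”:

```python
import subprocess

# Assumption: production and staging kube contexts follow a naming convention.
# Adjust these prefixes to whatever convention your clusters actually use.
PROTECTED_PREFIXES = ("prod-", "staging-")

def current_context() -> str:
    out = subprocess.run(
        ["kubectl", "config", "current-context"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

def assert_dev_context() -> None:
    ctx = current_context()
    if ctx.startswith(PROTECTED_PREFIXES):
        raise SystemExit(f"Refusing to provision a dev environment against '{ctx}'")

if __name__ == "__main__":
    assert_dev_context()
    print("Safe to provision: current context is", current_context())
```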

Spot instances or on-demand?

Businesses with large teams can easily spin up dozens or hundreds of environments and run up significant costs, especially if devs are not careful about deprovisioning environments that are no longer in use.

Using spot instances for dev environments can cut costs by a whopping 60 to 90%. These instances, which are also known as preemptible, are offered by the major cloud providers on the basis of surplus server capacity, without availability guarantees. On-demand instances are generally used for production: Using spot instances for production is not for the faint of heart since they can disappear with only minimal advance warning.
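As an illustration, here is a hedged sketch of launching a dev node as a spot instance on AWS with boto3; the AMI, instance type, region, and tags are all placeholders, and the equivalent on Google Cloud would use preemptible VMs:

```python
import boto3  # pip install boto3

# Minimal sketch: launch a dev-environment node as a spot instance.
# Spot capacity can be reclaimed with only a short warning, which is usually
# an acceptable trade-off for dev environments but not for production.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.large",           # placeholder instance type
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "dev-environment"}],
    }],
)
print("Launched:", response["Instances"][0]["InstanceId"])
```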

Complementary elements

Assembling, setting up and orchestrating resources for dev is often more difficult than provisioning the actual computing environments and can be time-consuming and costly. While some of the elements are essential for dev, others can sometimes be added only later, in staging, without causing undue difficulty. Alternatively, some resources can be shared among devs rather than provisioned for each instance.

Resources may include:

  • Databases, either within Kubernetes or as a service, using Amazon RDS or Google Cloud SQL for a database on the fly
  • Initial datasets in the database, to provide initial functionality for the environment
  • Networking, including setting up load balancers through Kubernetes services
  • DNS records
  • TLS certificates
  • Other elements required for your specific app

Dev accounts for third-party services that will be used in production, such as identity providers, must also be connected to the dev environment, and credentials entered. An initial investment of time and effort is required to ensure that all these elements are provisioned quickly, seamlessly and securely. In some cases, it may be easier and more cost effective to utilize static elements instead.
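For example, a provisioning script might seed the freshly created database with a small initial dataset and inject a third-party dev account’s credentials as a Kubernetes secret. The sketch below assumes a Postgres-compatible database reachable via a DEV_DATABASE_URL connection string and an identity provider whose dev credentials live in environment variables; every name here is a placeholder:

```python
import os
import subprocess
import psycopg2  # pip install psycopg2-binary

# 1. Seed the new database with a minimal initial dataset so the environment
#    is functional from the first run. Schema and data are placeholders.
conn = psycopg2.connect(os.environ["DEV_DATABASE_URL"])
with conn, conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS users (id serial PRIMARY KEY, email text)")
    cur.execute("INSERT INTO users (email) VALUES (%s)", ("dev-user@example.com",))
conn.close()

# 2. Wire a third-party dev account (e.g. an identity provider) into the
#    environment by storing its credentials as a Kubernetes secret.
subprocess.run([
    "kubectl", "create", "secret", "generic", "idp-dev-credentials",
    "--namespace", os.environ.get("DEV_NAMESPACE", "dev-alice"),  # placeholder namespace
    f"--from-literal=client_id={os.environ['IDP_CLIENT_ID']}",
    f"--from-literal=client_secret={os.environ['IDP_CLIENT_SECRET']}",
], check=True)
```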

Tooling for provisioning

Congratulations! You’ve decided to set up your dev environment in the cloud and have established what that environment will include. Now it’s time to decide how much control you allow your developers when provisioning environments on their own.

The choice of which of the three common environment provisioning approaches is right for your team depends on how ops-oriented your developers are and the level of permissions they hold. Each approach requires relevant tooling for spinning up environments as needed, as well as access to production accounts.

Manual

The easiest and most basic way to provision environments is to do so manually, using kubectl, Helm, or similar tools – assuming your devs know the low-level tooling, have account-access permissions, and are familiar with building Docker images on their own. While this is often the fastest option, only ops-oriented developers who can manage these flows on their own can provision environments manually.
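In practice, the manual flow usually boils down to a handful of commands: build an image, push it, and roll it out with Helm. Here is a rough sketch, with the registry, service name, chart path, namespace, and Helm value names standing in for your own:

```python
import subprocess
import uuid

def sh(*args: str) -> None:
    subprocess.run(args, check=True)

# Placeholders: swap in your own registry, chart path, namespace, and value names.
REGISTRY = "registry.example.com/my-team"
SERVICE = "orders-service"
CHART = "./charts/orders-service"
NAMESPACE = "dev-alice"

tag = uuid.uuid4().hex[:8]                       # unique tag per build
image = f"{REGISTRY}/{SERVICE}:{tag}"

sh("docker", "build", "-t", image, ".")          # build the service image
sh("docker", "push", image)                      # push it to the registry
sh("helm", "upgrade", "--install", SERVICE, CHART,
   "--namespace", NAMESPACE,
   "--set", f"image.repository={REGISTRY}/{SERVICE}",
   "--set", f"image.tag={tag}")                  # roll it out to the dev namespace
```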

Tooling

Next-generation tooling for developing Kubernetes apps, such as Skaffold by Google and Draft by Microsoft, offers devs who have less knowledge of the underlying system a much simpler way to share and update configurations. By providing an end-to-end flow, these tools take much of the pain out of learning how to properly operate Kubernetes: devs no longer need to deal with building and rebuilding containers and pushing them to the environment.

Both tools offer two modes:

  • A CI mode, used by build automation tools to deploy to a cluster
  • A Dev mode, used by developers to immediately apply local changes to the development cluster

Devs require security permissions to use these tools, just as they do for manual provisioning.
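As a rough illustration of how the two modes differ in day-to-day use, the sketch below wraps Skaffold’s dev and run commands behind a single entry point; the profile name is a placeholder, and Draft offers a similar split:

```python
import argparse
import subprocess

# Dev mode watches local sources and rebuilds/redeploys on every change;
# CI mode performs a one-shot build-and-deploy, suitable for a build pipeline.
parser = argparse.ArgumentParser(description="Deploy to the dev cluster via Skaffold")
parser.add_argument("mode", choices=["dev", "ci"])
parser.add_argument("--profile", default="dev-cluster")  # placeholder Skaffold profile
args = parser.parse_args()

cmd = ["skaffold", "dev" if args.mode == "dev" else "run", "--profile", args.profile]
subprocess.run(cmd, check=True)
```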

CI

The easiest way to make provisioning broadly accessible is to make it available at the press of a button (or its developer equivalent: a CLI command or REST API). This easy-to-use and secure method makes provisioning development environments accessible to all developers, based on pre-defined, optimized CI pipelines created by DevOps. Less-skilled developers can get environments as needed without delving into configuration issues, while basic configuration options remain available to more sophisticated developers.

CI also addresses security concerns by having permissions reside within the CI pipeline rather than with individual devs, lowering the security and technology barriers to self-service provisioning. In addition, CI tools enable flexible, on-the-fly provisioning of other resources that would otherwise be much more complex to provision. Even exceptionally complex environments can be custom-programmed into CI tools for provisioning at the touch of a button.
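What that button press looks like in practice is usually a thin CLI that calls into the CI system, which holds the actual credentials. A hypothetical sketch, with the trigger URL, token variable, and job parameters all made up for illustration:

```python
import argparse
import os
import requests  # pip install requests

# Hypothetical endpoint for a pre-defined "provision dev environment" CI job.
# Permissions live with the CI job itself, not with the developer running this.
CI_TRIGGER_URL = "https://ci.example.com/api/jobs/provision-dev-env/trigger"

parser = argparse.ArgumentParser(description="Request a dev environment from CI")
parser.add_argument("developer")
parser.add_argument("--branch", default="main")
parser.add_argument("--size", choices=["small", "large"], default="small")
args = parser.parse_args()

resp = requests.post(
    CI_TRIGGER_URL,
    headers={"Authorization": f"Bearer {os.environ['CI_TRIGGER_TOKEN']}"},
    json={"developer": args.developer, "branch": args.branch, "size": args.size},
    timeout=30,
)
resp.raise_for_status()
print("Provisioning job queued:", resp.json().get("job_url", "<no url returned>"))
```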

Conclusion

In this post, we’ve discussed when to develop in the cloud. We’ve mentioned a number of issues to consider when establishing what you need for your cloud environment and how it will be provisioned. The next – and final – post in this series will focus on the challenges of actually developing in the cloud and how to address them. Stay tuned!
