Rookout is now a part of the Dynatrace family!

Table of Contents
Get the latest news

The Taming Of Microservices

Itiel Shwartz | Lead Production Engineer

2 minutes


By making it easier than ever to add powerful new microservices, Kubernetes has become a driving force behind companies breaking up monolith apps and transforming them into microservice architectures. But as their numbers grow, the cost of managing microservices and the dependencies between them increases exponentially as well. This makes things more complicated than many companies have bargained for.

I recently spoke at a conference about the right way to build a cloud-native CI/CD to make managing microservices – well, manageable. I’m happy to share the highlights of my talk with you!

During the presentation, I first built and deployed a small app using Docker. Then, using a simple K8s deployment recipe, I took the app from my local computer and deployed it to our Kubernetes cluster.

Simple enough, right?
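For a sense of scale, a deployment recipe of that kind can be as small as the sketch below. This is a hedged illustration rather than the exact recipe from the talk; the app name, image, and port are placeholders:

```yaml
# deployment.yaml - minimal Kubernetes Deployment for one containerized app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2                      # run two copies of the app
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: myregistry/demo-app:1.0.0   # the image built locally with Docker
          ports:
            - containerPort: 8080
```

Applying it with `kubectl apply -f deployment.yaml` takes the app from a local Docker build to the cluster.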

Move fast and break things!

Then, I decided to break the app into two microservices. Of course, once that is done, adding functionality means changing the code in both of them. Naturally, in real-world scenarios, this might cause at least some existing behaviors to break.

Plus, in plain vanilla Kubernetes there is no clear way to indicate whether microservices depend on each other at the application level.

Now, imagine we’re talking about tens or hundreds of microservices, thus exponentially increasing the potential for faulty interactions between them. To sharpen the issue even further, keep in mind that there is no easy way to test microservices before putting them into production. There’s also no version control for inter-microservice dependencies. So basically, we’ve created a system that is sure to fail!

Helm: Your new BFF

In this situation, Helm, a package manager for K8s, is your friend – in fact, it may just be your best friend.

Helm is important for many reasons. One of the most notable is the fact that Helm serves as an “abstraction template” for Kubernetes. It uses ‘charts’ to characterize and organize the files, reports, and releases for your microservices. You can also add external dependencies and efficiently deploy multiple services at once.

An umbrella ‘chart of charts’ is a single chart that references all the microservices. It enables you to specify a version for each microservice and thus becomes a single source of truth for microservice versions. Additional supported capabilities include rolling back to previous deployments and injecting different variables.
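To make the single-source-of-truth idea concrete, here is a hedged sketch of what an umbrella chart’s Chart.yaml can look like. The chart names, versions, and repository URL are invented for the example:

```yaml
# Chart.yaml of the umbrella 'chart of charts'
apiVersion: v2
name: my-platform
version: 0.3.0
dependencies:                  # one entry per microservice chart
  - name: orders-service
    version: 1.4.2             # pinned microservice version
    repository: https://charts.example.com
  - name: billing-service
    version: 2.0.1
    repository: https://charts.example.com
```

Bumping a version here is the one place where a microservice release is declared; `helm dependency update` then pulls the pinned charts.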

Getting the CI/CD pipeline right

Using Helm to standardize how you package and deploy each microservice, based on a single umbrella chart, allows your dev team to create a unified way of building services. It empowers them to move faster (and break things quicker) while still keeping the business on safe ground.
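As a rough sketch, the per-commit pipeline that falls out of this approach tends to look like the following shell steps. The registry, chart path, and values keys are illustrative; your chart layout will differ:

```shell
# Build and publish the service image, tagged with the commit SHA
docker build -t myregistry/orders-service:${GIT_SHA} .
docker push myregistry/orders-service:${GIT_SHA}

# Deploy the whole system through the umbrella chart,
# overriding only this service's image tag
helm upgrade --install my-platform ./charts/my-platform \
  --set orders-service.image.tag=${GIT_SHA}

# If the release misbehaves, Helm can roll back to the previous revision
helm rollback my-platform
```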

To learn more about how Helm simplifies the CI/CD process, especially when used with a Jenkins shared library, check out the full presentation.

Rookout Sandbox

No registration needed

Play Now


The Mixology Playbook: Kubernetes And Serverless

Or Weis | Co-Founder

8 minutes


Technology is constantly transitioning, and in this state of continuous ‘in-between’, hybrid is king.
From manual to automated, physical to virtual, on-prem to the cloud, and now monolith to microservices and serverless.

Serverless today is a fresh and blooming ecosystem. When it comes to making the transition to serverless (or microservices), the future promises plenty of growth and progress, but also a substantial amount of uncertainty. Will everything become pure serverless with zero orchestration in just a few years’ time? Will we see a combination of serverless with microservices, and even with classic monolithic apps? When contemplating these questions, one thing is certain: hybrid systems are a key aspect of technology and they are here to stay. Which is why we’d better pay them some attention.

A look at the players

Change is imminent. There’s tension in the air, and from the cutting-edge perspective, just like in the classic western, we have the good, the bad, and the ugly.

  • The ‘good’ is pure serverless: no complex orchestration, no servers, no complicated maintenance, etc. Serverless gives you the ability to run clean code and focus on achieving goals with your software.
  • The ‘ugly’ is Kubernetes and microservices: messy and full of orchestration. There just seem to be too many things you must write, too many servers and containers you must maintain. However, it has quite a few tricks up its sleeve and advantages that can give serverless a good run for its money.
  • The ‘bad’, from this point of view, is the monolithic app. It’s been with us for years, right from the start, and it’s not going anywhere. And with its own bag of heavy-duty tricks, it plans to stick with us to the very end.

Each of these players comes with its advantages and disadvantages, and each currently has a position on the field.

Change brings challenges

Considering a move to new technology, like serverless or microservices, may create strong tensions within a company. The resistance to the new may express itself on several levels:

  • Systems: the existing applications, the current orchestration, and the existing services.
  • People: devs may resist the move since they are used to working with specific apps and services on a single server and on a certain scale. Their old projects have their own disciplines. Plus, resource allocation may also generate tensions.
  • Cultures: together, systems and people create the culture which becomes fundamental to the company’s activities. It includes, for example, specific methods of managing and maintaining the code, the deployment process, and code review. All these are also subject to friction and change when moving to new technology.

It’s very tempting to fall into the utopian vision of a pure world. To think that it’s possible, for instance, to only run your project in serverless or only in microservices. This mindset might work for the short term, but it ignores the existing reality and its complexities. The tensions and frictions mentioned earlier will meet you in the next round and quickly pile up into technical and cultural debt. “Suddenly,” you’ll find yourself juggling both the maintenance of the old systems and the integration of the new elements that are still being baked in.

Starting a new project: things to consider

There are many parameters that can affect the long-term success of your new project in this hybrid world. The following is a list of common key examples:

Language: This is one of the first things we need to choose when starting a new project. Will it be Go or Python? Would we rather use the language we’ve previously used, or should we choose one that seems to be the best match for the environment? The latter option is often considered best practice. In reality, however, it leaves the rest of the environment behind and ignores all the other elements and frameworks, the handling of web requests, connecting to the file system, etc.

Framework: The problem with existing solutions is that they are different for each environment. Each framework is often completely new, and we will need to figure out how to adopt and implement it in practice. For example, if you worked with Flask or Django in Python in your monolith or microservice, when moving to serverless you’ll have to go with something like Zappa on top of Flask, or Chalice.

Building and deploying: In serverless, we don’t need to pass through the complexities of the CI/CD pipeline since we can simply upload the Zip to Lambda. However, we should consider where this leaves us in relation to the existing code deployment flows, as we run the risk of losing our controls and policies for deployments.
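That “simply upload the Zip” step really is about one line with the AWS CLI; the function name and handler file below are placeholders:

```shell
zip function.zip handler.py
aws lambda update-function-code \
  --function-name my-greeting-fn \
  --zip-file fileb://function.zip
```

Which is exactly the danger: none of the gates a CI/CD pipeline normally enforces run here unless you wire them in yourself.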

Design patterns: With monoliths, developers are often used to working with monolith design patterns, for example, “Shared Process Memory”. It’s very convenient since all the work is in the same process, accessing the same memory. This changes with the move to serverless: everything is on the network and you have to use solutions like SQS or Kafka to synchronize your code components.
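The shift is easy to see in miniature. In the sketch below, two components that would have shared process memory in a monolith communicate only through a queue; Python’s stdlib `queue.Queue` stands in for SQS or Kafka so the example is self-contained and runnable:

```python
import queue
import threading

# Stand-in for an SQS queue or Kafka topic: the only channel
# between the two components.
events = queue.Queue()

processed = []  # results collected by the consumer

def producer():
    # In a monolith this might just append to a shared in-memory list.
    for order_id in range(3):
        events.put({"order_id": order_id, "status": "created"})
    events.put(None)  # sentinel: no more messages

def consumer():
    while True:
        msg = events.get()
        if msg is None:
            break
        processed.append(msg["order_id"])

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()

print(processed)  # every message crossed a queue boundary, not shared memory
```

Swapping `queue.Queue` for a real broker changes the transport, not the shape of the code; that is the design shift serverless forces on day one.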

Monitoring: While there are classic monitoring products, such as Datadog, serverless offers up a whole new world with tools like Epsagon, IOpipe, Honeycomb, etc. This plethora of new solutions brings the challenge of picking the right offering for your needs, and more importantly, balancing the offering with the existing tools.

Debugging: The classic monolith allows you to attach a debugger and simply pause and observe. This isn’t an option in serverless since there is no server to run and nothing to attach to. You can try SAM Local and, ironically enough, set up a server for debugging serverless.
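For reference, that ironic local server is what the AWS SAM CLI provides; a hedged invocation (the function name, event file, and port are illustrative) looks like:

```shell
# Run the Lambda locally in a container and pause it until a
# debugger attaches on port 5858
sam local invoke MyFunction --event event.json --debug-port 5858
```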

What can developers do?

So, how can we overcome these challenges? How can we, as devs, DevOps, and SREs create the best configurations for hybrid systems and build the most efficient solutions to face these inherent challenges?

Embrace the hybrid future. Accept the fact that sometime in the not-so-distant future, your project will be a part of a hybrid environment and consider how you can develop it now, so when the time comes, it integrates and grows better.

Explore and innovate. Yes, there’s a variety of options and tools out there, but we still don’t have all the answers. Organizations and devs need to allocate time to experiment and gradually outline the best practices. The more we deal with these questions, the sooner we’ll find answers as a community. By allocating a small amount of time now, you’ll be saving your future self bucket loads of time and pain.

Use these 3 key concepts to create and evaluate good hybrid solutions:

  1. Connect: In a good hybrid solution, the new connects to the old, with interfaces and shared data.
  2. Maintain: In a good hybrid solution, the new is easily maintained alongside the old, allowing developers to understand it with ease as it connects to the whole system.
  3. Consolidate: In a good hybrid solution, the new consolidates with the old. It aspires to become a single system, not a detached side project that people fail to notice and adopt.

What can the industry do?

As an industry, we need to consider standardization and how we provide developers with a unified way to work with different elements. New vendor solutions should uphold 3 key parameters:

  1. Augment: Solutions should aim to improve and add value to existing elements, as opposed to replacing them altogether or wedging themselves into specific sections of existing processes. Consolidation is becoming a frequent occurrence, and as vendors, we need to think about promoting the process. We should aim at providing value to the hybrid world and accelerating its evolution.
  2. Interconnect: The solution needs to be able to connect to other solutions while sharing information and usage patterns. The APIs we build should have a clear and consistent interface. They should also work in a similar way across platforms, saving devs the trouble of having to learn everything from scratch. We should also consider investing in data pipelines to allow for the effortless transfer of data between solutions.
  3. Interchange: The ability to be interchangeable is the hardest point for vendors to accept. An inability to replace a vendor breeds resentment in the long term. On the other hand, enabling different tools to connect to your solution is best for the ecosystem as a whole.

Hybrid solutions done right: the takeaway

Create symbiosis: Instead of building a closed system, consider forming a symbiosis between the various parts of the solution. If you’re writing a function in serverless, ask yourself: will the existing microservices benefit from it when you’re done? Will it make their work easier?

Leave room to mutate and evolve: When things only work separately and cannot connect to other elements, they eventually get stuck and stop generating value. But if you allow room for flexibility, a place to insert or change the code and connect to other platforms, everybody wins.

Provide transparency and observability: A solution with code that cannot be observed and understood is bound to eventually fail and lose its users and maintainers. Aspiring for your solution to be accessible and understandable by others is key to wider adoption and allowing it to stand the test of time.

Support Interchangeability: As mentioned earlier, a good solution should be interchangeable with others.

Make it feel seamless: Developers who work with the solution should feel it integrates seamlessly with the other solutions used by their company. They should need no special effort to learn how to use it.

These are values we built into Rookout’s foundation from day one. Aspiring to be as complementary as we can to our ecosystem, our goal is to connect to as many platforms and solutions as possible. We want to create a better world where developers can work with the various tools in a smooth and seamless way. And as you can see in the demo below, with Rookout, it doesn’t matter which cloud you are using or whether you’re running Rookout on a monolith, a microservice, or a serverless function. It all works the same: by simply clicking a button.



Should Dev Happiness Become A DevOps KPI?

Alex Greensphun

6 minutes


The term “employee happiness” is thrown around quite a bit these days. It started out as a buzzword and became a business standard several years ago when Google promoted one of their first software engineers to the role of “Jolly Good Fellow”. Since happy workers are more productive and less likely to quit, they are the key to an organization’s success and longevity. This is why businesses work hard to increase the well-being of their employees.

But what happens when those employees are developers? With devs in short supply, the ability to make them happy is money in the bank. An opinion gaining traction in recent years claims that DevOps should be the ones responsible for dev happiness. So how does one cater to a developer’s well-being? And is DevOps truly responsible for driving dev happiness?

We tried to answer that question by making a list of dev-pleasers that every business aiming for growth and success should consider. Let’s see how many of the points mentioned here can be assigned to DevOps.

Give them all the right tools

It may sound obvious, but ensuring your developers have access to the right tools for accomplishing their tasks is an important part of creating a smooth operational flow. Devs enjoy having good infrastructure and systems that run well with no surprises. One of the most discouraging scenarios for a developer is being stuck for hours and days on end on an issue that could have been avoided if they could only use a better tool. DevOps is responsible for preventing bottlenecks, for ongoing tool availability, and for adding and removing tools. Since developers rely heavily on these resources, their delivery dramatically improves with the right stack.

Optimize workspaces and culture

Sure, perks, unlimited coffee, and snacks are all great. But let’s face it: they aren’t gonna cut it for making your office into a dynamo of dev-happiness. The perfect dev workspace, both virtual and physical, is one where the hum of productivity inspires without being distracting. It’s where developers can put their heads together without treading on each other’s toes.

Moreover, great workspaces stand out for their culture. A positive work culture should encourage independence balanced with structuring each task as an integral part of a whole. DevOps and heads of R&D should strive to set clear goals for their devs, but leave them the freedom of deciding how to achieve these goals.

Build connections

Coding can be lonely and atomizing. Build a supportive home base for devs by encouraging meaningful connections between team members and between their tasks. Make sure that each dev knows what role their assignment plays in the project at large and why it’s essential. One of the main roles of DevOps is to improve communication within and between cross-functional teams in the company. Better communication creates stronger connections and encourages devs to support each other, share expertise and insights, and give feedback.

Promote personal growth

With technologies changing at the speed of light (serverless, anyone?) each position represents a chance for a developer to build his or her personal toolkit, or have it grow stale. Give developers plenty of opportunities to grow in their jobs, and they’ll be less likely to seek growth elsewhere.

DevOps are often the company’s trailblazers. Be it breaking down a monolith into microservices, or moving to the cloud, they are the ones leading the way when it comes to changes. The implementation of such changes should be planned carefully from an architectural perspective. However, the human factor should always be taken into consideration as well. Use organizational changes to boost professional growth. Encourage devs to acquire new skills and provide the right training and tools to help them do that.

Provide technical challenges

Good developers are a curious bunch; the modern-day successors to the kids who disassembled clocks in garages to see what made them tick. Sending technical challenges their way will get you valuable solutions while keeping devs interested, engaged and excited. Keep in mind there’s a difference between challenging a newbie and an experienced developer. This is why you should assign tasks accordingly. Of course, this depends a lot on your organizational flows and isn’t always the DevOps responsibility.

Balance boredom with automation

As we’ve established above, challenge and personal growth are both important for making developers happy. While that holds true, a bit of work that allows for wandering minds can stimulate creativity and yield innovative approaches to stubborn problems. So keep just enough routine tasks in the mix to enable productive space-outs.

And although a bit of boredom may be beneficial, too much time wasted on repetitive tasks is simply frustrating, annoying, and a waste of valuable talent. Automate irritating tasks and set your devs free to do the quality work they’re itching to do. Finding the right balance between routine assignments and automation is certainly a DevOps responsibility.

Bypass avoidable constraints

As some of the most practical, logical, and resourceful people around, developers have little tolerance for constraints that seem small-minded and rigid. Policies that run counter to the dev ethos and put unnecessary stumbling blocks in their paths are seen as constant annoyances. These can be open-source bans, refusal to acquire essential software, or having non-devs decide which technologies devs get to use. Do your best to ensure your developers have what they need. Whenever constraints are real and cannot be avoided, DevOps should step up to explain the situation and help find acceptable alternatives.

Let’s wrap things up

Contrary to the myth of the developer geek, good tools alone won’t bring much joy to your devs. Cultivating their happiness requires teamwork, measurable goals, and rich interactions. It should also involve interesting challenges and powerful feedback loops. Management, HR, and Product must all be attuned to the professional, psychological, and social factors that keep developers excited, engaged, committed, and curious. DevOps certainly plays an important role here as well. At its core, it emphasizes people over technology, but should dev happiness become a DevOps KPI?

I’ve heard contradicting opinions about this from different heads of R&D. Some claim the answer is a resounding yes, since DevOps engineers are in charge of most of the contributing factors. After all, happier devs are more productive, which is crucial to the success of any company. Others aren’t so sure, however. They prefer to provide their engineers with the right tools and leave the softer side to other people within the organization. At the end of the day, it’s really up to your business culture to decide who will lead this important effort. Just make sure to keep it in mind, and maybe save this post as a checklist. 😉



Pet Projects And New Year’s Resolutions

Oded Keret | VP of Product

4 minutes


The holiday season is the perfect time for working on my pet project. Everyone is off with their families. Email is quiet, GitHub is silent, Jira is calm. Even the customers are taking some time off, which means I can use this opportunity to invest in what really matters.

Of course, I could spend some quality time with mom and dad, trying to dodge the usual pesky questions. I could log back into World of Warcraft and finish my daily quests. Theoretically, I could even go outside. But let’s not go crazy here. So, what else should I be doing right now?

Chasing unicorns, and other pets

Last year I made a new year’s resolution to spend more time on my pet project. Write some beautiful code that will make the world a better place. Something that will challenge me and help me master a shiny new technology at the same time. Something that will spark some electricity through the creativity and problem-solving circuits of the dark side of my brain. And hey, if it also ends up becoming the next Unicorn, you won’t hear me complaining.

But like all new year resolutions, this one was impossible to keep. As it happens, I spend all of my days (and most nights) coding anyway and so, when I finally have some time off, I’d rather be doing anything else.

Now is different though. Now I’m on holiday. There should be no distractions. Nothing should stop me from quickly writing an app to help people connect over video. Or an app to automatically track and map locations of beautiful Christmas light setups. Or an app to warn me about obstacles when I’m riding my electric scooter.

And so it begins

My mind is set. I pull out my laptop, start up my favorite IDE (my true safe place), put on my noise-canceling headset, and start coding. A weekend project done over the holiday; I should have a working POC in no time! Right? When they make a movie about it, I hope they get someone handsome to play me.

An hour into coding, reality kicks back in. There comes a dawning realization that some of the stuff that makes my day job feel like, well, a job, also applies to my pet project. With barely more than a “hello world” mockup up and running, I’m already facing some annoying bugs. I add some logs to try and debug them. Nothing beats good, old-fashioned `printf()` debugging, right? But I end up getting horror flashbacks from my day job.

The code is buggy and full of errors

What’s gonna happen when I build a CI/CD flow around this app, to make sure my revolutionary MMORPG has daily updates to keep it fresh? How long will I have to wait when I want to add a log line then?

What will happen when I deploy my app to Lambda because serverless computing is a perfect use case for my new blockchain-powered e-greeting card service? How will I be able to debug it? How the he$% should I be expected to reproduce issues that only reproduce in production because my revolutionary ‘it’s-like-Shazam-for-ice cream’ app has trouble telling gelato apart from cats?

Conquering fears without leaving the comfort zone

Naturally, my pet project goes through a quick pivot. I decide to spend some quality time setting up Rookout and getting familiar with its capabilities. I learn how to add log lines with a click of a button, no longer waiting for a CI/CD flow just to add some missing logs. Next, I discover how to set breakpoints and get full visibility into my code, wherever it may be running (even in serverless frameworks).

And then, before I even realize it, the impossible happens: I get used to debugging in production without fear. Turns out that with Rookout, “could not reproduce” is a thing of the past. And I get to do all of that in the comfort of my IDE! My safe place, the place where I want to be debugging from.

Moving on: rewards and fresh resolutions

I do all of that in a matter of minutes. Which leaves plenty of time to get back to my pet project. After all, brushing up on new shiny tech is what the pet project is all about. My app isn’t up and running, but I did find a new helpful tool, and that’s something. Maybe I can reward myself by playing some more WoW. Or maybe I’ll even step out of my room and say hi to mom and dad. After all, I’m now brave enough to debug in production, so surely I can take a few pesky questions during the holiday season.

Before shutting down my laptop, I make a new year’s resolution to spend less time debugging and adding log lines, and more time writing beautiful code. And next year I’ll write an app that helps me keep my new year’s resolution! Or an app that removes distractions. Or an app that thinks up beautiful pet project ideas. But whatever I write, I know Rookout will be there to help.



Is DevOps Leaving Developers Behind?

Or Weis | Co-Founder

13 minutes


There’s no doubt — DevOps and the sheer scale of the software it enables have truly revolutionized the world of software development. Evolving from humble single-server beginnings two decades ago, it’s finally reached the point where we can build elastic software at scale, thanks to cloud and orchestration layers such as IaaS, containers, k8s (Kubernetes), and serverless (FaaS).

New developments create new challenges. As developers try to keep pace with the ever-increasing complexity, speed, and scale of their software, they are realizing that they are the new bottleneck, still stuck working with the same development and debug tools from a simpler age.

A flood of new challenges

The most beloved characteristics of modern software, and those which make it so powerful — scale, speed, extreme modularity, and high distribution — are also the qualities that make developing and maintaining it so difficult. These new challenges can be roughly divided into three categories or levels: complexity, connectivity, and development.

These categories are closely related: often, challenges in one category are solved by translating the problem to the next level up. This ‘challenge flow’ isn’t a straightforward vertical waterfall but is rather best envisioned as a cascading spiral. Just as complexity and connectivity spur development, development challenges lead to greater degrees of complexity, and so forth.

Complexity challenges

Complexity challenges are a direct result of software solutions tackling more complex problems, making them the most obvious set of challenges. They include:

  • Developing distributed modular systems (aka microservices), which requires engineers to take time to plan how to break down, deploy, and connect software.
  • Developing scalable software. Engineers must consider what scale the solution will need to support. (Whatever happened to “this server supports up to X users”?)
  • Handling data complexity in transit and at rest (usually coupled with “big data”). Modern software requires engineers to plan for extreme cases of throughput (high traffic, workloads), fast processing (real-time), storage, and search. What used to be a unique skill-set just a few years ago is now expected of every fullstack engineer (“You know Hadoop/Redis/Mongo/Kafka… right?”), raising the bar of most software projects. Saving your data as a .csv file or even a local SQL database is rarely an option anymore.

The skillset developers require to address these challenges is constantly growing, along with the time and energy that they need just to get started — to understand the problem space and assess how the available tools can be optimally applied.

Often when complexity issues become too great to manage, the problem escalates to become a next-level connectivity challenge, with complexity encapsulated in a new solution or method. Adding encapsulated software solutions such as databases, memcache, message-queues, and ML engines in effect “delegates” the problem to be solved by the way we weave and orchestrate the overall solution. Once a specific pattern of complexity-to-connectivity escalation starts to be repeated frequently, it usually translates into a new standard development method or solution.

Connectivity challenges

Connectivity challenges result from the way modern software is woven together. There are at least three parallel connectivity chains:

  • Interconnectivity – connectivity within the software solution, such as the connections between microservices or modules.
  • External connectivity – connectivity with other software solutions, including 3rd-party servers, SDKs, and SaaS.
  • Meta-connectivity – the configuration and orchestration layer used to build, deploy, and manipulate the software solution.

In even the simplest of layouts, each connectivity layer is comprised of dozens of elements, as well as multiple connections between the layers. Just keeping track of all the connections and data flows is a huge — even Sisyphean — management and architectural effort.

Now, consider that connections constantly change due to changing needs or as part of the architecture itself, as for load-balancing. Systems quickly reach a point where developers must invest many hours — even weeks in some cases — to understand what connects to what in the software solution they are working on. The days of “That box is connected to that box, and that’s the wire connecting them” are long gone.

We’ve already listed plenty of ‘frosting’ on the connectivity ‘cake’, but let’s not forget the cherry on top: security and compliance. Engineers are expected not only to understand all aspects of connectivity but to design, build, monitor and maintain them so that the overall solution is secure and meets all required standards. This is mind-bogglingly difficult when even the smallest defect in the smallest element can bring the entire house down. It’s like saying to the engineers, “You know that huge haystack you’re trying to pile up? Make sure all those pieces of hay connect just right, and while you’re at it, check for needles, too.”

As time progresses (both within projects and in general) and connectivity layouts grow beyond the ability of the human mind to keep track of, developers turn to the meta-connectivity layer and try to automate and orchestrate connectivity (e.g. Puppet/Chef/Ansible, K8s, Istio, Helm, …). “You like code, right? So, here’s more code to orchestrate your code while you are writing code”.

When this modus operandi, often described as ‘configuration as code’, approaches maturity, it transfers to the next level of challenges – development – and elevates the resolution of each challenge to a software solution in its own right.

Development challenges

Development challenges are the classic pains involved in producing functional software within a framework. These emerge from the growing gap between the power of modern software and developers’ ability (or lack thereof) to keep pace with it. Moreover, since all the challenges from the previous levels gradually propagate to this one, the challenges here are at the bleeding-edge of the dev/devops experience. These include:

  • Dev environment challenges require developers to assemble, learn, and manage all tools and workflows chosen for the software solution being developed
  • The sheer number of tools and methodologies is overwhelming and still rising, even after the huge growth of the past decade. To name just a few: IDEs, source control, compilers, transpilers, DBA tools, cloud consoles, development servers, debuggers, orchestrators, monitoring agents, task/ticket management, and alerting solutions.
  • Context switching makes matters worse. This ungodly pile of tools doesn’t hit developers just once, but continues to bombard them. Developers must always be ready to deal with a different situation. (“It was raining Git commits yesterday but today I’ve been waiting for hours for it to start snowing containers… Oh wait, it’s actually going to be a typhoon of tickets with a chance of tracing.”)
    Today, for developers, “getting into the zone” is no longer a matter of efficiency; it’s essential for basic productivity. And that’s before we complicate the stack with AI, quantum-computing, biocomputing, and other miracles. (“If you think multithreaded programming is hard, I have some bad news for you…”)

Replicating bugs

Often the first step toward understanding a bug is observing it. The most basic approach to observation is a replication of the bug in a controlled (usually local) environment. Sadly, with modern software, replication has become a herculean feat. The many challenges and factors mentioned above interact to create cases so complex and interconnected that any attempt to simulate them is doomed to fail, or break the budget trying. (“We think the issue was a result of rain in Spain, and a butterfly beating its wings… Alright, so let’s start by building a life-size model of Spain, and I’ll check Amazon for bulk butterfly shipments.”)

As in vitro bug replication becomes less of an option, in vivo observability becomes a must. Unfortunately, developers often discover that their ancient toolbox is ill-equipped for the task. The old tools simply don’t work in modern software environments; the reason usually comes down to loss of access.

  • You can’t access the ephemeral – Multiple layers of encapsulation, the result of translating complexity challenges into connectivity, have made access an issue, as layers like containers, Kubernetes, and serverless take ownership of the software environment. The issue is clearest for serverless / FaaS: Not only do developers lack access to the server, but as far as they are concerned there is no server. Even if by some trick or hack one gains access to an underlying server through all the layers, it’s practically impossible to pinpoint a specific server/app since these fluctuate, controlled by orchestration layers, which constantly remove and add servers on the fly.
  • Bringing an SSH knife to a K8S gunfight – With access lost, existing tools have become obsolete.
  • SSH, which provided an all-powerful direct command-line console, has become unusable in most cases and is, at best, a huge stability risk.
  • Traditional debuggers (in contrast to Rapid Production Debuggers) have nothing to attach to. More significantly, the idea of breaking (setting a breakpoint) on a server in production is unacceptably risky, since it is likely to literally BREAK production or, at best, knock microservices and functions out of sync, removing any chance of replicating or debugging substantial issues in a distributed system.
  • IDEs used to represent a developer’s full view of the system, from code through compiling to building and running. They enabled developers to sync their views and approaches to the software they shared. IDEs were never very close to live systems, but the gap between them has grown so great that they can no longer create shared views. While integration with modern source control (e.g. Git) and CI/CD solutions helps, the gap is still large.

The CI/CD bottleneck

With access lost and few alternatives, if any, developers have turned to deploying code as the main way to achieve access. Any need to interact with software translates to writing more code, merging it, testing it, approving and of course, finally deploying it. This approach is leveraged for adding/removing features, getting observability (monitoring and investigating), and applying fixes and patches. This overwhelming dependency on code deployment creates a bottleneck as all these deployments fight for space in the pipeline and the attention of those who maintain it.

As the flow of challenges continues spiraling onwards, it gathers momentum. Today’s torrent is but a taste of the challenges the future holds. Once these challenges of modern software development are brought into focus, a clear picture emerges of a huge gap between the power of modern software and developers’ ability to keep pace with it. We see this observability gap most often in development challenges but in the other challenge levels as well. Bridging that gap is a key capability we require from modern dev and devops tools.

A bridge over troubled waters

Currently, only a handful of solutions are available to developers who face these challenges. Most developer tools have remained stagnant for years: an engineer from 20 years ago would be completely baffled by modern servers in the cloud but would recognize a current IDE or debugger in seconds. That said, the modern developer toolbox does have some new solutions. These include solutions designed to enable greater focus, solutions that attempt to do more with the limited available data, and solutions that try to bridge the observability gap by improving on the deployment cycle.

“Like a glove” – Tailored observability views

With many modern software challenges starting to repeat and consolidate into known forms and formats (such as Docker, K8S, Serverless functions) a new breed of tools has emerged that identifies these patterns and leverages their knowability to tailor specific solutions.

Within this category are next-generation APM solutions such as DataDog, which provide views built specifically for containers and Kubernetes on top of their existing APM offerings. In fact, you’ll find that most APMs have adjusted to provide capabilities for the microservices world, although not always as first-class citizens.

In an even more modern tailored approach, we find solutions doubling down on structured data, and specifically tracing, such as Honeycomb.io (alongside open source projects such as Zipkin and Jaeger). These target the specific pain of reconstructing the behavior of a distributed system, much as a detective reconstructs a crime scene, a pain that arises from the very nature of microservice architectures.

For Serverless, we find solutions like Epsagon and Lumigo, which are tailored specifically to the FaaS use case and target specific pain points such as discovery, management, and pricing. These issues, of course, were present before, but became more acute with Serverless.

“Deus Ex Machina” – Advanced analysis

With software grown big, big data is not only a challenge but also a means of tackling the problem. Multiple solutions harness the strengths of machine learning to attain observability, and some use it as a primary approach. Examples include solutions like Coralogix and Anodot.

“Developers of the world, unite!” – Advanced workflows

Harnessing machines to solve machine problems is a start, but it isn’t a silver bullet.
As a result, many solutions focus on building a better developer workflow on top of the views and alerts provided by the automated part of the solution. Sentry.io is one great example: with the recent release of Sentry 9, they improved the cooperative workflow on top of their existing exception management platform.

“Getting a taste of the real stuff” – Canary deployments

Most solutions reviewed here so far, and most solutions in the devops space in general, work by deriving value from data that exists in the system, or that can be collected in advance by focusing on a specific pattern (such as tracing).
While a significant step forward, these still leave us highly dependent on the CI/CD channel: to iterate, we must deliver new data from production by updating the log lines or SDK calls that feed the systems upstream.
One way to reduce pressure on the pipeline is to use canary deployments, a variant of blue-green deployments. They enable developers to expose new code (in this case, the necessary observability changes) to a limited percentage of production traffic without affecting all of it, and they allow faster rollbacks.
Recipes for canary deployments can be found for most leading CI/CD tools such as GitLab and CodeFresh.
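As a rough illustration, a canary rollout in plain Kubernetes can be approximated with two Deployments behind a single Service, splitting traffic by replica count. The names, images, and 90/10 ratio below are hypothetical; this is a sketch, not a production recipe:

```yaml
# Stable track: 9 replicas serve roughly 90% of traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 9
  selector:
    matchLabels: {app: myapp, track: stable}
  template:
    metadata:
      labels: {app: myapp, track: stable}
    spec:
      containers:
        - name: myapp
          image: myapp:1.0.0
---
# Canary track: 1 replica serves roughly 10% of traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: myapp, track: canary}
  template:
    metadata:
      labels: {app: myapp, track: canary}
    spec:
      containers:
        - name: myapp
          image: myapp:1.1.0-canary
---
# The Service selects on `app` only, so it load-balances across both tracks.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```

Rolling back is then as simple as deleting the canary Deployment, which is exactly the fast-rollback property described above.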
Yet a huge gap still remains: What if the needed data isn’t within the limited canary percentage?
While canary deployments reduce the friction and risk of observing production, they still leave most of it on the table, making it an expensive and risky effort that most can’t afford.

“Bridging the gap with agility” – Rapid data collection

Despite the many solutions above, a gap remains between the observability data developers need from live systems and the pace at which they can iterate to collect and deliver that data to their various solutions (including some listed here). A new cohort of solutions attempts to bridge this gap by completely decoupling data collection from the CI/CD pipeline and thus eliminating the friction, risks, and bottleneck that create the gap.
These rapid solutions collect and drive data on demand without prior planning, creating evanescent bridges as needed instead of trying to anticipate specific bridges for predictable cases. This ephemeral access approach aligns with the ephemeral nature of modern software. Rookout’s Rapid Debugging solution is one example: it leverages instrumentation and opcode manipulation to enable developers to collect data from live environments and pipeline it to whichever platform they need.

A glimpse into the future

We’ve come a long way. Developers from the past would definitely envy the amazing accomplishments that enable modern software architectures. But when it comes to the tools we provide developers, we are not there yet. Devops has launched the world of software forward at tremendous speeds, but the complexity, connectivity, and development challenges left in its wake are preventing developers from keeping pace with their own software.

Now is the time to take another leap forward by embracing the new solutions quickly and actively working to bridge the observability gap. By more effectively connecting tools to one another, creating better workflows on top of them, and finding true agility in data collection and observability, we can bridge the gap and more. In the not-too-distant future, we can reach a point where the pace of software evolution is matched by software observability, creating a feedback loop that endlessly increases the speed of creation. A future where no one is left behind.

*Originally posted on Hackernoon.

Rookout Sandbox

No registration needed

Play Now


How To Take Advantage Of The Holiday Code Freeze?

Polly Alluf

4 minutes


The weather outside is not yet frightful, but it’s getting cold in the codebase! As winter descends and temperatures plunge, many software companies implement a Holiday code freeze over the holiday period.

The theory makes sense. New code means new and enhanced features, but also the risk of serious new bugs. The period from Halloween to January is a peak time for commerce sites, with urgent gift shopping in early to mid-December, followed by January sales. Online retailers make 40% or more of their annual profit in these 6-7 weeks. A code freeze means no risky new code is pushed to production at a time when a bug could cost millions of dollars in lost revenue.

An upcoming code freeze means a major deadline for developers. Everyone wants to get their features into the CI/CD pipeline before the freeze hits. Afterward, developers can sigh in relief… but what do they do next?

Bypass the chilling effects of developing during a code freeze

A code freeze isn’t a development freeze. Developers will probably spend the freeze period writing the next set of new features in dev environments. But the code freeze can still have a chilling effect on developers; without access to production, it can be harder to get the information needed to create new features. It’s often necessary to understand how a particular piece of code works in production in order to work out how to elegantly extend it, or to decide where in the code logic to integrate a feature. Normally, this is done by writing diagnostic code like logging lines and then deploying it to production. This is already a frustrating and long process, but during a code freeze, it’s not an option.
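To make the pain concrete, here is a minimal sketch of that diagnostic-code workflow in Python. The function and field names are made up for illustration; the point is that the log line exists only to answer a question about production behavior:

```python
import logging

logger = logging.getLogger("checkout")

def apply_discount(order, coupon):
    # Diagnostic line added purely to observe production values;
    # shipping it normally requires a full merge-build-deploy cycle.
    logger.info("apply_discount: order_total=%s coupon=%s",
                order["total"], coupon)
    if coupon == "HOLIDAY10":
        return round(order["total"] * 0.9, 2)
    return order["total"]
```

During a freeze, even a one-line change like this is blocked from reaching production, which is precisely the gap that deployment-free data collection fills.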

This is where modern tools can come to the rescue. Rookout’s on-the-fly information collection works everywhere, including in production, and does away with logging lines altogether. Rookout allows developers to track a single variable value at any point in the code, or get access to full stack trace data. By decoupling visibility from deployment, developers can get the information they need directly from the production environment in a read-only fashion, without any impact on performance. No more waiting until February to check what an object is doing.

Use production as a live laboratory

The mission-critical code might have passed review and testing, and maybe the site’s working just fine. But it could still be faster, or better. The holiday code freeze is a great time to see heavy loads in action and to plan future optimizations by watching how the live site performs under real and intense conditions. APM solutions like AppDynamics or observability platforms like Honeycomb.io can do more than simple ops monitoring; you can use them to show how to improve your code for the future.

By adding Rookout into the mix you can take things to the next level by getting visibility into the code itself without affecting its execution or performance. Rookout lets developers see how their code is actually working in production without any need to simulate it locally or to mess with the production setup directly.

With these observability tools, software engineers can create a live laboratory where developers can use the experience of observing their production environment to help them fix “low-priority” live bugs and other performance issues in the future.

When things go wrong: Deciding to unfreeze

Despite code review, testing, staging, and code freeze, sometimes things still go wrong. Maybe it’s a weird edge-case bug nobody noticed; maybe it’s some small infrastructure failure that causes backlogs elsewhere. Additionally, key development staff may take a vacation for the winter holidays, making bugs harder to find and fix.

When issues occur, DevOps teams need to ask themselves “How serious is this problem? Is it bad enough to need a fix right now?” Rookout can help with this, too. By speeding up diagnosis, Rookout makes it quicker to pinpoint the source of the issue (is it a logic bug? A failure of an external service?) and gives software companies the information they need to assess the severity of a production issue. This cuts back on unnecessary downtime and means that companies don’t need to use diagnostic builds during the code freeze.

The code freeze period doesn’t have to leave developers out in the cold. With modern tools and creativity, it can still be a valuable time for software development and load monitoring. And if something does go wrong, those same tools can help pinpoint the problem.

Enjoy your winter!


How I Met Your Debugger

Oded Keret | VP of Product

3 minutes


Kids, today I’m going to tell you how I met your debugger.

You see, back in the summer of 2010 I was fresh out of Uni and trying to prove myself at my first job – a small startup in the world of retail. My first major assignment was developing an Explorer add-in to a web client with a Java backend. So when I had to debug it, my process would be as follows: set a breakpoint at Eclipse for the backend server, and a breakpoint at Visual Studio for the Explorer add-in. Then, run both apps, trigger the UI, break, see the frame, step, hit the backend, break, step and repeat.

And that kids... is how I met your debugger

First loves and monoliths

With time I grew accustomed to this multiple IDE debugging. I didn’t even mind having to switch keyboard shortcuts whenever I jumped from debugging my client to debugging my server. I guess you could say that was my first true debugging experience. And you never forget your first.

Years later I found myself working in a large enterprise company, adding features and mostly, well, debugging legacy features in a huge .NET based monolith, with some Java Add-ins. One of the biggest challenges there was figuring out where even to begin investigating a new problem. Where do I place my first breakpoint? Who the h#$% built this code? What was he thinking?

But with time I learned that my first love was here to help. I figured that if I set a breakpoint as early as possible and patiently stepped into the code (switching from VS to Eclipse and back again), I would eventually “Grok” the application. Indeed, with enough time, patience, and a powerful debugging tool, I finally got the hang of what was going on and learned that there is no better way for me to get into someone else’s code, or break down a monolith, than simply stepping in and out through the code.

Flying too close to the cloud

But this love wasn’t meant to last. Eventually, we decided to get rid of the monolith and soar up to the cloud. We were going to break the backend into microservices, deploy in AWS, do everything by the book. But then I realized that just like Silicon Valley’s Richard Hendricks, I just couldn’t do cloud.

How do you debug a process that doesn’t run on your machine? Set a breakpoint in a piece of code that mustn’t stop? Attach your IDE to a service that hasn’t spawned yet? I was completely lost and watched with admiration as my friends threw log lines into the air, being able to tell from a million nearly identical lines just what the problem was and how to fix it. Much like Roger Murtaugh, I figured out that I was getting too old for this s#$%. I thought I’d never debug again, and was willing to leave my mechanical keyboard and geeky T-shirts behind to complete the transition into product management.

More years have passed. Eclipse and Visual Studio were replaced by Word and Powerpoint, and apart from the occasional scripting demo or HTML-based documentation, I didn’t write much code. I was even close to deleting my World of Warcraft account. I thought it was all over for me.

The one with the yellow umbrella

And then I met Rookout. And it was just like debugging for the first time. I felt at home, in the comfort of an IDE. It allowed me to look at my code, set a breakpoint, and let the debugging flow. With Rookout, I could debug a local application or one that runs in a docker container, or in my staging or production environment; I could even debug several parts of the application written using different frameworks, all in the same debugging session. Set a breakpoint, trigger the app, see the debug data, Grok your application from inside. Just as I always did.

So that’s my story. That’s how I met your debugger. If you’re experiencing a similar heartbreak, you don’t have to transition to product management just yet. Your next debugging solution may be just under your nose, holding a yellow umbrella.

Happy debugging! 🙂


Go Loco for Local? The No-Bullshit Microservices Dev Guide (part 2)

Liran Haimovitch | Co-Founder & CTO

7 minutes


In the first part of this series, we discussed how developing with microservices differs from monolith development, and how those differences impact the choice of dev environment. In this second post, we’ll talk about when it’s best to leverage the local microservices environment and the tools that can help.

The Pluses of Microservices Local Development

For most developers, the natural impulse is to begin tackling microservices dev on your trusty machine. This is pretty much the tack that just about everyone takes when they start. Just like the good ole days, right?

But while this approach feels familiar and safe, it is worth pausing to consider when it is best to work locally, and which tools can help.

Like almost everything else, developing microservices on your computer has pros and cons. On the plus side, there is no need for internet connectivity, you keep your familiar tooling, and the technologies and resources you need are literally right at your fingertips. Let’s dive a bit deeper into each of these pluses:

Internet connectivity seems like a red herring, right? After all, when was the last time you were anywhere for any length of time with no internet connection?

In actuality, however, this is a non-trivial issue: For a development environment, any old internet just isn’t enough. Development requires connections that are very fast and very low-latency.  Add in essential elements such as the ability to authenticate into your network and log into your VPN, and suddenly, sufficient internet connectivity isn’t trivial at all. As a result, a dev environment that does not need connectivity seems — equally suddenly — quite appealing.

The ability to use existing dev tooling can mean huge savings of time as well as money. If development is done on a familiar platform, there’s no need to invest in new tools and technologies. The sniffers, process monitors, and debuggers that you usually use are most likely just fine. And the time and effort required to teach old devs new tricks and convince them to adopt them can instead be invested in development itself.

Computing, storage, databases and all else you need for development are available right on the machine. There’s no need to spend scarce budgets and even scarcer time on spinning up instances in the cloud for essential resources with hefty price tags attached.

…And the Downsides

Of course, there are significant downsides to developing microservices on local platforms as well. These include being unable to use cloud-based infrastructure such as DBaaS; testing on a different environment than the one in which you will actually run the microservices; and maintaining multiple configurations. Let’s dig into each of these a bit:

Running microservices on a laptop is very different from running the app in the cloud. Obvious differences, which impact all dev-to-production transitions, include HW configuration; OS type, version and/or configuration; and network configuration. But the impact is way greater for apps that rely heavily on the service mesh, k8s networking, and other technologies to coordinate communication and interaction between microservices since these infrastructure layers can only be approximated in the local environment.

Local development environments utilize different technology stacks than those used in the cloud, making them highly susceptible to configuration divergence. Keeping the local configuration updated and in sync is a difficult and annoying task and one that is often neglected. This can have a significant negative impact on local development productivity.

If your application relies on a cloud-based infrastructure such as DBaaS (RDS/DynamoDB/CloudSQL), queues, or SaaS/PaaS products, you won’t be able to fully run your application when developing locally. Replacing cloud-based infrastructure with local alternatives further exacerbates both issues mentioned above. You may also choose to connect to those services remotely, although that creates a new set of provisioning and authentication challenges. We’ll discuss this further in the next post in this series — be sure to check back.

When is Local the Way to Go?

Developing microservices locally on your laptop is a tricky business, as indicated by the pros and cons listed above. So, let’s get to the nitty-gritty of when the pluses of moving back to local outweigh the minuses, and when it is better to stick with the cloud.

Local microservices development is the route you want to take when:

  • You are developing only a few microservices and their interactions are easy to define and maintain
  • Inputs and outputs for your services are easy to simulate locally
  • Being able to develop offline is a significant requirement
  • Executing your code does not depend on cloud infrastructure
  • You are trying to cut back on cloud computing costs for dev environments
  • You’ve moved to the cloud, but are having a hard time figuring out why your code (mis)behaves as it does, and need to observe it more carefully than you can do in the cloud.

The Local Development Toolbox

Now that we’ve touched on when it is reasonable to develop locally, and some of the drawbacks to be aware of, let’s review the tools that can help you with local development and briefly discuss how to apply them to microservices dev.

Processes:

The easiest way to run microservices locally — and with the lowest overhead and least setup time — is simply as processes run directly from your IDE/shell. But be aware: this is likely to require a fair amount of manual intervention and can be error-prone.

Docker Compose:

As a way to meet the challenge of container orchestration, Docker Compose is not necessarily a winning horse to bet on these days. But it’s unquestionably a useful utility for running a few microservice containers locally. Its big advantage is that Docker Compose configuration is much simpler, with a far gentler learning curve than the more complex Kubernetes YAML files.

The upside here is clear: Developers on your teams will find it much easier. The downside, however, is equally clear: You’ll need to create and maintain local Docker Compose configurations in addition to Kubernetes configurations, and more advanced features may be hard to replicate.
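As a sketch of how small the local setup can be, a two-service Docker Compose file might look like the following. The service names, images, and ports are hypothetical:

```yaml
version: "3.8"
services:
  api:
    image: example/api:dev
    ports:
      - "8080:8080"
    environment:
      # On the Compose network, the service name doubles as a DNS name.
      - ORDERS_URL=http://orders:9090
    depends_on:
      - orders
  orders:
    image: example/orders:dev
    ports:
      - "9090:9090"
```

A single `docker-compose up` brings both services up with DNS-based discovery, in a fraction of the YAML an equivalent Kubernetes setup needs, which is exactly the trade-off described above.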

Minikube:

This official CNCF tool makes it easy to spin up a Kubernetes instance on your machine. In fact, the latest version of Docker for desktops also comes with Kubernetes built-in. It would be reasonable to think that it allows you to quickly spin up your application on Kubernetes on your laptop, right? No such luck.

Surprisingly enough (and disappointingly), more often than not, your Kubernetes configurations will probably not work out of the box in Minikube and will require some minor tweaking. For instance, ingress controllers may be missing or your laptop might not have enough horsepower to run a full-fledged development environment.

Even worse, Kubernetes is hard to learn and is quite daunting for many developers, so it may be difficult to get team members on board.

Hotel:

Hotel is a local process manager for running microservices that supports all types of OSes. The simple-to-use utility allows your team to easily define and share local run configurations and see logs for each process.

One major advantage of Hotel is that it can run both containers and processes. Processes are less expensive to run than containers, so when you’re running locally and performance is limited, the ability to run both can be valuable. Hotel also includes a few useful extras, such as management of simple use cases for HTTPS, DNS, and proxying.

While defining local configurations this way is much easier, it still takes time, effort, and attention to create and update parallel configurations for Hotel and Kubernetes.

Summary

In this post, we’ve discussed the pros and cons of developing locally, when local is the preferred way to go, and some tools that help make local microservices dev quicker, easier, and more robust.

Check back for the next installment in this series. We’ll cover the pros and cons of cloud dev and offer some guidance as to when it’s the right approach. And finally, in the fourth and final chapter, we’ll discuss tools that can help when you work in the cloud.


Debugging Castles In The Clouds

Zohar Einy | Solution Manager

3 minutes


When developing software, debugging is essential. But users have named debugging functions as one of the most common challenges associated with serverless architectures.

Recently, several solutions have been introduced that claim to make debugging serverless functions easier (or just plain possible). While the solutions take a variety of different approaches, deciding whether you want to debug locally or in production is the first step when selecting a solution that is right for you. To help you make a more educated choice, let’s take a look at just what local and production debugging each entail in the world of serverless.

Local debugging

In order to debug serverless functions locally, the entire cloud provider environment must be emulated so that the application flow can be accurately reproduced. To accomplish this, the various local debugging solutions emulate serverless environments for specific cloud providers, frameworks, and languages.

These solutions have definite pluses: They are easy to set up and, of course, do not interfere with production. Breakpoints enable developers to fully control the application flow, and functions can be invoked on demand via custom events.

On the other hand, local solutions cannot possibly really represent the full production environment. Inevitably, events, databases, and other elements will be missing from the simulation. Support for languages, frameworks and cloud providers is still limited, and with each tool only supporting a very narrow technology stack, tool inflation will be the inevitable result.

Most importantly, due to these and other limitations, it is almost impossible to trace production issues locally. So while some issues may be discovered and resolved, others may be stubbornly resistant to local debugging.

Production debugging

The production-first, event-driven and data-driven nature of serverless environments makes production debugging a natural fit since it enables more accurate reproduction (or essentially, production) of issues than local emulators can accomplish. Minor differences in event content, format, or order can make big differences, so reproducing events faithfully results in more efficient and successful testing and debugging.

Similarly, having the right data inputs is essential for reproducing bugs, but migrating data from production to development can be time-consuming and resource-intensive, and complicated by security and compliance considerations.

As a result, production debugging enables rapid debugging, based on “in-the-wild” conditions, without adding overhead to production. The solutions are more likely to work with a wide range of languages, frameworks, and cloud providers than local solutions, enabling a smaller technology stack. Perhaps most significantly, production debugging allows requests to be traced across multiple microservices and serverless functions for more accurate debugging.

Production debugging solutions have a number of disadvantages as well. As cutting-edge technologies, their learning curve can be steep. New code must be deployed for setup. And while some solutions provide fully descriptive dump frames, similar to those provided by local debuggers, they cannot be used to set breakpoints.

Choosing the serverless debugging solution that is right for you

There’s lots more to know before deciding what kind of serverless debugging solution will best meet your needs. To get the full scoop on which type of solution is right for you, download our comprehensive Serverless Debugging Guide.

Rookout Sandbox

No registration needed

Play Now


5 Surprising Lessons About Marketing To Developers

Polly Alluf

4 minutes


I know software engineers. Like, really know them. I’m the sister of one, the spouse of another and the proud daughter of a third. I’m also a veteran B2B marketer with almost a decade of experience marketing to IT and security folks and another five years in MarTech. Does all this qualify me for my most recent gig, marketing to developers? Well, yes and no.

When shifting from one marketing space to another, I always apply the rule of ‘some things to remember and some things to forget.’ In this post, I’ll share some hard-earned lessons about marketing to developers. If this type of challenging, yet satisfying role is in your future, read what follows as a self-help guide. If it describes your present position or one in your past, consider it as light entertainment.

Long and to the point

At the risk of stating the obvious, my first lesson is to forget the fluff. This is especially important if you are shifting from MarTech, as I did. Developers don’t like to tiptoe around subjects. They prefer to hit matters straight on. They want to know just what you can do for them and understand how the whole thing was built and why it will work.

Here’s the interesting — and surprising — part: You don’t necessarily need to be short and to the point. Developers appreciate long-form posts that top 1500 words, as long as they help resolve challenging issues and save them time in the long run. Sure, their time is precious, but if you incorporate usable scripts, many will stick with the post and read all the fine print!

Free love!

Developers expect to get everything for free. Definitely t-shirts and SWAG but also trials and content. As marketing people from other areas, we’re used to driving lead gen with gated content. For the most part, developers just won’t tolerate that shtick. That’s not to say that you can never ask them to register for an e-book. But you should think twice about whether that’s the best strategy for converting leads. Consider sharing content for free and driving developers to register on other occasions, such as for webinars or trials, for which requiring registration is a given.  

And speaking of trials… In the age of open source, developers have learned to expect not only free trials but a free-forever freemium version of your app to play with. I tend to say “yes” to free trials, but it’s important to be careful. Make sure your product is completely ready for self-serve onboarding before offering free trials to potential users. A less-than-ideal experience can backfire, costing you potential customers and harming your reputation.

Caring is sharing

When social media was still in diapers, my colleagues and I used to laugh about crazy CMOs who would turn to their horrified marketing managers and say, “Make me a viral video!!” Even in today’s super-social world, when sharing is almost as intuitive as breathing, it’s anything but easy to gain traction and high visibility.

When my gig, Rookout, announced its support for AWS Lambda production debugging, which no other player in the market apparently offers, the non-comical demo video clocked over 1000 views in under 24 hours!  

Here’s the secret: Developers care about real stuff and have an unprecedented culture of sharing. So when they care, they share! That’s an advantage marketers working for this audience must leverage to the greatest extent that they can.

Engage!

Developers constantly engage with each other: at meetups, on Slack channels, in Facebook groups, and on myriad other platforms. In fact, they’re more out and about than any other audience I’ve ever worked with. If you can participate in their conversations by speaking, writing, commenting, and presenting, that’s an important win for you and your team.

I’m fortunate to work with developers who are happy to assist with content production, meetup presentations and more. Having a background as a developer is a huge advantage, but rare for a marketer. So find people in and out of the company to help you, and once you’re ready, consider recruiting for dev community manager or dev community engineer roles.

Embrace it!

Two hours into a developers’ conference in NY, I concluded that our creative animated video wasn’t attracting attention at shows. The good news is that our product demo video attracted lots of attention and, as a bonus, was valuable for quick semi-live demos.

This attraction to ‘less fun, more serious content’ by people passing our booth was a new experience for me, and I embraced it, much as I welcomed compliments on our home-made stickers. Developers do love stickers, and not just ones with tech logos. Go the extra mile and create fun stickers – that’s where creativity is rewarded, more than in videos.

When I crossed over from MarTech to DevTech, I left behind the pleasure of creating beautiful, fluffy marketing stuff. But I also left an oversaturated landscape where every term has already been used zillions of times, and every marketing trick has been beaten to death. DevTech isn’t blue sky: In fact, it’s quite crowded already. But there’s still room to innovate and do work that’s refreshing, with a customer base that rewards your creative approach. So go forth and conquer!

This article was originally published on TheNextWeb.


Heisenbug 101: Guide to Resolving Heisenbugs

Or Weis | Co-Founder

4 minutes


Welcome, fellow developer. I can see you’ve traveled a long road – why don’t you stay a while and listen? I’ve got some fantastic stories to share; lessons to imbue your debugging skills with power and wisdom, adding at least 1,000 XP to take you to the next level and make your future travels much safer.

Hmm, now, where should we start? Have you already faced the terrifying Heisenbugs? They are truly fantastic.

Heisenbug definition: A software bug that disappears or alters its behavior when one attempts to probe or isolate it.

What is a Heisenbug?

A Heisenbug is a bug that seems to disappear or change when you try to debug it. Heisenbugs are the dread of every experienced developer, who knows that this type of bug will undoubtedly be hard to study, understand, and resolve. Developers also know that a Heisenbug’s ethereal nature reduces the chances that a fix, once achieved, will cover all cases.

Heisenbugs are named after the famous physicist Werner Heisenberg, in homage to his uncertainty principle.

Types of Heisenbugs

There are two subtypes of Heisenbugs: those whose behavior is actually affected by the debugging tools, and those that merely seem to be affected but are in fact just manifesting randomly or statistically.

“I’ve spent every Tuesday for 2 months trying to track down a bug that only shows up in production on Tuesdays.” – Flufcake on Reddit

Both types of Heisenbug are as painful as a root canal, but one of them – the kind that disappears only when debugging is used – is definitely more evil.

This type of Heisenbug comes with a built-in sense of despair, as you discover that most (if not all) tools at your disposal for approaching the problem are worthless. That despair is often the reward you get for spending a long time trying to figure out why you’re not seeing anything when you attach a debugger or profiler, or redeploy with extra diagnostic code.

Disappearing Heisenbugs – catch them if you can!

The Increase in Heisenbugs

To make matters worse, Heisenbugs, which used to be rare nightmares, are becoming more and more common these days. With the rise of distributed systems (e.g., Kubernetes, serverless, microservices), software is growing in scale, complexity, and asynchronicity. These are the perfect spawning grounds for the vicious Heisenbugs. As a SaaS company from the DevOps ecosystem, we tend to hear about Heisenbugs more often from customers who are migrating to microservices.

Credit: cloud.google.com/kubernetes-engine/kubernetes-comic

The bigger the playing field and the more interconnected moving parts it has, the more room Heisenbugs have to come into existence. That’s just pure statistics. With added complexity and encapsulation, there are more layers that can conceal simple bugs and “upgrade” them into vicious Heisenbugs.

Most prominent is the effect of asynchronicity. Heisenbugs are often close cousins of another type of bug, the race condition (to be covered in a separate post). With more and more components connecting asynchronously, the number of possible software states increases dramatically – essentially, it is the Cartesian product of the states of all concurrent software elements. As a result, specific states or configurations become rare and fleeting, directly leading to more Heisenbugs springing into being.
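To make that concrete, here’s a minimal sketch in Python (the names and numbers are my own, not from any real incident) of the classic lost-update race – a common seed for Heisenbugs, because anything that changes thread timing, like an attached debugger or a freshly added print(), can make the lost updates vanish:

```python
import threading

N = 100_000
counter = 0
lock = threading.Lock()

def unsafe_increment():
    # Read-modify-write with no synchronization: another thread can
    # interleave between the read and the write-back, losing updates.
    global counter
    for _ in range(N):
        tmp = counter
        counter = tmp + 1

def safe_increment():
    # Holding the lock makes the read-modify-write atomic.
    global counter
    for _ in range(N):
        with lock:
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

racy = run(unsafe_increment)   # often less than 200_000; slow the loop
                               # down (debugger, print) and the shortfall
                               # tends to shrink or disappear entirely
fixed = run(safe_increment)    # always exactly 200_000
```

The racy total depends entirely on how the scheduler happens to interleave the threads – which is exactly why probing it changes it.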

How to avoid and resolve Heisenbugs

1. Use the right tools to debug Heisenbugs

The heavier or clumsier your observability tools are, the more likely they are to affect the running code, causing bugs to disappear or change while you’re debugging. Be extremely cautious of tools that pause, freeze, or slow down execution (including classical debuggers); tools that allocate a lot of memory or have high CPU overhead; and tools that change the execution or networking layout (including proxies and service-mesh solutions).

2. Understand your software

We’ve covered where Heisenbugs prosper; to avoid them, you must know your software. Know its most complex, encapsulated, asynchronous, and distributed parts – you’ll have an easier time finding Heisenbugs there.

3. Use the right work- and debug-flows to avoid creating Heisenbugs

If your debugging flows include restarting, redeploying, or significantly changing server layouts, it shouldn’t surprise you when Heisenbugs pop up, and you can then expect a world of hurt. Have protocols and observability solutions in place in advance, so they can be put into play without all the ruckus. This way, when these ugly beasts rear their heads, you can chop them right off.

4. Know your code and know how to do static analysis

Debugging is roughly split into two parts. Executing the code in order to observe it (dynamic analysis), and reading through the code to find patterns in it (static analysis). Heisenbugs can evade detection only in execution; if you don’t run them, they can’t run from you.
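As an illustration of what static analysis can catch (the helper names here are hypothetical), consider a check-then-act pattern. A careful read flags it as racy without ever executing it, because the window between the check and the act is visible right there in the code:

```python
import threading

cache = {}
cache_lock = threading.Lock()

def get_racy(key, compute):
    # Check-then-act: between the membership check and the lookup,
    # another thread may mutate the cache (e.g., an eviction thread
    # deleting the key), or two threads may compute the value twice.
    if key in cache:           # check
        return cache[key]      # act
    value = compute(key)
    cache[key] = value
    return value

def get_safe(key, compute):
    # Holding the lock across the check AND the act closes the window.
    with cache_lock:
        if key not in cache:
            cache[key] = compute(key)
        return cache[key]
```

Single-threaded, both behave identically – the difference only ever shows up in execution, under the right interleaving, which is precisely why reading the code is the more reliable way to find it.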

5. Never trust a Heisenbug

In most cases, it’s hard to know for sure if you’ve solved a Heisenbug. Even if they whisper in your ear “It’s ok now.  It’s gone… shhh… it’s gone.” It’s better to stay on the safer side of suspicion and have your debugging kit at the ready.

Happy hunting and Happy travels!
