The first pillar of an Engineering Culture
An Engineering Culture consists of eight pillars. The first pillar is State-of-the-art Software Engineering. But what is State-of-the-art Software Engineering? Is it about implementing the latest tools or frameworks? Or is it about the way we work together? Or is it something else?
We believe State-of-the-art Software Engineering is about delivering the maximum amount of (business) value in the shortest amount of time without compromising on quality aspects. To achieve this goal, state-of-the-art software engineering should have certain characteristics, which we will outline below.
Cloud-native software development means we build software that is designed to make optimal use of the services that the cloud offers. This allows us to create software that adds value in record time. To optimize for cost, we divide software systems into microservices or ‘autonomous business capabilities’ as we like to call them. Each service is responsible for implementing a specific part of the business domain; business value determines the scope and boundaries of each individual microservice.
A very important aspect of a microservice is that it should be autonomous. It should be able to continue handling requests even when there are issues with other microservices within the application. Therefore, microservices need to be loosely coupled. Event-based messaging, circuit breakers, and retry mechanisms can all help ensure that end users can keep using the application when one of its components has a problem.
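As an illustration, a circuit breaker can be sketched in a few lines. This is a minimal, hypothetical version (the thresholds and names are made up, not taken from any particular library): after a number of consecutive failures it stops calling the troubled service and fails fast, giving the caller a chance to serve a fallback or cached response instead.

```python
import time


class CircuitOpenError(Exception):
    """Raised when the circuit is open and calls are short-circuited."""


class CircuitBreaker:
    # Hypothetical minimal circuit breaker: after `max_failures`
    # consecutive failures, calls fail fast for `reset_timeout` seconds.
    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise CircuitOpenError("downstream service unavailable")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

A caller would catch `CircuitOpenError` and degrade gracefully, for example by returning cached data, so the end user is not left waiting on a service that is known to be struggling.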
Since each microservice is completely autonomous, it can have its own development lifecycle: it can be developed, versioned, deployed, and scaled as a single unit. This makes releasing new versions much easier, since only a small part of the system is updated.
As Xpirit, we recommend using container technology such as Docker, combined with an orchestrator like Kubernetes, when building cloud-native software. Containers are lightweight, easy to distribute and deploy, and ensure consistent behavior of the software. They always behave in the same manner, so it doesn’t matter whether you run them in the cloud or in your local development environment. And because containers are easy to scale, they are an excellent choice for building highly available and scalable applications.
State-of-the-art Software Engineering has a strong focus on security. Development teams should constantly assess whether their code is secure. What can a hacker do once they have access to the system? What are possible weak points? What can we do to improve security? These are some of the questions development teams have to ask themselves on a regular basis. Microsoft has developed the Security Development Lifecycle (SDL), which provides guidance, best practices, tools, and processes for developing secure software.
Make sure to check it out if you want to help teams build and run better, more secure software.
Humans make errors. To minimize these errors, we rely on automation as much as possible.
This ranges from build pipelines that verify the correctness and quality of your code, to automated delivery pipelines that deploy your applications in a fast, predictable, and repeatable manner, eliminating error-prone manual tasks.
Automated build and delivery pipelines also act as a safeguard against introducing weaknesses or potential bugs. Running all the (unit/integration) tests as part of the build process reduces the risk of accidentally breaking existing code. Performing static and dynamic code analysis during the build and deployment phases can greatly help in identifying possible weaknesses by checking if your code complies with a set of coding rules. Detecting possible vulnerabilities or violations of coding guidelines at a very early stage gives developers the opportunity to fix them before the code is shipped to production.
The “automate everything” principle also applies to the infrastructure your application is running on. We think of infrastructure as code. The code for deploying your infrastructure should evolve together with your application code. That is why we oppose the notion of separate Dev and Ops teams; writing the code for your application and writing the code for your infrastructure are not separate concerns.
Infrastructure as code makes deployments repeatable and auditable. When everything is automated, you can be confident that you can (re)deploy your application in an emergency without having to worry that you missed a step during deployment.
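The declarative model behind infrastructure as code can be illustrated with a simplified, hypothetical `plan` function: it compares the desired state (which lives in version control, next to the application code) with the actual state and emits only the changes needed to reconcile them. Real tooling is far richer, but follows the same principle.

```python
def plan(desired, actual):
    """Return the create/update/delete actions that make `actual` match `desired`.

    Both arguments map a resource name to its configuration (a plain dict here;
    real tools use typed resource definitions).
    """
    actions = []
    for name, config in desired.items():
        if name not in actual:
            actions.append(("create", name, config))
        elif actual[name] != config:
            actions.append(("update", name, config))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions
```

Because the plan is derived from the desired state rather than from a sequence of manual steps, running it twice against an already-correct environment yields no actions at all, which is exactly what makes automated deployments repeatable.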
Once an application is deployed it is very important to know how well it is actually performing. Can it handle all the requests or does the system need to scale up? How is the performance? Are there any issues? To answer these questions, you need your system to be observable.
Observability is defined as the ability to determine the internal state of a system from its external outputs. These outputs come in the form of metrics, logs, and traces.
Correctly implemented metrics, logging, and tracing enable you to establish a baseline of your application’s “normal” behavior and to see how it is behaving right now. If the application deviates from that baseline, it can be a sign of problems. Logging and tracing can then be used to find the root cause of those problems.
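To make the three outputs concrete, here is a minimal sketch of a request handler that emits all of them, using only the Python standard library. The service name, endpoint, and in-process metric dictionaries are made up for illustration; a real service would export its metrics and traces to a monitoring backend.

```python
import logging
import time
import uuid
from collections import defaultdict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders")  # hypothetical service name

# Minimal in-process metrics; a real service would push these to a
# monitoring backend instead of keeping them in local dictionaries.
request_count = defaultdict(int)
request_latency_ms = defaultdict(list)


def handle_request(endpoint, work):
    # A trace id correlates all log lines produced for one request, and can
    # cross service boundaries when propagated in request headers.
    trace_id = uuid.uuid4().hex
    start = time.perf_counter()
    log.info("start endpoint=%s trace=%s", endpoint, trace_id)
    try:
        return work()
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        request_count[endpoint] += 1          # metric: traffic
        request_latency_ms[endpoint].append(elapsed_ms)  # metric: latency
        log.info("done endpoint=%s trace=%s ms=%.1f", endpoint, trace_id, elapsed_ms)
```

With counters and latencies recorded per endpoint, you can establish the baseline mentioned above; the trace id in every log line is what lets you follow a single problematic request through the logs afterwards.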
Check out the pillar about Appropriate Continuity for more information regarding this topic.
Let's talk Engineering Culture