
Empower Your CI/CD Delivery Pipeline with Logs & Best Practices

May 14th, 2021 | Uday Patel

Logging can seem like a trivial part of the whole software development process, and it is sometimes ignored or given the lowest priority. But from my experience and background, it matters enormously to understand what caused an error and the sequence of events that led to it.

Logs are often the most common source of information about an application’s behavior. They provide context as well as results, opening up far more information than a simple count metric can. Despite the immense insight they can offer, organizations regularly overlook them. Many leave their logs in files on servers, far away from the CI/CD pipeline that needs them. It doesn’t have to be this way; with some best practices, every part of your business can collect, analyze and utilize the hidden knowledge within your logs.


Add Life to Log Data With JSON

Logging in an unstructured format dramatically increases the complexity of detecting patterns in your logs. By logging your application output in JSON, you open up the possibility of analyzing all of your logs, giving you a top-down view of your entire system while still maintaining readability. This empowers your CI/CD pipeline by allowing you to query and filter your logs, so you can zero in on a problem and precisely diagnose any unwanted side effects of your latest change.
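To make this concrete, here is a minimal sketch of JSON logging in a Python service, using only the standard library (the logger name and output fields are illustrative, not a required schema):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""

    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger("checkout").info("order placed")
# -> {"timestamp": "...", "level": "INFO", "logger": "checkout", "message": "order placed"}
```

Because every line is now a self-describing object, any log shipper or query tool can filter on fields like level or logger instead of grepping free text.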

Create Actionable Alerts

If alerts are defined and routed correctly, carry context and can be interpreted easily, they will be actionable and offer far greater value. It is difficult to provide context when a single metric exceeds a threshold, but log lines are more sophisticated: they can include additional information that lets us pinpoint the source of an issue fast. This is an essential capability for a company embracing CI/CD.
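As a hypothetical sketch of a context-rich, alertable log line (the field names here are invented for illustration), Python’s standard logging lets you attach context through the extra parameter; paired with a JSON formatter like the one above, an alerting rule can match on severity plus fields such as endpoint or upstream rather than on a bare threshold:

```python
import logging

log = logging.getLogger("payments")

# Each key in `extra` becomes an attribute on the log record; a JSON
# formatter can include them, giving the alert enough context to act on.
log.error(
    "payment gateway timeout",
    extra={
        "request_id": "abc-123",        # hypothetical correlation id
        "endpoint": "/api/v1/charge",
        "upstream": "gateway-eu-1",
        "latency_ms": 5000,
    },
)
```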

Prioritize Your Logs

Structured logs often come attached with a severity, which indicates how seriously we should investigate an event. An INFO-level log signals business as usual, but an ERROR demands immediate attention. Once your logs carry these tags, you can make intelligent, automated decisions that respond to unwanted changes in your system.
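For example, here is a minimal sketch of such an automated response using Python’s standard logging: a handler that fires a callback whenever a record reaches ERROR severity (the notify_on_call hook is hypothetical):

```python
import logging

class SeverityTrigger(logging.Handler):
    """Invoke a callback for records at ERROR level or above."""

    def __init__(self, callback, level=logging.ERROR):
        super().__init__(level=level)
        self.callback = callback

    def emit(self, record):
        # Records below the handler's level never reach emit(),
        # so everything arriving here already warrants a response.
        self.callback(record)

def notify_on_call(record):
    # Hypothetical hook: page someone, open a ticket, trigger a rollback.
    print(f"ALERT [{record.levelname}] {record.getMessage()}")

logging.getLogger().addHandler(SeverityTrigger(notify_on_call))
logging.getLogger("deploy").error("error rate doubled after release")
```

The same pattern extends to CRITICAL-only pages or WARNING-level ticket creation by adjusting the handler’s level.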

Benchmark Each Version to Understand ‘Normal’

Modern applications often rely on APIs and microservices, which may in turn rely on additional services. Because microservices can be developed and deployed independently of each other, it’s easy to identify hot services and scale them independently of the whole application. It may be safe to ignore a minor slowdown in a single service, but an unfortunate combination of events can lead to disaster. Benchmarking, the act of recording what the normal behavior of an application is, enables us to see these disasters well before our users do. After each deployment from your CI/CD pipeline, compare your new behavior with your previous benchmark. If something is out of the ordinary, logs provide the information you need to act decisively; they are an outstanding baseline signal for your benchmark.
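A minimal sketch of that comparison, assuming JSON-lines logs that carry a latency_ms field as in the earlier examples (the file paths and the 20% threshold are illustrative):

```python
import json

def p95(samples):
    """Rough 95th percentile of a list of latency samples."""
    ordered = sorted(samples)
    return ordered[int(0.95 * (len(ordered) - 1))]

def latencies_from_logs(path):
    # Assumes one JSON object per line; adjust the field name to your schema.
    samples = []
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            if "latency_ms" in record:
                samples.append(record["latency_ms"])
    return samples

baseline = p95(latencies_from_logs("logs/v1.4.2.jsonl"))   # previous benchmark
candidate = p95(latencies_from_logs("logs/v1.5.0.jsonl"))  # fresh deployment

# Fail the pipeline step if the new version is more than 20% slower.
if candidate > baseline * 1.2:
    raise SystemExit(f"p95 latency regression: {baseline}ms -> {candidate}ms")
```

Run as a post-deploy step, a check like this turns “out of the ordinary” from a gut feeling into a gate your pipeline enforces.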

Analyzed Logs Can Level Up Your CI/CD Pipeline

Building a CI/CD pipeline is not the most difficult part. Neither is the deployment of new features. In fact, the greatest challenge facing any organization wanting to deliver changes via a CI/CD pipeline is observability. With a best-practice approach to the preparation, curation and analysis of application and system logs, we can overcome this challenge and confidently deliver change at a pace that propels us to the forefront of our market.

Logging Best Practices

  • First and foremost, we need to be clear about our goals. Why are we adding log statements to our code in the first place? Do we want to use them for application monitoring? Support and troubleshooting? Security? Depending on your goals, your entire approach to logging and the tools you will need may change.
  • Once the purpose of the logs is settled, it’s important to structure them so that they are understandable both to you and your team and to whichever logging tool you choose. JSON and KVP (key-value pairs) are both good choices.
  • Applications generate a massive amount of log data, and this data may come from multiple environments across many servers. To ensure that it isn’t lost and can be used effectively, it should be consolidated in a single, centralized storage location. Of course, this can be costly, which is why, in almost all situations, TRACE, DEBUG and INFO-level logs are turned off in production (see the sketch after this list).
  • Finally, we need to be aware of the limitations of our logged data. The first warning sign: the log levels we turn off in production (TRACE, INFO, DEBUG) make up close to two-thirds of all of our logs. That’s a lot of missing information.
  • Logs have been used for troubleshooting and support for ages, but considering the limited context they provide for application errors, they are often better suited to other purposes. Log aggregation and analysis tools are most helpful for security and BI purposes, or for identifying trends in user events and activities.
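To illustrate the per-environment switch mentioned above, here is a small sketch in Python (the APP_ENV variable name is an assumption; note that Python’s standard logging has no TRACE level, so DEBUG is the most verbose built-in):

```python
import logging
import os

# Verbose levels stay on in development, off in production.
LEVELS = {
    "dev": logging.DEBUG,
    "staging": logging.INFO,
    "prod": logging.WARNING,
}

env = os.environ.get("APP_ENV", "dev")
logging.basicConfig(level=LEVELS.get(env, logging.DEBUG))
```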

6 Recommended Log Management Tools

  1. Splunk
    • Powerhouse enterprise solution
    • On-premises (now with a SaaS option)
  2. Elastic
    • Logstash for logs
    • Elasticsearch for search
    • Kibana for visualization
    • Complex setup
  3. Sumo Logic
    • SaaS competitor to Splunk
    • Enterprise-worthy
  4. Loggly
    • More for developers and DevOps (less enterprise-y)
    • Parses data from app servers
  5. Papertrail
    • Straight-forward log aggregator
    • Without all the bells and whistles
  6. Graylog
    • Also for developers
    • Open source
    • Newer to the space but working hard to be enterprise-ready
    • Can handle an extensive range of data formats