
When Development Meets Operation: The Evolution of DevOps

Image Credit: Natali_Mis/bigstockphoto.com

Since the early days of computing, there have always been two groups of professionals involved with application technology: software development and operations. The first group, software development, is responsible for building software that meets a range of requirements. Operations, on the other hand, is responsible for meeting the organization's IT needs and maintaining the ideal hosting environment for that software.

Although both are necessary, there are times when the development and operations worlds collide. Software developers might not consider the hosting environment or existing infrastructure when writing an application. Operations might not fully understand the software's hosting requirements or how best to scale it to support both usage and load. This can create a range of challenges, especially when the time comes to deploy and install software updates that require the two groups to work closely together. Sadly, this doesn't always go smoothly.

If only there was a way for the two to get along…

From system operations...

I started my tech career in operations as a third-shift Systems Operator, where my primary role was to run programs that either produced printed output or moved data around. I was also responsible for backing up the system and distributing printed reports. From there, I was promoted to Systems Manager, where I ensured the system was up to date and performing well during peak times and high user loads. I would monitor the performance of the system and user response times while also tuning and re-indexing the database for peak performance and efficiency. Essentially, my role was to ensure the system was always available, while also ensuring fast recovery in the event of a disaster.

This was in the 1990s, mind you, so it's important to remember 'the system' was a 'minicomputer' that took up an entire room. When we first implemented the system, a single processor was sufficient to cover operations for a staff of 60. That included sales, accounting, and warehouse processes. As the team grew, however, so did the load on the system, which made it necessary to scale. Two processors were added, then three, then four. Once we could no longer scale vertically, we clustered multiple systems together so the load could be balanced.

In addition to ensuring enough processing power, operations was responsible for keeping all software up to date. This was no small undertaking, considering it had to be done for multiple applications at the same time: monitoring software, database tuning software, database replication software, job scheduling software, and of course, the operating system. Each would need to be taken offline, usually late at night, updated, and then tested before coming back online. To cover all this, our IT department consisted of four systems operators, two systems managers, three programmers, and the Director.

Thanks to rapid digital transformation, operations has changed a lot since those days. The core responsibilities, however, are still very similar. The biggest difference is that most companies don't own servers anymore; it's all in the cloud. That said, these cloud-based servers still need much the same attention and maintenance as physical servers did back then.

...to software development

After a few years spent working in systems operations, I landed a job at a software development company as a Support Engineer. I progressed steadily through the development space, becoming a Support Manager, then a Product Manager, then a Pre-Sales Engineer, and finally moving into Professional Services (Implementations). While moving through these roles, I learned about the Software Development Life Cycle (SDLC), which goes something like this:

  • Plan
  • Design
  • Develop
  • Test
  • Deploy

Because software development requires knowledge across many areas of expertise, it's usually done by a team. These teams might be assembled in-house, as at Facebook or Google, while other companies hire firms to do the development for them. In both contexts, these teams generally include Product Managers, Project Managers, UX/UI Specialists, Designers, Developers (programmers), and QA Engineers.

When worlds collide

As IT systems have become more and more interconnected, a methodology has emerged that combines the two worlds into one, creating a whole new kind of team: DevOps. This methodology is characterized by three main themes (shared ownership, workflow automation, and rapid feedback) and acts as an extension of the deploy phase of the SDLC. The DevOps lifecycle generally looks like this:

  • Coding
  • Building
  • Testing
  • Packaging
  • Deploying
  • Configuring
  • Monitoring
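
The flow above can be sketched as a tiny pipeline runner. This is a minimal illustration of the stop-on-failure, stage-by-stage idea, not any particular CI/CD tool; the stage functions are hypothetical placeholders for real build, test, and deploy tooling.

```python
# Minimal sketch of a DevOps-style pipeline: each stage runs in order,
# and a failure at any stage halts the release. The stage functions are
# hypothetical stand-ins for real tooling at each lifecycle step.

def code():      return True   # e.g. pull the latest commit
def build():     return True   # e.g. compile or bundle the application
def test():      return True   # e.g. run the automated test suite
def package():   return True   # e.g. produce an installable artifact
def deploy():    return True   # e.g. roll the artifact out to servers
def configure(): return True   # e.g. apply environment settings
def monitor():   return True   # e.g. register health checks and alerts

PIPELINE = [code, build, test, package, deploy, configure, monitor]

def run_pipeline(stages=PIPELINE):
    """Run each stage in order; stop and report the first failure."""
    for stage in stages:
        if not stage():
            return f"failed at: {stage.__name__}"
    return "release complete"
```

In a real pipeline each function would shell out to tools like a compiler, a test runner, or a deployment system; the point is that automating the hand-offs is what removes the late-night, manual coordination described earlier.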

The majority of the SDLC is still here: planning and design take place before the coding, building, and testing phases, and everything after testing is traditional operations. Essentially, the idea of this new lifecycle is to build a cohesive team and toolset so the processes flow easily from one to the next, simplifying the whole. Instead of having two teams, you have one that works together. There are still separate positions and responsibilities within a DevOps organization: Software Developers, for example, will still focus more on the code than on the running application, while DevOps Engineers focus on packaging, deployment, and day-to-day care.

Working in this way imparts a number of benefits, including:

  • Improved application stability with end users experiencing fewer bugs
  • Improved software performance, giving end users a more responsive experience
  • Increased infrastructure reliability due to requirements being considered during development
  • Faster deployment, with automation meaning updates can be deployed quickly and with minimal effort
  • Faster problem solving and recovery, with cloud-native monitoring tools enabling fast resolution of issues
  • Offloading operational work from development resources, allowing developers to focus solely on development
  • Fewer instances of human error as packaging and deployment processes are automated

The strengths of marrying both Software Development and Operations are clear: not only does it ensure a much smoother process with increased efficiencies, stability, and a better user experience, but the cooperative element also means less tension between both the development and operations sides. Having started out when minicomputers would take up a whole room, the latter is, in my opinion, the biggest benefit of all.

Author

Stuart Smith is a Lead Product Engineer at Saritasa. Smith has been a technology professional for 30 years, with a background ranging from operations and systems management to software development. He has worked on software applications and systems spanning avionics, web applications, disaster recovery, license plate recognition, content management, and systems monitoring.
