PC Pro

LINDSAY SCOTT

Our guest columnist is a data engineer who explains how his firm is using Microsoft’s Azure DevOps platform – and the lessons he’s learned

Guest columnist LINDSAY SCOTT: Lindsay is a data engineer who works for Irwin Mitchell, a national law firm. @lindsayscott23


“A nice description I’ve heard is that Azure DevOps is ‘Office for Developers’”

In 2019, I was first introduced to Microsoft’s Azure DevOps (ADO) platform by a senior manager who had heard good things about it and wondered if it could help our team’s database upgrade project. We were a six-person IT team trying to upgrade a core CRM system along with its associated database and dependent systems. Technical debt was high and skill levels were siloed. We needed to find a way to collaborate better, track our work and progress, and pull our code into a single source control system.

I’d done exploratory work in Azure and I knew what DevOps was as a concept, so I thought I had a head start. However, I quickly found out that ADO isn’t part of Azure and it doesn’t automatically deliver a DevOps approach. It didn’t take long to discover that ADO is Microsoft’s updated and modernised version of Team Foundation Server (TFS) and that it aims to be a centralised place where teams can plan, develop, deploy and track their work. Plus, it can be integrated with Git for source control. A nice description I’ve heard is that ADO is “Office for Developers”; Microsoft’s hope being, I guess, that the development team would log into it first thing in the morning and have access to all the basic tools and updates they need for their working day – in the same way that other teams might open Outlook, Excel and PowerPoint first to get things set up.

Trying to cover so many bases means that there’s a wealth of features and functionality in ADO. This was intimidating at first, but there’s a lot of well-written Microsoft documentation available online, which was a huge help. So, equipped with notes, coffee and a quiet Friday afternoon ahead of me, I wondered where to begin.

I decided the best approach was to make a start at the “Overview” step in the left-hand panel and gradually work my way through each of the features to try to get an understanding of what it could provide. One element that caught my imagination was the excellent wiki functionality. Content is edited using the lightweight Markdown language, and its user-friendly nature and shallow learning curve make it ideal for easily putting together a useful and professional-looking wiki to store project guides, links and glossaries.
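To give a flavour of how little markup is involved, here’s a sketch of the kind of page I mean; the page names and links are made up for illustration:

```markdown
# Project glossary

| Term  | Meaning                                |
|-------|----------------------------------------|
| ADO   | Azure DevOps                           |
| Epic  | The largest unit of work we track      |
| Story | A mid-level requirement inside an epic |

Useful pages: [Onboarding guide](/Onboarding-guide) and [Data source register](/Data-source-register)
```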

Despite all these positives, ADO never made the move from “item of academic interest” to “active project”.

New beginnings

Fast forward a few months to spring 2020 and I’ve relocated away from London, changed jobs and started working with ADO for real. At the moment, I’m working on a project whose objective is to extract and analyse a vast amount of content and data across multiple data sources and file types.

Our team consists of a project manager, an architect, two business analysts and two data engineers working across different areas covered by the project, so using ADO seemed an ideal way to coordinate everything. It helped that our company, a large Sheffield law firm called Irwin Mitchell, had just rolled out ADO, so the setup, security and infrastructure configuration had been done for us in advance.

I quickly found that the Boards section is the heart of the project: it’s where work is planned and tracked using tools and practices aligned to Scrum and Kanban.

Workload is handled from Backlogs, and the first thing we needed to do was work out what the different terms meant to us. As ADO was new to everyone in the team, we decided that tailoring it to our needs, and actually using it, was better than worrying about whether we were doing everything correctly. So we’re learning as we go and accepting any revisions and false starts as part of the process. In ADO, work items are categorised into epics, features, bugs, stories and tasks. Features and bugs didn’t fit with the project, so we decided to remove them and create a hierarchy of epics, stories and tasks, with a one-to-many relationship at each level.

One gotcha worth mentioning at this point is that it’s important to choose the right “process” that you want to use. ADO will default this to “Basic”, so if you want to adopt Scrum or Agile – and have the associated terminology – then select that when you set things up. It can be changed later, but the option is hidden away, and once you’ve found it you then need to go back and manually update any work items you’ve already created. That’s fine if you realise early enough when the backlog is small, but could be painful if the project has been in progress for weeks or months.

Due to the nature of our project, epics seemed a good fit for the largest areas of work we’re tackling (we have one epic per business area, and another for handling project improvements that come up), with stories representing a “mid-level” of requirements, and then one or many tasks below. The ideal order of events is that once the tasks are resolved, the story is finished. Once all of the stories are finished, the epic is completed, and so on until the whole project has been delivered.

In our planning sessions, we use the ADO Sprints and Backlogs sections to decide what we can accomplish in the next two weeks and then simply drag those work items across into the waiting sprint. Microsoft has clearly invested a lot of time in making the user experience smooth, and this is one of the areas where it really shows. It’s easy and intuitive to quickly create new pieces of work and relate them to others. ADO often offers many ways to achieve the same goal here, which can be confusing at first, but once I found the approach I was most comfortable with it became really efficient. A useful tip is to remember to save: ADO doesn’t autosave everything when you hit Enter, and it’s easy to fill in a user story and then lose it all by clicking out of the form before saving.

Using these work items in a Kanban board for the backlog means that the whole team can work closely together. By having the current sprint backlog as the central focus of our morning stand-up, we stay on topic, keep working towards the sprint goal and crucially know what we need to work on that day. Each sprint goal is displayed as a headline at the top of the sprint’s board, and colour coding and tags can be added to work items to help highlight which areas are most critical to achieving the sprint goal. Even though the team all currently work remotely, there’s as good a sense of gathering around the board and seeing work progress as is possible.

To help me stay orientated with my own workload, I set up a new custom query called “Unfinished Stories” that simply searches the whole backlog for user story items assigned to me that don’t have a status of “Removed” or “Closed”. This gives me a quick summary of what work I have in the backlog so far. Setting up a query involves a set of AND/OR options and dropdowns to select the fields and operators to use. There’s also neat chart functionality here that allows very quick analysis of the query results (see the screenshot below).
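For anyone who would rather script this, the same filter can be expressed in ADO’s Work Item Query Language (WIQL) and run through the REST API. The PowerShell below is a rough sketch of my “Unfinished Stories” query; the organisation, project and personal access token are placeholders, and the exact work item type and field names depend on the process template you chose.

```powershell
# Rough sketch: run a WIQL query against the Azure DevOps REST API.
# The organisation, project and PAT below are placeholders, not real values.
$org     = "https://dev.azure.com/your-org"
$project = "YourProject"
$pat     = $env:ADO_PAT   # a personal access token with work item (read) scope

# The REST API expects Basic auth with an empty username and the PAT as the password
$token   = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$headers = @{ Authorization = "Basic $token" }

# WIQL version of the "Unfinished Stories" query: user stories assigned to me
# that aren't in a Removed or Closed state
$body = @{
    query = @"
SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.WorkItemType] = 'User Story'
  AND [System.AssignedTo] = @Me
  AND [System.State] NOT IN ('Removed', 'Closed')
ORDER BY [System.ChangedDate] DESC
"@
} | ConvertTo-Json

$result = Invoke-RestMethod -Method Post `
    -Uri "$org/$project/_apis/wit/wiql?api-version=7.0" `
    -Headers $headers -ContentType "application/json" -Body $body

# The response lists the matching work item IDs and URLs
$result.workItems | Select-Object id, url
```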

Integration with Git

The final area that we’ve really explored in this project has been the ADO integration with Git source control, which came about from our early attempts to scan file share locations. These were frustrating due to the length of time it took to get metadata from the network file share into a state where it could be analysed to the degree the product owner wanted. A half-terabyte drive took over 36 hours to export to CSV, and then we needed to load that manually into a SQL database to be analysed.

We brought in our company’s technical development team, who had some ideas around how to help speed things up. They went away for a couple of weeks and returned with a shiny, bespoke PowerShell scanning tool that pipes data straight from the network file share into a SQL database in a much more efficient and speedy fashion. The solution was developed in Visual Studio, while the ADO Repos section made integrating the Git repo they had used and supplying the code back to the project really easy. As the code is as much a part of the project as everything else, new developers arriving on the team can access it and get up to speed straight away.
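I can’t reproduce the team’s actual tool here, but a minimal sketch of the general idea might look something like this, assuming the SqlServer PowerShell module and made-up server, share and table names:

```powershell
# Minimal sketch of the scanning approach; the share, server and table names
# here are hypothetical, and the real tool does considerably more than this.
Import-Module SqlServer   # provides Write-SqlTableData

$share = "\\fileserver01\projects"   # hypothetical network file share

# Stream file metadata straight from the share into SQL Server
Get-ChildItem -Path $share -Recurse -File -ErrorAction SilentlyContinue |
    Select-Object FullName, Name, Extension, Length, CreationTimeUtc, LastWriteTimeUtc |
    Write-SqlTableData -ServerInstance "SQL01" `
                       -DatabaseName "FileScan" `
                       -SchemaName "dbo" `
                       -TableName "FileMetadata" `
                       -Force   # create the table if it doesn't already exist
```

The key difference from our first attempt is that the metadata goes straight into a table, with no intermediate CSV export to manage.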

As our project isn’t focused on delivering a software product, we have no plans to make use of the Pipelines, Test Plans or Artifacts areas of ADO. These can be removed from the project configuration settings to make using ADO more relevant to the project you have. I’d recommend only having the required features available to make the experience as simple as possible for any new users. The Wiki and Boards sections by themselves would be enough to be a benefit to most projects.

Getting the most out of ADO requires a change of mindset; it’s taken some effort to break my 20-something years of automatically using Outlook, instead of ADO’s own update system, to add updates to work items. It helps that Microsoft Teams can be integrated with ADO projects once they have been set up to talk to each other. We’ve also been fortunate that our project manager works with Scrum all the time, and that the senior business analyst has used Jira extensively. Both were new to using ADO when the project began, but their experience and guidance (and patience!) helped us start using ADO in a meaningful way really quickly.

Thinking back to that uncertainty I had when I first looked at ADO, and then seeing it being used so effectively, I’d tackle things differently now. Instead of treating it as a slightly academic exercise, I could have applied it to business projects and gathered other people to be involved from the start. If I was starting again today, I think I’d make faster progress with real requirements and challenges to handle, and a team to explore and collaborate with.

“Getting the most out of Azure DevOps requires a change of mindset”

BELOW Our epics are made up of stories, which are subdivided into tasks
BELOW The handy chart tool gives you an at-a-glance overview of your backlog
ABOVE Creating, assigning, editing and moving tasks are all intuitive processes
