The scene at the field operations control center of a large company that sells high-tech equipment troubled its COO. His company had spent millions on a new automated scheduling and dispatching system that promised to optimize the deployment of 3,000 field service engineers. The results, however, were disappointing. The company had spent more than a year implementing the software and installing the hardware for the new system, equipped all of its engineers with GPS-enabled handheld devices, and spent months training engineers and dispatchers to use these new systems. New data finally flowed into the control center, yet response times had not improved, and the number of jobs each engineer could handle in a day had not increased. Feedback from the frontline workers was mixed as well. Some field service engineers were happy that the new system reduced their administrative burdens, while others complained that it wasn’t compatible with the way they did their jobs and that even more software customization was necessary.
That kind of experience is common for leaders of service-ops organizations who manage large groups of remote or distributed employees. Many have made multimillion-dollar IT investments in areas such as automated dispatching, schedule prioritization, workflow automation, and performance management. Over the last six years, these investments have grown by 25 percent annually—two and a half times the rate of overall IT spending. Indeed, in a recent McKinsey survey, 444 IT executives said that their top priority among all IT-investment areas was to improve the efficiency of business processes by automating major workflows in call centers, back offices, and field service. Unfortunately, these initiatives are often hobbled by the problems that nag many IT projects: they take years to complete and frequently fail to deliver the promised results. When IT-enablement projects in service operations go awry, it’s often because these systems require processes and work practices different from those used in non-IT-enabled situations. These processes and work practices are best designed and implemented before companies roll out the new IT.
At this field service organization, the IT implementation had followed a typical development path. A joint team staffed by personnel from operations and IT spent months meeting with dozens of dispatchers, managers, and engineers to understand the processes and to collect everyone’s requirements. Many meetings and working sessions were devoted to reconciling them and figuring out how to incorporate everyone’s wishes in the requirements documentation.
The team had evaluated products from nearly all relevant vendors. Many of them had come to demonstrate their offerings. Hundreds of hours were spent determining whether these products could accommodate the company’s processes and meet its requirements. Once the best product was finally selected, the company had assigned some of its top IT specialists and project managers to oversee implementation. A large, skilled development team had worked for more than nine months to put the new system into operation. After this extended process, stakeholders had signed off on milestones and functionality. Finally, the company rolled out the system, and an additional two months were spent training the staff to use it. The COO found it hard to comprehend—and unacceptable—that after all this work, there was so little to show by way of quality and productivity improvements. Clearly, he thought, something had to be done.
This COO called a meeting with his leadership team and the CIO to discuss the situation. The group soon focused on the key issues:
Was the new system really enabling the organization to work more effectively, or was it simply wrapping a lot of expensive IT around the current processes?
Did the team members fully understand how to get the most out of the new technology, and were they using it properly? For example, did the new system’s optimization algorithms actually deliver a better forecast and job schedule?
Had the team really understood how to take advantage of the new data and faster access to information, or were these of little practical use?
Was the team overusing the new technology and making things that used to work perfectly well more complex than necessary?
Were the field service engineers truly attempting to work with the system, or were they circumventing it because they didn’t trust it, didn’t like it, or didn’t understand its importance?
To understand what was going wrong, the COO and the CIO set up a task force to analyze work processes and develop a better understanding of how engineers actually spent their time. The task force found that processes and boundaries between service regions, designed to make teams work smoothly and efficiently back when assignments were made manually, actually prevented the new IT from making much of a difference.
On many days, engineers could reasonably handle more than the standard morning and afternoon assignments, for example. But under the manual system, it was nearly impossible to determine under which circumstances and on which days that would be possible, so no more than two appointments were booked for each engineer each day—a practice that continued under the new system.
Boundaries between service districts, designed to improve the efficiency of manual scheduling and team management, made it difficult to assign engineers across zones, even when that was technically feasible. Tight requirements for skills, familiarity with customers, an engineer’s home location, and other criteria, all designed to ensure high service levels under the manual system, prevented the company from using the engineers’ days optimally or minimizing drive times under the automated system.
The task force also found that the new system rarely assigned less expensive service engineers to simple tasks, because schedulers and dispatchers, following the old processes, did not always have time to determine the exact skills needed for each particular repair. These practical constraints had simply been transferred to the automated system.
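One way to see why such boundary rules matter: each hard constraint removes candidate job-to-engineer pairings before any optimizer even runs. The toy sketch below (hypothetical districts, job IDs, and engineer names, not the company’s data) simply counts the feasible assignments with and without a strict district boundary.

```python
# Toy illustration (hypothetical data): how a hard district boundary
# shrinks the set of job-to-engineer assignments an optimizer can consider.

jobs = [
    {"id": "J1", "district": "North"},
    {"id": "J2", "district": "North"},
    {"id": "J3", "district": "South"},
]
engineers = [
    {"id": "E1", "district": "North"},
    {"id": "E2", "district": "South"},
]

def feasible_pairs(cross_district_allowed):
    """Return all (job, engineer) pairings the scheduler may consider."""
    return [
        (j["id"], e["id"])
        for j in jobs
        for e in engineers
        if cross_district_allowed or j["district"] == e["district"]
    ]

strict = feasible_pairs(False)   # old rule: stay inside your district
relaxed = feasible_pairs(True)   # revised rule: cross zones when feasible
# strict yields 3 candidate pairings; relaxed yields all 6
```

Skill requirements, customer familiarity, and home-location rules prune the candidate set the same way; relaxing them, as the task force did in its scenarios, widens the optimizer’s room to minimize drive times and balance workloads.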
To address these issues, the task force adopted a plan based on four key principles.
Take the new approach for a virtual test drive
A key issue for the task force was to understand how productivity was affected by the process requirements that managers, dispatchers, and engineers had provided to the IT designers and that the team had diligently compiled into a large requirements document. To find out, the task force decided to model the workings of one branch office with the aid of a computer simulation. Sophisticated software applications now make it possible to do large-scale simulations quickly and to generate scenarios for further testing in field pilots.
The task force chose a branch office deemed to be operating at full capacity, with 20 engineers. It used a few weeks of actual dispatch data covering a particularly busy period. The aim was to baseline current practice and then get a good sense of what would happen under an automated system with revised rules and requirements covering where engineers could work, when, and on what. In particular, the task force was curious to know how many service engineers and how much overtime would be required, what service levels could be achieved, and how customer assignments would change.
The simulation provided insights into how the requirements constrained the system’s ability to optimize the deployment of engineers to jobs. Next, the task force ran further simulations to evaluate the impact of automation under different scenarios: keeping some requirements unchanged, changing others, and redefining certain processes. From this range of scenarios, the task force selected a set of modified processes and requirements that, combined with automation, improved service levels by 10 percent and increased the number of jobs a day per engineer by 15 percent. As a result, fewer engineers and less overtime would be needed.
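The scenario approach can be illustrated with a toy model. The sketch below (hypothetical figures and a deliberately simple greedy dispatcher, not the company’s actual simulation software) compares one busy day in a 20-engineer branch under the old two-jobs-per-engineer booking rule against a relaxed cap, showing why such a rule alone limits throughput no matter how good the automation is.

```python
import random

random.seed(0)  # reproducible toy demand

def simulate_day(n_engineers, job_durations_hr, daily_cap, shift_hr=8.0):
    """Greedy dispatch: each job goes to the feasible engineer with the
    most remaining shift time, subject to a per-engineer daily job cap."""
    remaining = [shift_hr] * n_engineers  # hours left in each shift
    counts = [0] * n_engineers            # jobs booked per engineer
    done = 0
    for duration in job_durations_hr:
        best = None
        for i in range(n_engineers):
            if counts[i] < daily_cap and remaining[i] >= duration:
                if best is None or remaining[i] > remaining[best]:
                    best = i
        if best is not None:
            remaining[best] -= duration
            counts[best] += 1
            done += 1
    return done

# 60 service requests of 1-3 hours on a busy day, 20 engineers on shift
demand = [random.uniform(1.0, 3.0) for _ in range(60)]
capped = simulate_day(20, demand, daily_cap=2)    # old booking rule
relaxed = simulate_day(20, demand, daily_cap=99)  # cap effectively removed
# With the cap, throughput tops out at 20 engineers x 2 jobs = 40
```

Varying the cap, shift length, or demand profile here is the toy analogue of the scenario runs the task force used to decide which requirements to keep, change, or drop.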
Field-test to identify important factors for success
Next, the task force wanted to ensure that its findings could be duplicated in an actual work setting and to identify the tools and training required. It selected three branch offices, each with 15 to 20 engineers, for a field test. The task force put a premium on getting answers quickly, so it was important to minimize the time spent implementing the new IT tools while nonetheless testing the most important processes and IT requirements.
To that end, the task force devised a “light” IT plan for the three pilots. The plan minimized implementation time by leveraging simple-to-configure Web-based interfaces to existing company systems, as well as software-as-a-service (SaaS) tools from vendors. The IT implementation wasn’t built to be scalable beyond the three pilot teams; for example, it relied on simple text messages rather than dedicated handheld devices for communications. Nonetheless, it allowed the pilot teams to adopt the essentials of the new processes.
The task force realized from the outset that capturing the opportunities would require significant changes in the way engineers did their jobs and were managed. The new system, for example, would sometimes assign engineers to jobs across service districts when things were particularly busy or speed up response times by assigning backup engineers to fill in for already-occupied ones. It was important to help the field teams understand these changed working procedures and how the system was optimized to find the best solution for all customers across all three branch offices. These insights helped the pilot teams to abandon the old ways of working so that they would not allow remnants of the old processes to creep back into the test or, even worse, change the new tools to fit the old ways. The field test helped to identify the most critical success factors for the new approach.
After fine-tuning the processes and IT requirements during the pilots, all three teams exceeded the performance opportunities identified in the simulation. Given these encouraging results, company leaders quickly approved a full implementation of the revised approach (exhibit).
Build only what you need
The task force worked closely with the CIO and the IT team to determine the fastest way to fix the already-implemented IT and to adjust the plan for adopting the rest of it. The pilots were critical, since they helped identify the IT functionality with the greatest impact on productivity. As a result, the team could sharply reduce the number of bugs that needed to be fixed and cut the remaining implementation time by eliminating a lot of low-impact functionality.
It turned out, for instance, that the ability to pinpoint engineers’ locations and track the time they spent traveling between jobs wasn’t crucial. The pilots showed that engineers typically knew where their clients were and how to get there, so they didn’t need the GPS navigation and fleet telematics the vendors had recommended. Similarly, the handheld bar code scanners that allowed engineers to order spare parts remotely turned out to be a productivity drag; the pilots showed that dispatchers could enter orders more easily and quickly than field engineers could, even with the scanners. Advanced forecasting and planning modules were eliminated thanks to the pilots because these systems provided little extra value and added complexity and expense.
The IT team also saved programming time by adopting many of the algorithms the pilot teams had developed in the bootstrap implementation. The jeopardy-management process used when jobs couldn’t be scheduled automatically, for example, was taken directly from the processes used by the schedulers during the pilots. Ultimately, overall implementation time and costs were 30 percent lower than the original IT estimates suggested.
Change processes and mind-sets in parallel with IT implementation
The pilots had clearly shown that to take full advantage of the new IT systems, it would be necessary to change the way field service teams did their jobs. As the COO noted, “Changing the way you work is difficult, but it ultimately determines whether the new technology will deliver.” Using lessons from the pilots, frontline managers formalized the new processes and, together with HR, developed supporting materials for a company-wide training program. The curriculum emphasized the new system’s value for both employees and the company and covered new procedures for scheduling and dispatching cases, as well as new management processes.
One module, for example, showed that although service engineers now had less control over their schedules, the new system would improve their work–life balance by spreading the workload more evenly over the week and reducing the number of very long working days. Another training module included tips for dispatchers on how to persuade engineers whose performance was lagging to be more proactive in accepting new cases and closing out others.
While the CIO’s team worked in parallel on the IT implementation, the leadership began rolling out the new processes to the extent possible before the IT was in place. Dispatchers, for instance, would release a new job only after the previous one was complete—a precursor to full-blown IT-enabled dynamic dispatching. Six months later, when the scaled-up IT followed, field engineers were so familiar with the new practices that the automated versions were easy to accept. The IT team also helped ease the transition by seeking regular feedback from the field and using an agile development approach.
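The interim rule, releasing a new job only after the previous one is closed out, can be sketched as a tiny state machine. This is a hypothetical illustration of the dispatching flow described above (class and method names are invented), not the company’s system.

```python
from collections import deque

class Dispatcher:
    """Minimal sketch: release at most one open job per engineer, and
    release the next only after the current one is reported complete."""

    def __init__(self, jobs):
        self.backlog = deque(jobs)  # jobs not yet released
        self.active = {}            # engineer -> job in progress

    def request_next(self, engineer):
        """Called when an engineer asks for work; releases at most one job."""
        if engineer in self.active:
            return None             # previous job not closed out yet
        if not self.backlog:
            return None             # nothing left to dispatch
        job = self.backlog.popleft()
        self.active[engineer] = job
        return job

    def complete(self, engineer):
        """Engineer reports the current job done, freeing them for the next."""
        return self.active.pop(engineer, None)

d = Dispatcher(["J1", "J2", "J3"])
assert d.request_next("E1") == "J1"
assert d.request_next("E1") is None  # blocked until J1 is closed out
d.complete("E1")
assert d.request_next("E1") == "J2"
```

The point of the precursor rule is visible in the blocking behavior: because no job is released until the last one closes, the dispatcher always holds an up-to-date backlog, which is exactly the state a fully automated dynamic-dispatching engine needs.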
The subsequent company-wide implementation took half as long as the original one. Productivity (as measured by jobs completed each month) increased by 20 percent. Service levels and first-time fix rates also improved. The additional capacity allowed management to reduce overtime substantially and to bring outsourced work back in-house, which yielded tens of millions of dollars in annual cost savings. Surveys showed that customer satisfaction improved as a result of faster and more predictable service times.
More and more service operations are turning to IT to reach the next level of efficiency and quality across all service venues—distributed, mobile workforces; centralized workforces; service supply chains; and back-office workflows (see sidebar “Examples of workflows that can be automated effectively”). One telecom call center, for example, achieved results matching those described above with a similar strategy of simulation, pilot tests, and process change.
We recommend that companies take a disciplined approach (see sidebar “A checklist for service executives”) before committing themselves to significant technology investments. As we have indicated, that kind of approach requires companies to align their working practices with the strengths of automation. To do so, they will have to gain a clear understanding of the expected improvements by modeling and piloting new processes with light IT and by changing behavior with effective training that highlights the interrelationship between work practices and IT.