How about them apples? UCD benchmarking as an ROI technique for IA

Short Session, presented by Andrew Boyd.

How do you show middle managers the ROI case for IA? UCD benchmarking is one way, but how do you compare apples with apples and know what you’re supposed to be measuring?

Middle managers are famed throughout the known universe for wanting the business value proposition, the dollars and sense (pun intended) for your work. How much will it cost? What will it do for me and my business unit? We’re having to prove that there is definable benefit - and, well, define it. UCD benchmarking is one way to show return on investment (ROI) through quantitative usability measures:

  • efficiency (such as task completion time - is the system faster to use as a result of the improved IA?)
  • effectiveness (such as percentage of successful task completion - is the system measurably better to use for end-to-end task completion?), and
  • satisfaction (such as the perceived ease of use - are the end users definably happier against survey results?)
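
As a minimal sketch, assuming hypothetical task records from a benchmarking round (the field names and figures below are invented, not from the session), the three measures might be computed like so:

```python
# A minimal sketch of computing the three benchmark measures from task
# records. All field names and figures here are invented for illustration.
from statistics import mean

# Each record: (seconds taken, task completed?, 1-5 ease-of-use rating)
sessions = [
    (42.0, True, 4),
    (55.5, True, 3),
    (90.0, False, 2),
    (38.2, True, 5),
]

completed = [s for s in sessions if s[1]]

# Efficiency: mean completion time across tasks that actually finished
efficiency = mean(s[0] for s in completed)

# Effectiveness: percentage of attempts ending in successful completion
effectiveness = 100 * len(completed) / len(sessions)

# Satisfaction: mean perceived ease-of-use from the post-task survey
satisfaction = mean(s[2] for s in sessions)

print(f"efficiency:    {efficiency:.1f} s per completed task")
print(f"effectiveness: {effectiveness:.0f}% task completion")
print(f"satisfaction:  {satisfaction:.1f}/5 mean rating")
```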

Benchmarking is one way, but how do you know which comparison factors and measures to use for your own project? That is, how do you compare apples with apples and know what you’re measuring? Should you try to separate the IA from the rest of the user experience? How scientific do you have to be?

This discussion draws on UCD benchmarking of IA projects spread across eight years and three different organisations. It will cover basic techniques and further sources for study.

Andrew Boyd

Andrew is based in Canberra. He has been managing and designing information systems for more than 12 years, mainly in the defence, government and health sectors.

Andrew is currently the senior UCD practitioner for a large government organisation where he advises on web information and service delivery, quality/process design, IA, IxD, and usability/accessibility evaluation - and sometimes he even gets to do some design work.

A co-convener of the Canberra IA Cocktail Hours, he is partial to good peaty single malt whisky, touring, blurring social network boundaries between the virtual and the real, travel, fine dining, shiny sharp things, blogging, blogging on blogging, and Donna.

Ten questions for Andrew Boyd

What role does the site’s scale play in the benchmarking process?

Size isn’t everything, but it sure is something :) All other things being equal, larger sites are more complex – they offer more functionality, contain more information, have more moving parts. Only analysis can confirm this though.

Something to keep in mind though – just because a given site (or application) is larger doesn’t mean it has a larger budget. Benchmarking is an overhead, and when money is tight, it can disappear – along with other so-called “optionals” like information support and quality assurance. We know that skimping is rarely rewarded, but you and I don’t write the cheques.

How does the site’s function or purpose affect the metrics being used?

Function and functionality should dictate metrics – for example, if a project is providing a bunch of services to a portal framework with no direct users, then “satisfaction” as a measure is pretty well meaningless (but effectiveness and efficiency aren’t).

Are you going to go through the qualitative usability measurement techniques?

Only so far as they apply to benchmarking – of the three ISO 9241 usability measures (effectiveness, efficiency and satisfaction), satisfaction lends itself most readily to qualitative rather than quantitative techniques. That said, this is not a talk about usability, it is about using UCD benchmarking in IA, so we’ll try to keep on track.
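
That said, satisfaction can be quantified. One widely used instrument is the System Usability Scale (SUS) – it isn’t named in the session outline, but a sketch of its standard scoring shows how survey answers become a comparable number:

```python
# A sketch of standard System Usability Scale (SUS) scoring, one common way
# to turn satisfaction survey answers into a comparable number. The sample
# responses below are made up.
def sus_score(responses):
    """responses: the ten 1-5 Likert answers, in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly ten items")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items are positively worded and contribute r - 1;
        # even-numbered items are negatively worded and contribute 5 - r.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale the 0-40 raw total to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```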

Do we have to be stats whiz-kids, or can the humble IA understand all this?

This is where the “art” of UCD benchmarking starts to separate from the “science” – on stats. Let’s take task completion as an example. Because people tend to do things in the way that suits them, and different people think differently, there is often no One True Way for a typical user to perform a given task. To be scientific about it, you would make people do the same task in the same way every time, and measure that. Then you would have a rock-solid basis for comparing one task execution with another.

But... people aren’t robots, they are people, and will do things differently from one another. We can either go nuts using complex multivariate techniques or we can record the results and variables the best way we can and go from there. Really, it comes down to isolating the atomic task step and comparing it to its counterpart – one will be faster. In the effectiveness measure, one will work more reliably than the other. In satisfaction, one will be perceived as more pleasant, easier.
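
As a rough sketch of that before-and-after comparison on the efficiency side (the timings here are invented for illustration):

```python
# A rough sketch of the before/after efficiency comparison described above.
# The timings are invented; a real benchmark would want more participants
# and, ideally, a proper significance test on top of this.
from statistics import mean, stdev

baseline = [61.2, 74.8, 58.0, 66.5, 70.1]  # seconds per task, old IA
redesign = [44.9, 52.3, 47.6, 41.0, 50.2]  # seconds per task, new IA

for label, times in (("baseline", baseline), ("redesign", redesign)):
    print(f"{label}: mean {mean(times):.1f} s, stdev {stdev(times):.1f} s")

print(f"improvement: {mean(baseline) - mean(redesign):.1f} s per task")
```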

Isn’t it just easier to go around middle management and approach it from an enterprise viewpoint?

Sure – but I’ll bet you will start with a middle manager somewhere. And somewhere, you will need to present a business value proposition for the work that you are doing – unless you are a superstar or a hypnotist, and can use “Because I Said So” as your design rationale.

Do these techniques assume good UCD practices are already in place or planned to effectively start the benchmarking off in the first place?

No. They assume that the person doing the benchmarking has some idea of how to do it – UCD is good background, as are system architecture, IA and some BA skills.

Is there a preferred time during the project to conduct the benchmarking?

Absolutely – if you are comparing a system to itself then two points are vital: (a) when the to-be-replaced system/site/application is in full use, and (b) when the replacement is in full use. For comparison against an analogous system, any time is good (so long as there is something to test and you can actually compare apples to apples).

How big/relevant a problem is it? (convincing middle managers of the ROI benefits of IA)

In government, it is the biggest problem we have – because without it, no work takes place. Senior management endorsement still needs middle management faith to fully manifest.

In a perfect world, whose job is it to conduct benchmarking?

Someone who knows what they are doing – knows the basics of quantitative usability evaluation, knows the site/system under evaluation, knows the people using it. They may be called a BA, may be an IA, may be a UxD – really, it doesn’t matter.

Are you seeing any trends in the industry towards better accountability?

That’s a dangerous one to answer :) Let me put it this way – I work in government, and there is certainly a big movement towards full accountability on large government projects.

Program Schedule is online

The program schedule is now online. Don’t worry, we have a couple of surprises planned to de-stress the packed schedule.

Student rates!

We’re offering special rates for full-time students, for the conference and for the workshops.

