Hashing Content (Confab Part 2)


By Joe Baz

Usability testing is finding its niche. Designers use it to inform the interface of a website. Information Architects use it to validate a paper prototype. And now, Content Strategists are using it to understand the effectiveness of content.  But is usability testing really suited to measuring content? In a word, yes; a usability test can easily be designed to focus on how quickly the user can find and comprehend the content.

At Confab 2011, Ahava Leibtag (Aha Media Group) and Aaron Watkins (Johns Hopkins Medicine) demonstrated how content can be assessed and improved through usability testing. Participants came away with a solid set of tools for measuring effective content.

How Can We “Test” Content?

Ahava and Aaron’s presentation described their experience in running a series of usability tests specifically focused on measuring content. Based on the Creating Valuable Content checklist, the duo began by posing several questions – each with the intention of measuring value:

  • Can users find the content they need?
  • Can they read the content?
  • Can they understand the content?
  • Will they act on the content?
  • Will they share the content?

If the answers to these questions are affirmative, the content is successful.  But how do you find, and measure, the answers?

The first three questions are well suited to a usability test, but it is the third that provides the most information: finding out whether users understand the content is the main purpose and value of testing it. Through observation during testing, a moderator learns whether users understand what they have read or seen. And, much like a reading comprehension exam in school, users are then asked to restate what they consumed. If between 60% and 80% of users can comprehend the content, the content is considered understandable; if more than 80% can, the content is excellent!
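
To make those thresholds concrete, here is a minimal sketch in Python of how a round of comprehension results might be tallied; the function name and sample data are made up for illustration:

  # A minimal sketch: score one round of comprehension results against the
  # thresholds described above (function name and data are illustrative).
  def score_comprehension(results):
      """results: one boolean per participant, True if they could restate the content."""
      pct = round(100 * sum(results) / len(results), 1)
      if pct > 80:
          rating = "excellent"
      elif pct >= 60:
          rating = "understandable"
      else:
          rating = "needs rework"
      return pct, rating

  # Example: 4 of 6 participants restated the content correctly.
  print(score_comprehension([True, True, False, True, False, True]))
  # -> (66.7, 'understandable')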

What of the other four questions? The first two, as mentioned, are quickly answered through observation: it is obvious whether or not the user can find the content they need (i.e. navigation-oriented tasks), and whether or not they are able to read the content (i.e. legibility-oriented tasks). This ought to seem familiar; it’s just like testing design.

Photo Credit: Sean Tubridy @tubes

Unfortunately, the last two questions are poorly suited to a standard usability test because the accuracy of users’ answers cannot be validated. Acting on this kind of self-reported data is chasing a red herring: users cannot accurately predict their future actions. Some answer what they think is expected, in an attempt to give the “right” answer, and most have high expectations for their future behavior, which may or may not be grounded in reality. Consider how most people would react to the question “in the future, will you look both ways before crossing the street?” Most will answer “yes,” regardless of what actually happens in the moment. Though the “correct” answer is less obvious when asking about sharing content, the impulse to assume our future selves will do what’s “expected” remains the same. To really know whether users act on the content or share it, use web analytics to measure trends and behaviors over time.
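
As a rough illustration (this is not from Ahava and Aaron’s presentation), a short script over an exported analytics log could surface those trends; the CSV layout, column names, and file name below are all hypothetical:

  # A minimal sketch: summarize, per month, how often visitors acted on a page
  # (clicked its call to action) or shared it, using a hypothetical analytics
  # export with the columns "month" and "event".
  import csv
  from collections import Counter

  def monthly_rates(path):
      totals = {}
      with open(path, newline="") as f:
          for row in csv.DictReader(f):
              totals.setdefault(row["month"], Counter())[row["event"]] += 1
      for month, counts in sorted(totals.items()):
          views = counts["view"] or 1  # guard against division by zero
          print(month,
                f"acted: {100 * counts['cta_click'] / views:.0f}%",
                f"shared: {100 * counts['share'] / views:.0f}%")

  # monthly_rates("diabetes_page_events.csv")  # hypothetical export file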

Unintentional Cheating

Throughout the presentation Ahava and Aaron shared lessons learned along the way. At one point, while scrutinizing the series of usability tests, they uncovered a major flaw in one type of test they used: cheating.

The test in question was conducted through UserTesting.com, a remote, unmoderated testing tool with a standing panel of testers that makes recruiting quick and cost-effective and results speedy. But because UserTesting.com only runs remote, unmoderated tests, Ahava and Aaron were not present while participants worked through the website and could not see facial expressions or body language. All they received was a recording of each user’s screen and voice – not exactly an ideal situation for usability testing!

Worse still, the results they received were very strange. In the first round of testing, 67% of the participants were able to comprehend the content on the Johns Hopkins Diabetes Weight Loss Center page. Great! However, in round 2, which was onsite, moderated, and used the same content, reading comprehension came in at only 17%. Something was up!

After re-reading the answers from the first test, Ahava and Aaron learned that the round 1 testers had not answered the task comprehension questions by relying on memory and true understanding. Instead, users had returned to the site and copied and pasted the “correct” content into the questionnaire. Once again, the human need to give the “right” answer trumped the testers’ ability to respond with genuinely useful information. Ouch!

The subsequent usability tests (rounds 3 and 4) were moderated onsite, leaving no opportunity to “cheat,” and Ahava and Aaron were able to use what they learned to edit their content until users could genuinely comprehend it. Through this process they learned a valuable lesson: remote usability testing, though more cost effective, comes with its own risks. Whatever method you choose, be sure to weigh its pros and cons and carefully evaluate the results.

A Silver Bullet, or Kryptonite?

Content doesn’t always involve the written word; it can also involve images and video, which need to be tested too. While statistics like “Shoppers who view the video at Onlineshoes.com are 45% more likely to buy” (Internet Retailer, February 2010) make a compelling argument for using video, it is always worthwhile to validate any new content or concept with actual users. During Ahava and Aaron’s series of usability tests, one round introduced a video into the pages being tested.

The results were surprising:

  • 33% of the participants who did not watch the video were able to comprehend the content
  • 80% of the participants who did watch the video were NOT able to comprehend the content

It may be that this particular video was a poor choice and a better one would have improved comprehension, but the point stands: no trend, video or otherwise, applied as a blanket solution is a silver bullet. All forms of content need to be carefully vetted for the audience. Through usability testing, Ahava and Aaron were able to screen the video option and learn what would work more effectively for their users.

Another Win for Usability Testing

Usability testing remains one of the best methods for understanding how users behave and interact with products, websites, and applications, so it stands to reason that it is also a vital technique for understanding how users comprehend content. Combined with A/B split testing, persona and scenario development, and other user research methods, it gives you a full picture of how users consume and comprehend content.

For more information on the usability tests conducted by Ahava and Aaron, please see their SlideShare presentation: Johns Hopkins Medicine & the Healthcare Content Conundrum: Aligning Business Strategy with User Goals.

How do you measure content comprehension? What methods have worked best to shape your content?
