
Usability Testing


Testing Design: Testing Users' Impressions of a Design

It would be wrong to have a series on design methodology without looking at the subject of testing design. One of the fundamental principles behind Headscape's design approach is to design with data. Design is subjective: what I like you may well hate, and vice versa. The usual route around this problem is to focus on what the user wants, rather than on what the client or designer thinks is best. Testing the usability of a site is relatively easy; testing impressions of its aesthetics is harder. The first step is to know what you want to learn about the aesthetics.

Asking users what they think isn't enough. It is not enough to simply show users a design and ask whether they like it. First, whether they like a design is not the only criterion by which to judge it. Second, when asked what they think, users fall back on comments such as "I don't like the green."

Focus on brand value and personality. The question now becomes: how do you test a design against a set of words?

The Mobile Testing Challenge: How to Improve Your UX and Prepare for the Future

Mobile testing is one of the biggest headaches for mobile developers and for organizations launching mobile initiatives, and one of the areas where the most capital can be wasted. Since testing can amount to as much as 10 percent of a mobile development budget, this headache can quickly avalanche into a disaster without the right direction and tools.

So what options are available to help companies get through this frustrating period before launching a mobile application? It's easiest to treat the four types of testing as building blocks that can be put together to create more comprehensive testing:

Unit testing: the basics. Put simply, unit testing is about testing individual functions in isolation.
Functional testing: going through the motions.
Data testing: validating and integrating.
UX testing: getting it right the first time.
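A minimal sketch of the first building block, unit testing: exercising one function in isolation with a dedicated test case. The `cart_total` function below is an invented example, not from any of the articles here.

```python
# Unit testing: verify one function in isolation. cart_total is a
# made-up example function for illustration only.
import unittest

def cart_total(prices, tax_rate=0.0):
    """Sum item prices and apply a flat tax rate."""
    if tax_rate < 0:
        raise ValueError("tax_rate must be non-negative")
    return round(sum(prices) * (1 + tax_rate), 2)

class CartTotalTest(unittest.TestCase):
    def test_no_tax(self):
        self.assertEqual(cart_total([10.0, 5.0]), 15.0)

    def test_with_tax(self):
        self.assertEqual(cart_total([10.0], tax_rate=0.1), 11.0)

    def test_negative_tax_rejected(self):
        with self.assertRaises(ValueError):
            cart_total([10.0], tax_rate=-0.1)

if __name__ == "__main__":
    # exit=False keeps the interpreter alive after the test run
    unittest.main(argv=["cart_test"], exit=False)
```

Each test checks exactly one behavior, so a failure points directly at the broken function rather than at an interaction between components.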

SUM: Single Usability Metric

(Presented at CHI 2005.) Jeff Sauro • April 17, 2005. SUM is a standardized, summated, single usability metric. It was developed to represent the majority of the variation in four common metrics used in summative usability tests: task completion rates, task times, satisfaction ratings and error counts. The theoretical foundations of SUM are laid out in a paper presented at CHI 2005, "A Method to Standardize Usability Metrics into a Single Score."

Usability Scorecard
The UsabilityScorecard web application takes raw usability metrics (completion, time, satisfaction, errors and clicks), calculates confidence intervals and graphs the results automatically.
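In that spirit, here is a hedged sketch of standardizing dissimilar metrics and averaging them into one score. This is not the published SUM procedure (which standardizes against specification limits); it simply z-scores each metric against an assumed target, converts the z-scores to percentages, and averages. All numbers and targets below are invented.

```python
# Sketch only: combine dissimilar usability metrics into one score by
# standardizing each against an assumed target. Not the exact SUM method.
from math import erf, sqrt
from statistics import mean, stdev

def z_against_spec(values, spec, higher_is_better):
    """z-score of the sample mean relative to an assumed target."""
    m, s = mean(values), stdev(values)
    return (m - spec) / s if higher_is_better else (spec - m) / s

def z_to_pct(z):
    """Share of users expected to beat the target (normal CDF)."""
    return 0.5 * (1 + erf(z / sqrt(2)))

task_times = [52, 48, 61, 55, 70, 45]   # seconds; assumed target: 60 s
satisfaction = [4, 5, 3, 4, 4, 5]       # 1-5 scale; assumed target: 4
completion_rate = 5 / 6                 # already on a 0-1 scale

combined_score = mean([
    z_to_pct(z_against_spec(task_times, 60, higher_is_better=False)),
    z_to_pct(z_against_spec(satisfaction, 4, higher_is_better=True)),
    completion_rate,
])
print(f"combined score: {combined_score:.0%}")
```

The appeal of a single score is that stakeholders can track one number per task over time, while the underlying metrics remain available when the number moves.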

You can combine any two, three or four of the metrics into a single combined score. Data can be imported from Excel (.csv) and exported to Word (.rtf).

SUM Calculator
The SUM calculator takes raw usability metrics and converts them into a SUM score with confidence intervals. Why would you want to measure usability?

Quantitative Usability - Papers

Scene 1: DAY, INTERIOR, HALLWAY
You: Hey, I just heard our last usability test of the redesigned shopping cart page only found a couple of problems.

And we can fix those in the next day or so. We're almost ready to roll.
Marketing: Great. What percentage of your test subjects were actually able to get through the shopping cart?
You: Well, I don't have the details, but my buddy said we had a success rate of 80%.
Marketing: Great.
You (smiling): Cut me a check while you're there.

Scene 2: DAY, INTERIOR, MARKETING OFFICE
Marketing (mumbling while setting up his Excel chart): We know that 1,000 people a day put items in our shopping cart and try to check out.

Scene 3: DAY, INTERIOR, USABILITY OFFICE
You (to your buddy who ran the test): I just told marketing we found some more problems we can fix on the shopping cart tasks.

Your buddy: Yeah, I guess we'll have to wait and see what happens on the Web site.

Reality quiz: Which is right – Scene 2, Scene 3, both, or neither?

Margins of Error in Usability Tests

The 20/20 Rule of Precision. Jeff Sauro • August 6, 2009. How many users will complete the task, and how long will it take them? If you need to benchmark an interface, a summative usability test is one way to answer these questions. Summative tests are the gold standard for usability measurement. But just how precise are the metrics? Just as a presidential poll uses a sample to estimate the outcome for the entire population, a usability test estimates the population task time and completion rate from a sample of users.
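To make that concrete, here is a sketch of estimating a population task time from a sample of ten users, using a 95% t-interval on log-transformed times (task times are typically right-skewed, so the log transform is common practice). All ten times are invented.

```python
# How precise is a task-time estimate at n = 10? A 95% t-interval on
# log-transformed times (invented data).
import math
from statistics import mean, stdev

times = [42, 55, 38, 61, 49, 72, 44, 58, 66, 51]  # seconds, n = 10
logs = [math.log(t) for t in times]
n = len(logs)
t_crit = 2.262  # t quantile for 95% confidence at df = 9
half = t_crit * stdev(logs) / math.sqrt(n)

geo_mean = math.exp(mean(logs))  # geometric mean, back-transformed
low, high = math.exp(mean(logs) - half), math.exp(mean(logs) + half)
print(f"geometric mean {geo_mean:.1f} s, 95% CI [{low:.1f}, {high:.1f}] s")
```

Even with a fairly tight set of times, the interval spans a wide range, which is exactly why the margin of error matters when reporting sample-based metrics.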

Also like presidential polls, our sample estimates won't exactly match the entire population. The margin of error is half the width of the confidence interval, and the confidence interval tells us the likely range in which the population mean or proportion will fall. To find out how precise the metrics are, I examined a large set of data from an earlier analysis Jim Lewis and I conducted. Margin of error for task times: on average, a sample of 10 will have a margin of error of +/-36%.

Plotting Likert Scales

Confidence Interval Calculator for a Completion Rate

Jeff Sauro • October 1, 2005. Use this calculator to compute a confidence interval and the best point estimate for an observed completion rate. It provides the Adjusted Wald, Exact, Score and Wald intervals, and can be downloaded as an Excel file. Explanation: the Adjusted Wald method should be used almost all of the time.

For exceptions, see below. For a detailed discussion of binomial confidence intervals with small samples, see the HFES paper; for a discussion of the best point estimate, see the JUS paper.

Adjusted Wald Method: The adjusted Wald interval (also called the modified Wald interval) provides the best coverage for the specified level when samples are smaller than about 150.
Exact Method: The Exact method was designed to guarantee at least 95% coverage, whereas the approximate methods (Adjusted Wald and Score) provide an average coverage of 95% only in the long run.
Score Method
Wald Method
* The "Margin of Error" values are half the width of the confidence intervals.
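The Adjusted Wald interval is simple to compute: add z²/2 to the successes and z² to the trials, then apply the ordinary Wald formula to the adjusted proportion. A minimal sketch:

```python
# Adjusted Wald confidence interval for a completion rate:
# adjust the counts, then apply the plain Wald formula.
import math

Z95 = 1.959964  # normal quantile for 95% confidence

def adjusted_wald(successes, n, z=Z95):
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)  # margin of error
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

# e.g. 8 of 10 participants complete the checkout task
low, high = adjusted_wald(8, 10)
print(f"observed 80%, 95% CI [{low:.0%}, {high:.0%}]")
```

At n = 10, an observed 80% completion rate is consistent with anything from roughly 48% to 95% of the population completing, which is precisely the trap the shopping-cart dialogue earlier illustrates.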

Templates.

8 Tips for Doing Usability Testing at a Fast-Paced Company

At HubSpot, we have one of the fastest development teams around. Our dev team continuously deploys code, up to 100 times per day, so our product is constantly changing. This creates several challenges for us on the UX team, whose job it is to ensure that the software is easy and enjoyable to use. One big challenge is conducting usability testing in this crazy-fast environment. As you may know, usability testing is often one of the first things dropped from the "must have" list of product release schedules. There are several reasons for this, all common assumptions made about usability testing:

Usability testing takes too long or is too slow.
Usability testing is too hard to get right.
We can get the same data in other, easier ways.

All of these assumptions are common, but none stands up to scrutiny.

So at HubSpot, we've refined our usability testing process to be as fast as possible, and we've learned eight things along the way. Bringing it all together.

What Is Ad Hoc Testing?

What exactly is ad hoc testing, and when would you use it? Ad-hoc testing is an unscripted software testing method. People often confuse it with exploratory, negative and monkey testing. In software testing, "ad hoc" means that a test serves only the particular purpose at hand. The following characteristics capture the real meaning of ad-hoc testing: it is a random, unscripted test, performed without a formal test plan, procedures or documentation of results.

During the 1990s, ad-hoc testing gained a bad reputation, because people presumed it was a careless way of testing. If you run a test only once in a series of different tests, you can call it an ad-hoc test. Does ad-hoc testing fit in the SDLC? Ad-hoc testing is a unique kind of test with its own inherent strengths and merits.

Here are some situations where ad-hoc testing is inappropriate.
Advantages of Ad Hoc Testing
Disadvantages of Ad Hoc Testing

Ad-Hoc Usability Testing

We at MAYA have been interested for a while in the differences between usability tests where the tasks are well defined beforehand and those with a looser structure, where the user has greater autonomy to explore the interface or the product. Observing users while they are allowed to explore a system on their own has merit: after all, they won't have a usability test moderator telling them what task to do next when they use the system in earnest. On the other hand, if there's no structure to the test, a participant may not encounter many areas of the user interface, or it may take more users to get complete coverage of a system.

It's also hard to make objective measurements (error rates, time-on-task) when tasks are generated in an ad-hoc fashion: not only are the tasks unknown to the moderator in advance, but each participant ends up with a different task set.

Usability and User Experience Surveys

1 Introduction. According to Perlman (2009), "Questionnaires have long been used to evaluate user interfaces (Root & Draper, 1983)." See also: learning surveys.

2 List of web usability questionnaires. We have not yet found any web-specific usability questionnaires; see below for generic usability survey instruments that can be adapted to specific websites.

3 List of usability and user experience questionnaires. 3.1 User Interface Usability Evaluation with Web-Based Questionnaires. Author: Gary Perlman (2009). Available through the User Interface Usability Evaluation with Web-Based Questionnaires page, either as an online interface or as a set of Perl scripts that you can install on your own server.

Usefulness.

Basics of Website Usability Testing

"It takes only five users to uncover 80 percent of high-level usability problems." – Jakob Nielsen

Emergence of usability testing. In the beginning, there was an ocean liner. In the late 1940s, Henry Dreyfuss designed the staterooms for the ocean liners "Independence" and "Constitution" and installed them in a warehouse, and a group of travelers was invited to live in these rooms for a while. With the development of the web and of virtual services, usability testing methods are constantly being updated.

Types of web project usability testing. The simple truth proposed by Jeffrey Graham is evident: customers who find your website difficult to use will most likely leave it, which is surely bad for your business.
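The five-user claim quoted above comes from a simple discovery model: if each test user independently uncovers a given problem with probability p, then n users uncover a 1 - (1 - p)^n share of the problems. A quick check, using the average per-user discovery rate of p = 0.31 that Nielsen reports:

```python
# Problem-discovery model behind the five-user rule of thumb:
# share found by n users = 1 - (1 - p)^n, with p the per-user
# probability of uncovering any given problem.
p = 0.31
for n in (1, 3, 5, 10):
    found = 1 - (1 - p) ** n
    print(f"{n:2d} users -> {found:.0%} of problems found")
```

With p = 0.31, five users find roughly 84 percent of the problems, close to the 80 percent figure in the quote; the returns diminish quickly beyond that.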

The most common form of usability testing is feedback left by users after some interaction with your website.

Usability Testing Services. There are even services, such as Usertesting, that do all the usability testing for you.

How Many People to Test

An Introduction to A/B Testing

A/B testing isn't a new term. In fact, many canny marketers and designers are using it at this very moment to gain valuable insight into visitor behavior and to improve website conversion rates. Unfortunately, A/B testing remains obscure to most online marketers and web designers, and the technique is still underrated compared with similarly valuable methods like SEO.

Definition of A/B testing. A/B testing (also known as split testing) is a website optimisation technique in which you send half your users to one version of a page and the other half to another, then watch your web analytics to see which version is more effective at getting users to do what you want them to do (for example, sign up for a user account) (Source).
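As a sketch of how a finished A/B test is commonly evaluated, here is a two-proportion z-test on sign-up conversions. The counts are invented, and this is one standard analysis rather than the only option.

```python
# Evaluate an A/B test: two-proportion z-test on conversion counts
# (invented numbers for illustration).
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(120, 2400, 156, 2400)  # A: 5.0%, B: 6.5%
print(f"z = {z:.2f}")
```

A |z| above 1.96 corresponds to significance at the 95% level, so variant B's lift in this invented example would be judged real rather than noise; with smaller samples the same observed lift could easily fail that bar.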

An excellent introductory video about A/B testing is available from Hutchinson Web Design. To improve your website conversion rates with A/B testing, you must first learn and master the technique.

Benefits of A/B Testing

What is Usability Testing? Advanced Common Sense - Steve Krug's Web site.

Usability testing

Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system.[1] This is in contrast with usability inspection methods, where experts use different methods to evaluate a user interface without involving users.

Usability testing focuses on measuring a human-made product's capacity to meet its intended purpose. Examples of products that commonly benefit from usability testing include food, consumer products, web sites and web applications, computer interfaces, documents, and devices. Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general human–computer interaction studies attempt to formulate universal principles.

What it is not
Simply gathering opinions on an object or document is market research or qualitative research, rather than usability testing.