
Think Aloud Testing

In 1993, UX guru Jakob Nielsen wrote that “Thinking aloud may be the single most valuable usability engineering method”.1 The think aloud method in UX research is as simple as it is effective: it consists of observing a user interacting with a product while they “think out loud” – stating what they are doing, what they are trying to do, and whatever passes through their head during the session.

The Think Aloud Protocol

A think-aloud (also thinking aloud or think out loud) protocol is a usability testing method that consists of having a participant express their thoughts and intentions as they walk through a series of tasks while interacting with a user interface or prototype. Participants are expected to speak their mind without overthinking, and the session is usually audio- and video-recorded.


Decades later, this is still deemed “the #1 usability tool”, largely because it is the most cost-efficient: it requires as few as five participants, no special setup or expensive equipment besides a quiet room and a way to record the session, and it helps quickly uncover the vast majority of usability issues.


The scope of think aloud tests is limited to what can be evaluated during the session – the method works great for design research and testing a UI, but not so well for evaluating other aspects of the user experience. This is why think aloud tasks are usually combined with other user research methods, such as interviews or questionnaires, which can help collect feedback before and after the test. Another disadvantage of this setup is that it is somewhat unnatural – whether conducted in-house or remotely – with participants often feeling that they are being tested or that they are expected to say specific things while performing a task.

How to set up a think aloud session

Participant recruiting

Studies show that in most cases, it is possible to identify the majority of usability issues with as few as five participants, although you will need more if you are evaluating critical systems or complex interfaces, or if you have a wide target user base. In any case, always recruit more participants than your target number. This covers you for no-shows, technical issues, or the need to revise your script after the first session (which happens quite often in my experience). Recruiting between 7 and 10 participants is usually a good idea, as it also gives you a chance to look for people with different demographic backgrounds. Ideally, these participants should be as close as possible to your target end users and should not be familiar with UX or market research themselves.
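The “five participants” rule of thumb comes from Nielsen and Landauer’s problem-discovery model, where the fraction of usability problems found with n users is 1 − (1 − L)^n, with L being the probability that a single user uncovers any given problem (Nielsen reports L ≈ 0.31 on average across projects – your own value may differ). A minimal sketch of that model:

```python
def problems_found(n_users: int, l: float = 0.31) -> float:
    """Expected fraction of usability problems uncovered by n_users,
    per the Nielsen-Landauer model: 1 - (1 - L)^n.
    L is the per-user discovery probability (0.31 is Nielsen's average)."""
    return 1 - (1 - l) ** n_users

# With the average L, five users already uncover roughly 85% of problems,
# and each additional user adds less and less new information.
for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
```

Note that with a lower L (complex interfaces, diverse user base), the curve flattens and more participants are justified.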

Setting up the lab

If you are conducting the session in-house, you will need a quiet room where the participant can sit together with the session moderator and observer(s). The fewer people in the room, the better – if possible, have the observers join remotely from a different room. You will need a way to record both the participant thinking out loud and the interface they are interacting with (so that you can see what they are commenting on and whether what they say matches what they are trying to do). Recording the participant’s face is not a must, and it may make some people feel uncomfortable, but it can help you observe their facial expressions. Make sure to clearly explain everything you are going to record and to ask participants for their permission in advance.

If you are conducting the session remotely, most UX research tools like Lookback will allow you to record both the participant’s voice and their screen.

Prepare your documentation

There are a few documents you will need to have ready before any think aloud session. First, a briefing script, in which you explain to the participant what they are going to do (without giving them too many details about the product they are testing!) and make sure they understand they are not being tested and can quit the session at any moment if they are uncomfortable. Second, a consent form in which you ask for permission to be audio- (and potentially video-) recorded, and in which you explain who is going to have access to their data after the session and for what purpose. Third, your questionnaire and the list of tasks that the participants will have to go through. It is also helpful to prepare a short debriefing script to be read after the session, in which you can give the participants more information about the test and answer any questions you could not address beforehand.

 1 J. Nielsen, Usability Engineering. Morgan Kaufmann Publishers Inc., 1993
