Six exercises

This page gives our five cents on each of the six exercises for which the book refers readers to this website. The exercises are:

Imagination, page 87, Section 4.2.3

The book says: As an imagination exercise, (1) try to imagine an example of each of the 28 analogue image modality types. It can be done (probably takes some time). (2) For each of them, try to answer the question: might this representation serve a useful purpose in HCI? (3) Now let's try a different angle for which you might have to replace some examples with others: could all these types of image representation have been used, say, 2000 years ago, possibly for a different purpose in some cases? (4) How many of the 28 types of representation can be rendered on the pages of a book like this one? (5) Modify or replace the four acoustic representations to also represent spatial dimensionality in 1D, 2D and 3D, bringing the total number of types to 36. Finally (6), answer Questions (2), (3) and (4) for all of the 12 acoustic modalities you have just generated.

How do we get 28 varieties of analogue image in the taxonomy in the first place? We have: (gr, aco, hap =) 3, (sta, dyn =) 2, and (skt, real =) 2, and 3 x 2 x 2 = 12. Of these 12, the 8 graphic and haptic types can each be rendered in (1D, 2D, 3D =) 3 spatial dimensionalities, giving 8 x 3 = 24, while the 4 acoustic types have no spatial dimensionality. And 24 + 4 = 28.
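The count can also be checked mechanically. Here is a minimal sketch that enumerates the taxonomy (the abbreviations follow the table below; the tuple representation is just for illustration):

```python
from itertools import product

media = ["gr", "aco", "hap"]       # graphic, acoustic, haptic
dynamics = ["sta", "dyn"]          # static vs. dynamic
fidelity = ["skt", "real"]         # sketch vs. realistic
dims = ["1D", "2D", "3D"]          # spatial dimensionality

modalities = []
for m, d, f in product(media, dynamics, fidelity):
    if m == "aco":
        # acoustic types have no spatial dimensionality
        modalities.append((m, d, f))
    else:
        # graphic and haptic types split by spatial dimensionality
        for s in dims:
            modalities.append((m, d, f, s))

print(len(modalities))  # 28

# Exercise (5): give the 4 acoustic types spatial dimensionality as well
extended = [t for t in modalities if t[0] != "aco"]
extended += [("aco", d, f, s) for d, f, s in product(dynamics, fidelity, dims)]
print(len(extended))  # 36
```

The enumeration order matches the table below: graphic first, then haptic, then acoustic.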

In the table below, gr, aco and hap mean graphic, acoustic and haptic, respectively; sta, dyn is static vs. dynamic; skt, real is sketch vs. realistic representation; and 1D, 2D, 3D is representation in 1, 2 and 3 spatial dimensions, respectively. Obviously, 1D and 2D representation are only approximations to the mathematical definitions of points, which have no area, lines, which have no width, surfaces, which have no thickness, etc. The notion of "realistic" vs. sketchy representation is much fuzzier, though. "Coarse outline" vs. "completed, substantial, rather realistic (similar to the real thing) representation" is perhaps the distinction we have in mind. The header "Book" refers to paper books; "eBook" refers to the current generation of eBooks, like the Amazon Kindle.

Modality | 1. Example representation | 2. HCI example | 3. 2000 years ago | 4. Book/eBook
1. gr,sta,1D,skt | Hand-drawn straight line with two fuzzy point marks on it | Yes, in a mock-up | Yes | Yes/Yes
2. gr,sta,1D,real | Ruler-drawn straight line with two sharp fix-points on it | Yes, as part of GUI component | Yes | Yes/Yes
3. gr,sta,2D,skt | Hand-drawn sketch of duck | Yes, in a mock-up | Yes | Yes/Yes
4. gr,sta,2D,real | Realistic photograph of a duck | Wiki | Less photo-realism, close enough for Wiki | Yes/Yes
5. gr,sta,3D,skt | Sculpture sketch of a duck | Virtual object sketch | Yes | No/No
6. gr,sta,3D,real | Something close to a stuffed duck | 3D VR duck | Stuffed duck, else maybe reduced realism | No/No
7. gr,dyn,1D,skt | Sketching how A hit B, A and B shown as straight line points | Yes, in a mock-up | Yes | No/No
8. gr,dyn,1D,real | Animation of A hitting B | Physics animation | Yes, mechanical replica | No/No
9. gr,dyn,2D,skt | Sketching how A hit B, shown as a plane curve | Yes, in a mock-up | Yes | No/No
10. gr,dyn,2D,real | Animation of A hitting B | Physics animation | Yes, mechanical replica | No/No
11. gr,dyn,3D,skt | Emulating flying an aircraft by means of iconic gestures | Yes, in a flight cockpit mock-up | Using a mocked-up rotating weapons simulator | No/No
12. gr,dyn,3D,real | Demonstrating the finished flight simulator | Operating the finished flight simulator | Operating the finished weapons simulator | No/No
13. hap,sta,1D,skt | Hand-crafted textured line with two fuzzy point marks on it | Yes, in a VE mock-up | Yes | No/No
14. hap,sta,1D,real | Straight textured line with two fix-points on it | VR hapxel line, e.g., a border | Yes, maybe higher tolerance | No/No
15. hap,sta,2D,skt | Haptic duck image sketch | Yes, in a mock-up | Yes | No/No
16. hap,sta,2D,real | Scientific texture book duck | With hapxels, but how well? | Yes | No/No
17. hap,sta,3D,skt | Sculpture sketch of a duck | VR mock-up | Yes | No/No
18. hap,sta,3D,real | Duck model with realistic surface texture | Deceptive stationary VR duck | Stuffed duck, else maybe reduced realism | No/No
19. hap,dyn,1D,skt | Sketching on your skin how A hit B, A and B being haptic straight line points | Yes, in a mock-up | Yes | No/No
20. hap,dyn,1D,real | Haptic animation of A hitting B | Haptic physics animation | Yes, mechanical replica | No/No
21. hap,dyn,2D,skt | Sketching on your skin how A hit B, A and B being haptic plane curve points | Yes, in a mock-up | Yes | No/No
22. hap,dyn,2D,real | Haptic animation of A hitting B | Haptic physics animation | Yes, mechanical replica | No/No
23. hap,dyn,3D,skt | Emulating a wrestling robot | Yes, in a mock-up | Yes | No/No
24. hap,dyn,3D,real | Demonstrating a wrestling robot | Wrestling with the robot | Mechanical arm-wrestler | No/No
25. aco,sta,1D,skt | Text | Text | Text | No
26. aco,sta,1D,real | Text | Text | Text | No
27. aco,sta,2D,skt | Text | Text | Text | No
28. aco,sta,2D,real | Text | Text | Text | No
29. aco,sta,3D,skt | Text | Text | Text | No
30. aco,sta,3D,real | Text | Text | Text | No
31. aco,dyn,1D,skt | Text | Text | Text | No
32. aco,dyn,1D,real | Text | Text | Text | No
33. aco,dyn,2D,skt | Text | Text | Text | No
34. aco,dyn,2D,real | Text | Text | Text | No
35. aco,dyn,3D,skt | Text | Text | Text | No
36. aco,dyn,3D,real | Text | Text | Text | No

Send us your own examples and comments.


Combining modalities, page 106, Section 4.4.2

The book says: Column 4 may suggest new ways of combining modalities. It says that complementary modalities are always aimed at the same user group(s). We suppose they have to be, because they must be decoded together as part of the same message, right? Substitution of modalities is aimed at special user groups - or might there be cases in which substitution is aimed at the same group? For all other modality relations, it seems that representations could serve different user groups. For instance, we might add visual speech to acoustic speech to help people in noisy conditions, but, by the same token, the application might then also come to benefit a new group of hard-of-hearing users who have trouble with speech-only output. Try to think of examples for all cases.

Type of relation | Same user group | Different user group
Complementarity | "Put that there [pointing gesture]"; graphic image of female person with graphic label "Alice" underneath. | No (it seems): component representations are pointless when separated.
Redundancy | Audiovisual speech helps when the noise level goes up; "the trout was the length of my arm [showing the size between the two hands/arms]". | And helps the hard-of-hearing as well.
Elaboration | Talk accompanying a slideshow; speech labelling a haptic image icon. | Simplifying text and discourse and adding imagery and metaphor to reach a larger public.
Alternative | Graphic or haptic table-to-graph conversion functionality; film subtitling. | Add read-aloud speech to web page for the blind and partially sighted; read the subtitles if you cannot hear what's being said, don't understand the language, or wish to learn pronunciation.
Stand-in | A graphic label stands in for more elaborate and informative text; a text description stands in for an image. | ?
Substitution | Replacing spoken discourse by paper message exchange when speech is not allowed; replacing typed text maths notation by spoken notation because no paper. | Gaze-soft keyboard pointing replacing haptic typing or spoken dictation for those who cannot do any of those; replacing graphic typed text maths notation by spoken notation because someone cannot see.
Conflict | Speaking over a spoken language demo. | ?
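The redundancy example above (audiovisual speech in noise, which also helps hard-of-hearing users) can be given an operational reading: a system may decide at runtime whether to add a redundant modality. Here is a minimal sketch of that reading; the function name and the 70 dB noise threshold are invented for illustration, not taken from the book:

```python
# Hypothetical sketch: acoustic speech is always output; visual speech is
# added redundantly either for the same user group (high ambient noise)
# or for a different user group (hard-of-hearing users).
def choose_output_modalities(noise_db: float, hard_of_hearing: bool) -> set:
    modalities = {"acoustic speech"}
    # Redundancy for the same user group: noisy conditions.
    # Redundancy for a different user group: hearing impairment.
    if noise_db > 70 or hard_of_hearing:
        modalities.add("visual speech")
    return modalities

print(choose_output_modalities(40, False))  # acoustic speech only
print(choose_output_modalities(85, False))  # visual speech added (noise)
print(choose_output_modalities(40, True))   # visual speech added (user group)
```

The point of the sketch is simply that one and the same redundant combination can serve two different purposes, depending on which condition triggers it.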

Send us your own examples and comments.


Device unavailability, page 113, Section 4.5.1

The book says: The emotional prosody example above is a case of device unavailability for modality reasons: given the modality candidate, no suitable device-plus-software currently exists. We hypothesise that cases of device unavailability may exist for all AMITU(D)E aspects. In other words, (1) something in our specification of A, M, I, T, U or E requires a device that does not exist, is unobtainable, or is prohibitive in other ways, so that we either have to build it or change our specification. Try to construct examples for all aspects.

Our ideas are in the second column from the left in the table below.

Conditional device availability, page 114, Section 4.5.1

The book says: The facial happiness example above is a case of conditional device availability for use environment reasons: we can have the device, but only if we constrain the use environment specification. We hypothesise that conditional device availability may exist for all AMITU(D)E aspects. (2) For each aspect A, M, I, T, U or E, we can have the desired device if we make our specification more restrictive. Try to construct examples for all aspects.

Our ideas are in the right-most column in the table below.

AMITUDE aspect | Device unavailability | Conditional device availability
A | Haptic cloth texture presenter; smuggler identifier. | Text
M | Emotional speech. | Text
I | Text | Text
T | Text | Text
U | Object-in-field-of-vision namer for the blind and partially sighted. | Text
E | Text | Facial emotion cues can only be understood if the environment is controlled.

Send us your own examples and comments.


GUI-world cues, page 133, Section 6.2.1

The book says: Try to (i) find as many GUI-world cues as possible in Figure 6.1 (Section 6.2.1) on user-centred design activities; (ii) match the common approaches (Table 6.1, Section 6.1) with those in Figure 6.1; and (iii) match the methods mentioned in Figure 6.1 with the methods presented in this book and reviewed in Section 6.2.

We show our answers to the questions in the table below, using yellow highlight for (1) GUI-world cues, purple text for (2) common approaches, and blue text for (3) methods; the corresponding labels, e.g., (1a), (2b), (3c), appear in parentheses.

User-centred design activities

Analysis phase
- Meet with key stakeholders (3a, 1a) to set vision (2b)
- Look at competitive products (2a, 1b)
- Include usability tasks in the project plan
- Create user profiles (2b)
- Assemble a multidisciplinary team to ensure complete expertise
- Develop a task analysis (2b)
- Develop usability goals and objectives (2b)
- Document user scenarios (3c)
- Conduct field studies (3b)
- Document user performance requirements

Design phase
- Begin to brainstorm (2b) design concepts and metaphors
- Conduct usability testing on low-fidelity prototypes (3e, 1e)
- Develop screen flow (1c) and navigation model (2b)
- Create high-fidelity detailed design (2b)
- Do walkthroughs (3d) of design concepts
- Do usability testing (3f) again
- Begin design with paper and pencil (1d)
- Document standards (3g, 1f) and guidelines (3h, 1g)
- Create low-fidelity prototypes (1e) (2b)
- Create a design specification (2b)

Implementation phase
- Do ongoing heuristic evaluations (3i, 1h)
- Conduct usability testing (3j) as soon as possible
- Work closely with delivery team as design is implemented

Deployment phase
- Use surveys (3k) to get user feedback
- Check objectives using usability testing (3l)
- Conduct field studies (3b) to get info about actual use

1. GUI-world cues (yellow highlight) may be split into strong, or direct, cues and more subtle, or indirect, cues. In the analysis phase there are two indirect cues, i.e., a. "meet with key stakeholders" and b. "look at competitive products". Today, stakeholder stand-ins are often the best we can get in multimodal systems development, and most commercially available applications by far are GUI-based.

In the design phase there is one direct cue, i.e., c. "screen flow", and four indirect cues. d. "Begin design with paper and pencil" is suitable for GUI-based applications, whereas this need not be the case for multimodal applications in general. e. "Low-fidelity prototypes" (mentioned twice) are fine in a GUI design context but less suitable for multimodal applications in general, in particular when made with paper and pencil only, cf. also Section 9.1 on mock-ups. f. "Document standards" and g. "guidelines" may also be viewed as indirect cues, since many standards and guidelines apply to GUI-based systems while standards and guidelines are often lacking for advanced multimodal systems.

In the implementation phase, h. "heuristic evaluation" may be seen as a GUI cue since there are not necessarily any heuristics available for advanced multimodal systems.

2. Common approaches (purple text): a. related systems and projects. Note that several of the other common approaches we mention in Table 6.1 are necessary, and the rest of them optional, for carrying out the many analytical and creative specification and design activities mentioned: b. thinking about usability tasks and goals, user profiles and task analysis; thinking during developer brainstorming, design, prototyping, and creation of descriptive project sources through documentation and specification; etc. Our mark-up on this point is only illustrative because thinking really is required everywhere.

3. Methods (blue text): a. stakeholder meetings, b. macro-behavioural field methods, c. use cases and scenarios, d. cognitive walkthrough, e. mock-up, f. mock-up or wizard of oz, g. standards, h. guidelines, i. standards or guidelines, j. implemented prototype lab test (most likely), k. user surveys, l. field test. Note that some of the tests mentioned are likely to involve other methods, such as (m) screening, (n) pre-test interviews, (o) post-test interviews, (p) real-time observation of users.

Send us your comments.


Spotting AMITUDE aspects and more, page 205, Section 9.2

The book says: Example: Generic stakeholder meeting issues. Figure 9.2 in Section 9.2 (reproduced below) lists questions and issues that are often relevant at stakeholder meetings. Note that many of the questions suggest an early meeting. Note also the GUI context. Try to (1) spot which AMITUDE aspects are being considered; (2) which methods and other approaches; and (3) which usability requirements.

Examples of questions for a stakeholder meeting
1. Why is the system (1a) being developed? What are the overall objectives? How will it be judged as a success?
2. Who are the intended users (1b) and what are their tasks (1c)? Why will they use the system? What is their experience and expertise?
3. Who are the other stakeholders and how might they be impacted by the consequences of a usable or unusable system?
4. What are the stakeholder (2a) and organisational requirements (1d)?
5. What are the technical and environmental (1d) constraints? What types of hardware (1e) will be used in what environments (1d)?
6. What key functionality (3a) is needed to support the users' (1b) needs?
7. How will the system be used? What is the overall workflow, e.g., from deciding to use the system, through operating it to obtaining results? What are typical scenarios (2b) of what the users can achieve?
8. What are the usability goals? (e.g., How important is ease of use and ease of learning (3b)? How long should it take users to complete their tasks (3b)? Is it important to minimise user errors (3b)? What GUI (1f) style guide (2c) should be used?)
9. How will users obtain assistance (3b)?
10. Are there any initial design concepts (2d)?
11. Is there an existing or competitor system (2e)?

Here are our five cents on the three numbered questions above:

(1) Applying Figure 3.1 (Section 3.1.2) to the questions and issues in Figure 9.2, we get (in yellow): a. application type, b. user, c. task, d. environment of use, e. includes devices, f. modalities: GUI taken for granted. The only AMITUDE aspect not referred to is interaction, whose varieties are described in Section 3.6.1 and shown in Figure 3.3, probably because classical deliberate interaction is taken for granted along with the standard GUI. There are cues to this kind of interaction in points 7, 8 and 9.

(2) Applying the "other approaches" Table 6.1 (Section 6.1) and the "methods" Tables 6.2 through 6.6 (Sections 6.2.3 through 6.2.7) to the questions and issues in Figure 9.2, we get (in pink): a. stakeholder meetings, b. scenarios and use cases, c. guidelines or standards, d. project descriptive resources, e. related systems and projects.

Note that, in addition to question 10, several other questions in the figure may be viewed as questions about information that might be available in existing descriptive project resources.

(3) Applying Table 1.1 (Section 1.4.6) to the questions and issues in Figure 9.2, we get (in blue): a. functionality, b. ease of use. Note the absence of (direct) references to technical quality and user experience.

Send us your comments.
