
Wednesday, April 8, 2015

Evaluating user experience, individual assignment 5


Evaluating user experience based on biometric data:

1. EEG (Electroencephalography)
EEG uses electrodes to capture brain activity in different regions of the brain. Based on brain studies and theories about the functions of brain regions, these data can be associated with arousal, learning, and other states, and so can be used to observe the effect of a tool or application on users.
This method is very interesting but seems too expensive and time-consuming for a simple, work-related application like our project.


2. Eye tracking
I think eye tracking has good potential for evaluating both pragmatic and aesthetic features.
I have seen examples of usability and information architecture evaluation done with eye tracking, and, as in the paper provided, of evaluating users' enjoyment and aesthetic fulfilment.
In our case this method can be used to see which parts of the page the user looks at most, where attention is focused, where the eye fixates, and the sequence of eye movements. For our system this information is only useful if it is followed by an interview asking the user for the reason behind each action.
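To make the kind of data this produces more concrete, here is a minimal sketch of summarising fixations per area of interest. The area names, coordinates, and fixation records are invented for illustration; a real eye tracker exports similar records in its own format.

```python
# Minimal sketch: summarising hypothetical fixation data per area of interest (AOI).
from collections import defaultdict

# Each fixation: (x, y, duration in ms), in screen coordinates.
fixations = [(120, 80, 310), (130, 90, 250), (400, 300, 180), (410, 310, 620)]

# Each AOI: name -> (x_min, y_min, x_max, y_max) rectangle on the page.
aois = {"navigation": (0, 0, 200, 150), "results_table": (300, 200, 600, 500)}

def summarise(fixations, aois):
    """Count fixations and total dwell time falling inside each AOI."""
    counts = defaultdict(int)
    dwell = defaultdict(int)
    for x, y, duration in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                dwell[name] += duration
    return counts, dwell

counts, dwell = summarise(fixations, aois)
for name in aois:
    print(f"{name}: {counts[name]} fixations, {dwell[name]} ms total dwell time")
```

Numbers like these only tell us where the user looked; the interview is still needed to learn why.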


3. Skin Conductance
Changes in skin conductance can be associated with different emotional states. For example, when a person feels empathy, the skin conductance of their hand changes; by measuring these changes we can estimate the user's emotional state or arousal.
Similar to EEG, this method seems best suited for evaluating emotions rather than usability. Our system is very practical and does not involve strong emotions.
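As a rough illustration of how such a signal might be turned into a simple arousal indicator, the sketch below counts sudden rises in a conductance trace. The sample values and the threshold are invented; real skin conductance response analysis uses dedicated, validated algorithms.

```python
# Crude sketch: counting onsets of rapid rises in a skin conductance trace.
# Sample values (in microsiemens) and the threshold are invented for illustration.
samples = [2.00, 2.01, 2.02, 2.30, 2.45, 2.40, 2.38, 2.39, 2.70, 2.85, 2.80]

def count_response_onsets(samples, rise_threshold=0.1):
    """Count onsets of rises where a sample-to-sample jump exceeds the threshold."""
    onsets = 0
    in_rise = False
    for previous, current in zip(samples, samples[1:]):
        jump = current - previous
        if jump > rise_threshold and not in_rise:
            onsets += 1
            in_rise = True
        elif jump <= rise_threshold:
            in_rise = False
    return onsets

print("Rapid rises detected:", count_response_onsets(samples))  # prints 2
```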

My choice for evaluating our system would be eye tracking combined with a post-test interview.

Friday, March 27, 2015

Evaluating user experience, individual assignment 4

This week's methods were:
1. Affect Grid
The affect grid seems to be a type of scale for measuring affect. It is simple and effective but has limitations compared to more detailed methods. In this paper the method is used for measuring pleasure and arousal; however, the authors claim it can be used to measure other feelings and emotions too.

2. Cognitive Walkthrough
"The cognitive walkthrough is a precisely specified procedure for simulating a user's cognitive processes as the user interacts with an interface in an effort to accomplish a specific task". It provides user with atomic actions and then lets user to decide about sequence of necessary or required actions to fulfil the task.
This method is based on theory of exploratory learning and methods driven from this theory tries to minimise the user effort for learning the steps he needs to perform to fulfil system tasks. It seems to have a really strong theoretical background however I cant claim to understand the connection between its theory and actual steps that one must perform.
I think this method can be useful during prototyping phase, because it will help to design a system that is intuitive for the user.

3. Group-based expert walkthrough
To my understanding, a group-based walkthrough is a cognitive walkthrough done in a group. In the group-based expert walkthrough's case the participants are also domain experts, so it can be used for domain-specific or work-related applications. This method seems better than a normal cognitive walkthrough because, being done in a group, it produces extra discussion and may give richer feedback. On the other hand, it is very time-consuming, and finding a group of domain experts may be difficult.


If no limitations were given, I would choose the group-based expert walkthrough for our project: first because ours is a domain-specific tool, and second because we could use the group discussion to improve the system's functionality and usability.

Friday, March 13, 2015

Evaluating user experience: Individual assignment 3, paper review

Wizard of Oz prototyping
I think this method (the version described in this paper) is especially useful for designs that involve more than a mere screen interface, although even in the case of a simpler web or mobile application it can provide more realistic test settings than paper or clickable prototypes. Actually, I am not sure whether I can consider prototypes a subset of WOZ in general.


Immersion
Well, I am not sure about this method. I think this is (or can be) done in any case, even when we have real users. Since the experimenter is one person here, the question is how difficult it is to find "one" user for the system.
The bottom line is that there is no harm in doing it in addition to some other evaluation method.

Emofaces
Capturing emotional responses through emo-pics sounds interesting and engaging. It can help break the (sometimes boring) routine of testing a design. However, I don't see much use for it in our current design project, because ours is a job-related tool and my assumption is that it does not evoke much emotional reaction.

My choice:
I have not indicated this choice clearly before; my original thought was WOZ. During the class we decided to use WOZ in combination with emofaces, and it was a good combination.

Thursday, February 26, 2015

Individual assignment 2, Evaluating the User Experience



1. Object-Based Techniques
In general these methods are interesting but at the same time challenging: they need a lot of experience and insight to be implemented and interpreted in a meaningful way.
The reading resource tried to give a detailed "how to" for implementation, but what was missing was how to design a study for our specific case and how to interpret and use the outcome of such sessions.
Dialogic Techniques 
Although the general method, even with researcher-collected "objects", is still a very effective way of starting conversations and provoking reactions, what I liked about object-based techniques was the possibility of using user-collected photos/artefacts. I think people relate more easily to the objects they bring, and those objects mean more and carry more memories and emotions for them.
Generative Techniques
I personally love generative methods and would like to try them in a real project; they seem fun and creative, and users are more engaged. I also liked that we could ask participants to collect and bring their own images/objects describing the subject.
Associative Techniques 
The only example in this group was "card sorting". This technique seems to produce more tangible results and can be applied to more specific details. It also came with more ways of making sense of and analysing the output.


2.Contextual laddering
The laddering method refers to a specific one-to-one elicitation interview technique, which tries to get at the interviewee's intuitions and values by asking questions based on the interviewee's previous answers. Since, as mentioned in the paper, children don't talk about their values during interviews and the interviewer has to decide what value to assign to certain keywords, it is difficult to be sure whether the interview has captured the real reason for a like or dislike.
I also feel this method is a little like an interrogation, and thus unpleasant.


3.Multiple Sorting Method
This method is similar to the card sorting method within the "object-based techniques". It is maybe easier to implement than generative methods. I still didn't understand how one can turn the outputs of such an experiment into useful information for the design process.
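As far as I understand it, one common first step in making sense of sorting data is to count how often each pair of cards ends up in the same group across participants, giving a similarity matrix that can later be clustered. A minimal sketch follows; the card names and groupings are invented examples, not data from our project.

```python
# Sketch: building a card-pair co-occurrence (similarity) count from several
# participants' sorts. Card names and groupings are invented examples.
from itertools import combinations
from collections import Counter

# Each participant's sort: a list of groups, each group a list of card names.
sorts = [
    [["taste", "smell"], ["colour", "texture"]],
    [["taste", "smell", "texture"], ["colour"]],
    [["taste", "colour"], ["smell", "texture"]],
]

co_occurrence = Counter()
for sort in sorts:
    for group in sort:
        for a, b in combinations(sorted(group), 2):
            co_occurrence[(a, b)] += 1  # this pair was grouped together once more

# Pairs grouped together most often are candidates to sit together in the design.
for (a, b), count in co_occurrence.most_common():
    print(f"{a} + {b}: grouped together by {count} of {len(sorts)} participants")
```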


What I like most, and what I will choose for Esensorics
I like object-based methods the most, because they can generate rich data and the process is interesting for both researcher and user. But I don't think these methods can be very useful for us at this stage of the project: firstly because we now have a more or less clear idea of what we are going to build, and secondly because our context is not that open and free and we expect a specific outcome.
I would choose the associative technique, card sorting, because it is easier and faster to implement and the results can be more directly turned into detailed inputs to the design of the tool.

Thursday, February 12, 2015

Individual assignment 1



My feeling about the three methods of evaluation is that each captures a unique angle on the product and the user's relation to it. I think each of them is good for a different stage of the process, and choosing one will depend on what we want to capture by conducting the test.


Evaluating symbolic meaning can be a good method for evaluating or rethinking a complete or existing product. It belongs to the very last stages of design and development, or perhaps it is better to say it belongs to a higher level of abstraction. It captures the symbolic meaning of the product and the user's emotions and associations with it as a whole. This method gives a general overview of the meaning associated with the product but does not contribute to usability or other similarly fine-grained, low-level features.
The specific method used for capturing symbolic meaning facilitates communication between user and experimenter and eases the hard task of putting conceptual meanings into words.


Evaluating early product concepts through the Anticipated eXperience Evaluation (AXE) approach, on the other hand, goes a level deeper into the details and tries to evaluate concepts by means of their representation in the product (for example, using an early prototype). I think this method can be applied even to details of the implementation such as menus or features.
This method is useful for the early stages of design and development, and although it is possible to conduct this evaluation with the concepts alone, the presentation of a concept to participants inevitably shapes the feedback one can obtain in an evaluation session; for this reason a prototype will be a better medium for it.
The method used in this paper, using images as a medium and asking questions, makes communicating and interpreting thoughts easier for both user and designer.


Comparative research is a very practical, functional type of evaluation. Unlike the first two methods, it does not try to capture very high-level concepts or meanings; it starts from functionality and implementation and may go on to deeper levels. It also differs from the other two in the way it looks at the evaluation process: it does not focus only on your product but compares it with your potential competitors'. Although the paper suggests that comparative evaluation should be carried out on a regular basis, it seems very useful while conceptualising a potential product, which makes it a very good candidate for our project at its current stage.


What I will choose and why..
I think I will start with comparative research to get ideas about the possible directions we could take, and I will definitely evaluate early product concepts before going too far into implementation. In my opinion, symbolic meaning evaluation can be done after the product is on the market and has been used by a fair number of users.