Evaluation of Questionnaire and Testing

The main problem with the testing we carried out was that we did not have a particularly large number of users (we had 7 respondents to the questionnaire), and these respondents were all from a similar background: the 20-30 age range, mostly students who are "tech-savvy". On the other hand, this group is likely to be among the early adopters of the product. After reviewing and improving the design of the product, it would be a good idea to test it on a wider range of potential users to identify further problems, e.g. issues for users with visual impairments, elderly users or users with fat fingers.

There was also some confusion among the test users, who did not quite understand that this would be a touch-screen interface on a fridge. There were concerns that on small monitors the interface would take up too much of the screen, but this would not be an issue, as the prototype illustrates a piece of bespoke hardware running the software. It would have been beneficial to give a more in-depth introduction to the project and the exact nature of the prototype; we had assumed that the test users would remember the details from a previous questionnaire.

Another issue was that we were unable to physically time people carrying out tasks, because the test users were not in the same physical location as us; the prototype and questionnaire are both web-based. The nature of our questions also led some respondents to neglect to give quantitative or full answers. This made analysis more difficult, and could be remedied by wording questions more carefully, adding questions that specifically ask for certain data, or only testing the prototype on users in the same physical location as us, so that we can resolve any confusion.

An additional effect of being in the same physical location as the testers is that the questionnaire could receive different responses when completed in the presence of a human questioner rather than on a computer. A computer-administered questionnaire reduces "interviewer bias", since all users are asked the same questions in the same way, and users may be more honest when answering to a computer than to a human, who might respond negatively to a user's opinion or disclose private information (http://www.surveysystem.com/sdesign.htm).

Unfortunately, more detailed statistical analysis could not be carried out on our result set, as the sample user group was too small for the results to be meaningful.
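To illustrate why 7 respondents is too few for meaningful statistics, a quick confidence-interval calculation makes the point. The sketch below uses the standard normal-approximation interval for a proportion; the figure of 5 successful task completions is purely hypothetical, not from our actual results.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return (p - z * se, p + z * se)

# Hypothetical: suppose 5 of our 7 respondents had completed a given task.
low, high = proportion_ci(5, 7)
print(f"95% CI for the success rate: [{low:.2f}, {high:.2f}]")
```

With n = 7 the interval spans well over half the possible range (and its upper bound even exceeds 100%, a sign the approximation itself breaks down at this sample size), so no useful conclusion about the wider user population could be drawn.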
