GreenBean Report: On-line Study to Evaluate Med-fi Prototype



Kevin Merritt (User Research and Design)
Pallavi Damera (User Research and Design)
Justin Pai (UI Development and Design)
Rick Chen (UI Development and Design)
Blake Thomson (UI Development and Design)

Purpose

greenBean is a proposed mobile application that helps shoppers make greener choices. Its key functionalities are:

  • Showing users a product's green rating,

  • Letting them scan a product to find product information and its greenBean rating,

  • Letting them search for and quickly compare similar products, and

  • Letting them set preferences to avoid allergens.

The purpose of this study is to evaluate the improvements incorporated in the medium-fidelity prototype of greenBean and to collect users' input to inform our design decisions.

Method

Participants

  • Recruitment

We recruited friends and friends of friends by word of mouth. The email invite and sample status messages that we posted on social networks and chat applications are included in the Appendix. The on-line study was open for one day, during which we collected 15 completed studies. Each study took about 20 minutes.

  • Incentive

There were no monetary incentives. Friendly requests, reminders, and (in some cases) a promise to return the favor by participating in their studies brought us participants. Requests were framed along the lines of “Take a look at how our product is shaping up, and help us improve it.”


Tasks

We designed three tasks of varying difficulty that cover the key features of the product:



  • Task 1 (Easy): Check my greenBean score

Scenario: You want to find out the cumulative impact of your shopping choices to see how “green” you have been overall. You have already entered your Safeway loyalty card information, so greenBean can track what you have purchased over time.

Task: Check your greenBean score and score breakdown.

Validation question: Please note the following: In which week of this month did you score the most "beans"?

  • Task 2 (Moderate): Identifying products that are safe

Scenario: You are allergic to peanuts and need to find granola that will be safe to eat.

Task: First, use the “Preferences” section of greenBean to identify peanuts as an allergen. Then use greenBean search to find a safe granola bar.

Validation question: Please note the following: Is Udi’s an allergy safe granola product?

  • Task 3 (Difficult): Scan and compare products

Scenario: You’re at the grocery store shopping and you need to get laundry detergent. You found Kirkland detergent but want to evaluate it against other detergents available in the store.

Task: Scan the product barcode to look up more details about Kirkland detergent. Look up its ratings. Compare it to other detergents to help you decide which kind to buy.

Validation questions: How many user reviews were there for the product? Which product has the lowest price/unit?


Procedure

We conducted an on-line user study to evaluate a med-fi prototype, using Userzoom to set the study up. The complete study is included in the Appendix; in brief, its components are:



  • The initial questionnaire gathered demographic information and the participants' familiarity with mobile applications in general.

  • Tasks:

    • Tasks are served in randomized order, so that a fixed order does not bias the ease of a task through exposure to earlier tasks.

    • Each task opens with a screen that explains the scenario and the task, and asks the user to note certain details that later serve as validation questions.

    • Participants are then taken to the link that runs the prototype, with the task and validation questions still in context.

    • Participants who reported completing the task successfully (we will call them successful participants) are given validation questions to confirm that they did complete it.

      • The validation questions are framed so as not to hint at how to complete the task.

      • Where tasks have subtasks, multiple validation questions confirm that all the subtasks were reached. Successful participants who answered the validation questions correctly are called “confirmed successful participants.”

      • Successful participants are then served the Success questionnaire. This section collects quantitative data to inform some design issues and gathers feedback on the current task.

    • Participants who abandon the task are taken to the Abandon questionnaire. This section collects information to identify where in the task flow the breakdown occurred.

    • In both the Abandon and Success questionnaires, participants can also give qualitative feedback.

    • This protocol is the same for all three tasks.

  • A final questionnaire is given at the end of all three tasks. It is qualitative, covering what participants liked most and least and the relevance of the information and tasks.

The entire questionnaire is included in the Appendix. It is important to mention the constraints of this study. Being an on-line study about 20 minutes long, participants' motivation to complete all tasks could be low due to the lack of social pressure. Timing data may not be very accurate, since participants could multitask. Participants also may not be compelled to read the instructions carefully, however concisely and simply they are presented. The pilot study, however, helped us identify many such issues.
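The task-serving protocol above can be sketched in code. This is a minimal illustration of the flow, not how Userzoom implements it; `run_task` and `give_questionnaire` are hypothetical callbacks standing in for the survey tool.

```python
import random

TASKS = ["check_score", "allergen_search", "scan_compare"]

def run_study(participant, run_task, give_questionnaire):
    """Serve the three tasks in randomized order and branch on the outcome.

    run_task returns True if the participant reports completing the task;
    give_questionnaire returns the responses for the named questionnaire.
    Both are hypothetical stand-ins for the survey tool.
    """
    order = TASKS[:]
    random.shuffle(order)  # randomize order to avoid exposure bias
    results = {}
    for task in order:
        if run_task(participant, task):
            # successful participants get validation questions,
            # then the Success questionnaire
            results[task] = {
                "outcome": "success",
                "validation": give_questionnaire(participant, task, "validation"),
                "feedback": give_questionnaire(participant, task, "success"),
            }
        else:
            # abandoning participants go straight to the Abandon questionnaire
            results[task] = {
                "outcome": "abandon",
                "feedback": give_questionnaire(participant, task, "abandon"),
            }
    # qualitative final questionnaire after all three tasks
    results["final"] = give_questionnaire(participant, None, "final")
    return results
```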




Test Measures

For the final analysis we take only completed studies into account. The survey link received 90 hits; 23 visitors quit at the Welcome page, 52 dropped the study midway, and 15 fully completed it.



Successful completions are those in which participants reported completing the task. Confirmed successful completions are those in which successful participants also answered all the validation questions correctly; the validation questions serve to confirm that all subtasks were completed.

Success questions are framed to gather feedback about the task interaction and to inform our design regarding options and multiple ways of doing the same thing. Abandon questions are framed to give us a clue about the point at which the task broke down. Qualitative questions give us ideas for improving the design.
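The classification rules above can be made concrete with a short sketch. The record fields (`reported_success`, `validation_answers`) are assumed for illustration and are not the actual Userzoom export format.

```python
def classify(record, answer_key):
    """Classify one participant-task record.

    record: dict with 'reported_success' (bool) and 'validation_answers'
            (question -> answer); field names are assumed, for illustration.
    answer_key: dict of question -> correct answer.
    """
    if not record["reported_success"]:
        return "abandoned"
    # a successful completion is "confirmed" only if every validation
    # question was answered correctly
    all_correct = all(record["validation_answers"].get(q) == a
                      for q, a in answer_key.items())
    return "confirmed_success" if all_correct else "unconfirmed_success"
```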



Results
Here we present some of the most important results. The raw data and a compilation of all results are included in the Appendix. These results reflect 15 completed studies.
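The per-task time summaries below (mean, standard deviation, maximum, minimum, shown as m:ss) can be reproduced from raw completion times with the Python standard library. The sample times in this sketch are illustrative only, not our study data.

```python
import statistics

def summarize(seconds):
    """Summarize completion times (given in seconds) and format them as m:ss."""
    fmt = lambda s: f"{int(s) // 60}:{int(s) % 60:02d}"
    return {
        "mean": fmt(statistics.mean(seconds)),
        "stdev": fmt(statistics.stdev(seconds)),  # sample standard deviation
        "max": fmt(max(seconds)),
        "min": fmt(min(seconds)),
    }

# illustrative values only
print(summarize([20, 40, 45, 50, 55, 60]))
```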
Demographics and Product use related:

The participants' ages range from 20 to 25, with one outlier of 33.
Gender: [chart omitted]

Handheld device use: [chart omitted]

Familiarity with Android: [chart omitted]

Familiarity with phone operations (1: Not familiar, 5: Familiar):

  Operation                    Mean
  Taking pictures              4.2
  Browsing the Internet        2.9
  Sending text messages        4.8
  Sending email                2.5
  Sending picture messages     3.8
  Finding directions on maps   2.2


Product search in stores

On average, participants rated their feeling of being environmentally friendly 3.7/5 (1: Don't care, 5: Feel strongly).


Task 1: Check my greenBean Score:

For success cases (task time):

  Mean     0:46
  StDev    0:12
  Maximum  1:00
  Minimum  0:20

[chart omitted]

For non-success cases:

  Outcome   Count   Percent   Time
  Abandon   1       25%       0:04
  Error     3       75%       1:46

[chart omitted]
Successful task scenarios:

80% of the successful completions were confirmed by correct answers to the validation questions.



Quantitative responses (1: Strongly disagree, 5: Strongly agree):

  Statement                                                          Mean
  The task was easy to complete.                                     4.4
  The interface helped me to complete the task.                      4.4
  It was intuitive to find your weekly, monthly, and yearly
  scores using the tabbed navigation.                                4.6
  It is important to see your score visually.                        4.4
  It is important to know the exact number of "beans" collected.     3.8


Abandon task scenarios (1: Yes, 2: No, 3: I don't know):

  Question                                                  Yes        No         Don't know   Total
  Could you start the application by clicking on
  "greenBean" on the phone's desktop?                       100% (1)   0% (0)     0% (0)       1
  Did you click on the "Check my Score" button on the
  greenBean home page?                                      0% (0)     100% (1)   0% (0)       1
  Did you click on the "Month" tab?                         0% (0)     0% (0)     100% (1)     1
  Did you view the score breakdown in the "Month" tab?      0% (0)     0% (0)     100% (1)     1

Task 2: Set allergen preferences, and search for a safe product.

For success cases (task time):

  Mean     3:08
  StDev    0:57
  Maximum  4:46
  Minimum  1:49

[chart omitted]



For non-success cases:

  Outcome   Count   Percent   Time
  Abandon   5       100%      2:32

[chart omitted]
Successful task scenarios:

70% confirmed completing the final subtask (searching for a safe product) and 100% confirmed the first subtask (setting allergy preferences).



Quantitative responses (1: Strongly disagree, 5: Strongly agree):

  Statement                                                                Mean
  The task was easy to complete.                                           3.2
  The interface helped me to complete the task.                            3
  It was intuitive to find the "Preferences" button in the context menu.   3.1
  For subsequent searches, should the filter be turned ON by default?      4.1


Abandon task scenarios (1: Yes, 2: No, 3: I don't know):

  Question                                                  Yes       No        Don't know   Total
  Could you start the application by clicking on
  "greenBean" on the phone's desktop?                       80% (4)   20% (1)   0% (0)       5
  Could you find the "Preferences" page?                    80% (4)   20% (1)   0% (0)       5
  Did you select Peanut as an allergy by clicking the
  check box next to its name?                               80% (4)   20% (1)   0% (0)       5
  Did you know how to get options using the phone's
  "MENU" button?                                            60% (3)   40% (2)   0% (0)       5

Task 3: Scan and search similar products

For success cases (task time):

  Mean     1:59
  StDev    1:06
  Maximum  3:35
  Minimum  0:10

[chart omitted]


For non-success cases:

  Outcome   Count   Percent   Time
  Abandon   4       67%       2:00
  Error     2       33%       2:26

[chart omitted]
Successful task scenarios:

50% confirmed completing the final subtask (search similar) and 64% confirmed the first subtask (scan).



Quantitative responses (1: Strongly disagree, 5: Strongly agree):

  Statement                                                                 Mean
  The task was easy to complete.                                            3.6
  The interface helped me to complete the task.                             3.8
  It was easy to find the center button which allowed me to capture
  the barcode.                                                              4.2
  I liked the way information was presented about the individual product.   4.2
  The criteria to sort the products are good.                               4


Abandon task scenarios (1: Yes, 2: No, 3: I don't know):

  Question                                                  Yes       No        Don't know   Total
  Could you start the application by clicking on
  "greenBean" on the phone's desktop?                       75% (3)   25% (1)   0% (0)       4
  Did you click on the "Scan" button on the greenBean
  home page?                                                75% (3)   25% (1)   0% (0)       4
  Could you capture the item's barcode?                     50% (2)   50% (2)   0% (0)       4
  Did you find the user reviews in the Ratings section
  of the product?                                           25% (1)   50% (2)   25% (1)      4
  Did you click on the "Search similar" button on the
  product page to compare?                                  0% (0)    75% (3)   25% (1)      4


Discussion
Participants
Participants belonged to the 20-25 age group, a tech-savvy group, though the affordability of high-end smartphones may be an issue for them. 67% of them are male. All of them had cell phones, and about 40% have PDAs or smartphones. A majority also carry MP3 players, a sign of carrying multiple mobile devices. Most have never tried Android before, so it is interesting to see how intuitive the interactions involving hardware buttons are.

While most users were very familiar with taking photos and texting, they were less familiar with using their phones to send email, browse the internet, or find directions on a map, an indicator of novice smartphone users. If these novice users can accomplish the tasks successfully, the design can be considered quite usable.

Most participants had felt the need to access product information on the go, and they felt strongly about being environmentally friendly, two of the main facets of greenBean. We can therefore say that the need for an application like greenBean is quite compelling for these participants.

Task 1: Checking your greenBean score.

This task had the lowest time to complete and the smallest failure rate, confirming our hypothesis that this is the easiest task.

Task 1 mean time was 0:46, compared to over 3 minutes for Task 2 and about 2 minutes for Task 3.
The quantitative and qualitative feedback reflect the same.

“This was very simple and easy to navigate.”


Other important answers and feedback we got are:

  • Participants wanted to see their score both visually and quantitatively; currently some screens show the score only visually.

  • The “bean” metaphor for the score was confusing:

“What exactly do the "beans" mean? That you were environmentally friendly?”

“How is each bean defined? What does it mean to have collected 1 greenBean?”



  • The “Score” back-navigation button used to return from the Score Breakdown page to the Score page was confusing:

“I thought it was kind of confusing that there was a "score" button after I had already looked at the graph. It just took me back to the previous page...”


Task 2: Set allergen preferences, and search for a safe product.


In Task 2, one of our main questions was whether it is intuitive to find the Preferences button in the context menu. The successful participants (67%) rated its intuitiveness 3.1/5, and 80% of the non-success cases could still reach Preferences, even though the majority of users are unfamiliar with Android. We can therefore safely infer that it is acceptable to keep Preferences in the context menu.

Another input we sought was what users think the default state of search filters should be. The success-case participants (67%) rated 4.1/5 in favor of the filter being ON by default.

Task 2 took a fairly long time for successful participants, with a 3-minute mean. Since this is a two-stage task, tested on-line on a just-enough prototype and assuming familiarity with Android to reach the preference settings, we think the task performance is average and slight improvements can be made.
Other important feedback to note:


  • We need to improve how the filter is worded:

“If someone had allergies they would never want foods to be indicated if they were allergic to them. Also the check box that says apply filters is very ambiguous.”

  • Participants could find the contextual help in the Menu:

Where did you look for help?

“In the screen that is prompted when you push the menu key, next to the Preferences button.”


Task 3: Scan and compare products

Scanning is a new interaction on mobile phones, so we wanted to test whether participants found it easy to use. Based on feedback from the heuristic evaluation, we also changed the orientation and made the barcode-capture interaction explicit; the first subtask tested this aspect.
Though the reported success rate is only 60%, the mean time spent is only about 1:30, and the task was rated high for ease of use and simplicity.
Also, since the product information page had some major design changes, it was encouraging to see that participants gave the page a 4.2/5.
With the second subtask, we wanted to test the placement of the “Search similar” button, which was changed in the new design. Judging from the abandoned cases, the button was not very intuitive to find. From the responses to the validation question, we also suspect that the tabbed sort-by function did not fit the users' mental model.

“Personally, I found it straightforward and easy to use. The only problem that I had was with the sorting buttons because they were too small. But considering that this application would be on a cell phone, I can understand why it would be small.”


Overall, the task was perceived as fairly easy, with hiccups around “Search similar”:

“Fairly easy. Although it took me some time to find the "similar products" button, so I could just look up all the detergents at once.”



Overall Application
The qualitative feedback we got pointed to some very relevant and important issues.
  • “On the home page, I feel that the logo is given more importance than the functionality itself. It would have been great if all the buttons were lined up in order of usage (barcode, search, list, etc.) one below the other.”

  • “The filter system was the most awkward part.”

  • “Ingredients should be a list and not a paragraph.”

  • “I didn’t like the hidden options and preferences.”

  • “The layout looks really professional, and the information obtained from searching was very complete and useful. Also the "check my score" feature is a good one, so people can see the overall effect they are having by choosing greener products.”

  • “For the most part, it was easy to use. I like the buttons on the home screen. They are very clear. I like the instructions below the bar code scan. They helped a lot.”

  • “There is certainly a strong need for this kind of an application.”



Design Recommendations



Home screen:

The button layout needs more work: a clean, well-aligned list, perhaps.

Bring important functions out onto the screen as well as into the menu.
Score:

Show quantitative score values as well as the visual display.

Back navigation should be labeled as back rather than simply “Score”; “<< SCORE” might work.
Preferences:

Since this is a key feature, make it visible on the home screen as well; do not hide it in the menu.

Turn the filter ON by default.

The filter should indicate which allergens and diet restrictions are applied.


Product Information:

“Search similar” should be in a more expected place.

List ingredients as a list, not a paragraph.

In search results, the tabbed “sort by” is not very effective.




Tool Features & Limitations

While creating a survey:

Need a quick way to preview an entire section

Need to be able to upload Flash and ask questions inline

Ability to check a numeric-only condition on answers

Couldn’t bold text successfully

Couldn’t add an image to the survey successfully

The top-left logo does not take you Home

On failing a validation question, the user should go to Abandon, not Success; leave this control to the survey creator

Option to serve an entire section instead of one question at a time

It would be nice to have a way to pilot-test by running through all the logical paths


For participants:

Sometimes participants could not start the survey by clicking the link

The default redirect link’s dialogs need to be more polite

Show survey progress to participants


While dealing with data:

What is task abandon time? There is no help to explain it

When viewing data with a rating scale, the labels are missing

Not able to delete a particular participant’s data when it pollutes the dataset

Does not allow viewing the survey questions while the survey is ON
Nice features:

Excel and Word exports of data

The data summary report is very helpful

The different survey structures and question types help even a novice user build a complex survey



Appendix
Sample status message invite:
“Help me test my group project: https://server.userzoom.com/uz/auto.asp?p=1&s=C19S6”
Sample email invite:

Hi,

I need your help to test my group project.

My team is working on a mobile application, greenBean, that helps shoppers make greener choices.


If you want to have a look at how the application is shaping up and help it improve, please take part in the following study:

https://server.userzoom.com/uz/auto.asp?p=1&s=C19S6

It takes approximately 20 minutes.

The survey closes on Feb 11th: Wednesday at 10pm PST.


Also, do encourage your friends to participate; we would love to have feedback from multiple users.

We really appreciate your time and interest in helping us improve greenBean.



Thank you very much,



P.S: Please forward this to other people you think may be interested. We would love to have feedback from multiple users.



Study data: