
Week 12 & 13: Final Prototype


1.0 Background & Intended Experience

This prototype moved away from a reliance on cue cards and instead provided actual ingredients and a complete process/recipe to follow. At the end of the process there would be a tangible outcome (a completed recipe producing a finished drink) that learners could reflect on and treat as a measure of their progress in understanding the language.

Essentially, it is a situation someone could feasibly encounter while going through their day in the culture behind the language. Getting users to make associations between sounds, mental images, and the physical objects they hold was a major goal.

Finally, repetition: the idea was that the prototype would be used in the learner’s kitchen repeatedly over time, although the rest of the experience (choosing a language and/or a recipe) was considered out of scope.

We imagine the system would actually be used on a tablet placed in the kitchen, slightly away from the food. Users tap the screen to hear the words, carry the tablet to search for ingredients through its camera and see their names in the language they’re learning through AR, and then trigger the ‘say’ button for each word; provided the pronunciation is correct, they get some manner of on-screen feedback that they’ve succeeded.
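As a rough, hypothetical illustration of this loop (nothing below is from the prototype itself, which relied on Wizard-of-Oz judging; see section 4.0), a single recipe step might be simulated at the console like this:

```python
# Hypothetical console simulation of one recipe step. The helper names,
# the Malay sentence, and the string-matching "pronunciation check" are
# all placeholders; in the actual session a team member made this judgment.

STEP = {
    "sentence": "Tuang teh ke dalam gelas",  # roughly: "Pour the tea into the glass"
    "ingredient": "teh",                     # the ingredient this step expects
}

def play_audio(text: str) -> None:
    print(f"[audio] {text}")                 # stand-in for the tablet playing the words

def check_pronunciation(attempt: str, target: str) -> bool:
    # Placeholder check; the prototype used a human 'wizard' instead
    return attempt.strip().lower() == target.lower()

def run_step(step: dict) -> None:
    play_audio(step["sentence"])             # user taps the sentence to hear it
    scanned = input("Scan an ingredient (type its name): ")
    if scanned.strip().lower() != step["ingredient"]:
        print("Wrong ingredient - check the AR label and try again.")
        return
    attempt = input("Press 'say' and repeat the sentence: ")
    if check_pronunciation(attempt, step["sentence"]):
        print("On-screen feedback: correct!")
    else:
        print("Not quite - try again.")

if __name__ == "__main__":
    run_step(STEP)
```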

2.0 “Kitchen Environment”

[Image: final prototype kitchen setup]

  • A small setup reflecting a kitchen (the environment we imagine the system would be used in)
  • Actual ingredients on the table, alongside a tablecloth and some utensils for picking them up and stirring
  • Ideally the tablet on the table wouldn’t have a touchpad/keyboard, but those were necessary for troubleshooting during this session

3.0 Interface

  • Simple four-to-five-word sentences at a grade-school level to encourage quick interactions
  • This time, word breakdowns appear alongside sentence breakdowns (instead of on separate cue cards)
  • Several activities/sentences now, instead of just one
    • Users can navigate back to a previous screen, or restart the current one
    • Each task has a different goal
    • Tasks are labeled so users know where they are in the process - though in the actual system the number of steps would likely be variable
  • The main sentence still allowed unlimited attempts; individual words still required three tries to attempt, refine, and solidify pronunciation (see the sketch after this list)
    • Getting learners to immediately reflect on what they just said, then try it one more time before moving on
    • Encouraging learners not to spend too much time on any individual word
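A minimal sketch of these attempt rules, assuming a hypothetical pronounce callback standing in for the Wizard-of-Oz judgment:

```python
# Sketch of the attempt rules above: unlimited tries for the main sentence,
# exactly three tries per individual word (attempt, refine, solidify).
# The 'pronounce' callback is hypothetical; a team member made this call.

from typing import Callable

WORD_TRIES = 3

def practice_word(word: str, pronounce: Callable[[str], bool]) -> None:
    for attempt in range(1, WORD_TRIES + 1):
        ok = pronounce(word)
        print(f"'{word}' try {attempt}/{WORD_TRIES}: {'correct' if ok else 'keep refining'}")
    # Move on after three tries either way, so no single word becomes a wall

def practice_sentence(sentence: str, pronounce: Callable[[str], bool]) -> None:
    while not pronounce(sentence):           # unlimited attempts for the full sentence
        print("Try the whole sentence again.")
    print("Sentence complete!")

if __name__ == "__main__":
    always_correct = lambda text: True       # demo stub, mirroring the WoZ session
    practice_word("teh", always_correct)
    practice_sentence("Tuang teh ke dalam gelas", always_correct)
```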

Step 1

  • A straightforward sentence about pouring tea into a glass
  • Initially we considered having learners also brew the tea, but making it from scratch would defeat the “slapped together” aspect of some Malaysian food culture
  • Additionally, premade tea is more recognisable and less likely to cause issues than home-brewed tea mixed at an inadvisable ratio
  • The ratio/percentage of ingredients didn’t matter for this prototype
  • The focus was on getting through the recipe; exact ingredient amounts were left to user preference

Step 2

  • Another straightforward sentence
  • This task shows how some words in Malay don’t necessarily have direct translations
  • Grass jelly isn’t actually made of grass, but the plant it’s made from doesn’t have a common English name
  • Some words are specific to regional items not present elsewhere
  • Knowing the cultural context of grass jelly as an ingredient in a tea drink gives the learner a glimpse of the background of the place the language they’re learning comes from, without needing to be there in person

Step 3

  • This task demonstrates how descriptors in Malay work in the opposite order to the way they do in English
  • Some tasks could focus not exclusively on words, but on language concepts too

Step 4

  • This task demonstrates to learners how the same root word can take on different meanings depending on its suffix, sometimes even within the same sentence.

4.0 Feedback Lighting

  • A prototype that lights up when the user pronounces a word correctly
  • Manually triggered for this experiment, Wizard-of-Oz style, with pronunciations ‘checked’ by team members
  • For the purpose of the demonstration, all users received ‘correct’ responses regardless of pronunciation
  • The intent: immediate feedback for listening to and saying a word correctly
  • In the actual system this would be an on-screen element or a hardware device (a rough sketch follows below)

[Image: feedback lighting]
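As one way the manual trigger could work, here is a sketch assuming a Raspberry Pi with an LED wired to GPIO pin 17 and the gpiozero library; the prototype’s actual hardware is not documented here, so all of this is an assumption:

```python
# Hypothetical Wizard-of-Oz trigger: a team member presses Enter whenever
# they judge a pronunciation correct, flashing the feedback light.
# Assumes a Raspberry Pi, an LED on GPIO pin 17, and gpiozero installed.

from time import sleep
from gpiozero import LED

feedback_light = LED(17)                     # assumed wiring

def flash(duration: float = 1.0) -> None:
    feedback_light.on()
    sleep(duration)
    feedback_light.off()

if __name__ == "__main__":
    while True:
        input("Press Enter when the pronunciation is 'correct'... ")
        flash()                              # immediate, visible success feedback
```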

5.0 Augmented Reality Ingredient Display

  • The ingredients themselves were intended to be cheap and easily available
  • Ingredients (or their associated QR codes) are scanned through a prepared phone
  • Name of ingredient is displayed in text (in the new language) above the scanned item
  • Users match the right ingredient to the recipe before proceeding with the step
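A minimal sketch of this QR-based labelling, assuming OpenCV and a laptop webcam in place of the prepared phone; the QR payloads and the Malay names in LABELS below are our own illustration, not taken from the prototype:

```python
# Hypothetical QR-based AR labelling: detect a QR code in the camera feed
# and draw the ingredient's name in the target language above it.
# Requires opencv-python; the payload -> name mapping is illustrative.

import cv2

LABELS = {
    "tea": "teh",
    "grass_jelly": "cincau",
    "coconut_jelly": "nata de coco",
    "water": "air",
}

detector = cv2.QRCodeDetector()
cap = cv2.VideoCapture(0)                    # webcam standing in for the phone

while True:
    ok, frame = cap.read()
    if not ok:
        break
    data, points, _ = detector.detectAndDecode(frame)
    if data and points is not None:
        x, y = map(int, points[0][0])        # top-left corner of the QR code
        name = LABELS.get(data, data)        # fall back to the raw payload
        cv2.putText(frame, name, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("ingredient labels", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```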

The final ingredients were iced tea, grass jelly, coconut jelly, and water.

  • Chosen because they did not require heat or sharp utensils
  • Readily available, and didn’t require long wait times and/or overseas delivery
  • Similar to a local recipe without needing prep time or overnight processes
  • The recipe variant chosen requires ten minutes at most to complete, to keep the session short and interesting (the team believes that sustaining interest can be troublesome if something is too complex and the learner feels like they’ve hit a brick wall)
  • “Slapped together” recipes are very popular in Malaysian culture; ingredients one wouldn’t expect to go together are often mixed to provide a refreshing escape from the heat and humidity