Comments (13)
Use VSS.
from robotutor_2019.
What's VSS?
Sorry for the TLA (Three Letter Acronym ;-). VSS = Visual Speech Synthesis -- which turned out not to work well enough to use. The only visual speech that's feasible to use is the talking-mouth video clips of syllables. But the more general solution, as in Derek's prototype, is simply to play the audio for each target when it appears.
From the description above it's still not very clear what the task is about. When are the bubbles supposed to speak their prompts? When they show up on the screen? Just once or keep repeating? Or when they're tapped? There is currently sound and visual feedback on tap, so is this just a way to customize that?
It would be good if somebody could include previous conversations/decisions on this topic here. Thanks.
Good questions. There's doubtless prior discussion somewhere but it's easier to summarize the upshot:
Bubbles should speak their prompts once when they appear, and again when tapped.
Their primary purpose is to allow tasks that map a stimulus to a spoken response -- i.e. a multiple-choice version of an oral-response item, assessed more reliably than speech recognition allows.
Does it appear feasible to include in code drop 1?
Seems doubtful. I still need to finish the other task, and there will be only a couple of days left before the code freeze. If I understand correctly, this is a task Derek estimated would take him a couple of weeks to finish, and he already knows what needs to be done. And to top things off, I need to show up for jury duty tomorrow. :)
Still lots of details to figure out. A few more questions come to mind -- maybe you guys have already figured them out:
- Is it only the audio that's supposed to be customized, or also the visual effect (the original post mentions something)? JACK: just the audio.
- Will these audios be available as audio file resources or in some other form? JACK: as audio resources.
- If we also want visual effects, are these already implemented so we can just choose from them, or do we need to implement such effects? JACK: Not implemented -- nor needed at present.
- Do we have a list of these visual effects? JACK: maybe somewhere, but it doesn't matter for now.
- Are the effects for appearance and tapping the same or different? JACK: Different. Appearing is an effective effect, it turns out. Tapping on a bubble in BubblePop has two cases -- correct and incorrect.
Tapping on a correct bubble pops it. For an audio target, play the audio first, then pop the bubble with its usual sound effect and animation.
Tapping on an incorrect bubble spotlights the tapped bubble and gives corrective feedback. For a wrong audio target, first play the sound effect for wrong answers:
{"type": "AUDIO", "command": "PLAY", "soundsource": "wrong.mp3", "soundpackage":"tutor_effect", "volume": 0.05, "mode":"event", "features": ""},
Then play the audio, e.g. "This is DOG."
Then spotlight the visual stimulus (if any) and repeat the stimulus prompt, e.g. "Tap on the word that starts like CAP."
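The correct/incorrect sequencing described above can be sketched as a small helper that returns the ordered feedback actions for a tap. This is a hypothetical illustration only -- the class and action names are not RoboTutor's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the tap-feedback sequencing for audio targets;
// names do not correspond to RoboTutor's actual classes.
public class BubbleTapFeedback {

    // Ordered audio/visual actions to run when a bubble is tapped.
    public static List<String> actionsFor(boolean correct) {
        List<String> actions = new ArrayList<>();
        if (correct) {
            actions.add("PLAY target audio");       // e.g. "DOG"
            actions.add("PLAY pop sound effect");
            actions.add("RUN pop animation");
        } else {
            actions.add("PLAY wrong.mp3");          // sound effect for wrong answers
            actions.add("PLAY target audio");       // e.g. "This is DOG."
            actions.add("SPOTLIGHT visual stimulus");
            actions.add("PLAY stimulus prompt");    // e.g. "Tap on the word that starts like CAP."
        }
        return actions;
    }

    public static void main(String[] args) {
        System.out.println(actionsFor(false));
    }
}
```

The point of the ordering is that the target audio always plays before the pop effect (correct case) and after the wrong-answer sound effect (incorrect case), matching the sequence Jack describes.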
- How do we specify all these customizations in the data sources? JACK: Much like others.
If the data source specifies ansValue, ansValueTwo, and ansValueThree, it could use one of them to specify the stimulus prompt (e.g. "Tap on the word that starts like"), and another for the answer, as in:
{"type": "AUDIO", "command": "PLAY", "soundsource": "{{SBubblePop.ansValue}}.mp3", "soundpackage": "expressions", "volume": 1.0, "mode": "flow", "features": "FTR_EXPRESSIONS"},
{"type": "AUDIO", "command": "PLAY", "soundsource": "{{SBubblePop.ansValueTwo}}.mp3", "soundpackage": "expressions", "volume": 1.0, "mode": "flow", "features": "FTR_EXPRESSIONS"},
{"type": "AUDIO", "command": "PLAY", "soundsource": "{{SBubblePop.ansValueThree}}.mp3", "soundpackage": "expressions", "volume": 1.0, "mode": "event", "features": "FTR_EXPRESSIONS"},
However, this is an unfortunate example, because the expression activity relies on Java code to parse the expression (e.g. "2+2") into the three variables.
WRITE's use of the data source is a much better example because it simply lets the data source set them.
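For instance, in the WRITE-style approach a data source record could set the three variables directly, so no Java parsing is needed. A sketch (the field names come from the discussion above; the values and surrounding record structure are hypothetical):

```json
{
  "ansValue": "tap_on_the_word_that_starts_like",
  "ansValueTwo": "cap",
  "ansValueThree": "dog"
}
```

The audio commands above would then resolve {{SBubblePop.ansValue}}.mp3 and friends to these file names at playback time.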
Octav - If audio targets work, please (if you haven't already done so):
- Incorporate them into RoboTutor.
- You and I agreed that data sources should specify "show" and/or "say" for targets just as they do for stimuli. Did you implement them that way? If so, where is the necessary syntax documented?
- Ask Judith who should audio-enable the responses, either by modifying the phonemic activity data source generator and regenerating them, or by editing them directly, e.g. RoboTutor/app/src/main/assets/tutors/bubble_pop/assets/data/sw/bpop.wrd_beg.wrd.ha.show.1.json.
- Point us to a working .apk demo.
Thanks. - Jack
This should finally be ready now. The changes are on branch audio_targets. Data sources indicate show and say for audio targets by adding the properties "target_show" and "target_say" to the data source files, similar to "question_show" and "question_say".
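As a sketch, a bpop data source entry with an audio target might then carry flags like these (only the four property names are taken from the comment above; the value types and any surrounding fields are assumptions):

```json
{
  "question_show": true,
  "question_say": true,
  "target_show": false,
  "target_say": true
}
```

With target_show false and target_say true, the bubble would carry no text label and be identified purely by its spoken audio.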
Are the properties what to say and show, or whether to do it?
These properties control whether to say or to show the targets, the same as for the stimuli. What to say is decided the same way as when a bubble is tapped.
Please explain what the hesitation part is. "Implemented hesitation" is far from self-explanatory.
I thought it's better to keep discussions here. The hesitation is what we discussed at some point: the prompts are repeated periodically if kids don't choose an answer. The period is now set to 6s, just because that's how it was set in some other tutor that implemented it. It can easily be changed, of course.
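The hesitation behavior can be sketched as a fixed-period repeat counter: after each full period of inactivity, replay the prompt. A minimal illustration (the class and method names are hypothetical, not RoboTutor's actual code):

```java
// Hypothetical sketch of hesitation prompting: repeat the prompt
// every `periodMs` of inactivity until the learner answers.
public class HesitationPrompter {
    private final long periodMs;

    public HesitationPrompter(long periodMs) {
        this.periodMs = periodMs;
    }

    // How many times the prompt should have been repeated after
    // `elapsedMs` of inactivity (the initial prompt at t=0 not counted).
    public int repeatsAfter(long elapsedMs) {
        if (elapsedMs < 0) return 0;
        return (int) (elapsedMs / periodMs);
    }

    public static void main(String[] args) {
        HesitationPrompter p = new HesitationPrompter(6000); // 6 s, as above
        System.out.println(p.repeatsAfter(13000)); // prints 2
    }
}
```

In the Android tutor itself this would be driven by a delayed-message timer that is cancelled as soon as the child taps an answer; the point here is just that the 6 s period is a single constant and trivially changeable.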
@judithodili - The audio targets capability enables bpop activities to map a visual and/or spoken stimulus to a spoken target (optionally with a text label as well). This gives us a multiple choice alternative to oral responses for assessment.