I am wondering how music therapy fits into this space and am looking forward to getting the references I requested on this. Also looking for some papers on music therapy and autism.
This is another area with (understatement) a lot of research activity. Easily another 200 papers describing approaches to detecting prosody, focusing on specific aspects of prosody, and applications of prosody. Many papers are easily accessible via IEEE and ACM, and there is another tier of good-looking papers which I will have to (selectively) go and get via the library proxy. Hunting something down via the proxy can be time-consuming, so I’m going to be picky about which ones to get first. I’ll definitely get the references listed in the grant.
There were a few papers that surveyed methods of prosody detection (probably a good place to start). Other papers looked at prosody as a multimodal quality (it’s not just voice; it’s also eyebrows, forehead, hand gestures, general body language, etc.), which fits the grant. There is another layer of prosodic processing dealing with emotion, which I like. About 5 papers from the IEEE library specifically dealt with children. Some draw comparisons with music (which would be interesting to explore).
Common technical source journals and venues: Speech Communication, Cognitive Neuroscience, SIGGRAPH, International Journal of Speech Technology, COST, CHI.
A lot of references deal specifically with autism and prosody. This patch of papers is less “CS-y,” but they focus on prosody problems in autism (defining, testing for, demonstrating, etc.). Most of these references have to be downloaded via the library proxy. Some of them incorporate music.
I have all of the citations, and many of the papers downloaded. For the papers not downloaded yet, I’ll get them as I need to explore them further.
IRB training: completed 2 weeks ago. Wondering whether it’s been approved.
Can I observe a user study?
I’ve been doing a literature search on autism (general) and technology applications for autism to become familiar with the area, and to understand how people have attempted to use technology to help people on the spectrum, especially kids. On the first pass, I’ve found a lot of references! LOTS of research is being done in this area. The problem is not finding references, but weeding through them: picking out which ones are important, which areas I think are the most interesting, and where the best unexplored territory is; articulating some interesting research questions to pursue; and proposing tasks/projects to work on.
But first weeding through things. On the first pass, I’ve found a handful of books and about 300 publications on technology and autism (290, to be exact). I didn’t have to dig to find this stuff, so I’m expecting that when I center on a specialized area or two, digging deeper will turn up a lot more references.
I imported all of the references from the first pass into Mendeley, have sorted through them, and found that the technologies used tend to land in the following areas:
- Brain Imaging
- Genetic Analysis
- Face & Eye Tracking (measures attention and social connection)
- Robotic Apps (interactive robots like dolls, pets, etc.)
- Assistive Technology
- Sensors & Mobile Devices
- Virtual and Augmented Reality
- Other Interactive Systems
- Affective Applications
- Speech & Hearing Analysis
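As a side note on tooling, the sorting step could be partly scripted. This is a minimal sketch, not my actual workflow: the keyword lists and the sample entries are hypothetical, and it does only a very rough parse of a BibTeX export (e.g., from Mendeley), bucketing entries into a few of the areas above by title keywords.

```python
import re

# Hypothetical keyword buckets mirroring a few of the areas above;
# the keyword lists are illustrative, not a validated taxonomy.
AREAS = {
    "Robotic Apps": ["robot"],
    "Virtual and Augmented Reality": ["virtual reality", "augmented"],
    "Speech & Hearing Analysis": ["speech", "prosody", "hearing"],
}

def bucket_references(bibtex_text):
    """Group BibTeX entries into areas by keyword match on the title field."""
    buckets = {area: [] for area in AREAS}
    # Pull title fields out of each entry (very rough parse; assumes
    # titles contain no nested braces).
    for title in re.findall(r'title\s*=\s*[{"](.+?)[}"]', bibtex_text):
        for area, words in AREAS.items():
            if any(w in title.lower() for w in words):
                buckets[area].append(title)
    return buckets

# Tiny made-up sample standing in for a real export file.
sample = '''
@article{a1, title = {Interactive robot pets for children with autism}}
@article{a2, title = {Prosody detection in child speech}}
'''
print(bucket_references(sample))
```

A real pass would still need hand-sorting (titles alone miss a lot), but this kind of first cut makes the manual weeding faster.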
These technologies have different purposes. From reading through the abstracts, it looks like common motivations are:
- Diagnostic (what can be used to observe, measure, and diagnose the problem)
- Classroom Teaching (how to help someone who is high functioning enough to learn and focus)
- Unstructured Teaching (e.g., can you teach a child socialization skills with a robot? or by playing games)
- Immediate Assists (can you automate feedback for desirable and undesirable behaviour)
- Skills Improvement (can you improve, for example, attention, eye contact, etc.)
- Determining Causes (genetic markers, environmental stresses, disease, etc.)
- Understanding How to Intervene
- Helping Parents Help their Kids, and Cope (telemedicine, e.g.)
- Understanding how customization can help
I took a pass through all of the abstracts, and about 100 of them look like they could apply, so I’m in the process of downloading the papers. At this point I have about half of them. Some are hard to find, and I have to go through the library proxy to get them (most aren’t simple ACM or IEEE downloads). I’ve run into one or two journals that the library hasn’t licensed, and one specific article on eye tracking that the authors didn’t want to release electronically.
So I’m prioritizing getting the papers by what is most interesting to me, but making sure that I get a few samples in each area right now, so I have an idea of the scope.
I’m still very interested in anything to do with sound, speech, and hearing.
And anything with sensors and mobile devices.
I would like to investigate music therapy. How could this help? An environment for creating music? Combine it with interactive visuals, too. Multimodal and immersive.
Movement therapy? Could some sort of creative movement or dance help?
Anything affective is intriguing. How to attribute emotion, or teach emotional cues.
The robot apps sound cute (pets and dolls, for teaching emotion, measuring attention and response). Adorable.
Also some storytelling apps.