It finally happened!! I built my first basic classifier and IT WORKS!!
After numerous hours of debugging, I finally managed to implement a basic classifier that can distinguish between the discrete emotions neutral, happy, and sad in a very expressive context using only the Muse headband. One aspect that still needs revision, though, is the feature selection, which is non-existent at the moment because of some residual bugs…
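For anyone curious what a pipeline like this can look like, here is a rough sketch in Python with scikit-learn. The band-power features and labels below are synthetic placeholders (my actual feature extraction is where the residual bugs live), and the SVM is just one reasonable choice, not necessarily what my classifier uses:

```python
# Rough sketch of a 3-class emotion classifier on EEG band-power features.
# Everything data-related here is a synthetic placeholder: real features
# would come from the Muse's four channels (e.g. alpha/beta band power).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["neutral", "happy", "sad"]

# Placeholder data: 150 epochs x 8 features (4 channels x 2 bands).
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 8))
y = rng.integers(0, len(EMOTIONS), size=150)  # indices into EMOTIONS

# Standardize the features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f} (chance is ~{1/len(EMOTIONS):.2f})")
```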
The next thing on my agenda was the implementation plan for the feedback mechanism discussed last week. After Karsten set me on the right path, I found an interesting spinning wheel to represent a small set of discrete emotions:
My plan is to overlay the image with an arrow and have that arrow point to the emotion that was detected automatically. After that, the user should be able to manually adjust the arrow to point to the emotion that he or she is actually feeling (a quick sketch of that logic follows below). To help me get started on the implementation, I found a few Android sample projects and tutorials: Project 1, Project 2, Project 3.
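Before diving into the Android side, I sketched the core arrow logic in Python to get my head around it: mapping a detected emotion to an arrow angle on the wheel, and snapping a user-adjusted angle back to the nearest emotion. The evenly spaced segment layout is just an assumption for illustration; the real wheel image may divide its segments differently.

```python
# Sketch of the arrow/feedback logic, assuming the wheel lays out its
# discrete emotions in evenly spaced segments (hypothetical layout; the
# real wheel image may order and size its segments differently).
EMOTIONS = ["neutral", "happy", "sad"]
SEGMENT = 360.0 / len(EMOTIONS)

def emotion_to_angle(emotion: str) -> float:
    """Point the arrow at the centre of the detected emotion's segment."""
    return EMOTIONS.index(emotion) * SEGMENT + SEGMENT / 2

def angle_to_emotion(angle: float) -> str:
    """Snap a user-adjusted arrow angle back to the nearest segment."""
    return EMOTIONS[int((angle % 360.0) // SEGMENT)]

# The classifier's output sets the arrow's initial position...
arrow = emotion_to_angle("happy")            # 180.0 degrees here
# ...and after the user drags the arrow, we read the correction back.
corrected = angle_to_emotion(arrow + 95.0)   # lands in the "sad" segment
print(arrow, corrected)
```

Getting this right as plain logic first should make the Android overlay itself mostly a drawing and touch-handling problem.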
To conclude this post, I thought it would be interesting to provide a link to a recently published article on DARPA’s vision of the future. Let’s make this future happen and start recognizing some emotions!!