It’s been a while since my last blog post and, boy, have I got news for you!
As for the classification challenge I talked about earlier: attributes have been selected, classifiers have been built, accuracies have been documented, and the results are in!
Note that a good session is defined as one where the detected emotion (i.e. the label) equals the experienced emotion (i.e. the user feedback). The tables above show the best-performing classifier for each user (relying only on base classifiers in the first case and including meta classifiers in the second), and these accuracy rates are off the charts! One reason for this might be that the underlying sessions were recorded in a fixed order… Because of this potential confound, subsequent experiments (for new test subjects) will have to guarantee a randomized recording order, after which the results can be compared.
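To make the evaluation concrete, here is a minimal sketch of both ideas: scoring sessions as "good" when the detected label matches the reported emotion, and drawing a randomized recording order for a new test subject. The session data and emotion names are made up for illustration; they are not the actual experimental data.

```python
import random

# Hypothetical session records: detected label vs. user-reported emotion.
sessions = [
    {"detected": "happy", "felt": "happy"},
    {"detected": "sad", "felt": "angry"},
    {"detected": "scared", "felt": "scared"},
    {"detected": "happy", "felt": "happy"},
]

# A session is "good" when the detected emotion equals the experienced one,
# so accuracy is simply the fraction of matching sessions.
good = sum(1 for s in sessions if s["detected"] == s["felt"])
accuracy = good / len(sessions)
print(f"accuracy: {accuracy:.2f}")  # 3 of 4 sessions match -> 0.75

# For new test subjects, shuffle the recording order so that the order of
# the sessions cannot systematically influence the results.
emotions = ["happy", "sad", "angry", "scared", "disgusted"]
recording_order = random.sample(emotions, k=len(emotions))
print(recording_order)
```

Comparing accuracies between the fixed-order and randomized-order groups should then reveal whether the suspiciously high numbers were an artifact of the recording order.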
Another improvement I made is adjusting the colors of the feedback wheel. Katrien suggested this because certain correlations between emotions and colors might influence the feedback process. I reviewed some literature on the problem and found that there is hardly any consensus on which emotion correlates with which color. I therefore mimicked the colors of Plutchik’s famous wheel of emotions, which merely came down to switching the colors for “scared” and “disgusted”.
Finally, I designed the main screen for my application: the one where an emotion is detected, feedback is processed, and a corresponding song is subsequently played. What better way to show you than with a little video!
All of what I just told you (plus a few more details) can also be found in my second intermediate presentation, which I will present in class tomorrow. Wish me luck!