As I already mentioned last week, my presentation was a great success! During our meeting last week, Karsten and I further discussed the audience's feedback as well as my plans for the upcoming week. Well, it had to happen some time: building a working classifier was the next goal to work towards!
So in the week after that meeting, I started writing the classification interface and integrating the appropriate Weka functions and variables. Once that was done, I needed to connect this extension to the rest of the project. Because the existing code urgently needed some refactoring first, this proved to be a major unexpected challenge… After finally succeeding, the debugging could begin!
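To give an idea of what such a classification interface could look like, here is a minimal sketch. All names (`EmotionClassifier`, `CentroidClassifier`) are illustrative, not the project's actual code, and the trivial nearest-centroid classifier below merely stands in for the Weka-backed implementation (which would instead delegate to Weka calls such as `Classifier.buildClassifier(Instances)` and `classifyInstance(Instance)`):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical interface the rest of the application would program against.
interface EmotionClassifier {
    String classify(double[] features);
}

// A trivial nearest-centroid stand-in for a real Weka-backed implementation.
class CentroidClassifier implements EmotionClassifier {
    private final Map<String, double[]> centroids = new HashMap<>();

    // "Training" here just records one centroid per emotion label.
    void train(String label, double[] centroid) {
        centroids.put(label, centroid);
    }

    // Returns the label whose centroid is closest (squared Euclidean distance).
    @Override
    public String classify(double[] features) {
        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (Map.Entry<String, double[]> e : centroids.entrySet()) {
            double d = 0;
            for (int i = 0; i < features.length; i++) {
                double diff = features[i] - e.getValue()[i];
                d += diff * diff;
            }
            if (d < bestDist) {
                bestDist = d;
                best = e.getKey();
            }
        }
        return best;
    }
}
```

Hiding the classifier behind a small interface like this is also what makes the refactoring worthwhile: the rest of the project only sees `classify`, so the Weka internals can change freely behind it.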
Unfortunately, I was still in debugging mode when my next meeting with Karsten took place yesterday. I told him about the major hurdles, and he gave me a few suggestions on how to tackle them. Katrien was also there, since she wanted to brainstorm about the application's feedback approach. She and Karsten suggested including two extra levels in the application's execution flow:
- when an emotion is detected automatically, it should be displayed to the user. The user should then have the opportunity to adjust the emotion on screen before it is fed to the underlying recommender system.
- when the recommender system makes a suggestion, it is again displayed to the user. Once more, the user should have the opportunity to adjust the output in a similar way.
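The two extra levels above can be sketched as a small pipeline. This is only an illustration of the proposed flow, not the application's actual code; the review steps and the recommender are passed in as plain functions standing in for the real UI dialogs and recommender system:

```java
import java.util.function.UnaryOperator;

// Hypothetical sketch of the two confirmation levels; all names are illustrative.
class FeedbackPipeline {
    private final UnaryOperator<String> emotionReview;     // user may adjust the detected emotion
    private final UnaryOperator<String> suggestionReview;  // user may adjust the suggestion

    FeedbackPipeline(UnaryOperator<String> emotionReview,
                     UnaryOperator<String> suggestionReview) {
        this.emotionReview = emotionReview;
        this.suggestionReview = suggestionReview;
    }

    // Level 1: show the detected emotion and let the user correct it before
    //          it reaches the recommender.
    // Level 2: show the recommender's suggestion and let the user adjust
    //          that output in the same way.
    String run(String detectedEmotion, UnaryOperator<String> recommender) {
        String confirmedEmotion = emotionReview.apply(detectedEmotion);
        String suggestion = recommender.apply(confirmedEmotion);
        return suggestionReview.apply(suggestion);
    }
}
```

For example, if the user overrides a detected "happy" to "sad" at level 1, the recommender receives "sad" and its suggestion still passes through the user once more at level 2 before anything happens.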
Subsequently, Karsten asked me to sketch out a plan for implementing these feedback mechanisms by next week, to be delivered along with a working classifier, of course.