Project in Google’s 2011 EMEA AndroidEDU programme
Participants
Question Clustering
Android Application Development
Android Pro & Con side project
Supervisors
Scenario

Dan is giving a presentation to his colleagues in the AI course. As he goes through the slides, the students in the audience can use their Android devices to navigate independently through the presentation and to add annotations or write questions. His colleague Alice is unclear about why Dan uses a certain mathematical formula on a slide, so she wants to ask about it, but she sees in the feedback interface that Ben has already asked that question. Alice therefore just needs to "plus one" Ben's question. At any time, Dan can glance at his Android smartphone to see an aggregated view of the annotations and questions, based on semantic similarity and user reputation. When it is time for questions, he can see the slides with the most questions, which makes it easier for him to give clearer and more detailed answers. After the presentation, he can see a complete view of the feedback, helping him improve the presentation for next time.
The application will be described in what follows from both the speaker’s and the audience’s perspectives.
Every user who has the Smart Presentation Android application installed and has Internet access can join live lectures (if the lecture is public or the user is on the guest list). When the application is launched, the user has two options:
The speaker’s experience with the application follows four different stages for each lecture.
The speaker sets the minimum incognito level allowed for the audience. He also sets the notification thresholds for the presentation (e.g. he wants to be notified when 20 or more people have tagged a slide or object as "ambiguous/unclear").
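The threshold notification could be implemented as a simple counter keyed by slide and tag, firing once per threshold crossing. This is an illustrative sketch, not the project's actual implementation; the class and method names are assumptions, and it counts each user at most once per slide/tag pair.

```python
from collections import defaultdict

class TagNotifier:
    """Hypothetical sketch: notify the speaker when the number of distinct
    users who tagged a slide/object with a given tag reaches a threshold."""

    def __init__(self, threshold):
        self.threshold = threshold      # e.g. 20 taggers, as in the example
        self.taggers = defaultdict(set) # (slide, tag) -> set of user ids
        self.fired = set()              # (slide, tag) pairs already reported

    def tag(self, user_id, slide, tag):
        """Record a tag; return True exactly when the threshold is crossed."""
        key = (slide, tag)
        self.taggers[key].add(user_id)  # a user counts once per slide/tag
        if len(self.taggers[key]) >= self.threshold and key not in self.fired:
            self.fired.add(key)
            return True
        return False
```

The `fired` set ensures the speaker is notified once per slide/tag pair rather than on every subsequent tag, keeping glances at the phone low-noise during the talk.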
The application on the speaker's smartphone lets him control the presentation ("next" and "previous slide" buttons) and see snippets of feedback for the current slide: highlights of the text and objects on the slide in the corresponding colors, as well as the highest-ranked questions. A quick glance at the smartphone gives him a compact yet comprehensive view of the audience's reaction. This allows him to comment on the slide content in real time, in response to the feedback.
After the presentation, the speaker sees the slides ranked by the number of critical annotations. He can click through these slides to get aggregated information for each of them: questions are clustered and ranked, and annotations are summed (see also Implementation details).
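One simple way to cluster near-duplicate questions is by word overlap, ranking each cluster by its summed "plus one" votes. The similarity measure (Jaccard on word sets), the greedy merge strategy, and the 0.5 threshold below are assumptions for illustration; the project's actual semantic-similarity clustering may differ.

```python
def jaccard(a, b):
    """Word-set similarity of two questions: |intersection| / |union|."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster_questions(questions, threshold=0.5):
    """questions: list of (text, votes) pairs. Greedily merge each question
    into the first sufficiently similar cluster, then rank clusters by
    their total votes, highest first."""
    clusters = []  # each cluster: {"texts": [...], "votes": int}
    for text, votes in questions:
        for c in clusters:
            # Compare against the cluster's first question as its representative.
            if jaccard(text, c["texts"][0]) >= threshold:
                c["texts"].append(text)
                c["votes"] += votes
                break
        else:
            clusters.append({"texts": [text], "votes": votes})
    return sorted(clusters, key=lambda c: c["votes"], reverse=True)
```

Greedy single-pass clustering keeps the aggregation cheap enough to refresh live during the talk; a heavier pairwise scheme could replace it offline for the final report.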
The speaker receives a complete report containing all user feedback (presented as histograms), the navigation trace of each user, solved questions, and aggregated analytics. A user's navigation trace is an ordered list of the times spent on each slide. Solved questions are questions their authors have retracted, recorded together with their creation and retraction times.
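A navigation trace can be reconstructed from the slide-change events the client reports: the time spent on a slide is the gap between consecutive events. The event format, a `(timestamp, slide)` pair plus an explicit session end time, is an assumption made for this sketch.

```python
def navigation_trace(events, end_time):
    """events: chronological list of (timestamp, slide) slide-change events.
    Returns the ordered list of (slide, seconds_spent), closing the last
    slide's interval at end_time."""
    trace = []
    # Pair each event with the next one; a sentinel closes the final slide.
    for (t, slide), (t_next, _) in zip(events, events[1:] + [(end_time, None)]):
        trace.append((slide, t_next - t))
    return trace
```

Note that a slide revisited later appears twice in the trace, which is what the report needs: the ordered list preserves how the user actually moved through the deck.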
Both the speaker and the audience members connect to a server that stores the slides for the presentations and all feedback data.
The feedback information is aggregated as follows: