Project in Google’s 2011 EMEA AndroidEDU programme
- Cătălina Mocanu
- Cristina Groapă
Android Application Development
- Dragoș Dincă
- Anca Pîrvan
- George-Cristian Stoica
Android Pro & Con side project
- Iulia Moscalenco
- Miruna Popescu
- Prof. Adina Magda Florea, PhD @aimas
- Andrei Olaru, PhD, TA @aimas
- Tudor Berariu, MSc @aimas
Dan is holding a presentation in front of his colleagues in the AI course. As he goes through the slides, the students in the audience can use their Android devices to navigate independently through the presentation, make annotations and write questions. His colleague Alice is unclear on why Dan uses a certain mathematical formula on a slide and wants to ask about it, but she sees in the feedback interface that Ben has already asked a question about that formula. So Alice just needs to "plus one" Ben’s question. At any time, Dan can glance at his Android smartphone to see an aggregated view of the annotations and questions, based on semantic similarity and user reputation. When it is time for questions, he can see the slides with the most questions, which makes it easier for him to give clearer and more detailed answers. After the presentation, he will be able to see a complete view of the feedback, helping him improve the presentation for the next time.
The application will be described in what follows from both the speaker’s and the audience’s perspectives.
Every user who has the Smart Presentation Android application installed and has access to the Internet can join live lectures (if those lectures are public or he/she is on the guest list). When the application is launched, the user has two options:
- Join lecture
- Search lecture
As soon as the user authenticates in the system and joins a presentation, the slides are downloaded to his/her device and synchronized with the speaker. In the default visualization mode, the user sees the slides full-screen, together with the following buttons:
- Navigation buttons:
- Previous slide and next slide buttons
- “Go Live!” button - to go directly to the slide that the speaker is presenting
- “Incognito” menu button - used to select from the four profile visibility settings (not all four may be available - see Speaker features):
- Speaker sees actions (anonymous)
- Speaker sees actions (with name)
- Authenticated users see actions (anonymous)
- Authenticated users see actions (with name)
- Actions (available for the whole slide - if the user makes no selection - or for the selected text / objects)
- “+1” button - to give positive feedback; highlights the selection in green;
- “citation or proof needed” - highlights the selection in blue;
- “ambiguous / unclear” - highlights the selection in orange;
- ask question - when writing a question, the user can see an aggregated and ranked list of questions asked by other users for that slide (see Implementation details), and is able to “+1” a question instead of asking the same question again.
- “My Actions” button - allows the user to review and remove any of his/her previous annotations or questions in the current presentation.
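As an illustration, an annotation action such as “+1” or “ambiguous / unclear” could be represented client-side by a small record before being sent to the server. All class and field names below are hypothetical; the actual protocol is not specified here.

```java
// Hypothetical sketch of the data a client might send for one annotation.
// All names are illustrative; the wire format is not part of the design above.
public class AnnotationAction {
    public enum Type { PLUS_ONE, CITATION_NEEDED, UNCLEAR }

    public final String userId;      // may be withheld, depending on incognito settings
    public final int slideNumber;
    public final String selection;   // null/empty when the whole slide is annotated
    public final Type type;

    public AnnotationAction(String userId, int slideNumber, String selection, Type type) {
        this.userId = userId;
        this.slideNumber = slideNumber;
        this.selection = selection;
        this.type = type;
    }

    /** True when the action applies to the whole slide rather than a selection. */
    public boolean appliesToWholeSlide() {
        return selection == null || selection.isEmpty();
    }
}
```

The same record could carry questions as well, with the question text in place of the selection.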
The speaker’s experience with the application goes through four stages for each lecture: before the lecture, during the lecture, question time and after the presentation.
Before the lecture
The speaker sets the minimum level for the incognito settings available to the audience. He also sets the thresholds for notifications during the presentation (e.g. he wants to be notified when 20 or more people have tagged a slide / object with “ambiguous / unclear”).
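The threshold-based notifications could be implemented with a simple per-slide counter, as in the sketch below. The class and method names are illustrative only, not part of the design above.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of threshold-based speaker notifications (hypothetical names).
// Counts "ambiguous/unclear"-style tags per slide and reports when a
// speaker-configured threshold is first reached.
public class TagNotifier {
    private final int threshold;                        // e.g. 20 users
    private final Map<Integer, Integer> tagCounts = new HashMap<>();

    public TagNotifier(int threshold) {
        this.threshold = threshold;
    }

    /** Registers one tag for a slide; returns true exactly when the threshold is crossed. */
    public boolean addTag(int slideNumber) {
        int count = tagCounts.getOrDefault(slideNumber, 0) + 1;
        tagCounts.put(slideNumber, count);
        return count == threshold;   // fire once, when the threshold is first reached
    }
}
```

In practice each annotation type ("+1", "citation needed", "ambiguous/unclear") would get its own counter and threshold.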
During the lecture
The application on the speaker’s smartphone allows him to control the presentation (“next” and “previous slide” buttons) and to see snippets of feedback for the current slide - highlights of the text and objects on the slide in the corresponding colors, as well as the best ranked questions. A quick glance at the smartphone gives him a compact and comprehensive view of the reaction of the audience. This allows him to comment on the slide content in real time, in response to the feedback.
Question time
At question time, the speaker sees the slides ranked by the number of critical annotations. The speaker can click through those slides to get aggregated information for each of them: questions are clustered and ranked, and annotations are summed (see also Implementation details).
After the presentation
The speaker receives a complete report containing all user feedback (presented as histograms), the navigation trace of each user, the solved questions and aggregated analytics. The navigation trace of a user is an ordered list of the times spent on each slide. The solved questions are questions retracted by the users who asked them, together with the times when each question was added and retracted.
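A navigation trace could be derived from timestamped slide-change events; the sketch below (with hypothetical names and event format) turns such events into the time spent on each visited slide.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: derive a navigation trace (time spent per visited slide) from
// timestamped slide-change events. Names and event format are illustrative.
public class NavigationTrace {
    /** One entry: the slide shown and how long it stayed on screen (ms). */
    public static class Entry {
        public final int slideNumber;
        public final long millisSpent;
        Entry(int slideNumber, long millisSpent) {
            this.slideNumber = slideNumber;
            this.millisSpent = millisSpent;
        }
    }

    /**
     * slides[i] was shown from timestamps[i] until timestamps[i + 1]
     * (the last slide stays visible until endTime).
     */
    public static List<Entry> fromEvents(int[] slides, long[] timestamps, long endTime) {
        List<Entry> trace = new ArrayList<>();
        for (int i = 0; i < slides.length; i++) {
            long until = (i + 1 < slides.length) ? timestamps[i + 1] : endTime;
            trace.add(new Entry(slides[i], until - timestamps[i]));
        }
        return trace;
    }
}
```

Note that a slide revisited later appears twice in the trace, which preserves the order of navigation rather than just totals.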
Implementation details
Both the speaker and the users connect to a server that holds the slides for the presentations and all feedback data.
The feedback information is aggregated as follows:
- the questions are grouped by their semantic similarity and by the slide they are assigned to, and the groups are ranked by the number of questions and by the aggregated reputation of the users who asked them;
- the number of annotations is summed for each annotation type and for each slide / object.
The result is a compact view of the feedback for each slide. This view is delivered:
- when the user views the questions assigned to a slide, so that he/she can “+1” a question (or write his/her own);
- when the speaker checks the application during the presentation - a very compact view is used, easy to take in at a glance, showing only the most important criticism;
- at question time, when slides are ranked by the number of questions and annotations;
- after the presentation, when the speaker can see the results in full, helping him improve the slides for a future presentation.
Some other information is stored as well:
- the navigation trace of each user;
- retracted (solved) questions: the text of the question, the number of the slide where the question was added and the number of the slide where it was retracted.
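The grouping-and-ranking step could look roughly like the sketch below. Semantic similarity is a research problem in itself, so here it is stubbed out as a pluggable predicate; all class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

// Sketch of question aggregation (hypothetical names). Questions on one slide
// are greedily grouped by a pluggable similarity test, then groups are ranked
// by question count plus the aggregated reputation of the users who asked them.
public class QuestionAggregator {
    public static class Question {
        public final String text;
        public final double userReputation;
        public Question(String text, double userReputation) {
            this.text = text;
            this.userReputation = userReputation;
        }
    }

    public static List<List<Question>> groupAndRank(
            List<Question> questions, BiPredicate<String, String> similar) {
        List<List<Question>> groups = new ArrayList<>();
        for (Question q : questions) {
            List<Question> home = null;
            for (List<Question> g : groups) {
                if (similar.test(g.get(0).text, q.text)) { home = g; break; }
            }
            if (home == null) { home = new ArrayList<>(); groups.add(home); }
            home.add(q);
        }
        // Rank: more questions and higher total asker reputation come first.
        groups.sort((a, b) -> Double.compare(score(b), score(a)));
        return groups;
    }

    private static double score(List<Question> g) {
        double reputation = 0;
        for (Question q : g) reputation += q.userReputation;
        return g.size() + reputation;
    }
}
```

The similarity predicate could later be replaced by a real semantic-similarity measure without changing the aggregation logic.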