Smart Presentation

Project in Google’s 2011 EMEA AndroidEDU programme


Question Clustering

  • Cătălina Mocanu
  • Cristina Groapă

Android Application Development

  • Dragoș Dincă
  • Anca Pîrvan
  • George-Cristian Stoica

Android Pro & Con side project

  • Iulia Moscalenco
  • Miruna Popescu


  • Prof. Adina Magda Florea, PhD @aimas
  • Andrei Olaru, PhD, TA @aimas
  • Tudor Berariu, MSc @aimas


Dan is giving a presentation in front of his colleagues in the AI course. As he goes through the slides, the students in the audience can use their Android devices to navigate independently through the presentation, and they can make annotations or write questions. His colleague Alice is unclear about why Dan is using a certain mathematical formula on a slide and wants to ask about it, but she sees in the feedback interface that Ben has already created a question about that. So Alice just needs to "plus one" Ben’s question. At any time, Dan can glance at his Android smartphone to see an aggregated view of the annotations and questions, based on semantic similarity and user reputation. When it is time for questions, he can see the slides with the most questions, which makes it easier for him to give clearer and more detailed answers. After the presentation, he will be able to see a complete view of the feedback for the presentation, helping him improve it for the next time.


In what follows, the application is described from both the speaker’s and the audience’s perspectives.

User features

Every user who has the Smart Presentation Android application installed and has Internet access can join live lectures (if those lectures are public or the user is on the guest list). When the application is launched, the user has two options:

  1. Join lecture
  2. Search lecture

As soon as the user authenticates with the system and joins a presentation, the slides are downloaded to his/her device and synchronized with the speaker’s. In the default visualization mode, the user sees the slides full-screen, along with several buttons:
  1. Navigation buttons:
    1. Previous slide and next slide buttons
    2. “Go Live!” button - to go directly to the slide that the speaker is presenting
  2. “Incognito” menu button - used to select among the four profile visibility settings (not all four may be available - see Speaker features):
    1. Speaker sees actions (anonymous)
    2. Speaker sees actions (with name)
    3. Authenticated users see actions (anonymous)
    4. Authenticated users see actions (with name)
  3. Actions (available for the whole slide - if the user makes no selection - or for the selected text / objects)
    1. “+1” button - to give positive feedback; highlights the selection in green;
    2. “citation or proof needed” - highlights the selection in blue;
    3. “ambiguous / unclear” - highlights the selection in orange;
    4. ask question - when writing a question, the user can see an aggregated and ranked list of questions asked by other users for that slide (see Implementation details), and is able to “+1” a question instead of asking the same question again.
  4. “My Actions” button - allows the user to see and remove any of his previous annotations or questions in the current presentation.
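
The per-user actions above could be backed by a simple annotation store. The sketch below is illustrative only; the class and member names (AnnotationStore, Annotation, Kind, Visibility) are assumptions, not the project’s actual code:

```java
import java.util.ArrayList;
import java.util.List;

public class AnnotationStore {
    // The four incognito settings listed above.
    enum Visibility { SPEAKER_ANONYMOUS, SPEAKER_NAMED, EVERYONE_ANONYMOUS, EVERYONE_NAMED }

    // The four actions listed above.
    enum Kind { PLUS_ONE, CITATION_NEEDED, UNCLEAR, QUESTION }

    static class Annotation {
        final String userId;
        final int slide;
        final Kind kind;
        final String selection;   // selected text, or null for the whole slide

        Annotation(String userId, int slide, Kind kind, String selection) {
            this.userId = userId;
            this.slide = slide;
            this.kind = kind;
            this.selection = selection;
        }
    }

    private final List<Annotation> annotations = new ArrayList<>();

    void add(Annotation a) { annotations.add(a); }

    // "My Actions": the user can list his/her own annotations in the
    // current presentation...
    List<Annotation> byUser(String userId) {
        List<Annotation> out = new ArrayList<>();
        for (Annotation a : annotations) {
            if (a.userId.equals(userId)) out.add(a);
        }
        return out;
    }

    // ...and remove any of them.
    boolean remove(Annotation a) { return annotations.remove(a); }
}
```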

Speaker features

The speaker’s experience with the application follows four different stages for each lecture.

Before the lecture

The speaker sets the minimum level for incognito settings for the audience. He also sets the thresholds for notification during the presentation (e.g. he wants to be notified when 20 people or more tagged a slide / object with “ambiguous/unclear”).
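
The threshold mechanism could be implemented as a per-tag counter that fires exactly once when the configured count is reached. This is a minimal sketch under that assumption; the names (NotificationThresholds, recordTag) are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class NotificationThresholds {
    private final Map<String, Integer> thresholds = new HashMap<>(); // tag -> minimum count
    private final Map<String, Integer> counts = new HashMap<>();     // tag -> current count

    // Set before the lecture, e.g. setThreshold("ambiguous/unclear", 20).
    void setThreshold(String tag, int minCount) { thresholds.put(tag, minCount); }

    // Called whenever a user tags the slide/object; returns true exactly once,
    // when the count reaches the speaker's threshold and he should be notified.
    boolean recordTag(String tag) {
        int c = counts.merge(tag, 1, Integer::sum);
        Integer t = thresholds.get(tag);
        return t != null && c == t;
    }
}
```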


During the lecture

The application on the speaker’s smartphone allows him to control the presentation (“next” and “previous slide” buttons) and to see snippets of feedback for the current slide - highlights of the text and objects on the slide in the corresponding colors, as well as the best ranked questions. A quick glance at the smartphone gives him a compact and comprehensive view of the audience’s reaction. This allows him to comment on the slide content in real time, in response to the feedback.

Questions mode

When it is time for questions, the speaker sees the slides ranked by the number of critical annotations. The speaker can click through these slides to get aggregated information for each one: questions are clustered and ranked, and annotations are summed (see also Implementation details).
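
One way to cluster and rank questions is greedy grouping by similarity. The sketch below uses simple word-overlap (Jaccard) similarity and ranks clusters by size as a stand-in for “+1” votes; the real system would use a proper semantic-similarity measure and user reputation, and all names here are assumptions:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class QuestionClusterer {
    static Set<String> words(String q) {
        return new HashSet<>(Arrays.asList(q.toLowerCase().split("\\W+")));
    }

    // Jaccard similarity: |intersection| / |union| of the word sets.
    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a);
        inter.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 0 : (double) inter.size() / union.size();
    }

    // Greedy clustering: each question joins the first cluster whose first
    // (representative) question is similar enough, else starts a new cluster.
    static List<List<String>> cluster(List<String> questions, double threshold) {
        List<List<String>> clusters = new ArrayList<>();
        for (String q : questions) {
            boolean placed = false;
            for (List<String> c : clusters) {
                if (jaccard(words(q), words(c.get(0))) >= threshold) {
                    c.add(q);
                    placed = true;
                    break;
                }
            }
            if (!placed) clusters.add(new ArrayList<>(Collections.singletonList(q)));
        }
        // Rank clusters: largest (most-asked) first.
        clusters.sort((a, b) -> b.size() - a.size());
        return clusters;
    }
}
```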

After the presentation

The speaker receives a complete report containing all user feedback (presented as histograms), the navigation trace of each user, solved questions and aggregated analytics. The navigation trace for a user is an ordered list of the times spent on each slide. Solved questions are questions retracted by their authors, together with the times they were asked and retracted.
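
The navigation trace can be derived from a user’s timestamped slide-change events: the time spent on a slide is the interval until the next change (or until the user leaves). A sketch under that assumption, with illustrative names (NavigationTrace, Visit):

```java
import java.util.ArrayList;
import java.util.List;

public class NavigationTrace {
    static class Visit {
        final int slide;    // slide number
        final long millis;  // time spent on it
        Visit(int slide, long millis) { this.slide = slide; this.millis = millis; }
    }

    // events: ordered slide-change events as {slide number, timestamp in ms};
    // endMillis: when the user left the presentation.
    static List<Visit> trace(List<long[]> events, long endMillis) {
        List<Visit> out = new ArrayList<>();
        for (int i = 0; i < events.size(); i++) {
            long[] e = events.get(i);
            long next = (i + 1 < events.size()) ? events.get(i + 1)[1] : endMillis;
            out.add(new Visit((int) e[0], next - e[1]));
        }
        return out;
    }
}
```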

Implementation details

Both the speaker and the users connect to a server that holds the slides for the presentations and all feedback data.

The feedback information is aggregated as follows:

The result is a compact view of the feedback for each slide. This result is delivered:

Some other information is stored:
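
A rough sketch of the server-side, per-slide aggregation: annotations are counted per slide and per tag, yielding the compact per-slide view. The class and method names (FeedbackAggregator, record, summary) are assumptions for illustration:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class FeedbackAggregator {
    // slide number -> (tag -> count)
    private final Map<Integer, Map<String, Integer>> perSlide = new HashMap<>();

    // Called by the server each time a user annotation arrives.
    void record(int slide, String tag) {
        perSlide.computeIfAbsent(slide, s -> new HashMap<>())
                .merge(tag, 1, Integer::sum);
    }

    // Compact view for one slide: how many of each annotation it received.
    Map<String, Integer> summary(int slide) {
        return perSlide.getOrDefault(slide, Collections.emptyMap());
    }
}
```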