The goal of this project is to create a software infrastructure for Ambient Intelligence (AmI) applications that handles context at a constructive level, works naturally with the user's tasks and activities, and offers the robustness and reliability required of future ambient systems.
The goal of this project is to create a platform for multi-agent systems in which agents can live on a variety of devices, from Raspberry Pi nodes to Android smartphones to full-size workstations.
The end result will be a state-of-the-art multi-agent system that is both lightweight and capable enough to be used instead of established systems such as JIAC or AgentFactory, offering the developer simple access to functionality such as cross-platform communication, agent mobility, sensor input, and cross-platform output.
The first phase is to obtain a platform for multi-agent systems with a simple API that supports communication, mobility, and input/output on PCs and Android devices.
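To illustrate what such a simple API might look like, the sketch below models agents registering with a local platform and exchanging messages. All names (Platform, Agent, send, receive) are illustrative assumptions, not the actual tATAmI interface, and the in-memory router stands in for real cross-device transport; the actual platform would run on the JVM for Android compatibility.

```python
# Minimal sketch of a hypothetical agent API. An in-memory Platform routes
# messages between registered agents; a real implementation would replace
# deliver() with network transport and support agent migration.

class Platform:
    """In-memory message router standing in for cross-device transport."""
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent
        agent.platform = self

    def deliver(self, sender, recipient, content):
        self.agents[recipient].receive(sender, content)

class Agent:
    def __init__(self, name):
        self.name = name
        self.platform = None
        self.inbox = []

    def send(self, recipient, content):
        self.platform.deliver(self.name, recipient, content)

    def receive(self, sender, content):
        self.inbox.append((sender, content))

# usage: two agents on the same platform exchange a message
platform = Platform()
a, b = Agent("a"), Agent("b")
platform.register(a)
platform.register(b)
a.send("b", "hello")
print(b.inbox)  # [('a', 'hello')]
```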
This project starts from the tATAmI multi-agent system and will draw on the extensive experience gathered during its development.
At least one scientific paper will be produced during this research.
The goal of this project is to create a conversational agent that can assist the user in various tasks related to the user's digital life. The user will communicate with the agent in natural language, and the agent will use a visual representation (context graphs and patterns) to retain its knowledge; this knowledge can be shown to the user on demand, in graphical or textual form.
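As a rough illustration of the idea, a context graph can be represented as a set of labeled edges (triples), and a pattern as a small graph containing generic nodes that are matched against the context. The representation and the "?"-prefixed wildcard convention below are simplifying assumptions for the sketch, not the actual context-graph formalism.

```python
# Illustrative sketch: a context graph as labeled edges, and pattern
# matching with generic ("?"-prefixed) nodes bound consistently.

context = [
    ("user", "is-in", "kitchen"),
    ("kitchen", "part-of", "home"),
    ("stove", "is-in", "kitchen"),
]

def match(pattern, graph):
    """Return bindings for generic nodes if every pattern edge matches
    some graph edge under one consistent assignment, else None."""
    def unify(edges, bindings):
        if not edges:
            return bindings
        (s, label, o), rest = edges[0], edges[1:]
        for (gs, gl, go) in graph:
            if gl != label:
                continue
            b = dict(bindings)
            ok = True
            for p, g in ((s, gs), (o, go)):
                if p.startswith("?"):
                    if b.get(p, g) != g:  # conflicting binding
                        ok = False
                        break
                    b[p] = g
                elif p != g:              # concrete node must match exactly
                    ok = False
                    break
            if ok:
                result = unify(rest, b)
                if result is not None:
                    return result
        return None
    return unify(list(pattern), {})

# "which room is the user in?"
print(match([("user", "is-in", "?room")], context))  # {'?room': 'kitchen'}
```

A detected situation (e.g. for triggering a reminder) would then correspond to a pattern that matches the current context graph.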
Using the agent, the user will be able to control the device and to configure the assistant so that it gives reminders or performs automatic actions when certain predefined situations are detected.
While the project is, in the first phase, directed towards a PC implementation, an Android implementation is also envisioned.
A successful project will also produce a scientific paper.
While machine learning works well when computing resources abound, a privacy-protecting, distributed implementation of context-aware, intelligent behavior requires learning on resource-constrained devices.
The goal of this research is to develop methods for learning from the user's activity on smart devices using only local resources.
The expected result is the ability to predict the user's activities, using on-device sensors and information available from local apps, in order to better assist the user in those activities and to anticipate potentially unwanted situations.
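One resource-light baseline for such on-device prediction is a first-order Markov model over observed activity labels: counts fit in a small dictionary, updates are constant-time, and no external ML library is needed. The sketch below is a hypothetical baseline, not the project's proposed method; activity labels would in practice come from sensor and app data.

```python
# Illustrative sketch: predict the user's next activity from simple
# transition counts, cheap enough for a resource-constrained device.

from collections import defaultdict

class ActivityPredictor:
    def __init__(self):
        # counts[prev][nxt] = how often activity `nxt` followed `prev`
        self.counts = defaultdict(lambda: defaultdict(int))
        self.last = None

    def observe(self, activity):
        """O(1) update with each newly detected activity."""
        if self.last is not None:
            self.counts[self.last][activity] += 1
        self.last = activity

    def predict_next(self):
        """Most frequent successor of the last observed activity."""
        if self.last is None or not self.counts[self.last]:
            return None
        successors = self.counts[self.last]
        return max(successors, key=successors.get)

# usage: after observing a routine, predict what usually follows "wake"
p = ActivityPredictor()
for a in ["wake", "coffee", "commute", "wake", "coffee", "commute", "wake"]:
    p.observe(a)
print(p.predict_next())  # coffee
```

Richer models (e.g. incorporating time of day or sensor features) could replace this while keeping the same constant-memory, on-device update pattern.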