mosaic

Multi-Modal Situation Assessment & Analytics Platform

The MOSAIC platform involves multi-modal data intelligence capture and analytics, covering video, text and other collateral sources. The distributed intelligence within the platform enables decision support for automated detection, recognition, geo-location and mapping, as well as surveillance targeting and camera handover, to enhance situation awareness. This involves fusion at level one, and situation understanding to enable decision support and impact analysis at levels two and three of situation assessment. Accordingly, MOSAIC will develop and validate:

  1. A framework for capturing and interpreting the use-context requirements underpinned by a standard data ontology to facilitate the tagging, search and fusion of data from distributed multi-media sensors, sources and databases,
  2. A systems architecture to support wide area surveillance with edge and central fusion and decision support capabilities,
  3. Algorithms, including hardware-accelerated ones for smart cameras, which enable disparate multi-media information correlation to form a common operating picture, including representation of the temporal information and aspects,
  4. Tools and techniques for the extraction of key information from video, uncontrolled text and databases using pattern recognition and behaviour modelling techniques,
  5. Algorithms and techniques to represent decisions and actions within a mathematical framework, and how this framework can be used to simulate the effects of disturbances on the system,
  6. An integrated system solution based upon the proposed systems architecture and the above developed enabling technologies including techniques for tagging different multi-media types with descriptive metadata to support multi-level fusion and correlation of surveillance and other data intelligence from distributed heterogeneous sources and networks.

The ability to pre-process events on the camera itself allows unimportant events to be filtered out early, improving the efficacy of wide-area surveillance. The MOSAIC decision support sub-system reinforces this with a more focused and targeted approach to surveillance: it informs the deployment of new cameras and instructs already deployed cameras to shift attention or enter a temporary sleep mode, further reducing network traffic.
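The edge pre-filtering idea can be illustrated with a minimal sketch. This is an assumed, simplified model (the `Event` type, confidence threshold and `prefilter` function are hypothetical, not MOSAIC's actual interfaces): a smart camera discards low-confidence and irrelevant detections locally, so only significant events are forwarded to the central fusion node.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str          # hypothetical detector label, e.g. "person", "vehicle", "noise"
    confidence: float  # detector confidence in [0, 1]

def prefilter(events, min_confidence=0.6, ignore_kinds=("noise",)):
    """Keep only events worth forwarding to the central fusion node."""
    return [e for e in events
            if e.confidence >= min_confidence and e.kind not in ignore_kinds]

# A camera that saw mostly noise forwards only the significant detection,
# cutting the traffic it puts on the surveillance network.
raw = [Event("noise", 0.9), Event("person", 0.8), Event("vehicle", 0.4)]
forwarded = prefilter(raw)
print([e.kind for e in forwarded])  # -> ['person']
```

The same filtering decision, made centrally, would require every raw event to cross the network first; making it on the camera is what yields the traffic reduction described above.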

Project Details

Project funded by: EU
Project Duration: 04/11 - 03/14
Project Partners: The University of Reading (GB), BAE Systems (Operations) Ltd. (GB), A E Solutions (BI) (GB), Synthema S.R.L. (I), Technical University Berlin (D), DResearch Digital Media Systems GmbH (D), West Midlands Police Authority (GB), International Forum for Biophilosophy (BE), Warwickshire Police (GB)
Project Homepage: www.mosaic-fp7.eu
mosips

Modeling and Simulation of the Impact of Public Policies on SMEs

MOSIPS is a project financed by the European Commission within the 7th Framework Programme under the objective 5.6 ICT solutions for governance and policy modelling (Grant Agreement nº 288833).

The aim of the project is to develop a user-friendly policy simulation system for forecasting and visualizing the potential socio-economic impact of public policies. This will allow policy makers to experiment with different socio-economic designs, taking into account feedback from citizens and potentially affected stakeholders, before a public policy is settled. It will provide a deeper and wider understanding of the possible scenarios that might arise from the implementation of a legislative instrument, and of their possible side effects.

Given the importance of SMEs in the EU economy, the focus will be on the impact of SME-oriented policies on their R&D activities.

Combining suitable data, models, artificial intelligence and interactive visualization tools, the final goal is to develop a “policy wind tunnel”. The MOSIPS system will be suited to crafting policy options, giving social and economic stakeholders a decision arena that visualizes and illustrates policy insights and provides valuable decision support by making side effects understandable.
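The "policy wind tunnel" idea can be sketched with a toy Monte Carlo model. Everything here is an illustrative assumption, not the MOSIPS model: each hypothetical SME undertakes R&D if its firm-specific benefit exceeds the effective cost, which a subsidy reduces, so the simulation lets one compare uptake with and without the policy before it is enacted.

```python
import random

def simulate_rd_subsidy(n_smes=1000, subsidy_rate=0.25, base_cost=0.5, seed=42):
    """Toy model (assumed, not the MOSIPS model): estimate the share of
    SMEs that undertake R&D under a given subsidy rate. A firm invests
    when its private benefit exceeds the subsidised cost."""
    rng = random.Random(seed)
    cost = base_cost * (1 - subsidy_rate)  # effective cost after subsidy
    investing = sum(1 for _ in range(n_smes) if rng.uniform(0, 1) > cost)
    return investing / n_smes

baseline = simulate_rd_subsidy(subsidy_rate=0.0)
with_policy = simulate_rd_subsidy(subsidy_rate=0.25)
print(f"R&D uptake: {baseline:.0%} -> {with_policy:.0%}")
```

A real policy simulator would of course rest on empirical data and calibrated behavioural models; the point of the sketch is only the wind-tunnel workflow of running the same population under alternative policy settings.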

Project Details

Project funded by: EU
Project Duration: 09/11 - 08/14
Project Partners: Anova IT Consulting (E), University of Alcala (E), Research Studio Austria Forschungsgesellschaft (A), University of Reading (UK), Opera21(I), University of Koblenz (D), European Institute of Interdisciplinary Research (F), Ayuntamiento de Madrid (E), Comune di Verona (I)
Project Homepage: www.mosips.eu
MoD

The Silicon Valley APHIDS Project: Autonomous Portable Threat Identification Systems

The Grand Challenge searched for the best ideas in defence technology to help solve some of the evolving threats facing front-line troops.

It also aimed to provide an opening into the UK defence market for new suppliers and investors.

The finale in August 2008 saw the best teams battle in tough conditions at the Army's Urban Warfare Training Facility at Porton Down.

Project Details

Project funded by: UK Ministry of Defence
Project Duration: 01/08 - 08/08
Project Partners: Silicon Systems Ltd, IDUS Consultancy, Moonbuggy Ltd, University of Kingston, University of Reading, Bruton School for Girls
Project Homepage: http://www.science.mod.uk/engagement/grand_challenge/grand_challenge.aspx
dream

Dynamic Retrieval, Analysis & Semantic Metadata Management

Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. The Dynamic REtrieval Analysis and semantic metadata Management (DREAM) project aims at paving the way towards semi-automatic acquisition of knowledge from visual content to support intelligent indexing and retrieval of digital media. The project was undertaken in collaboration with partners from the UK film industry, including Double Negative, The Foundry and FilmLight. Double Negative was the test-domain partner, providing the test materials and user requirements and evaluating the system prototype.

One of the main challenges for users in the film (post-)production sector is the storage and management of huge repositories of multimedia data, in particular video files, and the need to search through distributed repositories to find a particular video shot.

The DREAM project addresses these challenges by proposing a knowledge-assisted intelligent visual information indexing and retrieval system. The main challenge in this research was to architect an indexing, retrieval and query support framework that exploits content, context and search-purpose knowledge, as well as any other domain-related knowledge, to ensure robust and efficient semantic-based multimedia object labelling, indexing and retrieval. The framework is underpinned by a network of scalable ontologies, which grows alongside the ongoing incremental annotation of video content. The DREAM demonstrator was evaluated through real-life deployment in the film post-production phase, supporting the storage, indexing and retrieval of large data sets of special-effects video clips as an exemplar application domain. The performance and usability evaluation results in this domain show that the DREAM framework helps resolve existing indexing and retrieval problems for video clips.
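The benefit of ontology-backed indexing over plain keyword tags can be shown with a minimal sketch. The ontology, clip identifiers and concept names below are hypothetical (not DREAM's actual vocabulary or implementation): because the index knows that "explosion" and "fire" are kinds of "special_effect", a query on the broader concept also retrieves clips annotated only with the narrower ones, which keyword matching alone would miss.

```python
# Hypothetical is-a ontology: child concept -> parent concept.
ontology = {
    "explosion": "special_effect",
    "fire": "special_effect",
    "special_effect": "visual_content",
}

# Hypothetical semantic index: clip id -> annotated concepts.
index = {
    "clip_001": {"explosion"},
    "clip_002": {"fire"},
    "clip_003": {"dialogue"},
}

def expand(concept):
    """All concepts subsumed by `concept` in the ontology, including itself."""
    subsumed = {concept}
    changed = True
    while changed:
        changed = False
        for child, parent in ontology.items():
            if parent in subsumed and child not in subsumed:
                subsumed.add(child)
                changed = True
    return subsumed

def retrieve(concept):
    """Return clips annotated with the concept or any of its sub-concepts."""
    wanted = expand(concept)
    return sorted(cid for cid, tags in index.items() if tags & wanted)

print(retrieve("special_effect"))  # -> ['clip_001', 'clip_002']
```

In DREAM the ontology network is scalable and grows with incremental annotation; the sketch fixes a tiny static ontology purely to show how concept subsumption widens retrieval.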

Project Details

Project funded by: EPSRC
Project Duration: 10/06 - 09/08
Project Partners: University of Reading (GB), Double Negative Ltd (GB), FilmLight Ltd (GB) & The Foundry Visionmongers Ltd (GB)

Publications

  • Badii, A., Lallah, C., Kolomiyets, O., Zhu, M. & Crouch, M. 2008, “KAIFIA: Knowledge Assisted Intelligent Framework for Information Access.” Scaling Topic Maps, Lecture Notes in Computer Science, vol. 4999/2008, pp. 226–236.
  • Badii, A., Lallah, C., Kolomiyets, O., Zhu, M. & Crouch, M. 2008, “Semi-automatic annotation and retrieval of visual content using the topic map technology,” in 1st WSEAS International Conference on Visualization, Imaging and Simulation (VIS08), Bucharest, Romania, pp. 77–82.
  • Badii, A., Lallah, C., Zhu, M. & Crouch, M. 2009, “The DREAM Framework: Using a Network of Scalable Ontologies for Intelligent Indexing and Retrieval of Visual Content,” in Web Intelligence and Intelligent Agent Technologies, 2009 (WI-IAT09), IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technology, Milano, Italy, pp. 551–554.
  • Badii, A., Lallah, C., Zhu, M. & Crouch, M. 2009, “Semi-automatic knowledge extraction, representation and context-sensitive intelligent retrieval of video content using collateral context modelling with scalable ontological networks.” Signal Processing: Image Communication, vol. 24, no. 9, pp. 759–773.
  • Badii, A., Meng, Z., Lallah, C. & Crouch, M. 2009, “Semantic-driven context-aware visual information indexing and retrieval: Applied in the film post-production domain,” in Computational Intelligence for Visual Intelligence, 2009 (CIVI09), IEEE Workshop on Computational Intelligence for Visual Intelligence, Nashville, TN, US, pp. 44–51.
  • Badii, A., Lallah, C., Zhu, M. & Crouch, M. 2011, “Using a Network of Scalable Ontologies for Intelligent Indexing and Retrieval of Visual Content.” Information Retrieval and Mining in Distributed Environments, Studies in Computational Intelligence, vol. 324/2011, pp. 233–248.