Why high-level information fusion?

In most information fusion (IF) systems, the underlying principle is the creation and maintenance of a real-time, accurate model of the world. Within these systems, situational assessment (SA) is an important component, as it combines the numerous data sources, interfaces with the user, and manages data collection and information extraction.

Introduction to high-level information fusion (HLIF)

To present a reliable and coherent situational picture to the user, many issues must be resolved.

One of the most important is the operator and/or analyst being overwhelmed by the tide of incoming data, which is not limited to sensor readings and can include databases, reports and other sources of information. This typically leads to operator/analyst fatigue and stress which, in turn, lead to human errors within the overall IF process.

Other issues include the contextual dependence of data, operator/analyst cognitive bias, non-integration of heterogeneous data sources, the absence of human knowledge input, deficiencies in link analysis, and more.

Decision-support system (DSS)

The goal, then, is to produce a decision-support system (DSS) that alleviates the strain on the operator/analyst by reducing the influx of information to manageable levels.

The system must not only fuse these data streams in real time, issuing alerts when anomalous behaviours are detected, but also mine the raw data for patterns that represent information and store those patterns in the knowledge base maintained by the DSS.
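
For concreteness, the sketch below (Python) shows one way such an ingest loop could look: a running-statistics detector flags anomalous readings per source and deposits them in a simple knowledge base. The names (StreamFuser, KNOWLEDGE_BASE, "radar_1") and the window/threshold values are illustrative assumptions, not part of any Larus product.

    # Minimal sketch of a DSS ingest loop: per-source sliding windows,
    # z-score alerting and a toy knowledge base. All names and thresholds
    # are illustrative only.
    from collections import defaultdict
    from statistics import mean, pstdev

    KNOWLEDGE_BASE = defaultdict(list)      # pattern label -> supporting evidence

    class StreamFuser:
        """Keeps a sliding window per source and raises z-score alerts."""
        def __init__(self, window=50, threshold=3.0):
            self.window = window
            self.threshold = threshold
            self.history = defaultdict(list)

        def ingest(self, source, value):
            hist = self.history[source]
            hist.append(value)
            if len(hist) > self.window:
                hist.pop(0)
            if len(hist) < 10:
                return None                  # not enough context yet
            mu = mean(hist[:-1])
            sigma = pstdev(hist[:-1]) or 1e-9
            z = abs(value - mu) / sigma
            if z > self.threshold:
                KNOWLEDGE_BASE["anomalous_readings"].append((source, value, z))
                return f"ALERT {source}: value={value:.1f} (z={z:.1f})"
            return None

    fuser = StreamFuser()
    for reading in [10, 11, 9, 10, 10, 11, 9, 10, 11, 10, 42]:
        alert = fuser.ingest("radar_1", reading)
        if alert:
            print(alert)                     # fires only on the outlier, 42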

At the heart of every DSS is multi-source, multi-sensor data fusion. To that end, the Joint Directors of Laboratories (JDL) Data Fusion Group defines data fusion as [1]:

[..] a process dealing with the association, correlation, and combination of data and information from single and multiple sources to achieve refined position and identity estimates for observed entities, and to achieve complete and timely assessments of situations and threats, and their significance. The process is characterized by continuous refinements of its estimates and assessments, and by evaluation of the need for additional sources, or modification of the process itself, to achieve improved results.
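
As a toy illustration of the "refined position and identity estimates" mentioned in this definition, the Python sketch below combines two hypothetical sensor reports on the same entity: positions are fused by inverse-variance weighting and identity declarations by summing classifier confidences. The sensor values and function names are assumptions for illustration only.

    # Toy fusion of two reports on one entity: inverse-variance weighting for
    # position, confidence summation for identity. Numbers are made up.
    def fuse_position(estimates):
        """estimates: list of (position, variance) pairs -> fused (position, variance)."""
        weights = [1.0 / var for _, var in estimates]
        fused_pos = sum(w * p for w, (p, _) in zip(weights, estimates)) / sum(weights)
        fused_var = 1.0 / sum(weights)
        return fused_pos, fused_var

    def fuse_identity(reports):
        """reports: list of {identity: confidence} dicts -> best-supported identity."""
        scores = {}
        for report in reports:
            for identity, confidence in report.items():
                scores[identity] = scores.get(identity, 0.0) + confidence
        return max(scores, key=scores.get)

    radar  = (102.0, 4.0)    # position 102 m, variance 4 m^2
    camera = ( 99.0, 1.0)    # position  99 m, variance 1 m^2
    print(fuse_position([radar, camera]))     # -> (99.6, 0.8)
    print(fuse_identity([{"cargo_ship": 0.6, "fishing_vessel": 0.4},
                         {"cargo_ship": 0.7}]))   # -> 'cargo_ship'

Note that the fused variance (0.8) is lower than either sensor's, which is the sense in which the estimate is "refined".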

JDL model

The JDL model has become a lingua franca for data fusion problems. It has been revised twice, first in March 1999 [2] and again in December 2004 [3]. Other fusion models exist, including the DDF model [4], the Omnibus model [5] and the perceptual reasoning model [6].

The JDL model (refer to Figure 1) describes five major levels to IF, namely:

  • Sub-Object and Object Assessment (Levels 0 and 1),
  • Situation Assessment (Level 2),
  • Impact Assessment (Level 3) and
  • Process Refinement (Level 4).

Most of the work in the 1980s and 1990s concentrated on Levels 0 and 1, estimating and predicting signal, object and entity states based on data collection and processing performed mostly at the sensor and platform levels.

The IF community, however, faces numerous challenges today. These include the need to better learn from experience, explain the fusion process, capture human expertise and guidance, and analyze data contextually and semantically. Others include the need for expressive knowledge representation, lower computational complexity, automatic adaptation to changing threats and situations, and graphical display of inferential chains and fusion processes. Because of these complex challenges, high-level information fusion (i.e. Level 2 and up), or HLIF as it is better known, has become the focus of contemporary research and development efforts.
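
One compact way to keep these level boundaries straight in code is a simple enumeration; the Python sketch below is a mnemonic only, and the one-line descriptions are paraphrases rather than official JDL wording.

    # The JDL levels as an ordered enum; HLIF is conventionally Level 2 and up.
    from enum import IntEnum

    class JDLLevel(IntEnum):
        SUB_OBJECT_ASSESSMENT = 0   # signal/feature conditioning
        OBJECT_ASSESSMENT     = 1   # entity state and identity estimation
        SITUATION_ASSESSMENT  = 2   # relationships among entities
        IMPACT_ASSESSMENT     = 3   # threat/consequence projection
        PROCESS_REFINEMENT    = 4   # resource and sensor management

    HLIF_LEVELS = [level for level in JDLLevel if level >= JDLLevel.SITUATION_ASSESSMENT]
    print(HLIF_LEVELS)              # Levels 2-4: the scope of HLIF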

Figure 1. US Department of Defense (DoD) Joint Directors of Laboratories Data Fusion Model

Behaviour analysis through predictive modeling

Larus Technologies has developed a patent-pending HLIF architecture that performs behaviour analysis through predictive modeling, handles heterogeneous (i.e. multi-source, multi-sensor) data, is mostly automated yet human-centric, and resolves the aforementioned SA issues and challenges.

The architecture involves four modules:

  • Perception
  • Validation
  • Expectation
  • Action

The perception module processes and analyses sensor inputs extracted from data sources, including the environment itself. Its main function is one of data consumption.

The validation module performs the multi-source multi-sensor data fusion to extract common patterns and parameters from heterogeneous data. Its main function is one of information consumption.

The expectation module diffuses commands to actual tasks through predictive modeling. Its main function is one of decision support.

Finally, the action module affects the environment by carrying out specific tasks. Its main function is one of sensor tasking. An action changes the state of the environment, whereupon the entire cycle repeats. All the while, a world model represents the knowledge base attained by the system, which itself is situated and embodied in the real world. Figure 2 depicts the entire flow.
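
The following Python sketch wires the four modules into that perceive-validate-expect-act loop around a shared world model. The class and function names, the averaging step and the retasking threshold are hypothetical stand-ins, not the actual interfaces of the Larus architecture.

    # Hypothetical skeleton of the four-module cycle around a shared world model.
    class WorldModel:
        """Knowledge base maintained by the system (here just a dict of beliefs)."""
        def __init__(self):
            self.beliefs = {}

        def update(self, key, value):
            self.beliefs[key] = value

    def perception(environment):
        """Data consumption: pull raw readings from the (simulated) environment."""
        return environment.copy()

    def validation(readings, world_model):
        """Information consumption: fuse readings into parameters and store them."""
        for key, values in readings.items():
            world_model.update(key, sum(values) / len(values))

    def expectation(world_model):
        """Decision support: propose tasks based on the current beliefs."""
        return [("retask_sensor", key) for key, value in world_model.beliefs.items()
                if value > 50.0]            # illustrative threshold

    def action(tasks, environment):
        """Sensor tasking: act on the environment, changing its state."""
        for _, key in tasks:
            environment[key] = [v * 0.9 for v in environment[key]]   # toy effect

    environment = {"contact_range_m": [80.0, 75.0, 90.0]}
    world_model = WorldModel()
    for _ in range(3):                      # the cycle repeats after each action
        readings = perception(environment)
        validation(readings, world_model)
        tasks = expectation(world_model)
        action(tasks, environment)
    print(world_model.beliefs)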

The Larus HLIF architecture inherently allows for bi-directional in-network processing for data-centric applications and reduces the state estimation errors by closely matching the world model to the real world. It can be summarized by the following two-way relation:

Real World → Perceive → Act ← Expect ← World Model
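
A tiny numerical illustration of how repeatedly closing this loop drives down state estimation error: on each cycle the world model's estimate is corrected toward the perceived value by a fixed gain. The gain and the signal are arbitrary choices for the example, not values from the Larus system.

    # Each perceive/expect cycle pulls the world model toward the real world,
    # so the estimation error shrinks geometrically (here by the gain, 0.5).
    true_state = 120.0       # the real world
    estimate   = 0.0         # the world model's initial belief
    gain       = 0.5         # correction strength per cycle

    for cycle in range(5):
        perceived = true_state                        # perception (noise-free here)
        estimate += gain * (perceived - estimate)     # correction toward the world
        print(f"cycle {cycle}: estimate={estimate:.1f}, error={true_state - estimate:.2f}")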

Figure 2. Retroactive agent architecture

References

[1] JDL Data Fusion Group
[2] A. Steinberg, C. Bowman and F. White, “Revisions to the JDL Data Fusion Model,” March 1999.
[3] J. Llinas, C. Bowman, G. Rogova, A. Steinberg, E. Waltz and F. White, “Revisiting the JDL Data Fusion Model II,” December 2004.
[4] B. Dasarathy, “Decision Fusion Strategies in Multisensor Environments,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, pp. 1140-1154, 1991.
[5] M. Bedworth and J. O'Brien, “The Omnibus Model: A New Model of Data Fusion?,” AES Magazine, April 2000.
[6] I. Kadar, “Perceptual Reasoning in Adaptive Fusion Processing,” SPIE, 2002.
