Decision Level Fusion
In data fusion, decision level fusion is a method of combining data from multiple sources at the decision-making stage. It involves taking the outputs or decisions from each source and combining them to make a final decision.
The process of decision level fusion typically includes three steps:
- Decision generation: Each source generates its own decision or output based on the data it has collected or analyzed. This could be in the form of classifications, rankings, probabilities, etc.
- Decision combination: The decisions from each source are combined using various fusion techniques. These techniques can range from simple majority voting to more complex algorithms that take into account the reliability or quality of each source's decision.
- Decision refinement: The combined decision is further refined or improved if necessary. This could involve adjusting the weights assigned to different sources based on their performance or resolving conflicts between disagreeing sources.
Decision level fusion is often used in applications where multiple sources provide complementary information and combining them can lead to a more accurate or robust decision. This could include areas such as surveillance systems, sensor networks, medical diagnosis, and financial analysis.
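As a rough, self-contained illustration of these three steps, the sketch below assumes three hypothetical sources that each emit class probabilities, combines their hard decisions with reliability-weighted voting, and refines the result by falling back to the most confident source on ties; all numbers and weights are made up.

```python
import numpy as np

# Step 1 -- decision generation: each (hypothetical) source outputs class probabilities.
# Rows = sources, columns = classes; values are purely illustrative.
probs = np.array([
    [0.70, 0.20, 0.10],   # source A
    [0.10, 0.60, 0.30],   # source B
    [0.55, 0.25, 0.20],   # source C
])
reliability = np.array([0.9, 0.6, 0.8])   # assumed per-source reliability weights

# Step 2 -- decision combination: reliability-weighted vote over the hard decisions.
hard = probs.argmax(axis=1)               # per-source class labels
votes = np.zeros(probs.shape[1])
for label, weight in zip(hard, reliability):
    votes[label] += weight

# Step 3 -- decision refinement: break ties using the most confident source.
best = np.flatnonzero(votes == votes.max())
if len(best) == 1:
    final = int(best[0])
else:
    final = int(hard[probs.max(axis=1).argmax()])

print("per-source decisions:", hard, "-> fused decision:", final)
```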
Methods of Decision Level Fusion
- Identity based
  - Dempster-Shafer evidence theory (jiangNewEngineFault2017); a minimal combination sketch follows this list
  - Maximum a posteriori (MAP)
  - Maximum likelihood
  - Z numbers
  - D numbers
  - Voting methods
  - Bayesian belief fusion
  - Multi-agent fusion
  - Decision templates
- Knowledge based
  - Syntax rules
  - Neural networks
  - Fuzzy logic and sets
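For instance, the Dempster-Shafer entry above rests on Dempster's rule of combination; a minimal sketch over a two-class frame of discernment, with made-up mass values, could look like this.

```python
from itertools import product

# Frame of discernment: two classes, "a" and "b".
# Mass functions assign belief to subsets of the frame (values are illustrative).
m1 = {frozenset({"a"}): 0.6, frozenset({"b"}): 0.1, frozenset({"a", "b"}): 0.3}
m2 = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.3, frozenset({"a", "b"}): 0.2}

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions."""
    combined, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2              # mass assigned to the empty set
    # Normalise by the non-conflicting mass (1 - K).
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

fused = dempster_combine(m1, m2)
for subset, mass in fused.items():
    print(set(subset), round(mass, 3))
```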
Based on the fusion strategy, these techniques fall into two categories:
- Voting-based: In voting-based decision fusion, majority voting is the most popular and widely used technique. Other techniques include weighted voting, in which a weight is assigned to each classifier before the decisions are combined, and the Borda count, in which the reverse ranks produced by each classifier are summed to perform decision fusion. Further voting techniques are probability-based, such as fuzzy rules, Naïve Bayes, Dempster-Shafer theory, and so forth. A short sketch of these voting schemes follows this list.
- Divide and conquer: In this decision fusion technique, the dataset is divided into subsets of equal size, a classifier is applied to each subset, and decision fusion is then performed on the results of those smaller classifications. Divide and conquer methods include the concepts of bagging and boosting; a short scikit-learn sketch appears after the reference below.
  - Bagging
  - Boosting
  - Stacking
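A compact sketch of the voting-based schemes mentioned above (majority voting, weighted voting, and the Borda count), using made-up rankings from three classifiers, might look like this.

```python
import numpy as np

# Hypothetical rankings from three classifiers over four classes (best first);
# the first entry of each ranking is that classifier's hard decision.
rankings = [
    [2, 0, 1, 3],   # classifier 1
    [2, 1, 0, 3],   # classifier 2
    [0, 2, 1, 3],   # classifier 3
]
weights = [0.5, 0.3, 0.2]   # assumed classifier weights for weighted voting
n_classes = 4

# Majority voting: the class predicted most often wins.
top_choices = [r[0] for r in rankings]
majority = np.bincount(top_choices, minlength=n_classes).argmax()

# Weighted voting: each classifier's vote counts with its weight.
weighted_votes = np.zeros(n_classes)
for choice, w in zip(top_choices, weights):
    weighted_votes[choice] += w
weighted = weighted_votes.argmax()

# Borda count: sum reverse ranks (the best rank earns the most points).
borda = np.zeros(n_classes)
for ranking in rankings:
    for points, cls in enumerate(reversed(ranking)):
        borda[cls] += points
borda_winner = borda.argmax()

print(majority, weighted, borda_winner)   # -> 2 2 2
```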
Reference: Multiple Classifier Systems — a brief introduction | luisfredgs
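The bagging, boosting, and stacking entries listed above are standard ensemble techniques; a minimal, self-contained sketch using scikit-learn on synthetic data (purely illustrative, not tied to any particular dataset or to the works cited here) could look like this.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for any real classification task.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    # Bagging: train base learners on bootstrap subsets, fuse by voting.
    "bagging": BaggingClassifier(n_estimators=25, random_state=0),
    # Boosting: train base learners sequentially, fuse by weighted voting.
    "boosting": AdaBoostClassifier(n_estimators=25, random_state=0),
    # Stacking: fuse base-learner outputs with a trainable meta-classifier.
    "stacking": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                    ("logreg", LogisticRegression(max_iter=1000))],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 3))
```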
Decision Level Examples
- A first example (alosaimiFusionCNNEnsemble2020) fuses a set of state-of-the-art CNN classifiers, namely VGG-16, SqueezeNet, and DenseNet, for remote sensing scene classification. If at least two models agree on a class prediction, that class is taken as the final prediction; otherwise, the prediction with the highest confidence value is chosen (a sketch of this rule follows this list).
- Another example, proposed in [3] and [4], uses a decision-level fusion method based on CNNs and Bayesian inference for the same task.
- A third example (izadiNetworkTrafficClassification2022) uses a Bayesian fusion method to combine the results of three deep learning models, namely a 1D-CNN, a deep belief network (DBN), and a multi-layer perceptron (MLP), for network traffic classification; the fusion is carried out with Bayesian data fusion over the models' confusion matrices.
- VWDT (miMultipleClassifierFusion2016)
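The agreement rule from the first example can be sketched roughly as follows; the class count, the three probability vectors, and the function name are made up for illustration and merely stand in for the softmax outputs of VGG-16, SqueezeNet, and DenseNet.

```python
import numpy as np

def fuse_two_agree_else_confident(prob_vectors):
    """If at least two models agree on the top class, take it;
    otherwise take the single most confident prediction."""
    preds = [int(p.argmax()) for p in prob_vectors]
    confs = [float(p.max()) for p in prob_vectors]
    counts = np.bincount(preds)
    if counts.max() >= 2:                     # at least two models agree
        return int(counts.argmax())
    return preds[int(np.argmax(confs))]       # fall back to highest confidence

# Illustrative softmax outputs of three CNNs over four scene classes.
vgg16      = np.array([0.10, 0.70, 0.15, 0.05])
squeezenet = np.array([0.20, 0.55, 0.20, 0.05])
densenet   = np.array([0.05, 0.10, 0.80, 0.05])

print(fuse_two_agree_else_confident([vgg16, squeezenet, densenet]))   # -> 1
```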
Pusion: Python Library for Decision Fusion
- pusion – Decision Fusion Framework documentation
- Utility-based methods (low evidence resolution):
- Evidence-based methods (medium evidence resolution):
- Trainable methods (highest evidence resolution):