FusionAgent

A Multimodal Agent with Dynamic Model Selection for Human Recognition

CVPR 2026
Michigan State University

TL;DR:  FusionAgent leverages a Multimodal LLM agent to dynamically select the optimal model combination per sample, achieving robust and explainable human recognition via adaptive score fusion.

Abstract

Existing whole-body human recognition systems usually fuse face, gait, and body models with a fixed strategy for all samples. This static design is inefficient and can hurt accuracy when some modalities are unreliable (e.g., missing face cues in back-view videos).

We propose FusionAgent, an MLLM-based framework that dynamically selects model subsets per sample and fuses their outputs with Anchor-based Confidence Top-k (ACT) score fusion. Trained with reinforcement fine-tuning under Group Relative Policy Optimization (GRPO) and a metric-based reward, FusionAgent learns explainable, sample-specific model selection without requiring ground-truth reasoning traces or exhaustive search over model combinations.

Across CCVID, LTCC, and MEVID, FusionAgent consistently improves open-set search and verification performance, while also providing a practical trade-off between interpretability and speed through Chain-of-Thought (CoT) and Direct Answering (DA) inference modes.

Why Dynamic Model Selection?

Existing score-fusion methods — both rule-based and learning-based — assume that all models provide complementary information, treating the model combination as fixed for every input. This can be suboptimal: for instance, when a video only captures the back view of a person, incorporating face recognition models would be inappropriate and may even degrade performance.

FusionAgent addresses this by leveraging an MLLM agent to dynamically select a subset of models per sample, followed by adaptive score fusion, enabling robust integration tailored to each input.

Comparison of score-fusion methods. Top: Rule-based methods apply predefined transformations to fuse all model scores, while learning-based methods infer a fusion model from data but still assume every model contributes to all test samples. Bottom: FusionAgent leverages an MLLM agent to dynamically select a subset of models, followed by the proposed score-fusion strategy.

Method

The FusionAgent framework consists of two core components: (1) an MLLM-based agent that performs multi-turn reasoning and dynamic model selection using a ReAct-style controller, and (2) the ACT score-fusion algorithm that robustly integrates outputs from the selected models. The agent is trained via Group Relative Policy Optimization (GRPO) with four reward functions: format reward, tool success reward, answer accuracy reward, and a novel metric-based reward.
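
To make the training signal concrete, the group-relative update behind GRPO can be sketched as below. The reward weights and rollout values here are hypothetical illustrations, not numbers from the paper.

```python
import numpy as np

def total_reward(fmt_ok, tool_ok, answer_correct, metric_score,
                 weights=(0.1, 0.1, 0.5, 0.3)):
    """Combine the four rule-based rewards (weights are illustrative)."""
    terms = (float(fmt_ok), float(tool_ok), float(answer_correct), metric_score)
    return sum(w * t for w, t in zip(weights, terms))

def group_relative_advantages(rewards):
    """GRPO: normalize each rollout's reward within its sampled group,
    so advantages are relative to the group mean rather than a critic."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Example: a group of 4 sampled rollouts for one query
rewards = [total_reward(True, True, True, 0.9),    # well-formed, correct answer
           total_reward(True, True, False, 0.4),   # tools succeed, answer wrong
           total_reward(True, False, False, 0.1),  # only the format is valid
           total_reward(False, False, False, 0.0)] # degenerate rollout
adv = group_relative_advantages(rewards)
```

Better-than-average rollouts in the group receive positive advantage, worse ones negative, which is what pushes the policy toward higher-reward selection behavior.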

For each query, the agent first analyzes the biometric evidence and selects an initial model. Each tool returns both a predicted identity and an internal score vector over the gallery; the score vector is retained for fusion, while the predicted identity is exposed to the agent for subsequent reasoning. The agent can then continue invoking tools or terminate with a final answer and reasoning summary, yielding transparent, traceable decisions.
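
The reasoning-action loop above can be sketched as a minimal ReAct-style controller. Here `llm_step`, the tool names, and the action format are illustrative stand-ins, not the paper's actual interface.

```python
from dataclasses import dataclass

@dataclass
class ToolResult:
    identity: str   # predicted identity, exposed to the agent for reasoning
    scores: list    # internal score vector over the gallery, retained for fusion

def run_agent(llm_step, tools, query, max_turns=4):
    """Minimal ReAct-style controller: the policy either invokes a tool
    or terminates with a final answer (illustrative sketch)."""
    history, retained_scores = [], {}
    for _ in range(max_turns):
        action = llm_step(query, history)        # e.g. {"tool": "face"} or {"answer": ...}
        if "answer" in action:                   # agent terminates with a decision
            return action["answer"], retained_scores
        name = action["tool"]
        result = tools[name](query)              # invoke the selected recognition model
        retained_scores[name] = result.scores    # score vector kept for ACT fusion
        history.append((name, result.identity))  # identity fed back for reasoning
    return None, retained_scores

# Usage with a stub tool and a scripted two-turn policy (hypothetical)
tools = {"face": lambda q: ToolResult("id_7", [0.9, 0.1])}
turns = iter([{"tool": "face"}, {"answer": "id_7"}])
answer, fused_inputs = run_agent(lambda q, h: next(turns), tools, query="clip.mp4")
```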

Overview of the FusionAgent framework. Recognition models are wrapped as tools to generate score vectors and predicted identities. The MLLM agent receives multimodal biometric inputs, performs reasoning-action steps through multi-turn dialogue, selectively invokes tools, and integrates predictions into a final identity decision and fused score vector. The agent is optimized with reinforcement fine-tuning using rule-based rewards, including the proposed metric-based reward.

Anchor-based Confidence Top-k (ACT) Score-Fusion

ACT dynamically combines scores from multiple models by leveraging a stable anchor model (the first model selected by the agent) to provide a robust score vector, while selectively incorporating normalized, high-confidence scores from complementary models. The anchor model provides a global ranking structure, while selected models offer sparse, localized refinements only for their top-k predictions.

This design reduces the misalignment introduced by heterogeneous embeddings and sample-wise model selection. In practice, ACT is especially effective for difficult verification and open-set search settings, where suppressing noisy non-match scores is as important as improving the top match.
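
A simplified ACT-style rule following this idea is sketched below; the exact normalization and weighting in the paper may differ, and `alpha` is an assumed refinement weight.

```python
import numpy as np

def act_fuse(anchor_scores, other_scores, k=1, alpha=0.5):
    """Simplified ACT-style fusion (illustrative, not the paper's exact rule).

    The anchor model provides the global ranking; each other model adds its
    min-max-normalized scores only at its own top-k gallery positions."""
    fused = np.asarray(anchor_scores, dtype=float).copy()
    for s in other_scores:
        s = np.asarray(s, dtype=float)
        s_norm = (s - s.min()) / (s.max() - s.min() + 1e-8)
        topk = np.argsort(s)[-k:]            # sparse, localized refinement
        fused[topk] += alpha * s_norm[topk]  # boost only high-confidence entries
    return fused

# Toy setting in the spirit of the figure: 3 models, FR model as anchor, k=1
anchor = np.array([0.8, 0.3, 0.2])           # FR scores over a 3-identity gallery
others = [np.array([0.7, 0.4, 0.1]),         # gait model
          np.array([0.6, 0.2, 0.3])]         # body model
fused = act_fuse(anchor, others, k=1)
```

Because non-top-k entries are left untouched, noisy non-match scores stay low while the agreed-upon match score is amplified, widening the match/non-match gap.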

A toy example of ACT score-fusion. Three models are used with the FR model as the anchor and k=1. ACT amplifies the gap between match and non-match scores through confidence-based top-k and anchor weighting, improving verification and open-set search performance.

Experimental Results

FusionAgent is evaluated on three challenging whole-body biometric benchmarks: CCVID, LTCC, and MEVID. The results consistently demonstrate superior performance across all metrics, especially in verification (TAR@FAR) and open-set search (FNIR@FPIR).
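
As a reminder of what the verification metric measures, TAR@FAR can be computed as sketched below; the scores here are synthetic, not the paper's data.

```python
import numpy as np

def tar_at_far(match_scores, nonmatch_scores, far=0.01):
    """Verification metric: pick the threshold at which the false-accept
    rate on non-match (impostor) pairs equals `far`, then report the
    true-accept rate on match (genuine) pairs at that threshold."""
    thresh = np.quantile(np.asarray(nonmatch_scores, dtype=float), 1.0 - far)
    return float(np.mean(np.asarray(match_scores, dtype=float) > thresh))

# Synthetic genuine vs. impostor similarity scores
match = np.array([0.9, 0.8, 0.3])
nonmatch = np.linspace(0.0, 0.5, 100)
tar = tar_at_far(match, nonmatch, far=0.01)  # fraction of matches accepted at 1% FAR
```

FNIR@FPIR for open-set search is analogous, with the threshold set on non-mate probe scores and the error reported on mated probes.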

Compared with statistical fusion baselines and the quality-aware fusion method QME, FusionAgent achieves the most consistent gains in FNIR, showing that adaptive model selection is particularly beneficial for robust open-set retrieval. The paper also reports that DA substantially reduces latency relative to CoT, while CoT provides more interpretable reasoning, and that the agent transfers well to cross-domain LTCC evaluation with zero-shot and 10-shot adaptation.

Performance on CCVID
Method              Rank-1 ↑   mAP ↑   TAR@1%FAR ↑   FNIR@1%FPIR ↓
AdaFace             94.0       87.9    75.7          13.0 ± 3.5
CAL                 81.4       74.7    66.3          52.8 ± 13.3
BigGait             76.7       61.0    49.7          71.1 ± 6.1
SapiensID           92.6       77.8    -             -
Min-Fusion          87.1       79.2    62.4          48.5 ± 8.7
Max-Fusion          89.9       89.3    73.4          23.0 ± 10.1
Z-score             92.2       90.6    73.9          15.1 ± 1.5
Min-max             91.8       90.9    73.9          15.4 ± 2.5
Weighted-sum        91.7       90.6    73.6          15.4 ± 1.8
Asym-AO1            92.3       90.0    74.0          15.9 ± 1.7
BSSF                91.8       91.1    73.9          14.1 ± 1.3
Farsight            92.0       91.2    73.9          13.9 ± 1.1
QME                 94.1       90.8    76.2          12.3 ± 1.4
FusionAgent (DA)    92.8       92.2    85.8          10.5 ± 1.5
FusionAgent (CoT)   93.4       92.6    85.9          10.1 ± 1.5
Performance on LTCC
Method              Rank-1 ↑   mAP ↑   TAR@1%FAR ↑   FNIR@1%FPIR ↓
AdaFace             18.5       5.9     2.4           99.8 ± 0.2
CAL                 74.4       40.6    36.7          59.7 ± 7.3
AIM                 74.8       40.9    37.0          66.2 ± 7.5
SapiensID           72.0       34.6    -             -
Min-Fusion          38.1       13.5    12.4          81.9 ± 6.0
Max-Fusion          62.5       33.3    16.8          94.8 ± 4.7
Z-score             73.0       37.5    30.4          68.7 ± 9.2
Min-max             73.2       38.1    31.9          75.1 ± 9.2
Weighted-sum        73.2       37.8    31.3          72.4 ± 8.6
Asym-AO1            71.2       32.9    19.1          76.3 ± 8.9
BSSF                73.5       39.1    34.2          68.9 ± 8.5
Farsight            73.2       37.8    31.3          72.4 ± 8.6
QME                 73.8       39.6    35.0          64.3 ± 8.0
FusionAgent (DA)    75.5       41.0    36.5          50.3 ± 9.0
FusionAgent (CoT)   75.5       41.0    37.0          50.0 ± 8.5
Performance on MEVID
Method              Rank-1 ↑   mAP ↑   TAR@1%FAR ↑   FNIR@1%FPIR ↓
AdaFace             25.0       8.1     5.4           98.8 ± 1.2
CAL                 52.5       27.1    34.7          67.8 ± 7.3
AGRL                51.9       25.5    30.7          69.4 ± 8.9
Min-Fusion          46.8       21.2    28.0          70.4 ± 8.0
Max-Fusion          33.2       14.9    8.3           97.4 ± 1.6
Z-score             54.1       27.4    30.7          66.5 ± 7.0
Min-max             52.8       24.7    25.0          71.3 ± 6.1
Weighted-sum        54.1       27.3    30.3          66.3 ± 7.0
Asym-AO1            52.5       22.9    23.6          71.7 ± 5.8
BSSF                53.5       27.4    30.5          65.9 ± 7.2
Farsight            53.8       25.4    26.6          69.8 ± 6.4
QME                 55.7       28.2    32.9          64.6 ± 8.2
FusionAgent (DA)    52.5       28.7    34.8          60.8 ± 7.3
FusionAgent (CoT)   54.7       28.7    34.9          58.6 ± 7.4

Analysis

Score Distribution Comparison

Score distribution comparison

FusionAgent achieves a markedly larger margin between match scores and the FAR threshold, while effectively suppressing non-match scores near zero.

Dynamic Model Selection Statistics

Model selection statistics

The agent adapts its model selection per dataset: on CCVID (clear faces), it anchors on the FR model; on LTCC/MEVID (surveillance), it relies more on ReID models.

The ablation study in the paper shows that the performance gain does not come from using more models indiscriminately. Compared with hard selection that fuses all models, agent-based dynamic selection consistently performs better across Rank-1, mAP, and TAR, while substantially reducing FNIR. Additional experiments also show that ACT remains effective when paired with the agent-selected model subsets, outperforming alternative fusion rules such as Z-score and Farsight on LTCC.

Effect of Top-k Value in ACT

Top-k ablation on LTCC

FusionAgent consistently outperforms baselines across all top-k values, maintaining both higher performance and greater stability compared to hard selection (using all models).

BibTeX

@inproceedings{zhu2026fusionagent,
  title     = {FusionAgent: A Multimodal Agent with Dynamic Model Selection for Human Recognition},
  author    = {Zhu, Jie and Guo, Xiao and Su, Yiyang and Jain, Anil and Liu, Xiaoming},
  booktitle = {CVPR},
  year      = {2026},
}