PaDNet: Pan-Density Crowd Counting
Yukun Tian (1), Yiming Lei (1), Junping Zhang (1),
James Z. Wang (2)
(1) Fudan University, China
(2) The Pennsylvania State University, USA
Abstract:
Crowd counting is a highly challenging problem in
computer vision and machine learning. Most previous methods
have focused on crowds of consistent density, i.e., either sparse
or dense, and thus perform well in global estimation while
neglecting local accuracy. To make crowd
counting more useful in the real world, we propose a new
perspective, named pan-density crowd counting, which aims to
count people in varying density crowds. Specifically, we propose
the Pan-Density Network (PaDNet) which is composed of the
following critical components. First, the Density-Aware Network
(DAN) contains multiple subnetworks pretrained on scenarios
with different densities. This module is capable of capturing
pan-density information. Second, the Feature Enhancement Layer
(FEL) effectively captures the global and local contextual features
and generates a weight for each density-specific feature. Third,
the Feature Fusion Network (FFN) embeds spatial context and
fuses these density-specific features. Further, the metrics Patch
MAE (PMAE) and Patch RMSE (PRMSE) are proposed to better
evaluate global and local estimation performance. Extensive
experiments on four crowd counting benchmark datasets,
the ShanghaiTech, the UCF_CC_50, the UCSD, and the UCF-QNRF,
indicate that PaDNet achieves state-of-the-art recognition
performance and high robustness in pan-density crowd counting.
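
To make the fusion described above more concrete, here is a minimal sketch
(in PyTorch) of how density-specific features from several subnetworks could
be reweighted and fused into a density map. It assumes a simplified setup:
each branch stands in for a DAN subnetwork, a small scorer plays the role of
the FEL by producing one weight per branch from pooled context, and a 1x1
convolution plays the role of the FFN. The module structure, names, and sizes
are illustrative assumptions, not the architecture specified in the paper.

    import torch
    import torch.nn as nn

    class WeightedFusionSketch(nn.Module):
        """Illustrative reweighting and fusion of density-specific branch
        features (assumed structure, not the PaDNet architecture itself)."""
        def __init__(self, num_branches=3, channels=64):
            super().__init__()
            # Stand-ins for the density-specific subnetworks (DAN branches).
            self.branches = nn.ModuleList(
                nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
                for _ in range(num_branches)
            )
            # FEL-like scorer: one scalar weight per branch from pooled context.
            self.scorer = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1)
            )
            # FFN-like head: fuse the weighted features into one density map.
            self.fuse = nn.Conv2d(num_branches * channels, 1, kernel_size=1)

        def forward(self, x):
            feats = [branch(x) for branch in self.branches]
            # One weight per branch, derived from global context.
            weights = torch.sigmoid(
                torch.cat([self.scorer(f) for f in feats], dim=1))  # (N, num_branches)
            weighted = [f * weights[:, i].view(-1, 1, 1, 1)
                        for i, f in enumerate(feats)]
            return self.fuse(torch.cat(weighted, dim=1))  # predicted density map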
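
The Patch MAE (PMAE) and Patch RMSE (PRMSE) mentioned above are patch-level
counterparts of the usual image-level MAE and RMSE. The sketch below conveys
the idea under a simplifying assumption: each predicted and ground-truth
density map is split into a grid of non-overlapping patches and the errors of
the per-patch counts are aggregated. The grid size and function names are
illustrative; the exact patch scheme is the one defined in the paper.

    import numpy as np

    def patch_counts(density_map, grid=4):
        """Sum a density map over a grid x grid set of non-overlapping patches."""
        h, w = density_map.shape
        return np.array([
            density_map[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid].sum()
            for i in range(grid) for j in range(grid)
        ])

    def pmae_prmse(preds, gts, grid=4):
        """Patch-level MAE and RMSE over paired predicted / ground-truth maps."""
        errors = np.concatenate([
            patch_counts(p, grid) - patch_counts(g, grid)
            for p, g in zip(preds, gts)
        ])
        pmae = np.abs(errors).mean()
        prmse = np.sqrt((errors ** 2).mean())
        return pmae, prmse

With grid=1 these reduce to the standard image-level MAE and RMSE, which is
the sense in which the patch metrics additionally reflect local estimation
quality.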
Full Paper
(PDF, 9MB)
Citation:
Yukun Tian, Yiming Lei, Junping Zhang, and James Z. Wang, "PaDNet:
Pan-Density Crowd Counting," IEEE Transactions on Image Processing,
vol. 29, no. 3, pp. 2714-2727, 2020.
© 2019 IEEE. Personal use of this material is permitted. However,
permission to reprint/republish this material for advertising or
promotional purposes or for creating new collective works for resale
or redistribution to servers or lists, or to reuse any copyrighted
component of this work in other works must be obtained from the IEEE.