The goal of this workshop is to advance the state of the art in articulated people tracking and visual human analysis. Building on the experience of the previous iteration of this workshop, organized at ICCV'17, we introduce an extended version of the PoseTrack benchmark for articulated people tracking. The extended benchmark doubles the amount of annotated data, with an emphasis on cases that remain challenging for existing methods. In addition to the PoseTrack challenge, we will host two further challenges: one on dense pose estimation, in collaboration with the authors of the DensePose project, and one on 3D human pose estimation, in collaboration with the Human3.6M team. Finally, the workshop will feature a diverse program of keynote talks, poster presentations, and a discussion panel, providing a forum for the exchange of ideas among researchers working on visual human analysis.
This will be the second edition in the PoseTrack series, and its main goal is to introduce new challenges to the PoseTrack Workshop. The planned panel discussion with leading experts in the field should provide valuable input and a source of ideas for all participants. As organizers, we also expect to receive valuable feedback from users and from the community on how to improve the benchmark. A few potential issues for the discussion are
In this challenge, participants will be required to estimate and track the 2D articulated poses of multiple people in real-world videos. Both single-frame pose estimation accuracy and articulated tracking accuracy will be evaluated, and a winner will be determined in each category. The videos in this challenge will be similar to those included in the PoseTrack'17 benchmark at ICCV'17. To further improve the benchmark, this year we will double the dataset size.
New: The PoseTrack18 dataset and evaluation code are now available here.
In this challenge, participants will be required to estimate dense correspondences between images of people in videos and a 3D body shape model. The challenge is based on data from the PoseTrack'17 benchmark that has been annotated with dense pose correspondences. More details about the task are available at densepose.org.
In this challenge, participants are required to estimate the poses of people in 3D. The challenge is based on the popular Human3.6M benchmark, which supports the estimation of 2D and 3D skeletal joint positions, joint angles, semantic segmentation of body parts, as well as 3D human shape and depth. In addition, it will provide and evaluate dense correspondences similar to those in the DensePose challenge.
New: The dataset for the 3D human pose estimation challenge is now available here.
University College London & Facebook AI Research
Lund University & Google
Saturday, September 8, 2018 (afternoon)
| Time | Session | Speaker |
|---|---|---|
| 12:50 - 13:00 | Introduction | Organizers |
| 13:00 - 13:30 | Invited Talk | Christian Theobalt |
| 13:30 - 14:00 | Invited Talk | George Papandreou |
| 14:00 - 14:30 | Challenge Results | Organizers |
| 14:30 - 15:15 | Coffee Break | Everyone |
| 15:15 - 15:45 | Invited Talk | Cristian Sminchisescu |
| 15:45 - 16:15 | Invited Talk | Iasonas Kokkinos |
| 16:15 - 16:30 | Closing Remarks & Discussion | Organizers & Everyone |
The schedule is subject to change.