Joint-annotated Human Motion Data Base
A fully annotated dataset of human actions and human poses.
Video and annotations
- puppet flow per frame (approximate optical flow restricted to the person)
- puppet mask per frame
- joint positions per frame
- action label per clip
- meta label per clip (camera motion, visible body parts, camera viewpoint, number of people, video quality)
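As a rough sketch of how per-frame joint annotations like these might be handled: joint positions for a clip are typically stored as an array of x/y coordinates per joint per frame. The `(2, n_joints, n_frames)` layout and the helper below are assumptions for illustration, not the dataset's official format or loader.

```python
import numpy as np

def joints_per_frame(pos):
    """Split a (2, n_joints, n_frames) coordinate array into a list of
    (n_joints, 2) arrays, one per frame.

    Assumed layout (hypothetical): axis 0 is x/y, axis 1 indexes joints,
    axis 2 indexes frames.
    """
    assert pos.ndim == 3 and pos.shape[0] == 2
    return [pos[:, :, t].T for t in range(pos.shape[2])]

# Synthetic example: 15 joints tracked over 4 frames.
pos = np.zeros((2, 15, 4))
frames = joints_per_frame(pos)
# frames is a list of 4 arrays, each of shape (15, 2)
```

A per-frame list like this pairs naturally with the per-frame puppet masks and flow fields listed above.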
Update
We will soon announce challenges for pose estimation and action recognition.
Resources
Referencing the dataset in your work
@inproceedings{Jhuang:ICCV:2013,
  title = {Towards understanding action recognition},
  author = {H. Jhuang and J. Gall and S. Zuffi and C. Schmid and M. J. Black},
  booktitle = {International Conf. on Computer Vision (ICCV)},
  month = dec,
  pages = {3192--3199},
  year = {2013}
}