Download: https://zenodo.org/record/3941811#.YY99F73P2DV
Abstract
The Greek Sign Language (GSL) dataset is a large-scale RGB+D dataset suitable for Sign Language Recognition (SLR) and Sign Language Translation (SLT). The videos were captured with an Intel RealSense D435 RGB+D camera at 30 fps, with both the RGB and the depth streams acquired at the same spatial resolution of 848×480 pixels. To increase variability in the videos, the camera position and orientation were slightly altered between recordings. Seven different signers perform five individual, commonly encountered scenarios from different public services. The average length of each scenario is twenty sentences.
Description
The dataset contains 10,290 sentence instances, 40,785 gloss instances, 310 unique glosses (vocabulary size), and 331 unique sentences, with 4.23 glosses per sentence on average. Each signer was asked to perform the pre-defined dialogues five consecutive times. In all cases, the simulation considers a deaf person communicating with a single public service employee, and the signer performs the gloss sequences of both agents in the discussion. GSL linguistic experts annotated each gloss sequence, at both the individual-gloss and the gloss-sequence level. A translation of the gloss sentences to spoken Greek is also provided.
Evaluation
The GSL dataset provides three evaluation setups:
a) Signer-dependent continuous sign language recognition (GSL SD) – roughly 80% of the videos are used for training, corresponding to 8,189 instances; the remaining 1,063 (10%) are kept for validation and 1,043 (10%) for testing.
b) Signer-independent continuous sign language recognition (GSL SI) – the selected test gloss sequences do not appear in the training set, while all individual glosses do. In GSL SI, the recordings of one signer are left out for validation and testing (588 and 881 instances, respectively); the remaining 8,821 instances are used for training.
c) Isolated gloss sign language recognition (GSL isol.) – the validation set consists of 2,331 gloss instances and the test set of 3,500, while the remaining 34,995 are used for training. All 310 unique glosses are seen in the training set.
| Train | Val | Test | Total |
|---|---|---|---|
| 34,995 | 2,331 | 3,500 | 40,826 |
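The isolated-split counts above can be sanity-checked with a few lines of Python (the numbers are copied directly from the table; nothing here touches the actual data):

```python
# Split sizes for GSL isol., taken from the table above.
train, val, test = 34_995, 2_331, 3_500
total = train + val + test

print(total)                           # 40826 gloss instances in total
print(round(train / total * 100, 1))   # 85.7 -> roughly 85/5/10 train/val/test split
```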
OpenHands Metric Verification
The basic workflow is described at: https://blog.youkuaiyun.com/qq_31537885/article/details/124674515
- Manually add a `test_pipeline` section to `config.yaml`, as follows:
```yaml
data:
    ... ...
    test_pipeline:  # added manually
        dataset:
            _target_: openhands.datasets.isolated.GSLDataset
            split_file: "GSL/GSL_split/GSL_isolated/test_greek_iso.csv"  # obtained by unzipping `GSL.zip`
            root_dir: "GSL/GSL_pose"  # obtained by unzipping `GSL.zip`
            class_mappings_file_path: "GSL/GSL_split/GSL_isolated/iso_classes.csv"  # obtained by unzipping `GSL.zip`
            splits: "test"
            modality: "pose"
            # inference_mode: true  # added manually
            inference_mode: false  # added manually
        transforms:
            - PoseSelect:
                preset: mediapipe_holistic_minimal_27
            # - PoseTemporalSubsample:
            #     num_frames: 32
            - CenterAndScaleNormalize:
                reference_points_preset: shoulder_mediapipe_holistic_minimal_27
                scale_factor: 1
```
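As a rough, dependency-free sketch of what a center-and-scale normalization like `CenterAndScaleNormalize` conceptually does (an assumption based on the transform's name and parameters, not on the OpenHands source code): keypoints are shifted so that the midpoint of two reference points (here, the shoulders) becomes the origin, then divided by the reference distance times `scale_factor`:

```python
import math

def center_and_scale(points, ref_a, ref_b, scale_factor=1.0):
    """Normalize 2D keypoints: center on the midpoint of two reference
    points and scale by their distance (a sketch, not the OpenHands code)."""
    (ax, ay), (bx, by) = points[ref_a], points[ref_b]
    cx, cy = (ax + bx) / 2, (ay + by) / 2                 # midpoint of the reference points
    scale = math.hypot(bx - ax, by - ay) * scale_factor   # reference distance as the unit length
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

# Toy frame: two "shoulders" and one "wrist".
frame = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]
print(center_and_scale(frame, ref_a=0, ref_b=1))
# [(-0.5, 0.0), (0.5, 0.0), (0.0, 1.0)]
```

This kind of normalization makes the pose features invariant to where the signer stands in the frame and how close they are to the camera.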
- Inference and Testing
```python
import omegaconf
from openhands.apis.inference import InferenceModel

# Load the edited config (adjust the path to your own config.yaml).
cfg = omegaconf.OmegaConf.load("path/to/config.yaml")
model = InferenceModel(cfg=cfg)
model.init_from_checkpoint_if_available()
if cfg.data.test_pipeline.dataset.inference_mode:
    model.test_inference()
else:
    model.compute_test_accuracy()
```
Example output:
```
# lstm
/raid/zhengjian/OpenHands/openhands/apis/inference.py:21: LightningDeprecationWarning: The `LightningModule.datamodule` property is deprecated in v1.3 and will be removed in v1.5. Access the datamodule through using `self.trainer.datamodule` instead.
  self.datamodule.setup(stage=stage)
Found 310 classes in test splits
Loading checkpoint from: datasets/GSL/gsl/lstm/epoch=107-step=118043.ckpt
219batch [00:09, 22.85batch/s]
Accuracy for 3500 samples: 85.42857360839844% # < 86.6% published

# sl-gcn
/raid/zhengjian/OpenHands/openhands/apis/inference.py:21: LightningDeprecationWarning: The `LightningModule.datamodule` property is deprecated in v1.3 and will be removed in v1.5. Access the datamodule through using `self.trainer.datamodule` instead.
  self.datamodule.setup(stage=stage)
Found 310 classes in test splits
Loading checkpoint from: datasets/GSL/gsl/sl_gcn/epoch=71-step=78695.ckpt
219batch [01:24, 2.58batch/s]
Accuracy for 3500 samples: 95.37142944335938% # < 95.4% published
```
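Both runs land slightly below the published numbers; the gaps can be quantified with a quick check (the accuracy values are copied from the logs above):

```python
# (measured, published) top-1 accuracy in percent, from the runs above.
results = {
    "lstm":   (85.42857360839844, 86.6),
    "sl-gcn": (95.37142944335938, 95.4),
}
for name, (measured, published) in results.items():
    gap = published - measured
    print(f"{name}: {gap:.2f} points below the published accuracy")
# lstm: 1.17 points below the published accuracy
# sl-gcn: 0.03 points below the published accuracy
```

So the SL-GCN result essentially reproduces the published figure, while the LSTM result is about one point short.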
Pose Data
```python
import pickle
import pdb

# Sample file, provided in the `GSL_pose` folder shipped with OpenHands.
fpath = "datasets/GSL/GSL_pose/health1_signer1_rep1_glosses/glosses0000.pkl"
with open(fpath, "rb") as f:
    data = pickle.load(f)
pdb.set_trace()  # inspect the loaded dict interactively
"""
(Pdb) data.keys()
dict_keys(['keypoints', 'confidences'])
(Pdb) data['keypoints']
array([[[ 4.68016177e-01,  2.54167974e-01, -5.32646418e-01],
        [ 4.75983620e-01,  2.25574315e-01, -5.07086813e-01],
        [ 4.84772474e-01,  2.24665165e-01, -5.07136345e-01],
        ...,
        [ 3.59789759e-01,  2.36041218e-01,  1.39313692e-03],
        [ 3.68174613e-01,  2.20924214e-01,  3.40915076e-03],
        [ 3.74168962e-01,  2.08521932e-01,  6.30392320e-03]],
       [[ 4.73744392e-01,  2.98126549e-01, -3.48104566e-01],
        [ 4.82605368e-01,  2.61325389e-01, -3.33466351e-01],
        [ 4.90915090e-01,  2.59218067e-01, -3.33474070e-01],
        ...,
        [ 3.43209326e-01,  1.75836265e-01, -2.00132020e-02],
        [ 3.52375984e-01,  1.58609509e-01, -1.65062286e-02],
        [ 3.57829779e-01,  1.42162174e-01, -1.19863013e-02]]])
(Pdb) data['keypoints'].shape
(16, 75, 3)
(Pdb) data['confidences']
array([[0.99999869, 0.99999833, 0.99999821, ..., 1.        , 1.        ,
        1.        ],
       [0.99999875, 0.99999839, 0.99999833, ..., 1.        , 1.        ,
        1.        ],
       [0.99999875, 0.99999839, 0.99999833, ..., 1.        , 1.        ,
        1.        ],
       ...,
       [0.99999899, 0.99999881, 0.99999875, ..., 0.        , 0.        ,
        0.        ],
       [0.99999905, 0.99999887, 0.99999881, ..., 0.        , 0.        ,
        0.        ],
       [0.99999911, 0.99999893, 0.99999887, ..., 1.        , 1.        ,
        1.        ]])
(Pdb) data['confidences'].shape
(16, 75)
"""
```
Finally: if this article helped you, please give it a like ٩(๑•̀ω•́๑)۶