%0 Journal Article %A Liu Jinlai %A Wang Xiaojie %A Wang Yue %T Video description with subject, verb and object supervision %D 2019 %R 10.19682/j.cnki.1005-8885.2019.1006 %J The Journal of China Universities of Posts and Telecommunications %P 52-58 %V 26 %N 2 %X Video description aims to generate descriptive natural language for videos. Inspired by the deep neural networks (DNN) used in machine translation, the video description (VD) task applies a convolutional neural network (CNN) to extract video features and a long short-term memory (LSTM) network to generate descriptions. However, some models generate incorrect words and syntax. The reason may be that previous models only apply an LSTM to generate sentences, which learns insufficient linguistic information. To solve this problem, an end-to-end DNN model incorporating subject, verb and object (SVO) supervision is proposed. Experimental results on a publicly available dataset, i.e. Youtube2Text, indicate that the proposed model achieves a 58.4% consensus-based image
description evaluation (CIDEr) score. It outperforms the mean pool and video description with first feed (VD-FF) models, demonstrating the effectiveness of SVO supervision. %U https://jcupt.bupt.edu.cn/CN/10.19682/j.cnki.1005-8885.2019.1006