Cappy predicts human subjective quality assessment ratings of Closed Captions from a caption file and a transcript file. Cappy was designed around the quality factors of Closed Captioning identified in the literature, and uses Deep Neural Networks trained on data from statistical user modelling of Deaf and Hard of Hearing audiences, combined with an Active Learning approach based on a Query-By-Committee strategy.
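The sketch below illustrates the Query-By-Committee idea in the abstract: a committee of small Keras MLP regressors scores unlabeled caption samples, and the sample the members disagree on most is queried for a human rating. The feature shapes, helper names, and committee size are illustrative assumptions, not the actual Cappy pipeline.

```python
# Hypothetical Query-By-Committee sketch (not the actual Cappy code).
# Assumes caption samples are already encoded as numeric feature vectors.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

def build_member(n_features):
    """One committee member: a small MLP regressor for a 1-5 quality rating."""
    model = Sequential([
        Dense(32, activation='relu', input_shape=(n_features,)),
        Dense(16, activation='relu'),
        Dense(1)  # predicted subjective quality rating
    ])
    model.compile(optimizer='adam', loss='mse')
    return model

def committee_disagreement(committee, X_unlabeled):
    """Variance across member predictions; high variance marks informative samples."""
    preds = np.stack([m.predict(X_unlabeled).ravel() for m in committee])
    return preds.var(axis=0)

# Usage (illustrative): train each member on a bootstrap of the labeled data,
# then query the pool sample with the highest disagreement for a viewer rating.
# committee = [build_member(n_features) for _ in range(5)]
# next_query = X_pool[np.argmax(committee_disagreement(committee, X_pool))]
```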
Cappy is still at an early stage, but its core idea is intended to form the basis of future work on caption quality assessment and further extensions.
The repository was tested with Python 3.6, Django 2.2.13, Keras 2.2.5, and TensorFlow 1.15.
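Assuming a Python 3.6 environment with pip, the tested versions can be pinned when installing:

```
pip install django==2.2.13 keras==2.2.5 tensorflow==1.15
```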
The background of this project was published in the following papers:
- Nam, S., Fels, D. I., & Chignell, M. H. (2020). Modeling Closed Captioning Subjective Quality Assessment by Deaf and Hard of Hearing Viewers. IEEE Transactions on Computational Social Systems. (https://ieeexplore.ieee.org/document/9017943)
- Nam, S. (2020). Towards Designing a Subjective Assessment System for the Quality of Closed Captioning Using Artificial Intelligence. NAB-BEIT Conference Proceedings. (https://nabpilot.org/beitc-proceedings/2020/towards-designing-a-subjective-assessment-system-for-the-quality-of-closed-captioning-using-artificial-intelligence/)
- Nam, S., & Fels, D. (2019). Simulation of Subjective Closed Captioning Quality Assessment Using Prediction Models. International Journal of Semantic Computing, 13(01), 45-65.
- Nam, S., & Fels, D. (2018, September). Assessing closed captioning quality using a multilayer perceptron. In 2018 IEEE First International Conference on Artificial Intelligence and Knowledge Engineering (AIKE) (pp. 9-16). IEEE.