A Brief Review of Facial Expression Recognition Systems
Abstract
Facial Expression Recognition (FER) classifies facial expressions such as neutral, sadness, surprise, happiness, fear, anger, contempt and disgust, and supports a wide range of applications, including emotional and mental state recognition, stress and anxiety detection in the medical domain, security, and human-computer interaction. Within computer vision, FER remains an interesting and challenging problem. This paper reviews several facial expression recognition methods. Because feature extraction plays a crucial role in FER, facial feature extraction techniques such as the Local Binary Pattern (LBP) and the Local Directional Pattern (LDP) are discussed, and recognition approaches based on support vector machines and deep learning algorithms are discussed and compared.
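To make the LBP-plus-SVM pipeline mentioned above concrete, the following is a minimal sketch in Python, not taken from any of the surveyed papers: uniform LBP codes are computed per face, pooled into per-cell histograms over a spatial grid, and classified with an RBF-kernel SVM. The random faces and labels are placeholders for a real dataset such as CK+ or JAFFE, and the grid size, kernel, and C value are illustrative choices.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

P, R = 8, 1  # 8 sampling points on a radius-1 circle

def lbp_histogram(face, grid=(4, 4)):
    # Compute uniform LBP codes, then concatenate per-cell histograms
    # so the descriptor retains coarse spatial layout.
    codes = local_binary_pattern(face, P, R, method="uniform")
    n_bins = P + 2  # P+1 uniform patterns plus one bin for non-uniform codes
    h, w = face.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                         j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=n_bins,
                                   range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# Placeholder data: (N, H, W) grayscale face crops and 7 expression classes.
faces = (np.random.rand(40, 64, 64) * 255).astype(np.uint8)
labels = np.random.randint(0, 7, size=40)

X = np.stack([lbp_histogram(f) for f in faces])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels,
                                          test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

An LDP-based variant would differ only in the per-pixel coding step, deriving each code from the strongest Kirsch edge responses in eight directions instead of from neighbor intensity comparisons; the histogram pooling and SVM stages would remain the same.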
