We used Jonathan Oheix's widely used Face expression recognition dataset of 35.9k images. Each image is tagged with one of seven emotions (anger, disgust, fear, happiness, neutral, sadness, surprise), and the dataset is organized into training and validation sets. All images are 48x48-pixel grayscale faces that have been automatically registered, so each face is roughly centered and occupies about the same amount of space in every image.
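As an illustration, a dataset with this layout can be loaded in Keras roughly as follows. This is a minimal sketch, assuming the standard one-subfolder-per-class directory structure; the `data/train` and `data/validation` paths are hypothetical placeholders.

```python
import tensorflow as tf

IMG_SIZE = (48, 48)   # the dataset's native resolution
BATCH_SIZE = 64       # assumed batch size, not specified in the text

# Assumed layout: one subfolder per emotion class,
# e.g. data/train/angry, data/train/happy, ...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",                 # hypothetical path
    labels="inferred",
    label_mode="categorical",     # one-hot labels for the seven emotions
    color_mode="grayscale",       # the images are grayscale
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/validation",            # hypothetical path
    labels="inferred",
    label_mode="categorical",
    color_mode="grayscale",
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
)
```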
The accuracy of the compared models is presented in the graph below.
It is clearly visible that we improved the accuracy of some models by more than 10%, although we still could not reach the current state-of-the-art accuracy of the VGG model.
We compared three different models, all built with the Keras library and trained on the dataset described above; an illustrative Keras sketch follows the list below.
Ashish Patel's model (for comparison)
Akshit Madan's model (for comparison)
Our model
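For concreteness, the following is a minimal sketch of a small Keras CNN for this task. It is illustrative only: the exact architectures of the three compared models are not reproduced here, and every layer choice below is an assumption. It reuses the `train_ds` and `val_ds` objects from the loading sketch above.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative CNN for 48x48 grayscale faces and seven emotion classes.
# Layer sizes and depths are assumptions, not the compared models' specs.
model = keras.Sequential([
    layers.Input(shape=(48, 48, 1)),          # grayscale input
    layers.Rescaling(1.0 / 255),              # scale pixels to [0, 1]
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                      # regularization
    layers.Dense(7, activation="softmax"),    # seven emotion classes
])

model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",  # matches the one-hot labels above
    metrics=["accuracy"],
)

model.fit(train_ds, validation_data=val_ds, epochs=20)
```

The final softmax layer produces a probability distribution over the seven emotions, and validation accuracy reported by `fit` is the metric compared in the graph above.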