Extensions and Improvements of the Extreme Learning Machine (ELM) Applied to Face Recognition
Keywords:
Face Recognition, Classification, Extreme Learning Machine, Dual Least Squares, Machine Learning
Abstract
With the popularization of data science in recent years, the use of machine learning methods, such as artificial neural networks, has increased across several research fields. The Extreme Learning Machine (ELM) is a single-hidden-layer feedforward network that stands out for its computational efficiency. This efficiency is achieved mainly by randomly initializing the weights between the input layer and the hidden layer, as well as the hidden-neuron biases. This initialization allows the weights between the hidden and output layers to be determined analytically by calculating the pseudoinverse, avoiding the use of an iterative algorithm based on gradient descent. This work evaluates the performance of more sophisticated strategies for initializing the weights and fitting the network's internal parameters. We also propose a performance analysis in which the pseudoinverse resolution step is replaced by the non-linear dual version of linear least squares (LSDual), still little explored in the literature. Tests were performed on face recognition databases in a classification setting. The results support the construction of an ELM model with greater robustness in prediction quality. The use of more sophisticated random initialization coupled with the LSDual solution reduced the test error in all scenarios.
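To make the standard ELM training step concrete, the sketch below illustrates the random hidden-layer initialization and the analytic pseudoinverse solution described above. The uniform initialization range, the sigmoid activation, and the function names are illustrative assumptions, not the exact configuration studied in this work.

```python
import numpy as np

def elm_train(X, Y, n_hidden=100, seed=0):
    """Basic ELM training: random hidden-layer weights, analytic output weights.

    X: (n_samples, n_features) input matrix.
    Y: (n_samples, n_classes) target matrix (e.g., one-hot labels).
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Random input-to-hidden weights and hidden biases; these are never updated.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Hidden-layer activations (sigmoid chosen here as an assumption).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights solved analytically via the Moore-Penrose pseudoinverse,
    # avoiding iterative gradient-based training.
    beta = np.linalg.pinv(H) @ Y
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

For classification, the predicted class of a sample would be taken as the index of the largest output, e.g., `np.argmax(elm_predict(X_test, W, b, beta), axis=1)`. The LSDual variant investigated in this work replaces the pseudoinverse step above with the non-linear dual least-squares formulation; its details are not sketched here.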