Abstract:
|
Feed-forward Neural Networks (FNN) and Support Vector Machines (SVM)
are two machine learning frameworks developed from very different
starting points. In this work, a new learning model for FNNs is
proposed that, in the linearly separable case, tends to obtain
the same solution as SVM. The key idea of the model is a weighting of
the sum-of-squares error function, which is inspired by the AdaBoost
algorithm. As in SVM, the hardness of the margin can be controlled, so
that this model can also be used for the non-linearly separable
case. In addition, it is not restricted to the use of kernel
functions, and it can handle multiclass and multilabel problems,
as FNNs usually do. Finally, it is independent of the particular
algorithm used to minimize the error function. Theoretical
and experimental results, on synthetic and real-world problems, are
presented to confirm these claims. Several empirical comparisons among
this new model, SVM, and AdaBoost were carried out to study the
agreement between the predictions made by the respective
classifiers. The results obtained show that similar performance does
not imply similar predictions, suggesting that different models can be
combined to achieve better performance. |
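To make the key idea concrete, below is a minimal sketch in Python of an AdaBoost-inspired weighting of the sum-of-squares error. The abstract does not give the weighting function, so the exponential form, the function name weighted_sse, and the hardness parameter beta are illustrative assumptions, not the paper's actual definition.

    import numpy as np

    # Minimal sketch (assumed form): an AdaBoost-inspired weighting of the
    # sum-of-squares error. Examples with a small or negative margin
    # y * f(x) receive exponentially larger weights, so the network is
    # pushed to enlarge the margin, as AdaBoost does.
    def weighted_sse(y_true, y_pred, beta=1.0):
        # y_true: +1/-1 labels; y_pred: real-valued network outputs
        # beta (assumed hyperparameter): controls margin hardness
        margins = y_true * y_pred
        weights = np.exp(-beta * margins)   # small margin -> large weight
        weights /= weights.sum()            # normalize, as in AdaBoost
        return np.sum(weights * (y_true - y_pred) ** 2)

    # Usage: the example nearest the decision boundary dominates the error.
    y = np.array([1.0, -1.0, 1.0])
    out = np.array([0.9, -0.8, 0.1])
    print(weighted_sse(y, out))

Under this assumed form, increasing beta concentrates the error on the examples closest to the decision boundary, which mirrors the abstract's claim that the hardness of the margin can be controlled.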