Chapter 1 - A Brief Review on Machine Learning
1.1 Machine Learning definition
1.2 Main types of learning
1.3 Supervised learning
1.4 How does a supervised algorithm learn?
1.5 Illustrating Supervised Learning
1.5.1 The Perceptron
1.5.2 Multilayer Perceptron
1.6 Concluding Remarks
Chapter 2 - Statistical Learning Theory
2.1 Motivation
2.2 Basic concepts
2.2.1 Probability densities and joint probabilities
2.2.2 Identically and independently distributed data
2.2.3 Assumptions considered by the Statistical Learning Theory
2.2.4 Expected risk and generalization
2.2.5 Bounds for generalization with a practical example
2.2.6 Bayes risk and universal consistency
2.2.7 Consistency, overfitting and underfitting
2.2.8 Bias of classification algorithms
2.3 Empirical Risk Minimization Principle
2.3.1 Consistency and the ERM Principle
 
About the Authors:
Rodrigo Fernandes de Mello is Associate Professor in the Department of Computer Science at the Institute of Mathematics and Computer Sciences, University of São Paulo, São Carlos, SP, Brazil. He obtained his PhD degree from the University of São Paulo. His research interests include Statistical Learning Theory, Machine Learning, Data Streams, and applications of Dynamical Systems concepts. He has published more than 100 papers in journals and conferences, has supported and organized international conferences, and serves as an editor of international journals.
Moacir Antonelli Ponti is Associate Professor in the Department of Computer Science at the Institute of Mathematics and Computer Sciences, University of São Paulo, São Carlos, Brazil, and was a visiting researcher at the Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey. He obtained his PhD from the Federal University of São Carlos. His research interests include Pattern Recognition and Computer Vision, as well as Signal, Image and Video Processing.