Regularized Learning in Reproducing Kernel Banach Spaces

Jun Zhang
Department of Psychology and Department of Mathematics
University of Michigan, Ann Arbor
[email protected]

Regularized learning is the contemporary framework for learning to generalize from finite samples (classification, regression, clustering, etc.). The problem is to learn an input-output mapping f: X -> Y given finite samples {(xi, yi), i = 1, ..., N}. With minimal structural assumptions on X, the class of candidate functions is assumed to form a Banach space of functions B. The learning-from-data problem is then formulated as an optimization problem in this function space, with the desired mapping sought as an optimizer. The objective function consists of a loss term L(f), capturing its goodness-of-fit (or the lack thereof) on the given samples {(f(xi), yi), i = 1, ..., N}, and a penalty term R(f), capturing its complexity based on prior knowledge about the solution (smoothness, sparsity, etc.). This second, regularizing term is often taken to be the norm of B, or a transformation φ thereof: R(f) = φ(||f||). This program has been successfully carried out for Hilbert spaces of functions, resulting in the celebrated Reproducing Kernel Hilbert Space (RKHS) methods in machine learning. Here, we remove the Hilbert space restriction, i.e., the existence of an inner product, and show that the key ingredients of this framework (reproducing kernel, representer theorem, feature space) continue to hold for a Banach space that is uniformly convex and uniformly Fréchet differentiable. Central to our development is the use of a semi-inner-product operator and the duality mapping of a uniform Banach space in place of the inner product of a Hilbert space. This opens up the possibility of unifying kernel-based methods (regularizing an L2-norm) and sparsity-based methods (regularizing an l1-norm), which have so far been investigated under different theoretical foundations.

Short Bio
Jun Zhang is a Professor of Psychology and Professor of Mathematics at the University of Michigan, Ann Arbor. He received the B.Sc. degree in Theoretical Physics from Fudan University in 1985, and the Ph.D. degree in Neurobiology from the University of California, Berkeley in 1992. He has held visiting positions at the University of Melbourne, the University of Waterloo, and the RIKEN Brain Science Institute. During 2007-2010, he served as Program Manager at the U.S. Air Force Office of Scientific Research (AFOSR), in charge of the basic research portfolio for Cognition and Decision in the Directorate of Mathematics, Information, and Life Sciences. Dr. Zhang has served as President of the Society for Mathematical Psychology (SMP) and serves on the Federation of Associations in Behavioral and Brain Sciences (FABBS). He is an Associate Editor of the Journal of Mathematical Psychology and a Fellow of the Association for Psychological Science (APS). Dr. Zhang has published about 50 peer-reviewed journal papers in vision, mathematical psychology, cognitive psychology, cognitive neuroscience, game theory, machine learning, and related fields. His research has been funded by the National Science Foundation (NSF), the Air Force Office of Scientific Research (AFOSR), and the Army Research Office (ARO).
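The Hilbert-space special case of the framework described in the abstract can be made concrete with a minimal sketch (not from the talk; the kernel choice, regularization weight, and data are illustrative assumptions). With squared loss and R(f) = λ||f||², the RKHS representer theorem says the minimizer has the form f(x) = Σᵢ cᵢ k(xᵢ, x), with coefficients c = (K + λI)⁻¹ y for the kernel Gram matrix K:

```python
# Kernel ridge regression: the RKHS instance of the regularized-learning
# objective  sum_i (f(x_i) - y_i)^2 + lam * ||f||^2.
# Kernel, lam, and the synthetic data below are illustrative choices.
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # k(a, b) = exp(-gamma * ||a - b||^2), a reproducing kernel on R^d
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=0.1, gamma=1.0):
    # Representer theorem reduces the infinite-dimensional problem to
    # solving the N x N linear system (K + lam I) c = y.
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(c, X_train, X_new, gamma=1.0):
    # f(x) = sum_i c_i k(x_i, x): a kernel expansion over the samples
    return gaussian_kernel(X_new, X_train, gamma) @ c

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(30)

c = kernel_ridge_fit(X, y)
y_hat = kernel_ridge_predict(c, X, X)
print("training MSE:", np.mean((y_hat - y) ** 2))
```

The sparsity-based alternative the abstract contrasts with this (lasso-style methods) replaces the squared Hilbert norm with an l1 penalty on the coefficients; the Banach-space machinery of the talk is what puts both regimes on a common theoretical footing.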