I am currently a Unit Leader (equivalent to Associate Professor) at the RIKEN Center for Advanced Intelligence Project (RIKEN-AIP) in central Tokyo, Japan, where I lead the Functional Analytic Learning Unit. Prior to joining RIKEN-AIP, I was a researcher in the Pattern Analysis and Computer Vision (PAVIS) group at the Istituto Italiano di Tecnologia (IIT), the Italian Institute of Technology, in Genova (Genoa), Italy. I received my PhD in Mathematics from Brown University, Providence, RI, USA, under the supervision of Steve Smale; my dissertation was on Reproducing Kernel Hilbert Spaces (RKHS).

I am interested in both the mathematical foundations of and algorithmic developments in machine learning, AI, computer vision, and image and signal processing, as well as in problems in applied and computational functional analysis and applied and computational differential geometry.

My current research focuses on the following two closely related principal directions:

- Functional analytic methods in machine learning, in particular methods from matrix and operator theory and the theory of vector-valued Reproducing Kernel Hilbert Spaces (RKHS).
- Geometrical methods in machine learning, in particular methods from Riemannian geometry and related areas. My current focus is on the geometry of RKHS covariance operators and their applications.

**Selected Recent Publications** (see the Publications page for the full list; see also my profile on Google Scholar)

**RKHS in the framework of Riemannian geometry and applications**

The following papers study the infinite-dimensional Hilbert manifold of positive definite operators on a Hilbert space, with a particular focus on RKHS covariance operators.

(*Refereed conference paper*) Hà Quang Minh. **Infinite-Dimensional Log-Determinant Divergences III: Log-Euclidean and Log-Hilbert–Schmidt Divergences**. In: *Information Geometry and its Applications IV*, pp. 209–243, Springer, 2018.

(*Tutorial*) **From Covariance Matrices to Covariance Operators: Data Representation from Finite to Infinite-Dimensional Settings**, at **ICCV 2017**, Venice, October 2017.

(*Book*) Hà Quang Minh and V. Murino. *Covariances in Computer Vision and Machine Learning*, Morgan & Claypool Publishers, October 2017.

(*Journal article*) D. Felice, M. Hà Quang, S. Mancini. **The volume of Gaussian states by information geometry**. *Journal of Mathematical Physics*, 58(1): 012201, 2017.

(*Refereed conference paper*) Hà Quang Minh. **Log-Determinant divergences between positive definite Hilbert-Schmidt operators**. *Geometric Science of Information*, November 2017.

(*Journal article*) Hà Quang Minh. **Infinite-dimensional Log-Determinant divergences between positive definite trace class operators**. *Linear Algebra and Its Applications*, volume 528, pages 331–383, September 2017.

(*Book*) Hà Quang Minh and V. Murino (editors). *Algorithmic Advances in Riemannian Geometry and Applications*, Springer series in Advances in Computer Vision and Pattern Recognition, 2016.

(*Book chapter*) Hà Quang Minh and V. Murino. **From Covariance Matrices to Covariance Operators: Data Representation from Finite to Infinite-Dimensional Settings**. In *Algorithmic Advances in Riemannian Geometry and Applications*, Springer, 2016.

(*Refereed conference paper*) Hà Quang Minh, M. San Biagio, L. Bazzani, V. Murino. **Approximate Log-Hilbert-Schmidt distances between covariance operators for image classification**. *IEEE Conference on Computer Vision and Pattern Recognition* (**CVPR 2016**), Las Vegas, USA, June 2016.

(*Refereed conference paper*) Hà Quang Minh. **Affine-invariant Riemannian distance between infinite-dimensional covariance operators**. *Geometric Science of Information* (**GSI 2015**), Paris, France, October 2015.

(*Refereed conference paper*) Hà Quang Minh, Marco San Biagio and Vittorio Murino. **Log-Hilbert-Schmidt metric between positive definite operators on Hilbert spaces**. *Advances in Neural Information Processing Systems* (**NIPS 2014**), Montreal, Canada, December 2014. **Supplementary Material**.
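As a concrete toy illustration of the geometry studied in the papers above, in the finite-dimensional case the Log-Euclidean distance between two symmetric positive definite (e.g. covariance) matrices A and B is the Frobenius norm of log(A) − log(B). The sketch below is a minimal finite-dimensional illustration only, not the infinite-dimensional setting of the papers; the function names and toy data are my own:

```python
import numpy as np

def spd_log(A):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)          # eigenvalues of an SPD matrix are positive
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """d(A, B) = || log(A) - log(B) ||_F, the Log-Euclidean distance between SPD matrices."""
    return np.linalg.norm(spd_log(A) - spd_log(B), ord="fro")

# Two toy covariance matrices built from random data (regularized to stay SPD).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
Y = rng.standard_normal((100, 3))
A = np.cov(X, rowvar=False) + 1e-6 * np.eye(3)
B = np.cov(Y, rowvar=False) + 1e-6 * np.eye(3)

d = log_euclidean_distance(A, B)      # a nonnegative, symmetric distance
```

Unlike the Euclidean (Frobenius) distance, this distance respects the manifold structure of the SPD cone; the infinite-dimensional generalizations to positive definite operators require the regularization and trace-class/Hilbert-Schmidt machinery developed in the papers above.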

**Vector-valued RKHS and applications**

The following paper introduces a general learning formulation in vector-valued RKHS that encompasses many learning algorithms in the literature within a single framework. Examples include vector-valued least squares regression and classification, multi-class SVM (using the Simplex Coding), and multi-modality (multi-view/multi-feature) learning, in both the supervised and semi-supervised settings.

(*Journal article*) Hà Quang Minh, L. Bazzani, V. Murino. **A Unifying Framework in Vector-valued Reproducing Kernel Hilbert Spaces for Manifold Regularization and Co-Regularized Multi-view Learning**. *Journal of Machine Learning Research*, 17(25):1-72, 2016.
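As a rough finite-dimensional illustration of learning in a vector-valued RKHS (not the paper's full framework, which additionally covers manifold regularization and multi-view learning), the sketch below implements plain vector-valued kernel ridge regression with a separable matrix-valued kernel K(x, x′) = k(x, x′)·T, where T is a positive semi-definite matrix encoding output similarity. The Gaussian scalar kernel, function names, and toy data are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Scalar Gaussian kernel matrix k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def vv_krr_fit(X, Y, T, lam=1e-2, sigma=1.0):
    """Fit vector-valued KRR with separable kernel K(x, x') = k(x, x') * T.

    Solves ((G kron T) + n*lam*I) vec(C) = vec(Y) (row-major vec),
    i.e. the representer-theorem normal equations for
    (1/n) sum_i ||y_i - f(x_i)||^2 + lam * ||f||_K^2.
    """
    n, p = Y.shape
    G = gaussian_kernel(X, X, sigma)
    M = np.kron(G, T) + n * lam * np.eye(n * p)
    c = np.linalg.solve(M, Y.reshape(-1))
    return c.reshape(n, p)            # coefficient vectors c_j, one per training point

def vv_krr_predict(Xtr, C, T, Xte, sigma=1.0):
    """Evaluate f(x) = sum_j k(x, x_j) T c_j at the test points."""
    Gte = gaussian_kernel(Xte, Xtr, sigma)
    return Gte @ C @ T.T

# Toy problem: 10 inputs in R^2 with 2-dimensional outputs.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 2))
Y = rng.standard_normal((10, 2))
T = np.array([[2.0, 1.0], [1.0, 2.0]])   # PSD output-similarity matrix
C = vv_krr_fit(X, Y, T, lam=1e-8)
F = vv_krr_predict(X, C, T, X)           # near-interpolates Y for tiny lam
```

With T = I this decouples into p independent scalar kernel ridge regressions; a non-diagonal T couples the output components, which is the basic mechanism the unifying framework builds on.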

For the complete list of my publications, including PDF copies, please check out the Publications page.

For more information about my education and academic background, please check out the About page.