I work as a machine-learning developer advocate at Hewlett Packard Enterprise.
Previously at Cornell, I developed open-source AI software that’s been used by thousands of researchers and engineers worldwide. Before that, I interned at Facebook AI and Intel.
I built this code library (which now has over 5000 GitHub stars) to simplify metric learning, a family of machine-learning techniques used in applications like image retrieval and natural language processing. The library offers a unified interface for metric-learning losses, miners, and distance metrics, provides utilities for measuring retrieval accuracy and simplifying distributed training, and ships with an extensive test suite and thorough documentation.
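To give a sense of what that interface looks like in practice, here is a minimal sketch of a training step that composes a loss, a miner, and a distance metric. The package and class names follow the conventions of the PyTorch Metric Learning package and are assumptions about this library rather than a guaranteed API; the random tensors stand in for real model outputs and labels.

```python
# Minimal sketch: compose a distance metric, a loss, and a miner in one training step.
# Names assume the PyTorch Metric Learning conventions (an assumption, not a guaranteed API).
import torch
from pytorch_metric_learning import losses, miners, distances

loss_func = losses.TripletMarginLoss(distance=distances.CosineSimilarity())
miner = miners.MultiSimilarityMiner()

embeddings = torch.randn(32, 128, requires_grad=True)  # stand-in for model(data)
labels = torch.randint(0, 10, (32,))                    # stand-in for real labels

hard_pairs = miner(embeddings, labels)            # mine informative pairs/triplets
loss = loss_func(embeddings, labels, hard_pairs)  # loss computed with the chosen distance
loss.backward()
```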
I built this library for training and validating domain-adaptation models. Domain adaptation is a machine-learning technique for repurposing existing models to work in new domains. For this library, I designed a system of lazily evaluated hooks that efficiently combines algorithms with differing data requirements. The library also includes an extensive test suite.
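The snippet below is a hypothetical illustration of the lazily-evaluated-hook idea, not the library's actual API: each hook declares the data it needs, and a value is computed only the first time some hook asks for it, so algorithms with different data requirements can be combined without paying for unused computation. All names here (LazyContext, ClassifierLossHook, the context keys) are invented for the example.

```python
# Hypothetical sketch of lazily evaluated hooks (invented names, not the library's API).
import torch
import torch.nn.functional as F

class LazyContext:
    """Maps keys to zero-argument callables, each evaluated once on first access."""
    def __init__(self, thunks):
        self.thunks = thunks
        self.cache = {}

    def __getitem__(self, key):
        if key not in self.cache:
            self.cache[key] = self.thunks[key]()  # compute lazily, then cache
        return self.cache[key]

class ClassifierLossHook:
    # Needs only source-domain logits and labels, so target-domain data is never computed for it.
    def __call__(self, ctx):
        return F.cross_entropy(ctx["src_logits"], ctx["src_labels"])

# Placeholder tensors standing in for model outputs and labels.
ctx = LazyContext({
    "src_logits": lambda: torch.randn(8, 10),
    "src_labels": lambda: torch.randint(0, 10, (8,)),
    "target_features": lambda: torch.randn(8, 128),  # never requested, so never evaluated
})

losses = {"classifier": ClassifierLossHook()(ctx)}
```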
This library contains tools I developed to streamline experiment configuration, hyperparameter optimization, and large-scale Slurm job launching, as well as data logging, visualization, and analysis.
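As a rough idea of the kind of workflow such tools streamline, here is a hypothetical sketch that expands a hyperparameter grid and submits one Slurm job per configuration. The script name, sbatch options, and flags are placeholders, not this library's interface.

```python
# Hypothetical sketch: expand a hyperparameter grid and submit one Slurm job per config.
import itertools
import subprocess

grid = {"lr": [1e-4, 1e-3], "batch_size": [32, 64]}

def configs(grid):
    # Cartesian product of all hyperparameter values.
    keys = list(grid)
    for values in itertools.product(*grid.values()):
        yield dict(zip(keys, values))

for cfg in configs(grid):
    flags = " ".join(f"--{k}={v}" for k, v in cfg.items())
    # "train.py" and the sbatch options are placeholders for the real experiment setup.
    subprocess.run(
        ["sbatch", "--time=04:00:00", f"--wrap=python train.py {flags}"],
        check=True,
    )
```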
Many metric learning papers from 2016 to 2020 report great advances in accuracy, often more than doubling the performance of methods developed before 2010. However, when compared on a level playing field, the old and new methods actually perform quite similarly. We confirm this in our experiments, which benefit from significantly improved methodology.
Unsupervised domain adaptation (UDA) is a promising machine-learning sub-field, but it is held back by one major problem: most UDA papers do not evaluate algorithms using true UDA validators (methods that estimate a model's accuracy without access to target-domain labels), and this yields misleading results. To address this problem, we conduct the largest empirical study of UDA validators to date, and introduce three new validators, two of which achieve state-of-the-art performance in various settings. Surprisingly, our experiments also show that in many cases, the state-of-the-art is obtained by a simple baseline method.
Here is a video of me playing some Chopin.
And here is one of my own pieces.
See my music channel on YouTube for more.
Please use this form to contact me.
Site by Jeff Musgrave