Takeshi Kevin Musgrave

PhD, Computer Science
Cornell University

As a machine-learning developer advocate at Hewlett Packard Enterprise, I:

  • Analyze the latest AI research and explain it to developers and executives to help them stay current.
  • Collaborate with machine-learning engineers to create innovative AI applications.
  • Help developers learn how to use AI software.

Previously, at Cornell, I developed open-source AI software that has been used by thousands of researchers and engineers worldwide. Before that, I interned at Facebook AI and Intel.

Picture of Kevin Musgrave

My Open Source Code Projects

PyTorch Metric Learning

I built this code library (which now has over 6,000 GitHub stars) to simplify metric learning, a family of machine-learning techniques used in applications like image retrieval and natural language processing. The library offers a unified interface for metric-learning losses, miners, and distance metrics. It includes code for measuring retrieval accuracy and for simplifying distributed training, along with an extensive test suite and thorough documentation.
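
As a rough illustration of that unified interface, here is a minimal sketch of the usage pattern as I recall it from the documentation; the specific class names (TripletMarginLoss, MultiSimilarityMiner, CosineSimilarity) and their default arguments may differ across library versions.

```python
import torch
from pytorch_metric_learning import distances, losses, miners

# A loss, a miner, and a distance metric, combined through one consistent interface.
# Class names are from my recollection of the docs and may vary by version.
loss_func = losses.TripletMarginLoss(margin=0.1, distance=distances.CosineSimilarity())
miner = miners.MultiSimilarityMiner(epsilon=0.1)

# Stand-ins for a real training step: embeddings would come from your model.
embeddings = torch.randn(32, 128, requires_grad=True)
labels = torch.randint(0, 10, (32,))

hard_pairs = miner(embeddings, labels)            # mine informative pairs
loss = loss_func(embeddings, labels, hard_pairs)  # compute the metric-learning loss
loss.backward()
```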

PyTorch Adapt

I built this library for training and validating domain-adaptation models. Domain adaptation is a machine-learning technique for repurposing existing models to work in new data domains. For this library, I designed a system of lazily evaluated hooks that efficiently combines algorithms with differing data requirements. The library also includes an extensive test suite.
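
The hook idea is easiest to see with a toy example. The sketch below is purely illustrative and is not PyTorch Adapt's actual API: each hook declares what it produces, results are cached in a shared context, and a hook skips its work when an earlier hook has already supplied that result, which is what lets algorithms with different data requirements be combined without redundant computation.

```python
# Hypothetical illustration of lazily evaluated hooks (not PyTorch Adapt's real API).
# Results live in a shared context dict; a hook only computes outputs that are
# missing, so composed algorithms never repeat expensive work.

class Hook:
    produces = ()  # context keys this hook is responsible for

    def __call__(self, ctx):
        if all(k in ctx for k in self.produces):
            return  # already computed by an earlier hook in the chain
        self.compute(ctx)

    def compute(self, ctx):
        raise NotImplementedError


class SourceFeaturesHook(Hook):
    produces = ("src_features",)

    def compute(self, ctx):
        ctx["src_features"] = ctx["model"](ctx["src_imgs"])


class TargetFeaturesHook(Hook):
    produces = ("target_features",)

    def compute(self, ctx):
        ctx["target_features"] = ctx["model"](ctx["target_imgs"])


class ChainHook(Hook):
    def __init__(self, *hooks):
        self.hooks = hooks

    def __call__(self, ctx):
        for h in self.hooks:
            h(ctx)


# The second SourceFeaturesHook finds its output already cached and does nothing.
pipeline = ChainHook(SourceFeaturesHook(), TargetFeaturesHook(), SourceFeaturesHook())
ctx = {"model": lambda x: x, "src_imgs": [1.0, 2.0], "target_imgs": [3.0, 4.0]}
pipeline(ctx)
```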

Powerful Benchmarker

This library contains tools I developed to facilitate experiment configuration, hyperparameter optimization, large-scale Slurm job launching, and data logging, visualization, and analysis.
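
To give a flavor of the workflow, here is a hand-rolled sketch of one piece of it: generating a grid of hyperparameter settings and submitting one Slurm job per setting. This is not Powerful Benchmarker's actual interface; the training script and its flags are placeholders.

```python
# Hypothetical sketch of launching a hyperparameter sweep as Slurm jobs.
# Not Powerful Benchmarker's interface: train.py, --lr, and --embedding_size
# are made-up placeholders for whatever your experiment actually runs.
import itertools
import subprocess

grid = {"lr": [1e-4, 1e-3], "embedding_size": [128, 512]}

for values in itertools.product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    flags = " ".join(f"--{k}={v}" for k, v in config.items())
    # Each configuration becomes its own Slurm job.
    subprocess.run(
        ["sbatch", "--job-name=sweep", f"--wrap=python train.py {flags}"],
        check=True,
    )
```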

My AI Videos

My Research Papers

A Metric Learning Reality Check

Many metric learning papers from 2016 to 2020 report great advances in accuracy, often more than doubling the performance of methods developed before 2010. However, when compared on a level playing field, the old and new methods actually perform quite similarly. We confirm this in our experiments, which benefit from significantly improved methodology.
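
The level playing field amounts to running one fixed evaluation protocol over every method's embeddings. As a toy illustration (not the paper's exact protocol), the function below computes precision@1 by nearest-neighbor retrieval; applying the identical function to embeddings from an older loss and a newer loss is the kind of apples-to-apples comparison being argued for.

```python
import torch
import torch.nn.functional as F

def precision_at_1(embeddings, labels):
    """Fraction of samples whose nearest neighbor (excluding itself) shares their label.

    A toy stand-in for a fixed retrieval-accuracy protocol; the paper's actual
    evaluation methodology is more thorough than this.
    """
    normed = F.normalize(embeddings, dim=1)   # cosine similarity via normalized dot products
    sims = normed @ normed.T
    sims.fill_diagonal_(float("-inf"))        # a sample cannot be its own neighbor
    nearest = sims.argmax(dim=1)
    return (labels[nearest] == labels).float().mean().item()

# Run the identical protocol on embeddings from any method being compared.
acc = precision_at_1(torch.randn(100, 64), torch.randint(0, 5, (100,)))
```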

Evaluating the Evaluators

Three New Validators and a Large-Scale Benchmark Ranking for Unsupervised Domain Adaptation

Unsupervised domain adaptation (UDA) is a promising machine-learning sub-field, but it is held back by one major problem: most UDA papers do not evaluate algorithms using true UDA validators, and this yields misleading results. To address this problem, we conduct the largest empirical study of UDA validators to date, and introduce three new validators, two of which achieve state-of-the-art performance in various settings. Surprisingly, our experiments also show that in many cases, the state-of-the-art is obtained by a simple baseline method.
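
To make the term concrete: a validator is a score computed without target-domain labels that is used to pick checkpoints or hyperparameters. The sketch below shows one widely used baseline of this kind, negative mean prediction entropy on unlabeled target data; it illustrates the general idea only and is not one of the three validators the paper introduces.

```python
import torch
import torch.nn.functional as F

def entropy_validator_score(target_logits):
    """A common baseline UDA validator: negative mean prediction entropy.

    Lower entropy on unlabeled target data is used as a rough proxy for higher
    target accuracy. Higher score = preferred checkpoint. Generic illustration only.
    """
    probs = F.softmax(target_logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return -entropy.mean().item()

# Model selection without target labels: keep the checkpoint with the best score.
scores = [entropy_validator_score(torch.randn(256, 10)) for _ in range(3)]
best_checkpoint = max(range(len(scores)), key=scores.__getitem__)
```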

My Music

Contact

Please use this form to contact me.

Site by Jeff Musgrave