Join the Data Science and AI CoP on 20 October at 1200 ET! Our speaker will be Daniel Grahn presenting "mil-benchmarks: Standardized Evaluation of Deep Multiple-Instance Learning Techniques."
Multiple-instance learning is a subset of weakly supervised learning in which labels are applied to sets of instances rather than to the instances themselves. Under the standard assumption, a set is positive only if it contains at least one positive instance.
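As a minimal sketch of the standard assumption, the bag (set) label can be derived from binary instance labels like so (the function name and example data are illustrative, not from the talk):

```python
def bag_label(instance_labels):
    """Standard MIL assumption: a bag is positive if and only if
    it contains at least one positive instance."""
    return int(any(instance_labels))

# Hypothetical bags of binary instance labels
bag_label([0, 0, 1, 0])  # one positive instance -> positive bag (1)
bag_label([0, 0, 0])     # no positive instances -> negative bag (0)
```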
This presentation introduces a series of multiple-instance learning benchmarks generated from MNIST, Fashion-MNIST, and CIFAR10. These benchmarks test the standard, presence, absence, and complex assumptions and provide a framework for distributing future benchmarks. I implement and evaluate several multiple-instance learning techniques against the benchmarks. Further, I evaluate the Noisy-And method with label noise and find mixed results across datasets. The models are implemented in TensorFlow 2.4.1 and are available on GitHub. The benchmarks are available from PyPI as mil-benchmarks and on GitHub.
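For context on the Noisy-And method mentioned above, here is a NumPy sketch of Noisy-And pooling as described by Kraus et al. (2016), which maps per-instance probabilities to a bag probability; the parameter values and function names are illustrative, and in the original formulation `b` is learned per class rather than fixed:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def noisy_and(p, a=10.0, b=0.5):
    """Noisy-And pooling: soft-AND over instance probabilities p.
    `a` controls the slope of the activation; `b` acts as a threshold
    on the mean instance probability (learnable in the original paper,
    fixed here for illustration)."""
    p_mean = np.mean(p)
    return (sigmoid(a * (p_mean - b)) - sigmoid(-a * b)) / \
           (sigmoid(a * (1.0 - b)) - sigmoid(-a * b))
```

When all instance probabilities are 1 the pooled bag probability is exactly 1, and when all are 0 it is exactly 0, with a smooth threshold in between.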
Dan Grahn is a lead researcher at Altamira Technologies where he specializes in applying leading-edge machine learning techniques to relevant government problems. This has included multiple-instance learning, semi-supervised learning, constraint optimization, and more. Dan also serves as an adjunct instructor at Wright State University where he is currently pursuing a PhD focused on machine learning-assisted vulnerability detection.
To participate, please use the dial-in details below:
Conference Line: +1 (669) 224-3412
Access Code: 716-833-909