Mark is a hacker at H2O. He was previously in the finance world as a quantitative research developer at Thomson Reuters and Nipun Capital. He also worked as a data scientist at an IoT startup, where he built a web-based machine learning platform and developed predictive models. Mark has an MS in Financial Engineering from UCLA and a BS in Computer Engineering from the University of Illinois Urbana-Champaign. In his spare time, Mark likes competing on Kaggle and cycling.
Interpretable Machine Learning
Usage of AI and machine learning models is likely to become more commonplace as larger swaths of the economy embrace automation and data-driven decision-making. While these predictive systems can be quite accurate, they have often been treated as inscrutable black boxes that produce only numeric predictions with no accompanying explanations. Unfortunately, recent studies and events have drawn attention to mathematical and sociological flaws in prominent weak AI and ML systems, yet practitioners usually don't have the right tools to pry open machine learning black boxes and debug them. This presentation introduces several new approaches that increase transparency, accountability, and trustworthiness in machine learning models. If you are a data scientist or analyst and you want to explain a machine learning model to your customers or managers (or if you have concerns about documentation, validation, or regulatory requirements), then this presentation is for you!