Navdeep is a Software Engineer/Data Scientist at H2O.ai. He graduated from California State University, East Bay with an M.S. in Computational Statistics, a B.S. in Statistics, and a B.A. in Psychology (with a minor in Mathematics). During his education he developed interests in machine learning, time series analysis, statistical computing, data mining, and data visualization. Prior to H2O.ai he worked at Cisco Systems, Inc., focusing on data science and software development. Before entering industry he worked as a researcher/analyst in various neuroscience labs at institutions including California State University, East Bay; the University of California, San Francisco; and the Smith-Kettlewell Eye Research Institute. His work across these labs spanned behavioral, electrophysiological, and functional magnetic resonance imaging research. In his spare time Navdeep enjoys watching documentaries, reading (mostly non-fiction or academic works), and working out.
Interpretable Machine Learning
The use of AI and machine learning models is likely to become more commonplace as larger swaths of the economy embrace automation and data-driven decision-making. While these predictive systems can be quite accurate, they have historically been treated as inscrutable black boxes that produce only numeric predictions with no accompanying explanations. Unfortunately, recent studies and events have drawn attention to mathematical and sociological flaws in prominent weak AI and ML systems, yet practitioners usually lack the right tools to pry open machine learning black boxes and debug them. This presentation introduces several new approaches that increase transparency, accountability, and trustworthiness in machine learning models. If you are a data scientist or analyst who wants to explain a machine learning model to your customers or managers (or if you have concerns about documentation, validation, or regulatory requirements), then this presentation is for you!