Medium
Machine Learning is Very Much a UX Problem

Part 5 of a 6-part tutorial. The ML models produced by learning algorithms can be complex and difficult to explain, yet their results need to be understood for several reasons, including spotting areas or cases where the results are unintelligible, and explaining results to users, whether to earn their trust or to satisfy legal requirements. It is therefore necessary to think ahead, before building a model, about which model components and data users may need to see, and about ways of presenting the results that build trust.

If "explainability", visibility and granularity are deciding factors in how the model is built, two approaches are suggested (a rough code sketch of both follows this summary):
- Develop three separate models for product, team and market, each evaluating the company on a single dimension, then develop an aggregate model that incorporates those scores and other features into a global result.
- Develop the aggregate model directly, together with a method for extracting the contribution of each individual feature.

Results should be presented to the user in a way that makes them believable, clear and actionable. Some approaches include:
- Backdating: include historical data so the model also produces predictions for past periods, which users can verify against known values (see the second sketch at the end).
- Explanation of methods and inputs: tell users directly what data goes into the model.
- Exposure of the underlying data: the best method and the one users understand most easily, though it is not easy to design.
- Simplification and selection of results: fewer options to choose from means users can make decisions quickly.
- New metric definition: decide whether the model creates a new metric or predicts an existing one.
- Precision is unimportant: allow for a margin of error.
- Raw data access: may be of interest to users who want to build their own ML models.

Because the need for "explainability" is growing rapidly, ML researchers are actively seeking ways to make models less mysterious and unpredictable.
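As a rough illustration of the first approach (three single-dimension models whose scores feed an aggregate), the sketch below uses scikit-learn, a binary success label, and made-up column names such as product_feature_1; none of these details come from the tutorial itself. The last helper hints at the second approach, where per-feature contributions are read out of a single aggregate (here linear) model.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical feature groups; the tutorial's actual columns are not known here.
DIMENSIONS = {
    "product": ["product_feature_1", "product_feature_2"],
    "team":    ["team_hires", "team_experience"],
    "market":  ["market_size", "market_growth"],
}

def train_sub_models(companies: pd.DataFrame, target: str) -> dict:
    """Approach 1, step one: one model per dimension, so each score can be
    shown to users on its own."""
    sub_models = {}
    for name, cols in DIMENSIONS.items():
        model = LogisticRegression()
        model.fit(companies[cols], companies[target])
        sub_models[name] = model
    return sub_models

def train_aggregate(companies: pd.DataFrame, sub_models: dict, target: str):
    """Approach 1, step two: feed the per-dimension scores into a final model
    that produces the global result."""
    scores = pd.DataFrame({
        name: model.predict_proba(companies[DIMENSIONS[name]])[:, 1]
        for name, model in sub_models.items()
    }, index=companies.index)
    aggregate = LogisticRegression()
    aggregate.fit(scores, companies[target])
    return aggregate, scores

def feature_contributions(aggregate: LogisticRegression, score_row: pd.Series) -> pd.Series:
    """Approach 2 in miniature: with one linear aggregate model, per-feature
    contributions can be extracted from its coefficients and shown in the UI."""
    return pd.Series(aggregate.coef_[0] * score_row.to_numpy(), index=score_row.index)
```

Keeping the per-dimension scores as explicit intermediate values is what lets the interface show "why" alongside the global result.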
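The backdating idea can be shown as a small backtest: fit the model only on older records, then present its predictions for a later period next to outcomes that are already known. The DataFrame history, its date and outcome columns, and the feature list are hypothetical placeholders, not details from the article.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def backdated_check(history: pd.DataFrame, feature_cols: list,
                    cutoff: pd.Timestamp) -> pd.DataFrame:
    """Train on records before `cutoff`, then predict the later period whose
    real outcomes are already on record, so users can compare the two."""
    train = history[history["date"] < cutoff]
    known = history[history["date"] >= cutoff]   # outcomes here are already known

    model = LogisticRegression()
    model.fit(train[feature_cols], train["outcome"])

    # Predicted vs. actual side by side; the size of the gap is what builds (or breaks) trust.
    return pd.DataFrame({
        "date": known["date"].to_numpy(),
        "predicted": model.predict_proba(known[feature_cols])[:, 1],
        "actual": known["outcome"].to_numpy(),
    })
```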