Project description
In April 2016, for the first time in over two decades, the European Parliament adopted a comprehensive set of regulations for the collection, storage, and use of personal information: the General Data Protection Regulation (GDPR) [1]. Much of the regulation is clearly aimed at perceived gaps and inconsistencies in the EU’s current approach to data protection. This includes, for example, the codification of the “right to be forgotten” (Article 17) and rules for foreign companies collecting data from European citizens (Article 44).
However, while the bulk of the language deals with how data is collected and stored, the regulation also contains Article 22, “Automated individual decision-making, including profiling,” which potentially prohibits a wide swath of algorithms currently in use in, e.g., recommendation systems, credit and insurance risk assessments, computational advertising, and social networks. This raises issues of particular concern to the machine learning community. In its current form, the GDPR could require a complete overhaul of standard, widely used algorithmic techniques. Its provision on the right of citizens to receive an explanation for algorithmic decisions highlights the pressing importance of human interpretability in algorithm design. If, as expected, the GDPR takes effect in its current form in mid-2018, there will be a pressing need for effective algorithms that can operate within this new legal framework.
Within this scope, BBVA D&A offers the following Ph.D. topic:
“Towards more interpretable machine learning models”
Human interpretability is a must in risk models, mostly because of regulatory and legal constraints, but also to improve BBVA’s transparency and reputation. For this reason, one of our short-term goals is to expand the set of currently accepted risk models used at BBVA by adding new “interpretable” models to the GRM (Global Risk Management) risk model portfolio. At present, the GRM portfolio is limited to linear and logistic regression models and their variants. For example, studying how certain kernel functions in SVM methods can be interpreted would allow SVM models to be included in the risk management model portfolio. More generally, any advance in designing and/or tuning new interpretable models, or in converting non-interpretable models into interpretable ones, will directly improve current BBVA risk management policies. In addition, the creation of a model-agnostic framework for explanations will help risk analysts better understand complex machine learning models such as deep neural networks, making them suitable for credit risk management.
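As a minimal illustration of what a model-agnostic explanation can look like, the sketch below computes permutation feature importance: the model is treated as an opaque prediction function, and each feature is shuffled in turn to measure how much predictive accuracy drops. The data, the `black_box_predict` function, and all names here are hypothetical stand-ins, not part of the GRM portfolio; in practice the black box would be a fitted model such as an SVM or a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical credit-style data: 3 features, only the first two matter.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def black_box_predict(X):
    # Stand-in for an opaque risk model; we only query its predictions,
    # never inspect its internals -- that is what "model-agnostic" means.
    return (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

baseline = accuracy(y, black_box_predict(X))

# Permutation importance: shuffle one feature at a time and record
# the resulting drop in accuracy relative to the baseline.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(baseline - accuracy(y, black_box_predict(Xp)))

print([round(v, 3) for v in importances])
```

Because the procedure only needs predictions, the same loop applies unchanged to any model a risk analyst might face, which is the appeal of a model-agnostic framework; the irrelevant third feature should show an importance near zero.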