Breaking the glass box: achieving ‘explainability’ that actually explains

Tied to the growing popularity of machine learning (ML) tools is the need to explain their underlying rationale. But buzzwords like ‘glass box’ are steering the explainability conversation off course. Meanwhile, without proper investment in the technical innovations and governance methods needed to validate ML, it could proliferate throughout the financial industry without the necessary safeguards.

Glass ceiling
From black box to glass box

As a highly iterative
