Zest's auto-decisioning AI builds a model that predicts how future loans will perform based on older ones. Zest AI CEO Mike de Vere told Automotive News his company's technology changes credit risk analysis from evaluating a couple dozen data points to hundreds. The hypotheses the computer draws can be tested against its ability to predict the outcomes of past loans.
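That validation step amounts to a holdout backtest: train the model on one slice of historical loans, then see whether it can predict the outcomes of past loans it never saw. Here is a minimal sketch of the idea; the file name, column names and model choice are illustrative assumptions, not Zest's actual pipeline.

```python
# Minimal backtest sketch: train on older loans, then test whether the
# model can "predict" the outcomes of past loans it never trained on.
# File, column names and model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

loans = pd.read_csv("historical_loans.csv")  # hypothetical file, numeric features
features = loans.drop(columns=["defaulted"])  # potentially hundreds of data points
labels = loans["defaulted"]

# Hold out a slice of past loans the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Strong scores on held-out past loans support using the model
# to decision future applications.
scores = model.predict_proba(X_test)[:, 1]
print("AUC on held-out loans:", roc_auc_score(y_test, scores))
```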
Zest AI product head Nidhi Panday said at the webinar that her company also provides software to monitor the loan portfolio after the fact, letting a lender verify the model hasn't led it astray.
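Zest hasn't published how its monitoring works; one common approach is to compare the default rate the model predicted for each origination cohort with the rate actually observed, and flag drift. A rough sketch under those assumptions, with column names and the alert threshold invented for illustration:

```python
# Rough portfolio-monitoring sketch: compare predicted vs. observed
# default rates by origination month and flag drift. Column names and
# the 2-percentage-point threshold are assumptions, not Zest's method.
import pandas as pd

# Hypothetical file with columns:
# origination_month, predicted_default_prob, defaulted (0/1)
portfolio = pd.read_csv("booked_loans.csv")

monthly = portfolio.groupby("origination_month").agg(
    predicted=("predicted_default_prob", "mean"),
    observed=("defaulted", "mean"),
)
monthly["drift"] = monthly["observed"] - monthly["predicted"]

# Flag cohorts where actual losses run well past what the model expected.
alerts = monthly[monthly["drift"].abs() > 0.02]
print(alerts)
```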
Artificial intelligence has helped All In approve loans it would otherwise have wrongly denied and deny loans it would have OK'd that later would have gone bad, Peeples said. He called it a rare example of increased returns with less risk. "Which doesn't happen very often," he said.
Panday said the largest benefit arises in "middle tiers," noting that it's easy for a lender to approve the highest tier of creditworthiness and deny the riskiest tier.
Zest's customers see a 70 percent increase in accuracy over traditional scoring in that middle bracket, she said, and the gain can come from credit bureau data alone, which is what most of Zest's models use.
De Vere said national credit scores alone accurately predict behavior for superprime and prime borrowers. Below those tiers, though, AI indicates "most national credit scores are barely better than a coin toss," he said.
On average, Zest models produce a 15 percent increase in loan approvals while holding risk constant, according to de Vere. They also reduce charge-offs by 30 percent, he said.
Peeples said the credit union installed Zest AI's model in April. Early delinquency data showed borrowers approved by the software were better bets than those approved by staff.
"So far, the performance has been very good," he said.
Peeples said the credit union also had to revise the variables it checked after reaching a decision on creditworthiness. A factor such as loan-to-value ratio remained a consideration even if the AI viewed the applicant as an acceptable risk, he said. But others, such as debt-to-income ratio, had to be discarded, because the software had already taken them into account when scrutinizing the borrower.
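In code, that division of labor might look like a post-decision check that still enforces a loan-to-value cap but no longer re-screens debt-to-income. This is a hypothetical sketch; the function name and the LTV cap are assumptions, not All In's actual policy:

```python
# Hypothetical post-decision check illustrating the division of labor
# Peeples described: loan-to-value stays a separate stipulation, while
# debt-to-income is not re-checked because the model already weighs it.
# The function name and the 1.25 LTV cap are illustrative assumptions.
MAX_LTV = 1.25

def final_review(model_approved: bool, loan_amount: float, vehicle_value: float) -> bool:
    """Checks applied after the AI's credit decision."""
    if not model_approved:
        return False
    # LTV remains a consideration even for model-approved applicants.
    # Note there is no DTI check here: re-applying a blanket DTI cutoff
    # would second-guess a factor the score already accounts for.
    return loan_amount / vehicle_value <= MAX_LTV
```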
Machine learning based on prior real-world human behavior can pose a "garbage in, garbage out" problem in which the AI inadvertently adopts the same biases and mistakes as the humans whose decisions train it. Consider a bank whose lenders have consciously or unconsciously denied loans to minority borrowers at a higher rate than to white applicants with identical credit: a model trained on those decisions would learn to replicate the disparity.
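One way practitioners surface that kind of training-data bias is to compare historical approval rates across demographic groups within the same credit-score band; a persistent gap at equal scores suggests the labels a model would learn from encode human bias. A simplified sketch, with the file and all column names assumed for illustration:

```python
# Simplified bias-audit sketch: compare historical approval rates for
# applicants in the same credit-score band across demographic groups.
# A persistent gap at equal scores suggests the past decisions that
# would train a model carry human bias. All names are assumptions.
import pandas as pd

decisions = pd.read_csv("historical_decisions.csv")  # hypothetical file
decisions["score_band"] = pd.cut(
    decisions["credit_score"], bins=[300, 580, 670, 740, 800, 850]
)

# Rows: score bands. Columns: demographic groups. Values: approval rate.
approval_by_group = decisions.pivot_table(
    index="score_band", columns="group", values="approved",
    aggfunc="mean", observed=True,
)
print(approval_by_group)  # similar scores should show similar approval rates
```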