How TrueAccord Thinks About Experimentation

Posted on October 16th, 2017 in Data Science, Industry Insights, Machine Learning, Product and Technology, Testing

Experimentation in the movies sometimes gets a bad rap: mad scientists blowing up labs, aliens arriving to probe unsuspecting humans, accidental AI monsters. It leaves the impression that experimenters are cold-hearted, calculating, and removed from reality. Real-world experimentation is typically much more mundane, but the stereotypes linger, which is unfortunate. The primary goal of experimentation (if you're not a mad scientist) is to answer: does this thing work the way I think it does? Does this feature deliver the results or benefits it's supposed to? If not, why not? That makes it an extremely powerful tool for designing products that work and are actually good for customers.

At TrueAccord we believe that experimentation is an integral part of designing a product that fulfills our mission to reinvent the debt collections space by delivering great customer experiences that empower consumers to regain control of their financial health and better manage their financial future. Whenever possible we launch experiments, not outright features. This strategy has three essential benefits:

  • Tests whether our instincts are right and our models are functional

  • Allows us to gain valuable insights into who our customers are and what they need

  • Mitigates potential negative effects

Test Our Instincts: How do you ensure your team is actually moving the product forward, investing energy only in features and experiences that create an effective and positive debt collection experience? Experimentation. The TrueAccord team is full of clever people with clever ideas, but we know it's important not to build our product on untested hunches. By testing our instincts before taking another step in the same direction, we make sure we invest energy where it matters and build our knowledge base before proceeding in directions we clearly do not yet understand.

Customer Insights: Understanding why your product works is often more important than understanding whether it works. The real benefits of an experimentation infrastructure lie in its ability to provide diversified, descriptive data, and in the discipline of stopping to take a look. At TrueAccord we know it's essential to understand whether we're looking at the problem the right way and, if not, what we've missed: do we understand our customers' needs?

Example:

We launched a new "better" email format, rolled out as a variation across a spread of existing email content. After a three-month run, we found it was indeed performing significantly better in terms of both average open and click rates. This was surprising: we hadn't changed anything that should have affected opens.

New base template content saw an open rate increase of ~10%. (First email: new base template; second email: control.)

Upon further investigation, we realized that the new format unintentionally changed the email preview from displaying the start of our email content to consistently showing a formally-worded disclaimer! We then launched another experiment to ensure our findings were correct.
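A standard way to vet a lift like this, i.e. whether open rates differ significantly between a new template and a control, is a two-proportion z-test. This is a general-purpose sketch; the counts below are made up for illustration and are not TrueAccord's actual data:

```python
from math import sqrt

def two_proportion_ztest(opens_a, sends_a, opens_b, sends_b):
    """Z-statistic for the difference in open rates between two email variants."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    # Pooled open rate under the null hypothesis that both variants are equal.
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    return (p_a - p_b) / se

# Illustrative numbers: the variant's open rate is 10% higher than control's.
z = two_proportion_ztest(opens_a=2200, sends_a=10000, opens_b=2000, sends_b=10000)
print(round(z, 2))  # → 3.47; |z| > 1.96 means significant at the 5% level
```

With samples this large, even a modest relative lift clears the significance bar comfortably, which is why a surprising result like this one warrants digging into the mechanism rather than just celebrating the metric.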

Mitigates Negative Effects: It's easy in any industry to be misled by simple outcome metrics, especially in debt collection where the end objective is repayment. At TrueAccord we would consider it a failure if our product worked, but worked for the wrong reasons: if our collections system converted but didn't provide a good experience for the consumer. Experimentation is our first line of defense against treading down this path.

Example:

After researching existing accounts, we realized there was a need for more self-service tools in payment plan management. We developed a new payment plan account page and rolled out an experiment that automatically redirected some customers to this page any time they viewed the website while their plan was active.

We found that this did decrease payment plan breakage and increase liquidation, but because our system was set up to detect other types of impact, we discovered it also increased outreach to our engagement team in the "Website Help" category. Consumers were confused about why they were not landing on the pages they expected when navigating to our website. We had the right idea, but our implementation was not ideal for the consumer.

Experiment vs Control: % of inbound engagement team communication by category (total # of inbound communications was approx. the same) 

Experimentation is not foolproof; these benefits come from having an infrastructure that allows you to assess whether what you built is useful and, if designed correctly, understand why. Indeed, through experimentation we've grown our product to function effectively across diverse areas of debt, and over the past few months alone a few simple experiments have improved the number of people who complete their plans by almost 4%. Every small change compounds, and at TrueAccord's scale this means many more people who pay without experiencing any disruption. Check back soon for how we designed an experimentation structure that allows us to reap the benefits described above and propel our collections product forward.

Using phone in a digital world. A Data Science story.

Posted on March 16th, 2017 in Data Science, Debt Collection, Machine Learning, Product and Technology

Contributors: Vladimir Iglovikov, Sophie Benbenek, and Richard Yeung

It is Wednesday afternoon and the Data Science team at TrueAccord is arguing vociferously. The whiteboard is covered in unintelligible handwriting and fancy-looking diagrams. We're in the middle of a heated debate about something the collections industry has had a fairly developed playbook on for decades: how to use the phone for collections.

Why are we so passionately discussing something so basic? As it turns out, phone is a deceptively deep topic when you are reinventing recoveries and placing it in the context of a multi-channel strategy.


 

Solving Attribution of Impact

The complexity of phone within a multi-channel strategy is revealed when you ask a simple question: “What was the impact of this phone call to Bob?”

In a world with only one channel, this question is easy. We call a thousand people and measure what percentage of them pay. But in a multi-channel setting where these people are also getting emails, SMS and letters, there is an attribution problem. If Bob pays after the phone call, we do not know if he would have paid without the phone call.

To complicate matters further, our experiments have shown that phone has two components of impact:

  1. The direct effect — the payments that happen on the call.
  2. The halo effect — the remaining impact of the phone call; for example, seeing a missed call from us and going back to one of our emails to click and pay.

To solve the attribution problem and capture both components of impact, we define the incremental benefit of a call as:

IncrementalBenefit(call) = E[value | call] − E[value | no call] = DebtAmount × (P(pay | call) − P(pay | no call))


 

Intuitively, the incremental benefit of a phone call is the additional expected value from that customer due to the phone call. For example, assume Bob has a 5% chance of paying his $100 debt. If we know that by calling him, the probability of him paying increases to 7%, then the incremental benefit is $2 (100 * (0.07 – 0.05)).
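The worked example above reduces to a one-line formula; a minimal sketch:

```python
def incremental_benefit(debt_amount, p_pay_with_call, p_pay_without_call):
    """Expected additional revenue attributable to one phone call."""
    return debt_amount * (p_pay_with_call - p_pay_without_call)

# Bob: $100 debt, 5% chance of paying without a call, 7% with one.
print(round(incremental_benefit(100, 0.07, 0.05), 2))  # → 2.0
```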

 

How we calculate incremental benefit

Consider the incremental benefit equation in the last section. It requires us to predict the probability of Bob paying in each scenario: when we call him and when we do not.

Hence we created models that predict the probability of a customer paying. These models take as inputs everything we know about the customer, including:

  • Debt features: debt amount, days since charge-off, client, prior agencies worked, etc.
  • Behavioral features: entire email history, entire pageview history, interactions with agents, phone history, etc.
  • Temporal features: time of the day, day of the week, day of the month, etc.

The output of the model is the probability of payment by the customer given all of this information. We then have the same model output two predictions: probability of payment with the current event history, and probability of payment if we add one more outbound phone call to the event history.

Back to our example of Bob: the model would output probabilities of 7% and 5% with and without an additional phone call, respectively.
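A minimal sketch of this double-scoring trick. The stand-in model below is entirely hypothetical (a fixed per-call lift), and the real models and feature pipeline are far richer, but the pattern of scoring the same model on two event histories is the point:

```python
from dataclasses import dataclass

@dataclass
class DummyPaymentModel:
    """Hypothetical stand-in for the real payment-probability model:
    each outbound call nudges the predicted probability up by a fixed lift."""
    base_rate: float = 0.05
    call_lift: float = 0.02

    def predict_proba(self, event_history):
        calls = event_history.count("outbound_call")
        return min(1.0, self.base_rate + self.call_lift * calls)

def incremental_benefit(model, event_history, debt_amount):
    """Score the same model twice: with and without one more outbound call."""
    p_without = model.predict_proba(event_history)
    p_with = model.predict_proba(event_history + ["outbound_call"])
    return debt_amount * (p_with - p_without)

model = DummyPaymentModel()
# Bob: $100 debt, no calls yet → 5% vs 7% with one call.
print(round(incremental_benefit(model, ["email_open", "pageview"], 100), 2))  # → 2.0
```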

This diagram is a simplification that omits many variables and the actual architecture of our models

 

Optimal Call Allocation

The last step of the problem is choosing who to call, and when. The topic of timing optimization deserves its own write-up, so we will close by discussing who we call.

Without loss of generality, assume that we would only ever call a customer once. The diagram below plots the percentage of customers called on the x-axis against dollars on the y-axis, with two curves:

  • Incremental Benefit — this curve shows the marginal incremental benefit of calling the customer with the next highest IB
  • Avg cost — this horizontal curve shows the average cost of an outbound call

 

There are two very interesting points to discuss:

  • Profit max — calling everyone to the left of the intersection of incremental benefit and avg cost is the allocation that maximizes profit. Every one of these calls brings in more revenue than it costs.
  • Conversion max — notice that incremental benefit eventually dips below zero, especially once you drop the assumption that we call each customer only once. The point that maximizes conversion for the client is to call everyone to the left of where incremental benefit crosses zero.
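Both cutoffs fall out of a simple threshold scan over customers ranked by incremental benefit. A sketch with made-up dollar values and an assumed average call cost:

```python
def call_allocation(incremental_benefits, avg_call_cost):
    """Return (profit-maximizing, conversion-maximizing) call counts.

    Customers are ranked by incremental benefit (IB); profit max calls
    everyone whose IB exceeds the average call cost, conversion max calls
    everyone whose IB is above zero.
    """
    ib = sorted(incremental_benefits, reverse=True)
    profit_max = sum(1 for b in ib if b > avg_call_cost)
    conversion_max = sum(1 for b in ib if b > 0)
    return profit_max, conversion_max

# Illustrative per-customer IB values in dollars; assume a call costs $1.50.
print(call_allocation([4.0, 2.5, 1.8, 0.9, 0.2, -0.3], 1.50))  # → (3, 5)
```

Because the IB curve is monotonically decreasing once customers are sorted, both answers are just prefixes of the same ranked list, which is what makes the two intersection points on the diagram meaningful.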

Our default strategy is to call all customers to the left of the profit-maximizing intercept. Interestingly, an inspection of the types of customers selected reveals two extremes: we end up calling both very high-value customers who have shown a lot of intent to pay (e.g. dropped off from signup after selecting a payment plan) and customers for whom email has been ineffective (e.g. they keep opening emails without clicking, or never open them at all).

 

Conclusion

The world has become increasingly digital, and a multi-channel strategy is the right response. Bringing the traditional tool of phone into this strategy, as just one channel among many, forced us to rethink a lot of assumptions and see where the problem led us. We began by replacing the traditional "propensity to pay" phone metric with incremental benefit, found ways to predict it, and implemented a phone allocation strategy that maximizes profit for the business.