How is machine learning driven by experimentation?

March 6th, 2020, in Machine Learning, Product and Technology

Building scalable technology requires constant evaluation and improvement. Experimentation means trying new things, measuring the results, and turning what works into changes that help teams make informed product development decisions. Trying new things creates momentum, and organizations driven by experimentation turn that momentum into growth.

Machine learning and artificial intelligence support large-scale, concurrent experimentation, and the results of those experiments feed back into the models so they improve over time. With the right tools in place, you can test a variety of scenarios simultaneously.

For example, we use our systems to track changes in the collection process and better understand how our digital collections efforts can be improved. Since digital-first channels offer thorough tracking and analysis, including real-time tracking on our website, we can learn in short cycles and continuously improve our product. 

This kind of frequent experimentation helps to avoid making product development decisions based on untested hunches. Instead, you can test your instincts, measure them carefully, and invest energy where it matters.

Machine learning drives the experimentation engine

Aggregating historical data and processing it with machine learning algorithms helps you understand how effective your product changes actually are. No matter how intelligent your learning algorithms may be, pausing to test and expand your knowledge base, rather than marching blindly ahead, can make or break the success of your product.

To launch an experiment, we follow these steps (see the sketch after the list): 

  1. Start with a hypothesis that you want to test
  2. Assign a dedicated team to manage the experiment
  3. Monitor the performance of the test as it is guided by machine learning
  4. Iterate
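As a rough illustration of the mechanics behind launching a test like this, here is a minimal sketch of deterministic variant assignment. It is an assumption, not a description of TrueAccord’s actual system; the experiment name, variant labels, and user IDs are all hypothetical.

```python
import hashlib

VARIANTS = ["control", "treatment"]  # hypothetical variant labels

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically assign a user to a variant.

    Hashing (experiment, user_id) keeps each consumer's assignment
    stable across sessions and independent across experiments, which
    is what allows many experiments to run concurrently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# The same consumer always lands in the same bucket for this experiment.
print(assign_variant("consumer-123", "payday-alignment"))
```

Deterministic hashing is one common choice because it requires no stored assignment table; a database keyed by user would work just as well.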

B2B companies can benefit from partnering directly with clients to customize experiments for their unique product lines, making experimentation-based optimization an ongoing process for both new and existing business. Keep in mind that the goal of product optimization is not always to jump straight to the finish line. 

Understanding how your product works ultimately offers you and your customers more value, but it’s easy to become distracted by positive outcomes. Effective, scalable products require intentional design; if you’ve accomplished a goal, but the path there was accidental, taking a few steps back to review that progress and test it can help you to get a clearer picture and grow the way you want. 

Below are two sample experiments we conducted to optimize our machine learning algorithms. 

Experiment #1: Aligning Payments to Income

Issue

The number one reason payment plans fail is consumers don’t have enough money on their card or in their bank account. 

Hypothesis

If you align debt payments with paydays, consumers are more likely to have funds available, and payment plan breakage is reduced. 

Experiment

We tested three scenarios: a control, one where we defaulted payments to Fridays, and one where consumers used a date-picker to align their payments with their payday. After testing and analysis, we determined that the date-picker approach was the most effective, as measured by decreased payment plan breakage without negatively impacting conversion rates.
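As one way such a three-scenario comparison might be analyzed, here is a minimal sketch using a chi-squared test; the counts below are invented for illustration and are not the experiment’s actual data.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts per variant: [plans that broke, plans that held].
# These numbers are invented for illustration.
observed = {
    "control":     [120, 380],
    "friday":      [105, 395],
    "date_picker": [ 80, 420],
}

chi2, p_value, dof, _ = chi2_contingency(list(observed.values()))

for name, (broke, held) in observed.items():
    print(f"{name:>12}: breakage rate {broke / (broke + held):.1%}")
print(f"chi-squared p-value: {p_value:.4f}")
# A small p-value suggests breakage genuinely differs across variants;
# conversion rates would need the same scrutiny before declaring a winner.
```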

By understanding which payment plan system was the most effective, we were able to feed that knowledge back into our AI, offer these plans as options to more consumers, and track the improvements at a larger scale.

Experiment #2: Longer payment plans can re-engage consumers

Issue

Customers dropped off their payment plans and stopped replying to our communications.

Hypothesis

Customers can be enticed to sign up for a new plan if offered longer payment plan terms. 

Experiment

We identified a select group of non-responsive consumers who had broken from their payment plans and sent them additional text messages and emails offering longer payment plan terms than the plans they had broken off from.

Ultimately, we found that offering longer payment plans, even when tailored to consumers’ specific life situations, didn’t lead to an increase in sign-ups. The offers we sent had high open and click rates but did not convert. This indicated that we were on the right track but needed to iterate and come up with another hypothesis to test.

This experiment was especially important because it illustrates that not every hypothesis is proven to be correct, and that’s okay! Experimentation processes take time, and the more information you can gather, the better your results will be in the future.

We’re able to simultaneously update our product and continue experimenting, thanks to algorithms called contextual or multi-armed bandits. Here’s what you need to know about these algorithms and how they help!
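The post leaves the implementation details out, but the core bandit idea fits in a short sketch. Below is a minimal Thompson-sampling multi-armed bandit with two hypothetical offer arms and invented conversion rates: each arm keeps a Beta posterior over its conversion rate, traffic flows toward whichever arm currently looks best, and exploration never fully stops, which is what lets a product update and experiment at the same time.

```python
import random

# Beta(alpha, beta) posterior parameters per arm; arm names are hypothetical.
arms = {"short_plan": [1, 1], "long_plan": [1, 1]}

def choose_arm() -> str:
    # Thompson sampling: draw a plausible conversion rate from each
    # arm's posterior and play the arm with the highest draw.
    samples = {name: random.betavariate(a, b) for name, (a, b) in arms.items()}
    return max(samples, key=samples.get)

def record_outcome(arm: str, converted: bool) -> None:
    # Bayesian update: a conversion bumps alpha, a miss bumps beta.
    arms[arm][0 if converted else 1] += 1

# Simulated traffic with invented true conversion rates.
true_rates = {"short_plan": 0.10, "long_plan": 0.14}
for _ in range(5000):
    arm = choose_arm()
    record_outcome(arm, random.random() < true_rates[arm])

# The better arm should have received most of the simulated traffic.
print({name: sum(params) - 2 for name, params in arms.items()})
```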

Building the newest, most innovative products feels exciting, but building without carefully determined direction can be reckless and dangerous. By regularly evaluating the effectiveness of machine learning algorithms, you can make conscious updates that lead to scalable change, and experimentation paves the way for consistent product improvement.

Tracking Performance Data With Digital Debt Collection

October 21st, 2019, in Product and Technology

Call centers are notorious for reaching hundreds, if not thousands, of consumers several times per week (and even several times per day!). The debt collection industry is plagued by the perception that collectors are relentless and uncaring, which makes resolving debts even more challenging. Digital debt collection strategies aim to alleviate the stress of incessant calling for consumers, and also provide unique, powerful solutions for creditors.

Collection metrics

Digital-first debt collection strategies give creditors the ability to track and aggregate more objective performance metrics that help strengthen their collections strategy. Qualitative metrics from traditional call centers, by contrast, remain subject to the endlessly variable human element of a phone call. 

When outreach is entirely automated, it becomes easy to A/B test simple changes (new subject lines, different greetings, etc.) and determine which are the most effective. But how do we define effectiveness? At the end of the process, an effective collections strategy is one that leads customers to make a payment. 
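As a sketch of what “determining which are the most effective” could look like statistically, here is a hand-rolled two-proportion z-test comparing payment rates for two hypothetical subject lines; the counts are illustrative only.

```python
from math import erf, sqrt

def two_proportion_z_test(paid_a: int, n_a: int, paid_b: int, n_b: int):
    """Return the z statistic and two-sided p-value for the
    difference between two payment rates."""
    p_a, p_b = paid_a / n_a, paid_b / n_b
    pooled = (paid_a + paid_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: subject line B converts slightly better.
# With these counts, z is about -2.3 and p about 0.02, which would
# clear a conventional 0.05 significance bar.
z, p = two_proportion_z_test(paid_a=140, n_a=2000, paid_b=180, n_b=2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```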

There are a few key metrics that call centers use to drive customers toward this end goal, each of which can be easily supplemented or overtaken by digital collection strategies.

Calls per account and calls per agent

Traditional collection agencies, like any other sales call center operation, track the total number of calls made to each customer and by each agent on the team. When individual agents are responsible for contacting customers, they have to hit an outreach quota. This quota reflects directly in the calls per account, or how many times an individual customer has been contacted. 

As agents are required to call customers and collect on accounts, the calls per account may increase to a point where customers feel overwhelmed and over-contacted (which can even lead to symptoms of anxiety and depression). At the same time, if countless calls are being made, and an account is not paying, there is a clear gap in effectiveness. 

One of the advantages of a digital debt collection strategy is that agencies can reach customers with relevant messaging at times that work for them. This can include hours in which call centers are no longer legally allowed to reach a customer: before 8am or after 9pm. With these legal limitations in place and the need for agents to meet quotas, traditional collections strategies encourage an artificial inflation of outreach numbers that may not be positive.

Hit rates, percentage of outbound calls resulting in promise to pay (PTP), and call quality 

Call volume is not the be-all and end-all of call center metrics, though; simply tracking output numbers isn’t enough when engagement is the key metric. Hit rate is defined as the number of calls answered by customers divided by the total number of calls made. While this number can help narrow down which calls were more successful than others, it cannot reach the same level of detail as a full digital strategy.

In the case of a phone call, there are limited options once the phone has been dialed:

  • The customer does not answer
  • The customer answers but ends the call before promising payment
  • The customer promises to pay

Understanding what leads to a successful payment on a call is therefore dependent on the agent’s perspective. Digital debt collection driven by machine learning communicates with personalized, consistent content, so hit rate, PTP, and call quality analysis can be expanded on, and performance can be measured by:

  • Email Deliverability
  • Email open rates
  • Link click rates
  • Website engagement (including clicking on further links, filling out forms, viewing specific webpages, and more)
  • Online payments
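Taken together, these data points form a funnel, and stage-to-stage rates show exactly where consumers fall out. Here is a minimal sketch with invented event counts:

```python
# Hypothetical event counts for one outreach campaign.
funnel = [
    ("sent",      10_000),
    ("delivered",  9_600),
    ("opened",     3_100),
    ("clicked",      900),
    ("paid",         210),
]

# Each stage's rate relative to the previous stage shows where
# in the process consumers were lost.
for (prev_name, prev_count), (name, count) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {count / prev_count:.1%}")
```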

These data points can help pinpoint where in the process a customer was lost, improve the next attempt at outreach with that data in mind, and eventually guide the account to a payment. With more data and longer periods of time, machine learning processes only continue to improve.

Updating your collections strategy 

At TrueAccord, we take our digital strategy a step further by looking beyond simply using digital channels and focusing on the power of machine learning to continuously improve our collections performance. We’ve come to understand that creating an effective, empathetic collections experience actually comes from creating a more analytical and AI-driven process.

With better visibility into performance, more granular data points, and more accurate reporting available than ever before, digital debt collection strategies strengthen the power of any collections team.