Our CEO, Ohad Samet, recorded a podcast with Lend Academy discussing the positive impact technology is creating in the collections space and the need for more innovation. In it, he discusses TrueAccord’s unique approach to debt collection, which uses data-driven, digital communications to create deeply personalized consumer experiences.
The podcast also covers the current state of the collections industry, where it’s likely headed as regulatory pressure, consumer preferences, and compliance requirements converge, and how TrueAccord uses machine learning to deliver deeply personalized, engaging experiences for consumers while achieving higher recovery rates across various debt types.
Tune in and learn:
The state of the debt collection industry today and where it’s headed
How machine learning is personalizing the debt collection experience for greater conversions
Why code-driven compliance outperforms traditional collections practices by reducing risk to organizations
How understanding consumers’ preference for easy, flexible, self-service options empowers more consumers to pay off their debt and get on a path to financial health
Today, 80 million consumers are in debt. They are often not treated well by collectors, and are subjected to harassment, intimidation, and an overall bad user experience that does not encourage or empower resolution. According to a recent CFPB survey, 1 in 4 consumers felt threatened by collectors, 3 in 4 reported that a collector did not honor a request to cease contact, over one-third reported being contacted at inconvenient times, and 40% reported being contacted four or more times per week. These results are disheartening, and they demonstrate that traditional debt collection agencies have neither adopted user-centric practices nor integrated technology into the process to adapt to changing consumer needs. They are stuck making large volumes of phone calls to uninterested consumers who end up complaining.
When we set out to survey our consumers about their experience with TrueAccord, we weren’t quite sure what to expect, or whether they would even respond. On one hand, we believe our data-driven, consumer-centric, digital-first experience is reinventing the debt collection process and will replace legacy agencies, and that consumers will appreciate it. On the other, we are still talking about debt collection; most of these consumers have likely been through multiple negative collection experiences and have low expectations of the process. They aren’t likely to recommend a debt collector, and as we’ve seen above, are highly likely to have had a bad experience.
Overall satisfaction
What we found was both exciting and inspiring: 80% of respondents were satisfied with their experience with TrueAccord. That is an unprecedented number in an industry that, for decades, has attracted only negative attention. TrueAccord is building a product and brand focused on delivering great user experiences and helping consumers rebuild financial health, and consumers are responding. Traditional agencies’ behaviors have hurt liquidation rates and brand reputation while creating significant compliance risk, yet they haven’t changed their ways. We are showing that working differently is possible, and that it yields better results.
Tone
81% of consumers stated that the tone and personalized offers in our messages were appropriate for their individual needs. Our content is personalized and tailored to empower and motivate consumers to pay off their debt, combined with a wide selection of custom payment plans. Consumers’ needs are served and they are treated like customers. Our clients understand that debt collection is part of a natural consumer life cycle; at one point or another, most of us will encounter debt collectors. Unfortunately, traditional agencies lack the technology and best practices to deliver good user experiences, leaving consumers feeling frustrated, angry, and wronged. It does not have to be this way.
User experience
80% of our users had an overall positive experience with TrueAccord and recognized TrueAccord as different from, and better than, other agencies. A large proportion of the other 20% resolved their debt by disputing it, so even if they didn’t feel great about the experience, they were able to dispute and discharge a debt electronically and with minimal hassle. It’s exciting to see that consumers see our brand the way we see ourselves: as innovators focused on great user experiences. We believe helping people get out of debt has a positive impact for everyone involved, even (and sometimes especially) when getting out of debt means the debt can’t be collected.
What consumers had to say:
“You were easy to work with and the payment plan worked for me. Even when I had to make a small change, it was no problem. I’m glad to have the debt behind me. I appreciate the email correspondence as opposed to numerous phone calls.”
“They worked with me and I needed that.”
“It is always a pleasant experience dealing with True Accord.”
“Wish you could handle all my debts.”
“I love the fact that TrueAccord was kind and polite! I wanted to pay my debt but needed a plan that wouldn’t leave me over spent or struggling every month. TrueAccord was happy to accept the payment plan I requested. Thank you!”
“TrueAccord provided me a way to be true to my word.”
“The agents are all very friendly and accommodating. It doesn’t feel like you are dealing with a collection agency.”
One hundred years ago, a proposal took hold to build a bridge across the Golden Gate Strait at the mouth of San Francisco Bay. For more than a decade, engineer Joseph Strauss drummed up support for the bridge throughout Northern California. Before the first concrete was poured, his original double-cantilever design was replaced with Leon Moisseiff’s suspension design. Construction on the latter began in 1933, seventeen years after the bridge was conceived. Four years later, the first vehicles drove across the bridge. With the exception of a retrofit in 2012, there have been no structural changes since. 21 years in the making. Virtually no changes for the next 80.
Now, compare that with a modern Silicon Valley software startup. Year one: build an MVP. Year two: funding and product-market fit. Year three: profitability? Growth? Year four: make it or break it. Year five: if the company still exists at this point, you’re lucky.
Software in a startup environment is a drastically different engineering problem than building a bridge. So is the testing component of that problem. The bridge will endure 100+ years of heavy use and people’s lives depend upon it. One would be hard-pressed to over-test it. A software startup endeavor, however, is prone to monthly changes and usually has far milder consequences when it fails (although being in a regulated environment dealing with financial data raises the stakes a bit). Over-testing could burn through limited developer time and leave the company with an empty bank account and a fantastic product that no one wants.
I want to propose a framework to answer the question of how much testing is enough. I’ll outline six criteria, then throw them at a few examples. If you’re a highly visual person like me, skip to the charts at the end and come back. In general, I am proposing that testing efforts be assessed on a spectrum according to the nature of the product under test. A bridge would be on one end of the spectrum, whereas a prototype for a free app that makes funny noises would be on the other.
Assessment Criteria
Cost of Failure
What is the material impact if this thing fails? If a bridge collapses, it’s life and death, plus a ton of money. Similarly, in a stock trading app, there are potentially big dollar and legal impacts when the numbers are wrong. By contrast, an occasional failure in a dating app would annoy customers and maybe drive a few of them away, but it wouldn’t be catastrophic. Bridges and stock trading have higher costs of failure and thus merit more rigorous testing.
Amount of Use
How often is this thing used, and by how many people? In other words, if a failure happens in this component, how widespread will the impact be? A custom report that runs once a month gets far less use than the login page. If the latter fails, a great number of users will feel the impact immediately. Thus, I really want to make sure my login page (and components like it) are well-tested.
Visibility
How visible is the component? How easy will it be for customers to see that it’s broken? If it’s a backend component that only affects engineers, then customers may not know it’s broken until they start to see second-order side effects down the road. I have some leeway in how I go about fixing such a problem. In contrast, a payment processing form would have high visibility. If it breaks, it will give the impression that my app is broken big-time and will cause a fire drill until it is fixed. I want to increase testing with increased visibility.
Lifespan
This is a matter of return on effort. If the thing I’ve built is a run-once job, then any bugs will only show up once. On the other hand, a piece of code that is core to my application will last for years (and produce bugs for years). Longer lifespans give me greater returns on my testing efforts. If a little extra testing can avoid a single bug per month, then that adds up to a lot of time savings when the code lasts for years.
Difficulty of Repair
Back to the bridge example: imagine there is a radio transmitter at the top. If it breaks, a trained technician has to make the several-hour climb to the top, diagnose the problem, swap out some components (if they are on hand), then climb back down. Compare that with a small crack in the road: a worker spends 30 minutes squirting tar into it at 3am. The point here is that things which are more difficult to repair carry a higher cost when they break, so the larger up-front investment in testing is worth it. It is also worth mentioning that this can be inversely related to visibility; low-visibility functionality can go unnoticed for long stretches and accumulate a huge pile of bad data.
Complexity
Complex pieces of code tend to be easier to break than simple code. There are more edge cases and more paths to consider. In other words, greater complexity translates to greater probability of bugs. Hence, complex code merits greater testing.
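To make this concrete before the examples, here is a minimal sketch of how the six criteria could be rolled up into a single score. The 1–5 scale matches the example scores below; the unweighted sum is my own simplification, not a formal model.

```scala
// A component scored 1 (low) to 5 (high) on each criterion.
case class TestingAssessment(
    costOfFailure: Int,
    amountOfUse: Int,
    visibility: Int,
    lifespan: Int,
    difficultyOfRepair: Int,
    complexity: Int
) {
  // An unweighted sum is an assumption; weight the criteria
  // however your own risk tolerance dictates.
  def score: Int =
    costOfFailure + amountOfUse + visibility + lifespan +
      difficultyOfRepair + complexity
}

object TestingAssessmentExample extends App {
  // Scores taken from the example charts below.
  val goldenGateBridge = TestingAssessment(5, 5, 5, 5, 5, 4)
  val amexIntegration  = TestingAssessment(2, 1, 1, 5, 2, 2)

  println(s"Bridge: ${goldenGateBridge.score}/30") // 29: test continually
  println(s"AMEX:   ${amexIntegration.score}/30")  // 13: moderate testing
}
```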
Examples
Golden Gate Bridge
This is a large, last-forever sort of project. If we get it wrong, we have a monumental (literally) problem to deal with. Test continually, as much as possible.
Cost of failure: 5
Amount of use: 5
Visibility: 5
Lifespan: 5
Difficulty of repair: 5
Complexity: 4
Cat Dating App
Once the word gets out, all of the cats in the neighborhood will be swiping, in cat-like unpredictable fashion, on this hot new dating app. No words, just pictures. Expect it to go viral, then die just as quickly. This thing will not last long, and the failure modes are incredibly minor. It’s not worth much time spent on testing.
AMEX Payment Processing Integration
Now we get into the nuance. Consider an American Express payment processing integration, i.e. the part of a larger app that sends data to AMEX and receives confirmations that payments were successful. For this example, let’s assume that only 1% of your customers are AMEX users and that all of their charges are monthly auto-pay transactions. In other words, it’s a small group that will not see payment failures immediately. Even though this is a money-related feature, it does not merit as much testing as, say, a VISA integration, since it is lightly used and has low visibility.
Cost of failure: 2
Amount of use: 1
Visibility: 1
Lifespan: 5
Difficulty of repair: 2
Complexity: 2
Enterprise App — De-duplication of Persons Based on Demographic Info
This is a real problem for TrueAccord. Our app imports “people” from various sources. Sometimes, we get two versions of the same “person”. It is to our advantage to know this and take action accordingly in other parts of our system. Person-matching can be quite complex given that two people can easily look very similar from a demographic standpoint (same name, city, zip code, etc.) yet truly be different people. If we get it wrong, we could inadvertently cross-pollinate private financial information. To top it all off, we don’t know what shape this will take long term and are in a pre-prototyping phase. In this case, I am dividing the testing assessment into two parts: prototyping phase and production phase.
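To see why the matching is tricky, consider a deliberately naive rule. This is a hypothetical sketch, not our actual logic, which is probabilistic and far more involved:

```scala
case class Person(name: String, city: String, zip: String)

// A naive heuristic like this will happily merge two different
// "John Smith"s living in the same zip code; cross-pollinating
// private financial information is exactly the failure we can't afford.
def naiveSamePerson(a: Person, b: Person): Boolean =
  a.name.equalsIgnoreCase(b.name) && a.zip == b.zip
```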
Prototyping
The functionality will be in dry-run mode. Other parts of the app will not know it exists and will not take action based on its results. Complexity alone drives light testing here.
Cost of failure: 1
Amount of use: 1
Visibility: 1
Lifespan: 1
Difficulty of repair: 1
Complexity: 4
Production
Once adopted, this would become core functionality with sweeping impact. If it is wrong, then other wrong data will be built upon it, creating a heavy cleanup burden and further customer impact. That said, it will still have low visibility, since it is an asynchronous backend process. Moderate to heavy testing is needed here.
Cost of failure: 4
Amount of use: 3
Visibility: 1
Lifespan: 3
Difficulty of repair: 4
Complexity: 4
Testing at TrueAccord
TrueAccord is three years old. We’ve found product-market fit and are on the road to success (fingers crossed). At this juncture, engineering time is a bit scarce, so we have to be wise about how it is allocated. That means we don’t have the luxury of 100% test coverage. Though we don’t formally apply the above heuristics, they are evident in the automated tests that exist in our system. For example, two of our larger test suites are PaymentPlanHelpersSpec and PaymentPlanScannerSpec, at 1500 and 1200 lines respectively. As you might guess, these are related to handling customers’ payment plans: fairly complex, highly visible, heavily used core functionality for us. Contrast that with TwilioClientSpec at 30 lines. We use Twilio very lightly, with low visibility and a low cost of failure, and since we are only calling a single endpoint on their API, it is a very simple piece of code. In fact, the testing that exists is just for a helper function, not the API call itself.
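For a sense of what the light end of that spectrum looks like, here is a hypothetical helper-only test in the spirit of TwilioClientSpec. The helper and spec below are illustrative, not our actual code:

```scala
import org.scalatest.{FlatSpec, Matchers}

// The pure helper gets a test; the single third-party API call it feeds does not.
object PhoneNumbers {
  // Normalizes a formatted US number to E.164 for the SMS API.
  def toE164(raw: String): String =
    "+1" + raw.filter(_.isDigit).takeRight(10)
}

class PhoneNumbersSpec extends FlatSpec with Matchers {
  "toE164" should "normalize a formatted US number" in {
    PhoneNumbers.toE164("(415) 555-0100") shouldBe "+14155550100"
  }
}
```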
I’d love to hear about other real world examples, and I’d love to hear if this way of thinking about testing would work for your software startup. Please leave us a comment with your point of view!
Our Head of Data Science, Richard Yeung, gave a talk at the Global Big Data Conference. The talk focused on the first steps from heuristics to probabilistic models when building a machine learning system based on expert knowledge. This feedback loop is what allowed our automated system to replace the old-school, call-center-based model with a modernized, personalized approach.
Recently, TrueAccord grew to the size where our compliance stance requires the addition of photo ID badges. It’s a rite of passage that all small-but-growing companies endure, and ours is no different.
Since I have previous experience setting up badge systems and dealing with the printers, I volunteered to kick off this process. I’ve evaluated pre-existing badge creation software in the past and found it all significantly lacking. In a previous environment, I wrote my own badge creation software, which fit the needs at the time. The key phrase being “at the time”. For tech startups, it’s not unusual to go from onboarding one person every other week to 10 people a week within a year or two. That means every manual onboarding step goes from “oh well, it’s just once every other week” to “we need to dedicate several hours of someone’s time to this every week.” That same growth period also tends to be when your operations organizations (IT, Facilities, and Office Admin) are the most short-staffed and the least likely to have that free time. “Where is this going?” and “How much work does this mean for me?”, you ask? Allow me to share with you how I automated our badge system – Photoshop included.
At TrueAccord, we take our service availability very seriously. To ensure our service is always up and running, we track hundreds of system metrics (for example, how much heap each web server uses) as well as many business metrics (for example, how many payment plans have been charged in the past hour).
We set up monitors for each of these metrics in Datadog that, when triggered, page an on-call engineer. A trigger is usually based on some threshold for the metric.
As our team grew and more alerts were added, we noticed three problems with Datadog:
Any member of our team can edit or delete alerts in Datadog’s UI. The changes may be intentional or accidental, but our team prefers to review changes before they hit production, and Datadog has no review stage.
Because of the previous problem, an engineer would sometimes add a new alert with uncalibrated thresholds to Datadog to get some initial monitoring for a newly written component. As Murphy’s law would have it, the new alert would fire at 3am and wake the on-call engineer, even though it indicated a miscalibrated threshold rather than a real production issue. A review system could better enforce best practices for new alerts.
Datadog also does not expose a way to indicate that an alert should only be sent during business hours. For example, it is okay if some of our batch jobs fail during the night, as long as an engineer addresses the failure first thing in the morning.
To solve these problems, we built DogPush. It lets you manage your alerts as YAML files that you check in to your source control, so you can use your existing code review system to review them; once they’re approved, they are automatically pushed to Datadog – voila! In addition, it’s straightforward to set up a cron job (or a Jenkins job) to automatically mute the relevant alerts outside business hours. DogPush is completely free and open source – check it out here.
Earlier this month, we launched the fourth redesign of the TrueAccord website. While brainstorming, the team agreed that the new design would address two primary goals: (1) aligning with the sales team’s pitch to potential clients and (2) continuing to iterate on and refine the TrueAccord brand.
When we started working on TrueAccord, we had a limited understanding of various technical aspects of the problem. Naturally, one of those unclear aspects was the data model: what data entities we would need to track, what their relationships would be (one-to-one, one-to-many, and so on), and how easy it would be to change the data model as business requirements became known and our domain expertise grew.
Using Protocol Buffers to model the data your service uses for storage or messaging is great for a fast-changing project (see the sketch after this list):
adding and removing fields is trivial, as is turning an optional field into a repeated one, and so on. If we had modeled our data using SQL, we would constantly be migrating our database schema.
the data schema (the proto file) serves as always up-to-date reference documentation for the service’s data structures and messages. People on different teams can easily generate parsers for almost every programming language and access the same data.
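As a minimal sketch of what this looks like from Scala (the Person message is hypothetical, and ScalaPB-style generated case classes are an assumption; any protobuf code generator behaves similarly):

```scala
// person.proto -- the always-up-to-date reference for this structure:
//   message Person {
//     optional string name  = 1;
//     repeated string email = 2;  // began life as `optional`; changing it was trivial
//   }

import com.example.person.Person // hypothetical generated code

object PersonRoundTrip extends App {
  val original = Person(name = Some("Ada"), email = Seq("ada@example.com"))

  // Serialize for storage or messaging, then parse it back.
  val bytes: Array[Byte] = original.toByteArray
  val restored: Person   = Person.parseFrom(bytes)

  assert(original == restored)
}
```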
At TrueAccord, we use Play to develop our backend. In our development environment, we need part of our URL space routed to a different server, written in Python. We initially thought of setting up a lightweight HTTP server like nginx to act as a reverse proxy in front of both development servers, which is a reasonable solution. However, we wanted to avoid yet another moving part in our development environment, and we were curious whether we could write something quick in Scala to achieve the same thing.
As it turns out, writing this little reverse proxy in Scala/Play is relatively straightforward. It’s also pretty impressive that with so few lines of code we get a reactive proxy server that streams content to the end client while chunks of it are still arriving from the upstream server. A more traditional (and time-intensive) implementation would have buffered the entire upstream response until it was complete and only then sent it to the client.
So, without further ado, here is the code, as a minimal sketch (it assumes Play 2.3’s Enumerator-based WS API; the upstream address and route wiring are placeholders):
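```scala
import play.api.Play.current
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.libs.ws.WS
import play.api.mvc._

object Proxy extends Controller {

  // Play gives us query parameters as Map[String, Seq[String]];
  // WS.url(...).withQueryString expects flat (key, value) pairs.
  private def flattenMultiMap(m: Map[String, Seq[String]]): Seq[(String, String)] =
    for {
      (key, values) <- m.toSeq
      value <- values
    } yield key -> value

  // Forwards, e.g., /some/path?x=1 to the upstream (Python) dev server.
  def proxy(path: String) = Action.async { request =>
    val proxyRequest = WS.url(s"http://localhost:5000/$path") // placeholder upstream
      .withQueryString(flattenMultiMap(request.queryString): _*)

    proxyRequest.stream().map { case (headers, body) =>
      // `body` is an Enumerator[Array[Byte]]: a producer of response chunks.
      // Result takes it directly and streams each chunk on to the client.
      Result(
        header = ResponseHeader(headers.status, headers.headers.mapValues(_.mkString(", "))),
        body = body
      )
    }
  }
}
```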
The call to proxyRequest.stream returns a Future[(WSResponseHeaders, Enumerator[Array[Byte]])]. This means that at some point in the future, the closure we pass to map will be called with two things: the headers returned from the upstream server (WSResponseHeaders) and an Enumerator[Array[Byte]], which is a producer of arrays of bytes. Each array of bytes it produces is a piece of the response body from the upstream server. Conveniently, Play provides a Result constructor that takes producers like this and turns them into responses that can be served to the end client.
flattenMultiMap is a little helper function that converts the query string parameters from the collection type Play requests provide into the format expected by WS.url.
TrueAccord is a machine learning and AI-driven third-party debt collection company that is reinventing debt collection. We make debt collection empathetic and customer-focused, and we deliver a great user experience.
Our digital-first approach to debt collection creates a cycle of collections growth:
1. Improve the perception of the industry
2. Provide a personalized experience
3. Build brand equity and collect